\section{Introduction}
The birth environment of a planetary system---the
size and density of the stellar cluster or association
where the system forms---is thought to have an impact on the nature
of the planetary system. This can occur through external
photoevaporation of the protoplanetary disc, and/or through
stellar flybys truncating the disc or perturbing
formed systems \citep{delaFuenteMarcos97,LaughlinAdams98,
Adams+06,Malmberg+07,Winter+18,Li+19,Li+20b,Li+20a}. Direct evidence
of the impact of birth environments on planetary system formation
is hard to obtain, because of the low number (in absolute terms) of
planets found in clusters compared to the field,
and the challenges of detecting planets around
young stars. \citet[henceforth W20]{Winter+20}
recently proposed an ingenious way around this,
by using the local phase space density---the
density of nearby stars in the 6D phase space
of Galactic position and velocity---of an
exoplanet host star as a proxy for the crowdedness of
its birth environment. By assuming that the current
density reflects the past density at birth,
W20 could look for correlations
between this density and the properties of
planetary systems.
One of the most significant results from W20
was that the host stars of Hot Jupiters are
nearly always in ``high-density'' regions of phase space.
This is naturally explained if the primary migration
channel for Hot Jupiters is dynamical excitation through
planet--planet scattering and/or Lidov--Kozai cycles,
followed by tidal dissipation
\citep{RasioFord96,WeidenschillingMarzari96,WuMurray03,FabryckyTremaine07}.
Here, the external
dynamical perturbations in a dense birth environment
would provide the trigger for this high-eccentricity
migration to begin
\citep{MalmbergDC07,ParkerGoodwin09,Malmberg+11,ParkerQuanz12,
Brucalassi+16,Rodet+21}. Disc migration offers an alternative
channel to produce Hot Jupiters \citep{Lin+96}, which might
also be affected by the environment through photoevaporation
of the protoplanetary disc \citep{Winter+18}.
\begin{figure}
\includegraphics[width=0.5\textwidth]{./eDR3_all_WAll_planets_a_M_smooth0_10.pdf}
\caption{Semimajor axes and masses of planets whose host stars
have masses between $0.7$ and $2.0\mathrm{\,M}_\odot$
and ages between $1.0$ and $4.5$\,Gyr. In the top panels they
are distinguished by the host stars' phase space density,
in the bottom panels by the host stars' peculiar velocity.
Typically, a low velocity corresponds to a high phase
space density and vice versa. We see an abundance
of Hot Jupiters in the high-density
and the low-velocity populations. Note that stars that cannot
be unambiguously assigned to one of the
populations are not included, and that some stars have
multiple planets.}
\label{fig:a_M}
\end{figure}
This interpretation of W20's finding, though,
relies on the assumption that a high local phase
space density for a star at the present time reflects
a high density of its formation environment.
Here we examine this assumption.
In Hamiltonian mechanics, Liouville's Theorem indeed
states that phase space density
is constant along trajectories, implying it would be inherited
from a star's formation site. But this does not apply to stars
in the Galaxy on Gyr timescales, because
the Galactic potential is not time-independent, and the dynamics are
not completely collisionless, violating the conditions
for Liouville's Theorem to apply. In particular, stellar populations get
``heated'' with age and increase their velocity dispersion and vertical
scale height, through interactions with giant molecular clouds
and spiral arms \citep{SpitzerSchwarzschild51,Wielen77,
DeSimone+04,Nordstroem+04}. Numerical simulations
\citep{Kamdar+19a,Kamdar+19b} find that the imprint of a birth
cluster, in comoving conatal pairs and phase space
overdensities, largely disappears after $\sim1$\,Gyr.
Hence, it is not clear that the foundational assumption of
W20 holds. Instead it is possible that stars' phase space
density reflects coarser features of Galactic structure and kinematics,
such as disc heating with age, or the existence of a Galactic
thick disc of larger scale height than the thin disc
to which the Sun belongs \citep{GilmoreReid83},
rather than the nature of the birth cluster.
While noting that Galactic dynamics could play
a role in determining the phase space density,
W20 interpreted their results in the context of the
birth environment hypothesis.
Here we show that, in
general, the majority of stars which are currently in ``high density''
regions of phase space as defined by W20 simply have cold kinematics
in the Galactic disc: near-circular orbits and little
vertical motion. The ``high density'' classification thus
relates to the lower average age and
lesser kinematic heating of the stars, and not to
a memory of their birth environment.
\section{Methods: Mahalanobis distance and description
of the phase space density distribution}
We follow W20 in using the Mahalanobis distance \citep{Mahalanobis36}
in 6D phase space to construct the phase space density.
This is essentially a reoriented and stretched Euclidean
metric, represented by the quadratic form
\begin{equation}
d_\mathrm{M}\left(\mathbf{x_1},\mathbf{x_2};C\right)
= \sqrt{(\mathbf{x_1}-\mathbf{x_2})C^{-1}
(\mathbf{x_1}-\mathbf{x_2})^{\mathrm{T}}},
\end{equation}
where $\mathbf{x_1}$ and $\mathbf{x_2}$ are two
(6D) phase space positions and $C$ is the $6\times6$
covariance matrix of the whole sample
\begin{equation}
C_{ij} = \big\langle \big(x_i-\langle x_i\rangle\big)
\big(x_j-\langle x_j\rangle\big)\big\rangle
\end{equation}
for $i,j=1\ldots6$, where $\langle \cdots \rangle$ denotes
the mean.
It has the advantage of defining a distance
in a space whose dimensions have different dynamic
ranges and different physical dimensions. However,
as it returns a rescaled, dimensionless quantity,
its physical interpretation is not obvious.
We will therefore relate the density derived from
this metric to the host stars' corresponding
physical quantities, especially the peculiar velocity:
the star's velocity with respect to that of a circular
orbit in the Galactic plane (the Local Standard of Rest).
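For concreteness, the metric can be computed in a few lines of Python; the following is a minimal sketch (variable names are ours, and we assume the sample is stored as an $N\times6$ array):
\begin{verbatim}
import numpy as np

# sample: (N, 6) array of phase space coordinates (x, y, z, U, V, W)
C = np.cov(sample, rowvar=False)   # 6x6 covariance of the whole sample
C_inv = np.linalg.inv(C)

def d_mahalanobis(x1, x2):
    """Mahalanobis distance between two 6D phase space points."""
    d = x1 - x2
    return np.sqrt(d @ C_inv @ d)
\end{verbatim}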
We begin by following W20
as closely as possible to ensure a direct comparison with their work.
For each target star we:
\begin{itemize}
\item Query all objects in \emph{Gaia} Data Release~2 (DR2)
or Early Data Release~3 (EDR3)
\citep{Gaia2016,Gaia2018,Gaia2020} within
80\,pc of the target. The criterion for inclusion is that
the star possess a radial velocity measured by \emph{Gaia}
as well as a positive parallax.
\item Convert the astrometric, positional and RV information
from \emph{Gaia} to a heliocentric Cartesian position and velocity.
The distance is obtained by inverting the parallax. A velocity correction
from the heliocentric rest frame to the local standard of rest
(the velocity of a body on a circular orbit at the Sun's
position in the Galaxy) from \cite{Schoenrich+10} is then applied:
$(U,V,W)_\odot = (11.1, 12.24, 7.25)\mathrm{\,km\,s}^{-1}$.
\item Define the Mahalanobis metric on the sample,
using the covariance matrix of the positions and
velocities of all stars within 80\,pc of the target.
\item For stars with at least 400 neighbours within 40\,pc,
randomly choose up to 600 such neighbours.
For each of these, as well as the target,
find the 20th nearest neighbour by the Mahalanobis distance
$d_\mathrm{M,20}$,
and use this to define the local 6D phase space density
$\rho_{20} = 20d_\mathrm{M,20}^{-6}$.
\item Normalise $\rho_{20}$ so that the median of the
distribution is 1.
\item Fit a two-component Gaussian mixture model to
the distribution of the logarithm of the rescaled density.
Outliers greater than two standard deviations
from the mean, or with densities $\rho_{20}>50$, are clipped
before fitting the model.
\item Remove systems where a one-component model is a
good fit to the density distribution ($p>0.05$ on a KS test);
three systems are so removed.
\item Calculate the probability that the target star
was drawn from the high-density or the low-density component of
the Gaussian mixture model. If $\rho_{20}>50$, assign it to the
high-density population (a code sketch of these last steps
follows the list).
\end{itemize}
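As an illustration, the density estimation and mixture classification steps above can be sketched as follows (scipy and scikit-learn assumed; this is a minimal version, not the exact code in our notebook):
\begin{verbatim}
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.mixture import GaussianMixture

# points: (N, 6) phase space coordinates; row 0 is the target star
d = cdist(points, points, metric='mahalanobis', VI=C_inv)
d20 = np.sort(d, axis=1)[:, 20]    # 20th nearest neighbour (col 0 = self)
rho = 20.0 * d20**-6               # rho_20 = 20 * d_20^{-6}
rho /= np.median(rho)              # normalise the median to 1
logrho = np.log10(rho)

# clip outliers before fitting the two-component mixture
keep = (np.abs(logrho - logrho.mean()) < 2 * logrho.std()) & (rho < 50)
gmm = GaussianMixture(n_components=2).fit(logrho[keep, None])

# probability that the target belongs to the high-density component
high = np.argmax(gmm.means_)
P_high = gmm.predict_proba([[logrho[0]]])[0, high]
\end{verbatim}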
After this, W20 analysed differences between the ``high-density''
population ($P_\mathrm{high}> 0.84$) and the ``low-density''
population ($P_\mathrm{high}<0.16$). The power of this approach
is illustrated in Figure~\ref{fig:a_M}, where we show the semimajor
axes and masses of known exoplanets\footnote{From
\url{https://exoplanetarchive.ipac.caltech.edu/index.html}, accessed
2021-03-11.} whose hosts were
cross-matched to \emph{Gaia}~EDR3 and whose masses and ages
are in the range $0.7-2.0\mathrm{\,M}_\odot$ and $1.0-4.5$\,Gyr
(as in W20). As did W20, we see noticeable differences between the
distributions of planets orbiting ``high-density'' and ``low-density'' hosts. In
this paper we focus on the overabundance of Hot Jupiters
orbiting the ``high-density'' hosts: the ratio of
Hot to Cold Jupiter hosts is $1.4$ for the high-density hosts and
only $0.4$ for the low-density hosts (in this paper,
following W20, we
define Hot Jupiters as planets with mass $M>50\mathrm{\,M}_\oplus$ and
semimajor axis $a<0.2$\,au, and Cold Jupiters as planets
with mass $M>50\mathrm{\,M}_\oplus$ and
semimajor axis $a>0.2$\,au). As we later argue that
tidal effects are likely responsible for the difference
between Hot and Cold Jupiters, we also choose a cut between
Hot and Cold Jupiters of $0.1\,$au. This yields similar ratios
(for high-density hosts, the Hot to Cold Jupiter ratio is
1.2; for low-density hosts, it is again 0.4).
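These cuts amount to a two-line selection; as a sketch (assuming a hypothetical table \texttt{df} of planets per host class, with planet mass in $\mathrm{M}_\oplus$ and semimajor axis in au):
\begin{verbatim}
import pandas as pd  # df: hypothetical table of planets per host class

hot  = (df['mass'] > 50) & (df['a'] < 0.2)   # Hot Jupiters
cold = (df['mass'] > 50) & (df['a'] > 0.2)   # Cold Jupiters
print(hot.sum() / cold.sum())  # 1.4 / 0.4 for high-/low-density hosts
\end{verbatim}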
However,
in the bottom panels of Figure~\ref{fig:a_M} we show the
same sample of planets but with hosts broken down
by membership into a high- or low-density component
of the distribution of peculiar velocities
$|\mathbf{v}|$ relative to the Local Standard of
Rest, using the same Gaussian Mixture
procedure as we used for the densities. Although the difference
between the planet populations is not so pronounced as when the
hosts are broken down by phase space density, the same trends
are seen. This suggests that the stars' peculiar motions
are in fact conveying most of the information.
With this in mind, we now step back and
investigate what the non-dimensionalised,
rescaled phase space density is physically
telling us.
\begin{figure*}
\centering
\includegraphics[width=0.66\textwidth]{./Sun_eDR3_all_residuals.pdf}
\includegraphics[width=0.33\textwidth]{./BD+202184_eDR3_all_trend.pdf}
\caption{Left: Local 20$^\mathrm{th}$-nearest neighbour
phase space density, and probability
of membership in the high-density population,
for 600 stars within 40\,pc of the Sun.
We show the fitted quartic trend of the density
as a function of peculiar velocity.
The outlier at $\rho\sim150$ is \emph{Gaia}~EDR3~3277270538903180160
(LP~533-57, HIP~17766), a Hyades member.
Centre: The residuals to the fit, after the trend
is removed. Right: The densities and
peculiar velocities of 600 neighbours
of Pr0201 (BD+20~2184), a Hot Jupiter host
in the Pr{\ae}sepe cluster. Pr0201 and
other Pr{\ae}sepe members stand out
above the trend.
Green background shading shows
the approximate bounds for membership in the Galactic thin
disc, thick disc and halo, from \cite{Bensby+14}.}
\label{fig:V_rho}
\end{figure*}
\section{Phase space density and Galactic velocities}
We begin by taking the Sun as a case study. W20 identified the Sun
as belonging to a phase-space overdensity. This may reflect the
Sun's origin in a reasonably large, dense cluster
\citep{Adams10,Pfalzner+15}.
However, we note here that the Sun has a rather low
peculiar motion for its age \citep{Wielen+96,Gonzalez99}.
The colour--magnitude diagram for our sample
of Solar neighbours from \emph{Gaia} EDR3 is shown in Figure~\ref{fig:CMD}.
From the Solar neighbours within 40\,pc, we have randomly selected
600 and calculated their local phase space density as described
above. We show histograms of the phase space density distribution
in Figure~\ref{fig:GMM}. In common with the distributions for the
neighbourhoods of other target stars, the distribution
is poorly fit by a single lognormal. Instead, there is typically a
steep cutoff at high density, plus a few high-density outliers
often associated with known clusters or moving groups, and
a shallower tail towards lower densities. A two- or even higher-component
fit is usually superior; in fact, three or more components
are often favoured by the Akaike and/or Bayesian
Information Criteria, suggesting that
the distribution is a continuous spectrum rather than the sum
of a high-density and a low-density lognormal.
The middle panel of
Figure~\ref{fig:GMM} shows the decomposition into two components.
The Sun lies near the peak of the high-density component.
Finally, the right panel shows the probability that the Sun belongs to the
high-density component, which we calculate to be $0.88$.
The probability of belonging to
the high-density component actually decreases slightly at high densities
on account of the breadth of the low-density component. Visual inspection of
several distributions showed that this is usually not too extreme a problem;
it was helped by clipping the outliers before fitting the Gaussian mixture
model as described above. The Sun is found to have a high
probability of belonging to the high-density population, as W20 found.
In Figure~\ref{fig:GMM_v} we show the equivalent Gaussian Mixture models
for the velocity distribution; here the Sun belongs to the low-velocity component
(the probability that it belongs to the high-velocity component is $0.03$).
As the Mahalanobis density is constructed from
both spatial and kinematic information, we now ask
which of these is most significant. We begin with
the velocities, as suggested in Figure~\ref{fig:a_M}.
Figure~\ref{fig:V_rho} shows the
local phase space density for neighbours of the
Sun and BD+20\,2184 (alias Pr0201, a Hot Jupiter host
in the Pr{\ae}sepe open cluster, \citealt{Quinn+12}), as
a function of the stars' peculiar velocities.
In each case we see a strong correlation:
the phase space density is primarily determined by a star's
peculiar velocity. ``High-density'' stars are those with low
peculiar velocities, ``low-density'' stars those with high
peculiar velocities. The background colouring in
Figure~\ref{fig:V_rho} shows the approximate velocity ranges
corresponding to membership in the Galactic thin
disc, thick disc and halo \citep{Bensby+14}. The ``low-density''
stars appear to be a heterogeneous mix of dynamically hot thin disc
stars, thick disc stars, and a handful of halo stars, while the
``high-density'' stars are lower velocity thin disc stars. A natural
interpretation, then, is that stars move from the high-density population
to the low-density population as they age and are kinematically
heated in the disc.
We also see in
Figure~\ref{fig:V_rho} the Pr{\ae}sepe cluster at
$|\mathbf{v}|\approx30 - 40\mathrm{\,km\,s}^{-1}$,
standing out from the field star trend.
The field star population is rather smooth;
we detrend this in the next section.
In contrast, large-scale spatial structure (\emph{i.e.,}
on scales of tens of pc) has little
influence on the phase space density. Figure~\ref{fig:D_rho} shows the
local phase space density for neighbours of the
Sun and BD+20\,2184. For
the Sun, we see little large-scale spatial structure: the
Mahalanobis phase space density of a star is not strongly dependent on
its distance to the Sun. For the neighbours of BD+20\,2184,
the other Pr{\ae}sepe members are clearly identified
by the Mahalanobis density measure as a distinct group,
attaining densities of $\gtrsim10^3$
within around 10\,pc of the target. However,
most of the ``high-density'' stars as defined by the
Gaussian Mixture model are not cluster
members: 167 ``high-density'' stars lie beyond 20\,pc
of BD+20~2184, and only 74 within 20\,pc. Moreover, those beyond
20\,pc have densities only a little
higher than the rest of the field star population, rather
than orders of magnitude higher as is the case for the
cluster members.
\section{Hunting for a trend in the residuals}
\label{sec:residuals}
\begin{figure*}
\centering
\includegraphics[width=0.49\textwidth]{eDR3_all_HJs_CJs_logv_log_rho.pdf}
\includegraphics[width=0.49\textwidth]{eDR3_all_HJs_CJs_logv_residuals.pdf}
\caption{Left: phase space density versus peculiar velocity
for hot Jupiter host stars (`HJs') and for cold Jupiter host
stars (`CJs'). Both marginal distributions (top and right
sub-panels) are statistically
significantly different: the KS-test $p$-values are shown in the
figure. HJ hosts have lower velocity and higher density than
CJ hosts. Right: residuals to the detrended phase space density
versus peculiar velocity for the same stars. The difference in
the distribution of residuals between the Hot and Cold Jupiter
populations is not significant ($p_\mathrm{KS,residuals}=0.40$).}
\label{fig:HJs_CJs}
\end{figure*}
We now investigate whether there is any correlation of phase
space density with the presence of a Hot Jupiter after correcting for
the dependence of phase space density on peculiar velocity.
Are Hot Jupiter hosts ``high-density'' when compared to stars
of similar kinematics? This could be the case if, for example,
stars are born from regions of similar $|\mathbf{v}|$ but
different densities, and this density difference persists
as the stars get heated in the disc.
First we detrend the $\log\rho-\log|\mathbf{v}|$ relation. For each of
our host stars, we fit a quartic polynomial to $\log\rho$ as a
function of $\log|\mathbf{v}|$ over its
sample of neighbours. The example of the
Sun is shown in Figure~\ref{fig:V_rho}. In performing this fit,
we exclude densities greater than 50 to avoid the fit being
biased by clusters: we wish to fit only field stars. We see in
Figure~\ref{fig:V_rho} that, although the Sun is a ``high-density'' host,
it lies very close to the fitted trend and is quite unremarkable given
its cold kinematics.
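A minimal sketch of this detrending in Python (numpy and scipy assumed; the neighbour arrays \texttt{hj\_samples} and \texttt{cj\_samples} and the convention that element 0 is the target star are ours):
\begin{verbatim}
import numpy as np
from scipy.stats import ks_2samp

def residual(v, rho):
    """Residual of the target star (element 0) to the quartic trend
    fitted to its neighbours, excluding rho > 50 (clusters)."""
    logv, logrho = np.log10(v), np.log10(rho)
    field = rho < 50
    coeffs = np.polyfit(logv[field], logrho[field], deg=4)
    return (logrho - np.polyval(coeffs, logv))[0]

# repeat per host, then compare Hot and Cold Jupiter host residuals
res_hj = [residual(v, rho) for v, rho in hj_samples]
res_cj = [residual(v, rho) for v, rho in cj_samples]
print(ks_2samp(res_hj, res_cj))   # p = 0.40 in our case
\end{verbatim}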
We repeat this detrending for each of a sample
of Hot Jupiter hosts (planet mass $\ge50\mathrm{\,M_\oplus}$,
planet semimajor axis $\le0.2$\,au, stellar mass
$\in [0.7,2.0]\mathrm{\,M}_\odot$\footnote{This is the same
definition of Hot Jupiters as that used by W20, except that
we have removed the age constraint: when including
the age constraint, the sample was too small to see a significant
difference in either $\rho$ or $|\mathbf{v}|$.}),
as well as for a control sample of
cold Jupiter hosts (same criteria except with
semimajor axis $>0.2$\,au). We show the phase space densities
as a function of peculiar velocity in the left-hand panel
in Figure~\ref{fig:HJs_CJs}; both populations follow a
similar trend to the Solar neighbours in Figure~\ref{fig:V_rho}.
The marginal distributions of both peculiar velocity and phase
space density differ significantly ($p=7.6\times10^{-4}$ and
$p=1.1\times10^{-4}$ on KS tests), with the Hot Jupiter hosts having
lower velocities and higher densities. Our velocity dispersions
for the Hot and Cold Jupiter hosts are $37.2\mathrm{\,km\,s}^{-1}$
and $43.3\mathrm{\,km\,s}^{-1}$ respectively,
similar to the values obtained by
\cite{HamerSchlaufman19}\footnote{Taking the semi-major axis cut
at $a=0.1$\,au, we find $35.3\mathrm{\,km\,s}^{-1}$
and $43.7\mathrm{\,km\,s}^{-1}$ respectively.}. In the right-hand panel, we
show instead the residuals of each host star to its fitted
trend. The marginal distributions of these residuals
are statistically indistinguishable
($p=0.40$ on a KS test). Thus, after accounting for the
lower velocity dispersion, there is no evidence that
the Hot Jupiter hosts are
located in denser regions of phase space compared
to the Cold Jupiter control sample.
In Appendix~\ref{sec:compare} we describe an alternative control
experiment, in which we compare the hot Jupiter hosts
to the 600 randomly-chosen neighbours of the Sun.
Again, we find that after accounting for the
dependence of the phase space density on velocity, there is no
evidence that Hot Jupiter hosts are in regions of high density
compared to stars with similar kinematics.
\section{Discussion}
We have now demonstrated that the main determinant
of a star's 6D phase space density is the magnitude
of its peculiar motion, \emph{i.e.,} how much its
Galactic orbit deviates from a circular orbit
exactly in the Galactic plane. At the present time,
and for the past $\sim8$\,Gyr, stars have typically been
born close to the Galactic midplane with a low peculiar
velocity, and are heated with age through interactions
with matter inhomogeneities in the Galaxy. This heating
occurs on a timescale of Gyr: for example, with the
good asteroseismic
ages derivable from \emph{Kepler} data, \cite{Miglio+21} find
that the vertical
velocity dispersion for thin disc stars
rises from $\approx10\mathrm{\,km\,s}^{-1}$
at an age of 1\,Gyr to $\approx20\mathrm{\,km\,s}^{-1}$ at
an age of 10\,Gyr. The age--velocity relation for the
exoplanet host stars with ages given in the
NASA Exoplanet Archive is shown in Figure~\ref{fig:age-vel}.
We caution that these ages come with large errors,
as most exoplanet hosts are main-sequence stars,
and the ages are moreover not derived homogeneously
\citep[see][for a discussion]{Adibekyan+21}.
Nonetheless, we do indeed see an increase in the velocity
dispersion with age.
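As a rough illustration (our own power-law fit to the two endpoints quoted above, not a result of \cite{Miglio+21}): if the vertical velocity dispersion grows as $\sigma_z\propto t^\beta$, then
\begin{equation}
\beta \approx \frac{\ln(20/10)}{\ln(10\,\mathrm{Gyr}/1\,\mathrm{Gyr})} \approx 0.3,
\end{equation}
broadly consistent with the heating indices of $\sim0.3$--$0.5$ commonly found for the thin disc.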
\begin{figure}
\includegraphics[width=0.5\textwidth]{eDR3_all_All_age_velocity.pdf}
\caption{Age--velocity relation for exoplanet host
stars with ages given in the NASA Exoplanet Archive.
Stars are divided into Hot Jupiter hosts and non-Hot Jupiter
hosts. Error bars on the ages are not shown (indeed,
they are not always available) but can easily be several Gyr.}
\label{fig:age-vel}
\end{figure}
This age--velocity relation pertains to the Galactic thin
disc. There also exists a chemically distinct and
kinematically hotter stellar population, the thick disc,
whose stars have a higher velocity dispersion
even than thin disc stars of comparable age \citep{Miglio+21}.
However, the existence of a clean kinematic separation,
and the exact relation of kinematics to abundances
and to the early history of the Galaxy are still debated
\citep[see, \emph{e.g.,} discussion in][]{Agertz+21}.
In principle, a clean kinematic separation between
thin and thick discs could be a natural way to interpret
the stellar phase space densities, with ``high-density''
stars being thin disc members and ``low-density'' stars
being thick disc members. However, when we use the
rough kinematic classifications from \cite{Bensby+14}---thin
disc stars at $|\mathbf{v}| < 50\mathrm{\,km\,s^{-1}}$ and
thick disc stars at
$70\mathrm{\,km\,s^{-1}}\lesssim |\mathbf{v}| \lesssim
180\mathrm{\,km\,s^{-1}}$---we see that the ``low-density''
stars are drawn from both the thick disc and the
heated end of the thin disc, with a handful
of halo stars as well (see Figure~\ref{fig:V_rho}).
It is likely that the thick disc stars formed on kinematically
hot orbits early in the Galaxy's history due to early mergers
and the turbulent nature of the gas disc at early times
\citep{Bird+13,Agertz+21,Renaud+20}, and they have maintained
their higher velocity dispersion \citep[e.g.,][]{Miglio+21} to
the present day. This does not affect our argument, since these stars
are all old and kinematically hot, and thus have a low phase
space density.
As the ``high-density'' stars are kinematically cold and
therefore on average young, and the ``low-density'' stars
are a mix of old thick disc stars and old heated thin
disc stars, the stellar age naturally suggests itself
as an explanation for the overabundance of Hot Jupiters
orbiting ``high-density'' hosts. Hot Jupiters can spiral
in to their host stars under tidal drag
\citep[\emph{e.g.,}][]{Jackson+09,Levrard+09,CCJ18}, and if the
tidal dissipation is effective enough, the timescale for this
is also $\lesssim$\,Gyrs, similar to that for kinematic heating
of the host star. \cite{HamerSchlaufman19} previously
identified that Hot Jupiter hosts have colder kinematics
than Cold Jupiter hosts, with similar values of the velocity
dispersions to those we have found, and found that
this difference corresponds to tidal decay of Hot Jupiter orbits
if the tidal quality factor $Q_\star^\prime\lesssim10^7$.
Hot Jupiter hosts, then, are predominantly in high-density
regions of phase space because of a bias towards detecting the
Hot Jupiters around young stars before their tidal destruction,
a bias noted by \cite{CCJ18}. A potential confounding
factor we have not considered is stellar metallicity, which
has a strong influence on the probability of forming
a giant planet \citep{Fischer05}. However, in the Solar neighbourhood
the age--metallicity relation is rather flat back to
ages of around $10$\,Gyr \citep{Freeman02,Sahlholdt21};
for very old stars such as those of the thick disc, both
metallicity and $\alpha$-element abundance may
affect planet formation \citep{Adibekyan+21b}.
We note that we have not shown that there is no
impact of birth environment on planetary system
architecture, only that the differences found
through the phase space method
primarily arise as a result of age and that nothing
is seen when this confounding factor is removed, at
least for the Hot Jupiters. Surveys directly looking
at planets in clusters \citep[\emph{e.g.}][]{Rizzuto+20,
Nardiello+20} could address this, but conclusions
may be tentative because of the low yield of
discoveries: \cite{Brucalassi+16,Brucalassi+17}
found a higher Hot Jupiter rate in the
M67 open cluster than in the field,
but this relies on only three Hot Jupiters found in the cluster.
There are also other trends found by W20 and subsequent
papers \citep{Kruijssen+20,Chevance+21,Longmore+21} that
must be explained; we note however that the finding
of \cite{Chevance+21} that there is a stronger gradient
in planetary radius in multi-planet systems orbiting
``low-density'' hosts may also be an age effect, as this
gradient can result from photoevaporation of the
planets' atmospheres \citep{OwenWu13} and the older ``low-density''
stars have more time for this process to proceed.
The trend in multiplicity found by \cite{Longmore+21}
is harder to explain: they found that low-density hosts have
more multiple systems than high-density hosts. This would seem to
go against an age dependence (multiplicity should reduce with
time), but we note that their good-quality low-density samples
had only 5 or 6 stars, and a larger sample would be required
to confirm or refute this.
We have followed W20 in using the heterogeneous
sample of exoplanets provided by the NASA Exoplanet Archive.
Recently, \cite{Adibekyan+21}
used a smaller homogeneous sample to look for differences between
the ``high-density'' and ``low-density'' populations; the sample
was unfortunately too small to see a significant difference.
A difference did emerge in a larger sample, although
\cite{Adibekyan+21} noted that the ``low-density'' hosts
are older (with a homogeneous age determination)
than the ``high-density'' hosts. This again
underlines the importance of correcting for age. We finish with
two further caveats for future studies that may wish
to look for trends after the age dependence is removed:
first, the differential completeness of \emph{Gaia} across
a target star's neighbourhood should be accounted for
(see Figure~\ref{fig:complete}); and second,
coherent structures can arise in velocity space among
stars of diverse ages through interactions with
matter inhomogeneities in the Galaxy
\citep[\emph{e.g.,}][]{DeSimone+04,Antoja+18,Kushniruk+20}, so
phase space overdensities need not reflect a coeval origin
in a dense environment.\footnote{While
this paper was under review, \cite{Kruijssen21} submitted a paper
linking the phase space densities to such features of
Galactic dynamics: the ripples within the Galactic disc
generated by matter inhomogeneities such as the bar, arms and
satellite galaxies.}
\section{Conclusions}
\begin{enumerate}
\item Classifying stars according to
their local 6D phase space densities, we
verify that Hot Jupiter hosts preferentially
belong to the population of high phase space density.
\item Phase space density shows an extremely
strong anti-correlation with a star's peculiar
velocity with respect to the local standard of rest
in the Galaxy. The high phase space density of Hot Jupiter
hosts is primarily a manifestation of their cold
kinematics.
\item After correcting for the dependency of phase space
density on peculiar motion, there is no evidence that
Hot Jupiter hosts lie in denser regions of phase
space than other stars.
\item The observed correlation is likely to
arise from the bias towards detecting Hot Jupiters
around younger (and therefore kinematically colder)
host stars, before the Hot Jupiters are destroyed by
tidal orbital decay.
\end{enumerate}
A Jupyter notebook and ancillary files
to reproduce these results are
available\footnote{\url{https://github.com/AJMustill/HJGalaxy}}.
\begin{acknowledgements}
AJM acknowledges funding from the Swedish Research
Council (grant 2017-04945), the Swedish National Space
Agency (grant 120/19C), and the Fund of the Walter
Gyllenberg Foundation of the Royal Physiographic
Society in Lund. This research has made use of
the Aurora cluster hosted at LUNARC at Lund University.
AJM wishes to thank Ross Church, Sofia
Feltzing, Diederik Kruijssen, Steve Longmore, Paul McMillan,
Pete Wheatley, Andrew Winter and the anonymous referee for useful
comments and discussions.
This work has made use of data from the European Space Agency (ESA) mission
{\it Gaia}\footnote{\url{https://www.cosmos.esa.int/gaia}}, processed by the {\it Gaia}
Data Processing and Analysis Consortium (DPAC)
\footnote{\url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}}. Funding for the DPAC
has been provided by national institutions, in particular the institutions
participating in the {\it Gaia} Multilateral Agreement.
This research made use of Astropy,\footnote{\url{http://www.astropy.org}}
a community-developed core Python package for
Astronomy \citep{astropy:2013, astropy:2018}.
This research made use of NumPy \citep{2020NumPy-Array},
SciPy \citep{2020SciPy-NMeth}, MatPlotLib \citep{2007CSE.....9...90H},
and Scikit-learn \citep{scikit-learn}.
This research has made use of the NASA Exoplanet Archive,
which is operated by the California Institute of Technology,
under contract with the National Aeronautics and Space Administration
under the Exoplanet Exploration Program.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Discussion and Conclusion}
\label{sec:conclusion}
We developed \textbf{CEIP}, a method for reinforcement learning which combines explicit and implicit priors obtained from task-agnostic and task-specific demonstrations. For implicit priors we use normalizing flows. For explicit priors we use a database lookup with a push-forward retrieval. In three challenging environments, we show that \textbf{CEIP} improves upon baselines.
\textbf{Limitations.} Limitations of CEIP are as follows: 1) \textit{Training time}. The use of demonstrations requires training a sizeable number of flows, which can be time-consuming, albeit mitigated to some extent by parallel training. 2) \textit{Reliance on optimality of expert demonstrations.} Similar to prior work like SKiLD~\cite{pertsch2021skild} and FIST~\cite{Hakhamaneshi2022FIST}, our method assumes availability of optimal state-action trajectories for the target task. The accuracy of those demonstrations impacts results. Future work will focus on improving robustness and generality. 3) \textit{Balance between degrees of freedom and generalization in fitting the flow mixture.} Fig.~\ref{fig:fetchreach_plot_ablation_ours_dataset} reveals that more degrees of freedom in the flow mixture improve the results of CEIP. Our current design uses a linear combination which offers $O(n)$ degrees of freedom ($\mu$ and $\lambda$), where $n$ is the number of flows. However, too many degrees of freedom will result in overfitting. It is interesting future work to study this tradeoff.
\textbf{Societal impact.} Our work helps to train RL agents more efficiently from demonstrations for the same and closely related tasks, particularly when the environment only provides sparse rewards. If successful, this expands the applicability of automation. However, increased automation may also cause job loss which negatively impacts society.
\textbf{Acknowledgements.} This work was supported in part by NSF under Grants 1718221, 2008387, 2045586, 2106825, MRI 1725729, NIFA award 2020-67021-32799, the Jump ARCHES endowment through the Health Care Engineering Systems Center, the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign through the NCSA Fellows program, and the IBM-Illinois Discovery Accelerator Institute. We thank NVIDIA for a GPU.
\section{Experiments}
\label{others}
In this section, we evaluate our CEIP approach on three challenging environments: fetchreach (Sec.~\ref{sec:fetchreach}), kitchen (Sec.~\ref{sec:kitchen}), and office (Sec.~\ref{sec:office}), all of which involve manipulating a robot arm. In each experiment, we study the following questions: 1) Can the algorithm make good use of the demonstrations compared to baselines?
2) Are our core design decisions (e.g., state augmentation with explicit prior and the push-forward technique) indeed helpful?
\textbf{Baselines.} We compare the proposed method to three baselines: PARROT~\cite{Singh2021ParrotDB}, SKiLD~\cite{pertsch2021skild}, and FIST~\cite{Hakhamaneshi2022FIST}. In all environments, we use reward as our criterion (higher is better). The results are averaged over $3$ runs for SKiLD (which is much slower to train) and $9$ runs for all other methods unless otherwise mentioned. To differentiate variants of the PARROT baseline and our method, we use suffixes. We use ``EX'' to refer to variants with explicit prior, and ``forward'' for variants with the push-forward technique. For our method, if we train a task-specific flow on $D_{\text{TS}}=D_{n+1}$, we append the abbreviation ``TS.'' For PARROT, the use of the task-specific data is indicated with ``TS'' and the use of task-agnostic data is indicated with ``TA.''\footnote{The original PARROT in~\cite{Singh2021ParrotDB} is essentially PARROT+TA. It is straightforward to use PARROT directly on the task-specific dataset. Hence, we tried PARROT+TS and PARROT+(TS+TA) as well.} See Table~\ref{tab:abbrCEIP} and Table~\ref{tab:abbrPARROT} for precise correspondence.
\subsection{FetchReach Environment}
\label{sec:fetchreach}
\textbf{Environment Setup.} The agent needs to control a robot arm to move its gripper to a goal location in 3D space, and remain there.
During an episode of $40$ steps, the agent receives a $10$-dimensional state describing its location and outputs a $4$-dimensional action, which indicates the change of coordinates of the agent and the openness of the gripper. It receives a reward of $0$ if it arrives and stays in the vicinity of its target, and $-1$ otherwise. This environment is a harder version of the FetchReach-v1 robotics environment in gym~\cite{Plappert2018MultiGoalRL}, where we increase the average distance from the starting point to the goal, effectively increasing the training difficulty. Moreover, to test the robustness of the algorithm, we sample a random action from a normal distribution before each episode begins, which the agent executes for $x$ steps, with $x\sim U[5, 20]$. For simplicity, we denote the goal generated with azimuth $\frac{\pi d}{4}$ as ``direction $d$'' (e.g., direction $4.5$).
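A minimal sketch of such a pre-episode perturbation, written as a gym wrapper (the class name and structure are ours for illustration, not our actual implementation):
\begin{verbatim}
import numpy as np
import gym

class RandomKickWrapper(gym.Wrapper):
    """Execute one random action for x ~ U[5, 20] steps before
    each episode starts, to test robustness."""
    def reset(self, **kwargs):
        obs = self.env.reset(**kwargs)
        action = np.random.normal(size=self.env.action_space.shape)
        for _ in range(np.random.randint(5, 21)):
            obs, _, _, _ = self.env.step(action)
        return obs
\end{verbatim}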
\textbf{Dataset Setup.} We use trajectories from directions $d\in\{0, 1, \dots, 7\}$ as the task-agnostic data. Each task includes $40$ trajectories, and each of the trajectories has $40$ steps, i.e., $1600$ environment steps in total. The task-specific datasets contain directions $4.5, 5.5, 6.5$, and $7.5$. (The robot cannot reach the other four $.5$ directions due to physical limits.) For each task-specific dataset, we use $4$ trajectories, for a total of $160$ environment steps.
\textbf{Experimental Setup.} For fetchreach, we use a fully-connected deep net with one hidden layer of width $32$ and ReLU~\cite{Agarap2018relu} activation as a standard ``block'' of our algorithm (each block corresponds to a red ``NN'' rectangle in Fig.~\ref{fig:flowcombine}). We have a pair of blocks for $c_i(s)$ and $d_i(s)$ for each flow $f_i$. For flow training, we train $8$ flows for $8$ directions in the task-agnostic dataset without the explicit prior.
We use a batch size of $40$ and train for $1000$ epochs, both for each flow and for the combination of flows, with gradient clipping at norm $10^{-4}$, learning rate $0.001$, and the Adam optimizer~\cite{KingmaB2014Adam}. We keep the model that performs best on the validation set, evaluated at the end of every epoch. For each dataset, we randomly draw $80\%$ of the state-action pairs (or transitions in ablation) as the training set and the remaining $20\%$ as the validation set. The combination of flows is also a block, which outputs both $\mu(s)$ and $\lambda(s)$. See the Appendix for the implementation details of SKiLD, FIST, and PARROT. For each method with RL training, we use soft actor-critic (SAC)~\cite{Haarnoja2018SoftAO} with 30K environment steps, a batch size of $256$, and $1000$ steps of initial random exploration. Unless otherwise noted, all other RL hyperparameters in all experiments use the default values of Stable-baselines3~\cite{Raffin2021stable-baselines3}.
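To make the block structure concrete, a sketch of one single flow in PyTorch follows; the elementwise affine form $f_i(z;s)=\exp(c_i(s))\odot z + d_i(s)$ is one plausible reading of the pair of blocks, with the exponential being our assumption to guarantee invertibility:
\begin{verbatim}
import torch
import torch.nn as nn

def block(in_dim, out_dim, hidden=32):
    # one 'block': a single hidden layer of width 32 with ReLU
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

class SingleFlow(nn.Module):
    """State-conditioned elementwise affine flow f_i: z -> a."""
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.c = block(state_dim, action_dim)  # log-scale head
        self.d = block(state_dim, action_dim)  # shift head

    def forward(self, z, s):
        return torch.exp(self.c(s)) * z + self.d(s)

    def inverse(self, a, s):
        return (a - self.d(s)) * torch.exp(-self.c(s))
\end{verbatim}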
\textbf{Main Results.} Fig.~\ref{fig:fetchreach_plot_main} shows the results for different methods without explicit priors or task-specific single flow $f_{n+1}$. In all four tasks, our method significantly outperforms the other baselines.
This indicates that the flow training indeed helps boost the exploration process.
Na\"ive reinforcement learning from scratch fails in most cases, which underscores the necessity of utilizing demonstrations to aid RL exploration. As this is a simple task with only a few wildly varied trajectories, adding a flow for the task-specific dataset does not improve our method. Noteworthy, neither SKiLD nor FIST works on fetchreach. Their VAE-based architecture with each action sequence as the agent's output (``skill'') can not be trained with the little amount of wildly varied data with short horizon.
Flow-based models like ours and PARROT, which only consider the action of the current step instead of the action sequence, work better.
\textbf{Are more flows helpful for CEIP?} Fig.~\ref{fig:fetchreach_plot_ablation_ours_dataset} shows the performance of our method using a different number of flows, which are trained on the data of the directions that are the closest to the task-specific direction (e.g., directions $5$ and $6$ for $2$ flows with the target being direction $5.5$). The result shows that within a reasonable range, increasing the number of flows improves the expressivity and consequently results of our model. See Appendix~\ref{sec:extraexp} for more ablation studies.
\begin{figure}[t]
\centering
\subfigure[Direction 4.5]{
\begin{minipage}[b]{0.23\linewidth}
\includegraphics[width=\linewidth]{pic/fetchreach/plot-main-4.5.pdf}
\end{minipage}
}
\subfigure[Direction 5.5]{
\begin{minipage}[b]{0.23\linewidth}
\includegraphics[width=\linewidth]{pic/fetchreach/plot-main-5.5.pdf}
\end{minipage}
}
\subfigure[Direction 6.5]{
\begin{minipage}[b]{0.23\linewidth}
\includegraphics[width=\linewidth]{pic/fetchreach/plot-main-6.5.pdf}
\end{minipage}
}
\subfigure[Direction 7.5]{
\begin{minipage}[b]{0.23\linewidth}
\includegraphics[width=\linewidth]{pic/fetchreach/plot-main-7.5.pdf}
\end{minipage}
}
\vspace{-0.2cm}
\caption{Main performance results on the fetchreach environment for different directions, where the lines show the mean reward (higher is better) and the shaded regions the standard deviation. FIST is represented by a dashed line as it does not require RL.}
\label{fig:fetchreach_plot_main}
\vspace{-0.2cm}
\end{figure}
\begin{figure}[t]
\centering
\subfigure[Direction 4.5]{
\begin{minipage}[b]{0.23\linewidth}
\includegraphics[width=\linewidth]{pic/fetchreach/plot-ours-dataset-ablation-4.5.pdf}
\end{minipage}
}
\subfigure[Direction 5.5]{
\begin{minipage}[b]{0.23\linewidth}
\includegraphics[width=\linewidth]{pic/fetchreach/plot-ours-dataset-ablation-5.5.pdf}
\end{minipage}
}
\subfigure[Direction 6.5]{
\begin{minipage}[b]{0.23\linewidth}
\includegraphics[width=\linewidth]{pic/fetchreach/plot-ours-dataset-ablation-6.5.pdf}
\end{minipage}
}
\subfigure[Direction 7.5]{
\begin{minipage}[b]{0.23\linewidth}
\includegraphics[width=\linewidth]{pic/fetchreach/plot-ours-dataset-ablation-7.5.pdf}
\end{minipage}
}
\caption{Ablation on the number of flows used in our method. We observe that more flows lead to better performance, likely because the increased expressivity helps in fitting the expert policy.}
\label{fig:fetchreach_plot_ablation_ours_dataset}
\end{figure}
\iffalse
\textbf{Ablation Study.} To better understand the properties of our method and PARROT, we ablate the architecture of our method (Fig.~\ref{fig:fetchreach_plot_ablation_ours_arch}), the number of flows used for our method (Fig.~\ref{fig:fetchreach_plot_ablation_ours_dataset}), and the data used when training PARROT (See Fig.~\ref{fig:fetchreach_plot_ablation_parrot_dataset} in the appendix). In the first case, we study the effect of different components of our method (the task-specific flow $f_{n+1}$ (TS), the explicit prior (EX) and the push-forward technique with explicit prior (forward)). In the second case, we change the number of flows used in the final combination of $f_\text{TS}$ with no explicit prior or task-specific single flow $f_{n+1}$. In the third case, we select a subset of task-agnostic data that is more relevant to the task-specific dataset and study the effect of how data in the task-agnostic dataset with different levels of relevance to the downstream task affects results. We also test the effect of explicit prior and push-forward technique with task-specific data only. The results can be summarized as follows: 1) using explicit prior and the push-forward technique slows down the reward growth during RL training if applied on a relatively easy and short-horizon environment for our method and PARROT; 2) 3) selecting more relevant data for PARROT is an effective way to improve PARROT, which supports our motivation for combining the flows to select the most useful prior.
\fi
\subsection{Kitchen Environment}
\label{sec:kitchen}
\textbf{Environment Setup.} We use the kitchen environment adopted from D4RL~\cite{fu2020d4rl}, which serves as a testbed for many reinforcement learning and imitation learning approaches, like SPiRL~\cite{pertsch2020spirl}, SKiLD~\cite{pertsch2021skild}, relay policy learning~\cite{Gupta2019RelayPL}, and FIST~\cite{Hakhamaneshi2022FIST}. The agent needs to control a 7-DOF robot arm to complete a sequence of four tasks (e.g., move the kettle, or slide the cabinet) in the correct order. The agent receives a $+1$ reward only upon finishing a task, and $0$ otherwise. The action space is $9$-dimensional and the state space is $60$-dimensional. This environment is very challenging, as it requires high-precision control of the robot arm over a long horizon of $280$ timesteps. Moreover, small noise is applied to each agent action, which requires the agent to be robust.
\textbf{Dataset Setup.} We use two dataset settings, which are adopted from SKiLD and FIST (denoted as \textit{Kitchen-SKiLD} and \textit{Kitchen-FIST} below). In Kitchen-SKiLD, we use $601$ teleoperated sequences that perform a variety of task sequences as the task-agnostic dataset, and use \textit{only one} trajectory for the task-specific dataset. In Kitchen-FIST, we use a subset of the task-agnostic dataset (about $200$--$300$ trajectories) that \textit{does not} contain one particular subtask of the task-specific dataset, and use \textit{only one} trajectory for the task-specific dataset. There are two different task-specific datasets in Kitchen-SKiLD, and four different task-specific datasets in Kitchen-FIST. The latter is significantly harder, as the agent must learn a new task from very little data. For simplicity, we denote them as ``SKiLD-A/B'' and ``FIST-A/B/C/D'' respectively. See Appendix~\ref{sec:app1} for details on each task.
\textbf{Experimental Setup.} We use $k$-means to partition the task-agnostic datasets into $24$ different clusters, and train $24$ flows accordingly. For each flow, we use a fully-connected network with $2$ hidden layers of width $256$ with ReLU activation as a ``block'' for our algorithm. For the combination of flows, we use a fully-connected network with $1$ hidden layer of width $64$ with ReLU activation. Each layer of the deep nets described above (except the output layer) is followed by a 1D batchnorm. The blocks are used analogously to the fetchreach environment. We use a batch size of $256$ for the task-agnostic dataset and a batch size of $128$ for the task-specific dataset. Other training hyperparameters are identical to the fetchreach environment. For RL training, we use proximal policy optimization (PPO)~\cite{Schulman2017PPO} for 200K environment steps, with an update interval of $2048$ (Kitchen-SKiLD) / $4096$ (Kitchen-FIST) environment steps, $60$ epochs per update, and a batch size of $64$.
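The partition itself is a standard $k$-means call; a sketch follows (here clustering on the last state of each trajectory, as we do for the office environment in Sec.~\ref{sec:office}; the exact clustering features for the kitchen environment may differ):
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

# task_agnostic: list of trajectories, each with a (T, 60) state array
last_states = np.stack([traj[-1] for traj in task_agnostic])
labels = KMeans(n_clusters=24).fit_predict(last_states)
clusters = [[t for t, l in zip(task_agnostic, labels) if l == k]
            for k in range(24)]
# one single flow f_i is then trained on each cluster
\end{verbatim}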
\begin{figure}[t]
{
\begingroup
\centering
\begin{minipage}[c]{0.28\linewidth}
\subfigure[Kitchen-SKiLD-A]{\includegraphics[width=\linewidth]{pic/kitchen-SKiLD/plot-main-easy.pdf}}
\subfigure[Kitchen-FIST-B]{\includegraphics[width=\linewidth]{pic/kitchen-FIST/plot-main-B.pdf}}
\end{minipage}
\begin{minipage}[c]{0.28\linewidth}
\subfigure[Kitchen-SKiLD-B]{\includegraphics[width=\linewidth]{pic/kitchen-SKiLD/plot-main-hard.pdf}}
\subfigure[Kitchen-FIST-C]{\includegraphics[width=\linewidth]{pic/kitchen-FIST/plot-main-C.pdf}}
\end{minipage}
\begin{minipage}[c]{0.28\linewidth}
\subfigure[Kitchen-FIST-A]{\includegraphics[width=\linewidth]{pic/kitchen-FIST/plot-main-A.pdf}}
\subfigure[Kitchen-FIST-D]{\includegraphics[width=\linewidth]{pic/kitchen-FIST/plot-main-D.pdf}}
\end{minipage}
\vspace{-0.2cm}
\caption{Comparison on Kitchen-SKiLD and Kitchen-FIST environments.}
\label{fig:kitchen_main}
\endgroup
}
\vspace{-0.3cm}
\end{figure}
\textbf{Main Results.} Fig.~\ref{fig:kitchen_main} shows the main results on Kitchen-SKiLD and Kitchen-FIST. Our method outperforms all other baselines in all of the $6$ settings of the task-agnostic and task-specific datasets.
For our method, we use the task-specific single flow $f_{n+1}$, explicit prior, and the push-forward technique. We compare to the original PARROT formulation. See Appendix~\ref{sec:extraexp} for ablation studies of PARROT with explicit prior and our method without $f_{n+1}$ or explicit prior.
\textbf{Does CEIP overly rely on the task-specific flow if it is used?} One concern for our method could be: does the task-specific single flow dominate the model? Theoretically, when all flows are perfect, a trivial combination of flows that minimizes the training objective is to set $\lambda_{n+1}=1$, $\mu_{n+1}=1$ for the task-specific single flow, and $\lambda_i\approx 0, \mu_i\approx 0$ for $i\neq n+1$. To study this concern, we plot the change of the coefficient $\mu$ during an episode in Fig.~\ref{fig:coeff_and_pathlen}. We observe that the single flow trained on the task-specific dataset does not dominate the combination of flows. The blue curve with legend `TA-8' in Fig.~\ref{fig:coeff_and_pathlen} shows the coefficient for the $8$th flow trained on the task-agnostic dataset. It exhibits an increase of $\mu$ at the end of an episode, as the last subtask in the target task is more relevant to the prior encoded in the $8$th flow. Intuitively, over-reliance is discouraged by our design (Fig.~\ref{fig:arch_illu} in the Appendix) because of the softplus function and the positive offset applied to $\mu$. Over-reliance would require all task-agnostic flows $f_i$ with $i\in\{1,2,\dots,n\}$ to have a coefficient of $\mu_i=0$, which is hard to approach due to the softplus and the positive offset on $\mu$. In fact, a degenerate CEIP is essentially PARROT+TS(+EX+forward), which is worse than our method but still a powerful baseline.
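To make this argument concrete, a minimal sketch of such a coefficient head follows (the offset value and network shape are our assumptions for illustration; the actual combination rule follows the method section):
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoefficientHead(nn.Module):
    """Outputs per-flow coefficients mu(s) and lambda(s); the softplus
    plus a positive offset keeps every mu_i strictly above zero, so no
    task-agnostic flow can be switched off entirely."""
    def __init__(self, state_dim, n_flows, hidden=64, offset=0.1):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * n_flows))
        self.offset = offset  # assumed value, not from the paper

    def forward(self, s):
        mu, lam = self.net(s).chunk(2, dim=-1)
        return F.softplus(mu) + self.offset, lam
\end{verbatim}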
\textbf{Is reinforcement learning useful in cases with a perfect initial reward?} Fig.~\ref{fig:coeff_and_pathlen} shows the episode length of our method on Kitchen-SKiLD-A. Even if the reward is already perfect, the reinforcement learning process is still able to maximize the discounted reward, which shortens the path.
\begin{figure}[t]
\centering
\begin{minipage}[c]{0.4\linewidth}
\centering
\subfigure[Illustration of coefficient]{\includegraphics[height=3cm]{pic/coeff.pdf}}
\end{minipage}
\begin{minipage}[c]{0.4\linewidth}
\centering
\subfigure[Average episode length]{\includegraphics[height=3cm]{pic/kitchen-SKiLD/plot-pathlen-easy.pdf}}
\end{minipage}
\caption{a) Illustration of the coefficient change of a trained CEIP model during an episode of Kitchen-SKiLD-A. This CEIP model is trained with the task-specific single flow and without the explicit prior. `TA-$8$' is the $8$th single flow for the task-agnostic dataset, and `TS' is the single flow for the task-specific dataset. The grey dotted lines mark the partition into different subtasks. b) Average episode length of our method on the Kitchen-SKiLD-A task. The episode ends immediately when all the tasks are completed; thus, a decreasing episode length means that RL helps to find a policy that completes the tasks more efficiently.}
\label{fig:coeff_and_pathlen}
\end{figure}
\iffalse
\textbf{Ablation Study}. Fig.~\ref{fig:kitchen_ablation_ours} shows the difference of performance using different architectures for our method. In both of the environments, we can see that the explicit prior plays a crucial rule in both Kitchen-SKiLD and Kitchen-FIST. Also, for Kitchen-FIST, where one of the target sub-tasks is only part of the task-specific data, the presence of the task-specific single flow $f_{n+1}$ is also crucial for success. We don't find the push-forward technique to help much. Fig.~\ref{fig:kitchen_ablation_PARROT} shows the difference of using different architectures for PARROT. As one target sub-task is completely missing from the task-agnostic data, PARROT+TA fails as expected. Also note that the explicit prior also boosts the performance of PARROT, making it comparable to our method if given enough training time.
\fi
\subsection{Office Environment}
\label{sec:office}
\textbf{Environment and Dataset Setups.} We follow SKiLD, where a robot with $8$-dimensional action space and $97$-dimensional state space needs to put three randomly selected items on a table into three containers in the correct, randomly generated order. The agent receives a $+1$ reward when it completes a subtask (e.g., picking up an item, or dropping the item at the right place), and $0$ otherwise. This environment is even harder than the kitchen environment, as the agent must manipulate freely movable objects and the number of possible subtasks in the task-agnostic dataset is much larger than that in the kitchen environment. We use the same task-agnostic dataset as SKiLD, which contains $2400$ trajectories with randomized subtasks sampled from a scripted policy. For the task-specific dataset, we use $5$ trajectories for a particular combination of tasks.
\textbf{Experimental Setup.} Similar to the kitchen environment, we use $k$-means over the last state of each trajectory and partition the task-agnostic dataset into $24$ clusters. The architecture and training paradigm of the flow model are identical to those used in the kitchen environment. For RL training, we use PPO for 2M environment steps, with an update interval of $4096$ environment steps, $60$ epochs per update, and a batch size of $64$. All other hyperparameters follow the kitchen environment setting. We run each method with $3$ different seeds.
\begin{wrapfigure}{r}{0.73\textwidth}
{
\begingroup
\centering
\begin{minipage}[c]{0.32\linewidth}
\subfigure[Main result]{\includegraphics[width=\linewidth]{pic/office/plot-main.pdf}}
\end{minipage}
\begin{minipage}[c]{0.32\linewidth}
\subfigure[Ours ablation]{\includegraphics[width=\linewidth]{pic/office/plot-ours-ablation.pdf}}
\end{minipage}
\begin{minipage}[c]{0.32\linewidth}
\subfigure[PARROT ablation]{\includegraphics[width=\linewidth]{pic/office/plot-PARROT-ablation.pdf}}
\end{minipage}
\caption{Main result and ablation of our method and PARROT on the office environment.}
\label{fig:office}
\endgroup
}
\end{wrapfigure}
\textbf{Main Results and Ablation.} Fig.~\ref{fig:office}a shows the main result across different methods. Our method with explicit prior, push-forward technique, and task-specific flow outperforms all baselines. FIST works well in this environment, probably for two reasons: 1) there is a sufficient number of task-specific trajectories for the VAE architecture, and 2) the office environment is less noisy than the kitchen environment.
However, as FIST does not contain a reinforcement learning stage, it has no chance to improve upon a decent initial policy, which could have been a good starting point for an RL agent. Fig.~\ref{fig:office}b shows the ablation of our method. While the task-specific single flow $f_{n+1}$ does not help in this environment, the explicit prior greatly improves results. Also, as illustrated, the reward curve of the variants with the explicit prior but without the push-forward technique does not grow, which is due to the agent getting stuck as described at the end of Sec.~\ref{sec:explicitprior}.
Fig.~\ref{fig:office}c shows the ablation result of PARROT, which also emphasizes that the explicit prior and push-forward technique greatly improve results.
\section{Introduction}
\label{sec:intro}
Reinforcement learning (RL) has found widespread use across domains from robotics~\cite{xiali2020relmogen} and game AI~\cite{Silver2017MasteringCA} to recommender systems~\cite{chen2019generative}. Despite its success, reinforcement learning is also known to be sample inefficient. For instance, training a robot arm with sparse rewards to sort objects from scratch still requires many training steps if it is at all feasible~\cite{Singh2021ParrotDB}.
To increase the sample efficiency of reinforcement learning, prior work aims to leverage demonstrations~\cite{Brys2015RLfD, pertsch2021skild, Rengarajan2022LOGO}. These demonstrations can be {\em task-specific}~\cite{Hester2018DQLfD, Brys2015RLfD}, i.e., they directly correspond to and address the task of interest. More recently, the use of {\em task-agnostic} demonstrations has also been studied~\cite{pertsch2021skild, Hakhamaneshi2022FIST, Singh2021ParrotDB, Gupta2019RelayPL}, showing that demonstrations for loosely related tasks can enhance sample efficiency of reinforcement learning agents.
To benefit from either of these two types of demonstrations, most work distills the information within the demonstrations into an {\em implicit prior}, by encoding available demonstrations in a deep net. For example,
SKiLD~\cite{pertsch2021skild} and FIST~\cite{Hakhamaneshi2022FIST} use a variational auto-encoder (VAE) to encode the ``skills,'' i.e., action sequences, in a latent space, and train a prior conditioned on states based on demonstrations to use the skills. Differently, PARROT~\cite{Singh2021ParrotDB} adopts a state-conditional normalizing flow to encode a transformation from a latent space to the actual action space. However, the idea of using the available demonstrations as an \textit{explicit prior} has not received a lot of attention. Explicit priors enable the agent to maintain a database of demonstrations, which can be used to retrieve state-action sequences given an agent's current state. This technique has been utilized in robotics~\cite{Chaplot2020SLAM, pari2021surprising} and early attempts at reinforcement learning with demonstrations~\cite{Brys2015RLfD}. It was also implemented as a baseline in~\cite{Gupta2019RelayPL}. One notable recent exception is FIST~\cite{Hakhamaneshi2022FIST}, which queries a database of demonstrations using the current state to retrieve a likely next state. The use of an explicit prior was shown to greatly enhance performance. However, FIST uses pure imitation learning without any RL, hence it loses the chance to recover through trial and error if the imitation is not good enough.
Our key insight is to leverage demonstrations both explicitly \emph{and} implicitly, thus benefiting from both worlds. To achieve this, we develop \textbf{CEIP}, a method which \textbf{c}ombines \textbf{e}xplicit and \textbf{i}mplicit \textbf{p}riors. \textbf{CEIP} leverages demonstrations implicitly by learning a transformation from a latent space to the real action space via normalizing flows. More importantly, different from prior work such as PARROT and FIST, which combine all the information within a single deep net, \textbf{CEIP} selects the most useful prior by combining multiple flows \textit{in parallel} to form a single large flow. To benefit from demonstrations explicitly, \textbf{CEIP} augments the input of the normalizing flow with a likely future state, which is retrieved via a lookup from a database of transitions. For an effective retrieval, we propose a push-forward technique which ensures that the database returns future states that have not been referred to yet, encouraging the agent to complete the whole trajectory even if it fails on a single subtask.
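A minimal sketch of the push-forward lookup follows (a simplified, single-trajectory version with names of our choosing; the full method handles multiple demonstrations):
\begin{verbatim}
import numpy as np

class PushForwardDB:
    """Retrieve a likely future state from a demonstration; the cursor
    guarantees we never return a state at or before one already
    retrieved, pushing the agent forward along the trajectory."""
    def __init__(self, states, horizon=1):
        self.states = states      # (T, state_dim) demonstration states
        self.horizon = horizon
        self.cursor = 0           # first index still allowed

    def lookup(self, s):
        d = np.linalg.norm(self.states - s, axis=1)
        d[:self.cursor] = np.inf  # push forward: only look ahead
        i = int(np.argmin(d))
        self.cursor = i + 1
        j = min(i + self.horizon, len(self.states) - 1)
        return self.states[j]     # likely future state
\end{verbatim}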
We evaluate the proposed approach on three challenging environments: fetchreach~\cite{Plappert2018MultiGoalRL}, kitchen~\cite{fu2020d4rl}, and office~\cite{Singh2020cog}. In each environment, we study the use of both task-specific and task-agnostic demonstrations. We observe that integrating an explicit prior, especially with our proposed push-forward technique, greatly improves results. Notably, the proposed approach works well on sophisticated long-horizon robotics tasks with only a few, or sometimes even a single, task-specific demonstration.
\section{CEIP: Combining Explicit and Implicit Priors}
\label{sec:method}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{pic/overview-new.pdf}
\caption{Overview of our proposed approach, CEIP. Our approach can be divided into three steps: a) cluster the task-agnostic dataset into different tasks, and then train one flow on each of the $n$ tasks of the task-agnostic dataset; b) train a flow on the task-specific dataset, and then train the coefficients to combine the $n+1$ flows into one large flow $f_\text{TS}$, which is the implicit prior; c) conduct reinforcement learning on the target task; at each timestep, we perform a dataset lookup in the task-specific dataset to find the state most similar to the current state $s$, and return the likely \textit{next state} $\hat{s}_{\text{next}}$ in the trajectory, which serves as the explicit prior.}
\label{fig:overview}
\end{figure}
\subsection{Overview}
\label{sec:overview}
As illustrated in Fig.~\ref{fig:overview}, our goal is to train an autonomous agent to solve challenging tasks despite sparse rewards, such as controlling a robot arm to complete item manipulation tasks (like turning on a switch or opening a cabinet). For this we aim to benefit from available demonstrations. Formally, we consider a task-specific dataset $D_{\text{TS}}=\{\tau^{\text{TS}}_1, \tau^{\text{TS}}_2, \dots, \tau^{\text{TS}}_m\}$, where $\tau^{\text{TS}}_i$ is the $i$-th trajectory of the task-specific dataset, and a task-agnostic dataset $D_{\text{TA}}=\bigcup_{i=1}^{n} D_i$, where $D_i=\{\tau^i_1, \tau^i_2, \dots, \tau^i_{m_i}\}$ subsumes the demonstration trajectories for the $i$-th task in the task-agnostic dataset. Each trajectory $\tau=\{(s_1, a_1),(s_2,a_2),\dots\}$ in the dataset is a sequence of state-action pairs from a complete episode, where $s$ is the state and $a$ is the action. We assume that the number of available task-specific trajectories is very small, i.e., $\sum_{i=1}^{n}m_i\gg m$, which is common in practice. For readability, we will also refer to $D_{\text{TS}}$ as $D_{n+1}$.
Our approach leverages demonstrations implicitly by training a normalizing flow $f_\text{TS}$, which transforms the probability distribution represented by a policy $\pi(z|s)$ over a simple latent probability space $\mathcal{Z}$, i.e., $z\in\mathcal{Z}$, into a reasonable expert policy over the space of real-world actions $\mathcal{A}$. As before, $s$ is the current environment state. Thus, the downstream RL agent only needs to learn a policy $\pi(z|s)$ that results in a probability distribution over latent space $\mathcal{Z}$, which is subsequently mapped via the flow $f_\text{TS}$ to a real-world action $a\in\mathcal{A}$. Intuitively, the MDP in the latent space is governed by a less complex probability distribution, making it easier to learn: given the current state, the flow increases the probability mass of likely actions while reducing the mass of unlikely ones.
Task-agnostic demonstrations contain useful patterns that may be related to the task at hand.
However, not all task-agnostic data are equally useful, as different task-agnostic tasks may require exposing different parts of the action space. Therefore, different from prior work where all data are fed into the same deep net model, we first partition the task-agnostic dataset into different groups according to task similarity so as to increase flexibility. For this we use a classical $k$-means algorithm. We then train a separate flow $f_i$ on each of the groups, and finally combine the flows via learned coefficients into a single flow $f_\text{TS}$. This process makes it possible to expose different parts of the action space as needed, according to perceived task similarity.
Lastly, our approach further leverages demonstrations explicitly, by conditioning the flow not only on the current state but also on a likely next state, to better inform the agent of the state it should try to achieve with its current action. In the following, we first discuss the implicit prior of \textbf{CEIP} in Sec.~\ref{sec:implicitprior}; afterward we discuss our explicit prior in Sec.~\ref{sec:explicitprior}, and the downstream reinforcement learning with both priors in Sec.~\ref{sec:RL}.
\subsection{Implicit Prior}
\label{sec:implicitprior}
To better benefit from demonstrations implicitly, we use a 1-layer normalizing flow as the backbone of our implicit prior. It essentially corresponds to a conditioned affine transformation of a Gaussian distribution. We choose a flow-based model instead of a VAE-based one for two reasons: 1) as the dimensionality before and after the transformation via a normalizing flow remains identical and since the flow is invertible, the agent is guaranteed to have control over the whole action space. This ensures that all parts of the action space are accessible, which is not guaranteed by VAE-based methods like SKiLD or FIST; 2) normalizing flows, especially coupling flows such as RealNVP~\cite{Dinh17RealNVP}, can be easily stacked \textit{horizontally}, so that the combination of parallel flows is also a flow. Among feasible flow models, we found that the simplest 1-layer flow suffices to achieve good results, and is even more robust in training than a more complex RealNVP.
Next, in Sec.~\ref{sec:nf} we first introduce details regarding the normalizing flow $f_i$, before we discuss in Sec.~\ref{sec:adapt} how to combine the flows into one flow $f_\text{TS}$ applicable to the task for which the task-specific dataset contains demonstrations.
\subsubsection{Normalizing Flow Prior.}
\label{sec:nf}
For each task $i$ in the task-agnostic dataset, i.e., for each $D_i$, we train a conditional 1-layer normalizing flow $f_i(z; u)=a$ which maps a latent space variable $z\in\mathbb{R}^q$ to an action $a\in\mathbb{R}^{q}$, where $q$ is the number of dimensions of the real-valued action vector. We let $u$ refer to a conditioning variable. In our case $u$ is either the current environment state $s$ (if no explicit prior is used) or a concatenation of the current and a likely next state $[s, s_{\text{next}}]$ (if an explicit prior is used). Concretely, the formulation of our 1-layer flow is
\begin{equation}
f_i(z; u)=a=\exp\{c_i(u)\}\odot z + d_i(u),
\end{equation}
where $c_i(u)\in\mathbb{R}^q$, $d_i(u)\in\mathbb{R}^q$ are trainable deep nets, and $\odot$ refers to the Hadamard product. The $\exp$ function is applied elementwise. When training the flow, we sample state-action pairs (without explicit prior) or transitions (with explicit prior) $(u, a)$ from the dataset $D_i$, and maximize the log-likelihood $\mathbb{E}_{(u,a)\sim D_i}\log p(a|u)$; refer to \cite{Kobyzev2021NFreview} for how to maximize this objective.
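To make this concrete, the following is a minimal PyTorch sketch of such a conditional 1-layer affine flow and one log-likelihood training step. The class name \texttt{CondAffineFlow}, the layer sizes, and the stand-in batch are our illustrative assumptions, not the exact configuration used in our experiments (see the Appendix for those).
\begin{verbatim}
# Minimal PyTorch sketch of the conditional 1-layer affine flow
# f_i(z; u) = exp{c_i(u)} * z + d_i(u) and its log-likelihood loss.
# Names and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class CondAffineFlow(nn.Module):
    def __init__(self, cond_dim, action_dim, hidden=256):
        super().__init__()
        # c(u), d(u): trainable deep nets producing per-dimension scale/shift
        self.c = nn.Sequential(nn.Linear(cond_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, action_dim))
        self.d = nn.Sequential(nn.Linear(cond_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, action_dim))

    def forward(self, z, u):              # f(z; u), elementwise affine map
        return torch.exp(self.c(u)) * z + self.d(u)

    def log_prob(self, a, u):
        # log p(a|u) = log N(z; 0, I) - 1^T c(u), with z = (a - d(u)) / exp{c(u)}
        c, d = self.c(u), self.d(u)
        z = (a - d) * torch.exp(-c)
        log_pz = -0.5 * (z ** 2 + torch.log(torch.tensor(2 * torch.pi))).sum(-1)
        return log_pz - c.sum(-1)

flow = CondAffineFlow(cond_dim=20, action_dim=4)
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
u, a = torch.randn(256, 20), torch.randn(256, 4)  # stand-in batch (u, a) ~ D_i
loss = -flow.log_prob(a, u).mean()                # maximize E log p(a|u)
opt.zero_grad(); loss.backward(); opt.step()
\end{verbatim}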
In the discussion above, we assume the decomposition of the task-agnostic dataset into tasks to be given. If such a decomposition is not provided (e.g., for the kitchen and office environments in our experiments), we perform a $k$-means clustering to divide the task-agnostic dataset into different parts. The clustering algorithm operates on the last state of a trajectory, which is used to represent the whole trajectory. The intuition is two-fold. First, for many real-world MDPs, achieving a particular terminal state is more important than the actions taken~\cite{Seyed2019Divergence}. For example, when we control a robot to pick and place items, we want all target items to reach the right place eventually;
however, we do not care too much about the actions taken to achieve this state. Second, among all the states, the final state is often the most informative about the task that the agent has completed.
The number of clusters $k$ in the $k$-means algorithm is a hyperparameter, which empirically should be larger than the number of dimensions of the action space.
Though we assume the task-agnostic dataset can be partitioned into clusters, our experiments show that our approach is robust: good results are achieved even without a precise ground-truth decomposition.
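As a concrete illustration, the clustering step can be implemented in a few lines. The sketch below assumes trajectories are given as lists of $(s,a)$ pairs and uses scikit-learn's $k$-means; the function and variable names are ours.
\begin{verbatim}
# Sketch: partition the task-agnostic dataset by k-means on final states.
# `trajectories` is a list of [(s_1, a_1), (s_2, a_2), ...] sequences.
import numpy as np
from sklearn.cluster import KMeans

def cluster_trajectories(trajectories, k):
    # represent each trajectory by the state of its last state-action pair
    final_states = np.stack([traj[-1][0] for traj in trajectories])
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(final_states)
    # group trajectories into the clusters D_1, ..., D_k
    return [[t for t, l in zip(trajectories, labels) if l == i]
            for i in range(k)]
\end{verbatim}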
In addition to the flows for the clusters of the task-agnostic dataset, we train a flow $f_{n+1}(z; u)=a$ on the task-specific dataset $D_{n+1}=D_{\text{TS}}$, using the same maximum log-likelihood objective. This flow is optional but can always be trained, since the task-specific dataset is available by assumption. It is not necessary when the task is relatively simple and the episodes are short (e.g., the fetchreach environment in the experiment section), but becomes particularly helpful in scenarios where some subtasks of a task sequence only appear in the task-specific dataset (e.g., the kitchen environment).
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{pic/combine-new.pdf}
\caption{An illustration of how we combine different flows into one large flow for the task-specific dataset. Each red block of ``NN'' stands for a neural network. Note that $c_i(u)$ and $d_i(u)$ are vectors, while $\mu_i$ and $\lambda_i$ are the $i$-th dimension of $\mu(u)$ and $\lambda(u)$.}
\label{fig:flowcombine}
\end{figure}
\subsubsection{Few-shot Adaptation.}
\label{sec:adapt}
The flow models discussed in Sec.~\ref{sec:nf} learn which parts of the action space should be more strongly exposed from the latent space. However, not all flows expose parts of the action space that are useful for the current state.
For example, the target task may require the agent to move its gripper upwards at a particular location, while in the task-agnostic dataset the robot more often moves the gripper downwards to complete another task. In order to select the most useful prior, we need to tune our set of flows learned on the task-agnostic dataset to the small number of trajectories available in the task-specific dataset. To ensure that this does not lead to overfitting, as only a very small number of task-specific trajectories are available, we train a set of coefficients that selects the flow that works best for the current task.
Concretely, given all the trained flows, we train a set of coefficients to combine the flows $f_1$ to $f_n$ trained on the task-agnostic data, and also the flow $f_{n+1}$ trained on the task-specific data. The coefficients select from the set of available flows the most useful one. To achieve this, we use the combination flow illustrated in Fig.~\ref{fig:flowcombine} which is formally specified as follows:
\begin{equation}
f_\text{TS}(z;u)=\left(\sum_{i=1}^{n+1}\mu_i(u)\exp\{c_i(u)\}\right)\odot z+\left(\sum_{i=1}^{n+1}\lambda_i(u)d_i(u)\right).
\end{equation}
Here, $\mu_i(u)\in\mathbb{R}$ and $\lambda_i(u)\in\mathbb{R}$ are the $i$-th entries of the deep nets $\mu(u)\in\mathbb{R}^{n+1}$ and $\lambda(u)\in\mathbb{R}^{n+1}$, respectively, which yield the coefficients, while the deep nets $c_i$ and $d_i$ are frozen. As before, the $\exp$ function is applied elementwise. We use a softplus activation and an offset at the output of $\mu$ to force $\mu_i(u)\geq 10^{-4}$ for all $i$ for numerical stability. Note that the combined flow $f_{\text{TS}}$, consisting of multiple 1-layer flows, is itself a 1-layer normalizing flow. Hence, all the compelling properties over VAE-based architectures described at the beginning of Sec.~\ref{sec:implicitprior} remain valid. To train the combined flow, we use the same log-likelihood loss $\mathbb{E}_{(u,a)\sim D_{\text{TS}}}\log p(a|u)$ as for training the single flows, now optimizing the deep nets $\mu(u)$ and $\lambda(u)$ which parameterize $f_{\text{TS}}$.
The employed combination of flows can straightforwardly be extended to more complex flows, e.g., RealNVP~\cite{Dinh17RealNVP} or Glow~\cite{Kingma2018GlowGF}.
However, we found the simple formulation discussed above to work remarkably well and to be robust.
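For concreteness, a hedged PyTorch sketch of the combined flow of Eq.~(2) is given below. It assumes the single flows follow the \texttt{CondAffineFlow} sketch from Sec.~\ref{sec:nf}; the layer sizes of $\mu$ and $\lambda$ are again illustrative assumptions.
\begin{verbatim}
# Sketch: combine frozen single flows via learned coefficients mu(u), lam(u)
# as in Eq. (2). Assumes the CondAffineFlow sketch above; sizes illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CombinedFlow(nn.Module):
    def __init__(self, flows, cond_dim, hidden=64):
        super().__init__()
        self.flows = nn.ModuleList(flows)
        for p in self.flows.parameters():     # freeze c_i, d_i
            p.requires_grad_(False)
        n = len(flows)                        # n + 1 flows in total
        self.mu = nn.Sequential(nn.Linear(cond_dim, hidden), nn.ReLU(),
                                nn.Linear(hidden, n))
        self.lam = nn.Sequential(nn.Linear(cond_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n))

    def scale_shift(self, u):
        mu = F.softplus(self.mu(u)) + 1e-4    # enforce mu_i(u) >= 1e-4
        lam = self.lam(u)
        scale = sum(mu[:, i:i+1] * torch.exp(f.c(u))
                    for i, f in enumerate(self.flows))
        shift = sum(lam[:, i:i+1] * f.d(u)
                    for i, f in enumerate(self.flows))
        return scale, shift

    def forward(self, z, u):                  # f_TS(z; u) = scale * z + shift
        scale, shift = self.scale_shift(u)
        return scale * z + shift

    def log_prob(self, a, u):                 # loss for training mu, lam only
        scale, shift = self.scale_shift(u)
        z = (a - shift) / scale
        log_pz = -0.5 * (z ** 2 + torch.log(torch.tensor(2 * torch.pi))).sum(-1)
        return log_pz - torch.log(scale).sum(-1)
\end{verbatim}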
\subsection{Explicit Prior}
\label{sec:explicitprior}
Beyond distilling information from demonstrations into deep nets which are then used as implicit priors, we find the explicit use of demonstrations to also be remarkably useful.
To benefit from it, we encode future state information into the input of the flow. More specifically, instead of sampling $(s,a)$-pairs from a dataset $D$ for training the flows, we sample a \textit{transition} $(s, a, s_{\text{next}})$ from $D$. During training, we concatenate $s$ and $s_{\text{next}}$ before feeding them into a flow, i.e., $u=[s, s_{\text{next}}]$ instead of $u=s$.
However, we do not know the future state $s_{\text{next}}$ when deploying the policy. To obtain an estimate, we use task-specific demonstrations as explicit priors.
More formally, we use the trajectories within the task-specific dataset $D_{\text{TS}}$ as a database. This is manageable as we assume the task-specific dataset to be small. For each environment step of reinforcement learning with current state $s$, we perform a lookup, where $s$ is the query, the states $s_{\text{key}}$ in the trajectories are the keys, and their corresponding next states $s_{\text{next}}$ are the values. Concretely, we assume $s_{\text{next}}$ belongs to trajectory $\tau$ in the task-specific dataset $D_{\text{TS}}$, and define $\hat{s}_{\text{next}}$ as the result of the database retrieval with respect to the given query $s$, i.e.,
\begin{equation}
\begin{aligned}
\hat{s}_{\text{next}}&= \text{argmin}_{s_{\text{next}}|(s_{\text{key}}, a, s_{\text{next}})\in D_{\text{TS}}} [(s_{\text{key}}-s)^2+C\cdot \delta (s_{\text{next}})], \text{where}\\
\delta(s_{\text{next}})&=\begin{cases}1\ \text{if }\exists s'_{\text{next}}\in\tau, \text{ s.t. } s'_{\text{next}}\text{ is no earlier than $s_{\text{next}}$ in $\tau$ and has been retrieved},\\0\ \text{otherwise}.\end{cases}
\label{eq:ex}
\end{aligned}
\end{equation}
In Eq.~\eqref{eq:ex}, $C$ is a constant and $\delta$ is an indicator function. We set $u=[s, \hat{s}_{\text{next}}]$ as the condition, feed it into the trained flow $f_{\text{TS}}$, and map the latent space element $z$ obtained from the RL policy to the real-world action $a$. The penalty term $\delta$ implements a push-forward technique, which aims to push the agent to move forward
instead of staying put, by imposing monotonicity on the retrieved $\hat{s}_{\text{next}}$. To see why this is needed, consider an agent at a particular state $s$ whose flow $f_{\text{TS}}$, conditioned on $u=[s, \hat{s}_{\text{next}}]$, maps the chosen latent $z$ to a real-world action $a$ that does not modify the environment. Without the penalty term, the agent would remain at the same state, retrieve the same likely next state, and hence repeat the same ineffective action indefinitely. Intuitively, the term discourages 1) retrieving the same state twice, and 2) returning to earlier states in a given trajectory. In our experiments, we set $C=1$.
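A minimal NumPy sketch of this retrieval, including the push-forward penalty of Eq.~\eqref{eq:ex}, is given below. Here \texttt{database} holds, per trajectory, arrays of key states and next states, and \texttt{last\_ref} tracks the latest retrieved index per trajectory within the current episode (initialized to $-1$); these names are our assumptions.
\begin{verbatim}
# NumPy sketch of the explicit-prior lookup of Eq. (3) with push-forward.
# `database` holds per trajectory (key_states, next_states) arrays; `last_ref`
# maps a trajectory index to the latest retrieved step (reset to -1 at the
# start of each episode). All names are our assumptions.
import numpy as np

C = 1.0  # push-forward penalty constant (C = 1 in our experiments)

def retrieve_next_state(s, database, last_ref):
    best, best_cost = None, np.inf
    for t, (keys, next_states) in enumerate(database):
        dists = ((keys - s) ** 2).sum(axis=1)
        # delta = 1 for steps at or before the last retrieved step of tau
        penalty = C * (np.arange(len(keys)) <= last_ref[t])
        cost = dists + penalty
        i = int(cost.argmin())
        if cost[i] < best_cost:
            best_cost, best = cost[i], (t, i, next_states[i])
    t, i, s_next = best
    last_ref[t] = i  # update the push-forward marker r(tau)
    return s_next
\end{verbatim}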
\subsection{Reinforcement Learning with Priors}
\label{sec:RL}
Given the implicit and explicit priors, we use RL to train a policy $\pi(z|s)$ to accomplish the target task demonstrated in the task-specific dataset. As shown in Fig.~\ref{fig:overview}, the RL agent receives a state $s$ and provides a latent space element $z$. The conditioning variable of the flow is retrieved via the dataset lookup described in Sec.~\ref{sec:explicitprior} and the real-world action $a$ is then computed using the flow.
Note that our approach is suitable for any RL method, i.e., the policy $\pi(z|s)$ can be trained using any RL algorithm such as proximal policy optimization (PPO)~\cite{Schulman2017PPO} or soft actor-critic (SAC)~\cite{Haarnoja2018SoftAO}.
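Putting both priors together, one environment step can be sketched as follows, reusing the hypothetical \texttt{retrieve\_next\_state} and flow sketches from above; \texttt{policy} stands for any RL policy acting in the latent space.
\begin{verbatim}
# Sketch of one environment step with both priors, reusing the hypothetical
# retrieve_next_state and flow sketches above; `policy` maps a state to a
# latent z (any RL algorithm acting in Z).
import numpy as np
import torch

def env_step(env, s, policy, flow, database, last_ref):
    s_next_hat = retrieve_next_state(s, database, last_ref)  # explicit prior
    u = torch.as_tensor(np.concatenate([s, s_next_hat]),
                        dtype=torch.float32).unsqueeze(0)
    z = torch.as_tensor(policy(s), dtype=torch.float32).unsqueeze(0)
    with torch.no_grad():                                    # implicit prior
        a = flow(z, u).squeeze(0).numpy()
    return env.step(a)
\end{verbatim}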
\section{Preliminaries}
\label{sec:pre}
\textbf{Reinforcement Learning.} Reinforcement learning (RL) aims to train an \textit{agent} to make the `best' decision towards completing a particular task in a given environment. The environment and the task are often described as a Markov Decision Process (MDP), which is defined by a tuple $(\mathcal{S}, \mathcal{A}, T, r, \gamma)$. In timestep $t$ of the Markov process, the agent observes the current \textit{state} $s_t\in\mathcal{S}$, and executes an \textit{action} $a_t\in\mathcal{A}$ following some probability distribution, i.e., \textit{policy} $\pi(a_t|s_t)\in\Delta(\mathcal{A})$, where $\Delta(\mathcal{A})$ denotes the probability simplex over elements in space $\mathcal{A}$. Upon executing action $a_t$, the state of the agent changes to $s_{t+1}$ following the dynamics of the environment, which are governed by the \textit{transition function} $T(s_t,a_t):\mathcal{S}\times\mathcal{A}\rightarrow \Delta(\mathcal{S})$. Meanwhile, the agent receives a \textit{reward} $r(s_t, a_t)\in\mathbb{R}$.
The agent aims to maximize the cumulative reward $\sum_t \gamma^tr(s_t,a_t)$, where $\gamma\in[0, 1]$ is the discount factor. One complete run in an environment is called an \textit{episode}, and the corresponding state-action pairs $\tau = \{(s_1, a_1), (s_2, a_2), \dots\}$ form a \textit{trajectory} $\tau$.
\textbf{Normalizing Flows.} A normalizing flow~\cite{Kobyzev2021NFreview} is a generative model that transforms elements $z_0$ drawn from a simple distribution $p_z$, e.g., a Gaussian, to elements $a_0$ drawn from a more complex distribution $p_a$. For this transformation, a bijective function $f$ is used, i.e., $a_0=f(z_0)$. The use of a bijective function ensures that the log-likelihood of the more complex distribution is tractable at any point, and that samples of this distribution can easily be generated by drawing samples from the simple distribution and pushing them through the flow. Formally, the core idea of a normalizing flow can be summarized via $p_a(a_0)=p_z(f^{-1}(a_0)) \left|\frac{\partial f^{-1}(a)}{\partial a}\right|_{a=a_0}$, where $\left|\cdot\right|$ is the determinant (guaranteed positive by flow design), $a$ is a random variable with the desired more complex distribution, and $z$ is a random variable governed by a simple distribution. To efficiently compute the determinant of the Jacobian matrix of $f^{-1}$, special constraints are imposed on the form of $f$. For example, coupling flows like RealNVP~\cite{Dinh17RealNVP} and autoregressive flows~\cite{Papamakarios2017MAFDE} constrain the Jacobian of $f^{-1}$ to be triangular.
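As a quick sanity check of this formula, consider the 1-D affine flow $a=f(z)=2z+1$ with $z\sim N(0,1)$, a minimal instance of the flows used above; the change-of-variables expression reproduces the exact density of $a\sim N(1,4)$:
\begin{verbatim}
# Numeric sanity check of the change-of-variables formula for the 1-D affine
# flow a = f(z) = 2z + 1 with z ~ N(0, 1), so a ~ N(1, 4):
# p_a(a0) = p_z(f^{-1}(a0)) * |d f^{-1}/da| = N((a0 - 1)/2; 0, 1) * 1/2.
from scipy.stats import norm

a0 = 0.7
lhs = norm.pdf(a0, loc=1.0, scale=2.0)  # exact density of a = 2z + 1
rhs = norm.pdf((a0 - 1.0) / 2.0) * 0.5  # change-of-variables formula
assert abs(lhs - rhs) < 1e-12
\end{verbatim}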
\section{Related Work}
\label{gen_inst}
\textbf{Reinforcement Learning with Demonstrations.} Using demonstrations to improve the sample efficiency of RL is an established direction~\cite{Schaal1996LfD, Wang2021LearningTW, Giusti2016Forest, Lynch2019LearningLP}. Recently, the use of task-agnostic demonstrations has gained popularity, as task-specific data need to be sampled from a particular expert and can be expensive to acquire~\cite{pertsch2020spirl, pertsch2021skild, Singh2021ParrotDB}. To utilize such priors, skill-based methods such as SPiRL~\cite{pertsch2020spirl}, SKiLD~\cite{pertsch2021skild}, and SIMPL~\cite{nam2022simpl} extract action sequences from the dataset with a VAE-based model, while TRAIL~\cite{Yang2021TRAIL} recovers a transition model from a task-agnostic dataset with uniformly randomly sampled actions. Our method considers the situation where both task-agnostic \emph{and} task-specific data exist and significantly improves results over prior work with similar settings, e.g., SKiLD~\cite{pertsch2021skild}.
\textbf{Action Priors.} An action prior is a common way to utilize demonstrations for reinforcement learning~\cite{pertsch2021skild} and imitation learning~\cite{Hakhamaneshi2022FIST}. Most work uses an implicit prior, where a probability distribution over actions conditioned on a state is learned by a deep net and then used to rule out unlikely attempts~\cite{Biza2021ActionPrior}, to form a hierarchical structure~\cite{pertsch2021skild, Singh2021ParrotDB}, or to serve as a regularizer for RL training~\cite{pertsch2021skild, Rengarajan2022LOGO}, preventing the agent from straying too far from expert demonstrations. Explicit priors are less explored. They come in the form of nearest neighbors~\cite{Arunachalam2022dexterbot} (as in our work) or in the form of locally weighted regression~\cite{pari2021surprising}. They are utilized in robotics~\cite{Arunachalam2022dexterbot,Chaplot2020SLAM, Schaal1994Juggling, pari2021surprising} and in early work on RL with demonstrations~\cite{Brys2015RLfD}.
Another way to explicitly use demonstrations is to
fill the replay buffer of an RL algorithm with transitions sampled from an expert dataset to help exploration~\cite{Vecerk2017LeveragingDF,Nair2020AWAC, Hester2018DQLfD}. Different from all such work, we propose a novel way of using both implicit and explicit priors.
\textbf{Normalizing Flow.} Normalizing flows are generative models that can be used for variational inference~\cite{vdberg2018sylvester, Kingma2016VIflow} and density estimation~\cite{Papamakarios2017MAFDE, Huang2018NAF},
and come in different forms: RealNVP~\cite{Dinh17RealNVP}, Glow~\cite{Kingma2018GlowGF}, or autoregressive flows~\cite{Papamakarios2017MAFDE}. Many methods use normalizing flows in reinforcement learning~\cite{Ma2020NFMAS, Tang2018BoostingTR, TouatiSRPV19, Ward2019ImprovingEI, mazoure2019leveraging, Khader2021Stable} and imitation learning~\cite{Chang2021ILFLOW}. However, most prior work uses normalizing flows as a strong density estimator to exploit a richer class of policies. Most closely related to our work is PARROT~\cite{Singh2021ParrotDB}, which trains a single normalizing flow as an implicit prior. Different from our work, PARROT neither differentiates tasks within the task-agnostic dataset nor uses an explicit prior. More importantly, different from prior work, we develop a simple yet effective way to combine flows using learned coefficients. While some approaches combine flows via variational mixtures~\cite{Ciobaru2021Mixtures, Pires2020VariationalMO}, they have not been shown to succeed on challenging RL tasks.
\textbf{Few-shot Generalization.}
Few-shot generalization~\cite{TriantafillouZD20} is broadly related:
a model is first trained across different datasets, and then adapted to a new dataset with a small sample size. For example, similar to our work, FLUTE~\cite{Triantafillou2021FLUTE}, SUR~\cite{Dvornik2020SUR}, and URT~\cite{Lu2021URT} train models on multiple datasets, which are then combined via learned weights for few-shot adaptation. Other methods share parameters across different tasks and only use some components within the model for adaptation~\cite{Puigcerver2021STL, Triantafillou2021FLUTE, Zintgraf2019FastCA, Flennerhag2020MetaLearningWW,RenYehNEURIPS2020}. While most of this work focuses on classification tasks, we address more complex RL tasks. Also, different from existing work, we found training independent 1-layer flows without shared layers to be more flexible and free from negative transfer, as also reported by~\cite{Javaloy2021RotoGradGH}.
\section*{Appendix: CEIP: Combining Explicit and Implicit Priors for Reinforcement Learning with Demonstrations}
\label{sec:app}
This Appendix is organized as follows. First, we reiterate and highlight our key observations. In Sec.~\ref{sec:alg}, we then provide the pseudocode for training the implicit prior and the downstream reinforcement learning. Afterwards, we provide additional implementation details of the proposed method and major baselines in Sec.~\ref{sec:implementation}, and additional details of experimental settings in Sec.~\ref{sec:app1}. In Sec.~\ref{sec:extraexp}, we provide additional experimental results and ablation studies. In Sec.~\ref{sec:comp_resource}, we describe the computational resources consumed by and the training time of each method. Finally, in Sec.~\ref{sec:license}, we describe the licenses of assets which we used to develop our code.
The key findings of our work include the following:
\begin{itemize}
\item \textbf{Is a task-specific flow necessary?} In environments where the episode length is relatively short and the dynamics are relatively simple, CEIP works better without the task-specific flow, explicit prior, and push-forward technique, as these components unnecessarily increase training complexity. This is shown in Sec.~\ref{sec:expFR}.
\item \textbf{When is a task-specific flow helpful?} In environments where some tasks of the task-specific dataset are not part of the task-agnostic dataset, a flow trained on the task-specific dataset improves performance. This is shown in Sec.~\ref{sec:expKT}.
\item \textbf{How related should the tasks in the task-agnostic dataset be to the task at hand?} For both PARROT and CEIP, more related data in the task-agnostic dataset are beneficial. However, CEIP can automatically discover and compose related flows; in contrast, PARROT works better only when the dataset fed into the normalizing flow is manually picked to be more relevant to the target task. This is shown in Sec.~\ref{sec:expFR}.
\item \textbf{Will ground-truth labels help the performance of CEIP?} Ground-truth labels will sometimes improve the performance of CEIP; however, this is not always the case. This is shown in Sec.~\ref{sec:expKT}.
\item \textbf{How do simple baselines, e.g., behavior cloning and replaying demonstrations, perform?} We find that these simple baselines do not work very well, which indicates the non-trivial nature of our testbed. However, introducing an explicit prior significantly improves the performance of behavior cloning. This is shown in Sec.~\ref{sec:expKT}.
\item \textbf{How robust is CEIP with respect to the precision of task-specific demonstrations?} Similar to prior work such as FIST, imprecise task-specific demonstrations will affect performance. Nevertheless, we find CEIP to be more robust than prior work. This is shown in Sec.~\ref{sec:expKT}.
\item \textbf{What is the impact of using an explicit prior in PARROT?} PARROT results improve when an explicit prior is used, which further supports the design of CEIP. See the ablation studies in Sec.~\ref{sec:expFR} and Sec.~\ref{sec:expKT}.
\end{itemize}
To easily compare CEIP to baselines, we summarize all results achieved at the end of the training process for the proposed method and baselines on all testbeds in Table~\ref{tab:summary}. To better understand the behavior of each method, please also see the code and videos of trajectories which are part of this Appendix.
\begin{table}[t]
\setlength{\tabcolsep}{1pt}
\centering
\begin{tabular}{cccccc}\toprule
Environment & CEIP (ours) & PARROT+TA & PARROT+TS & FIST & SKiLD \\ \midrule
Fetchreach-4.5 &$\mathbf{-10.03}^\dagger${\scriptsize $\pm 0.64$} & $-19.33${\scriptsize$\pm 9.59$} & $-20.30${\scriptsize $\pm 10.62$} & $-34.80${\scriptsize$\pm 8.33$} & $-39.91${\scriptsize$\pm 0.14$} \\
Fetchreach-5.5 & $\mathbf{-9.76}^\dagger${\scriptsize$\pm 0.47$} & $-20.49${\scriptsize $\pm 11.51$} & $-14.32${\scriptsize$\pm 7.53$} & $-39.86${\scriptsize$\pm 0.50$} & $-38.38${\scriptsize$\pm 2.81$}\\
Fetchreach-6.5 & $\mathbf{-9.08}^\dagger${\scriptsize$\pm 0.36$} & $-14.52${\scriptsize$\pm 9.44$} & $-18.52${\scriptsize$\pm 2.34$} & $-38.30${\scriptsize$\pm 5.28$} & $-40.00${\scriptsize $\pm 0.00$}\\
Fetchreach-7.5 & $\mathbf{-10.29}^\dagger${\scriptsize$\pm 0.67$} & $\mathbf{-10.34}${\scriptsize$\pm 0.79$} & $\mathbf{-10.24}${\scriptsize$\pm 0.69$} & $-39.87${\scriptsize $\pm 0.72$} & $-38.45${\scriptsize $\pm 2.67$}\\ \midrule
Kitchen-SKiLD-A & $\mathbf{4.00}${\scriptsize $\pm 0.00$} & $2.52${\scriptsize $\pm 0.96$} & $0.51${\scriptsize $\pm 0.46$} & $2.70${\scriptsize $\pm 1.23$} & $0.06${\scriptsize $\pm 0.10$}\\
Kitchen-SKiLD-B & $\mathbf{3.93}${\scriptsize $\pm 0.08$} & $1.13${\scriptsize $\pm 0.35$} & $1.25${\scriptsize$\pm 0.60$} & $1.17${\scriptsize$\pm 0.93$} & $0.48${\scriptsize$\pm 0.48$}\\ \midrule
Kitchen-FIST-A & $\mathbf{3.95}${\scriptsize $\pm 0.05$} & $1.94${\scriptsize $\pm 0.07$} & $2.40${\scriptsize $\pm 0.31$} & $0.33${\scriptsize $\pm 0.70$} & $0.67${\scriptsize $\pm 1.15$}\\
Kitchen-FIST-B & $\mathbf{3.89}${\scriptsize $\pm 0.07$} & $0.00${\scriptsize $\pm 0.00$} & $1.85${\scriptsize $\pm 0.05$} & $1.20${\scriptsize$\pm 0.54$} & $0.00${\scriptsize $\pm 0.00$}\\
Kitchen-FIST-C & $\mathbf{3.92}${\scriptsize $\pm 0.06$} & $0.96${\scriptsize $\pm 0.06$} & $2.07${\scriptsize $\pm 0.23$} & $0.00${\scriptsize$\pm 0.00$}& $0.33${\scriptsize $\pm 0.57$}\\
Kitchen-FIST-D & $\mathbf{3.94}${\scriptsize $\pm 0.07$} & $1.92${\scriptsize$\pm 0.06$} & $2.27${\scriptsize$\pm 0.24$} & $0.53${\scriptsize $\pm 0.50$}& $1.67${\scriptsize $\pm 0.58$}\\
Office & $\mathbf{6.33}${\scriptsize$\pm 0.30$} & $2.05${\scriptsize $\pm 0.31$} & $1.97${\scriptsize $\pm 0.22$} & $5.50${\scriptsize $\pm 1.12$}& $0.50${\scriptsize $\pm 0.50$} \\ \bottomrule
\end{tabular}
\caption{Summary of the results of each method on all environments at the end of training (higher is better). For CEIP (our method), we do not use the explicit prior, task-specific single flow, and push-forward technique for fetchreach (denoted by `$\dagger$'); we use all of them for the other experiments. For PARROT, we do not use the explicit prior, task-specific single flow, or push-forward technique, as all of them are our contributions. However, as shown in the ablation studies in Sec.~\ref{sec:extraexp}, these components are general and can be used to improve the performance of PARROT.}
\label{tab:summary}
\end{table}
\section{Algorithm Details}
\label{sec:alg}
Alg.~\ref{alg:NF} provides the pseudocode for training the implicit prior. Alg.~\ref{alg:RL} illustrates how we use the policy $\pi(z|s)$ and the flows to compute the real-world action $a$, when an explicit prior is available (i.e., condition $u=[s, s_{\text{next}}]$) and when using the push-forward technique.
\begin{algorithm}[t]
\caption{Training of Implicit Prior}
\label{alg:NF}
\SetAlgoVlined
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\SetKw{KwBy}{by}
\Input{datasets $D_1, D_2, \dots, D_n, D_{\text{TS}}$}
\Input{number of training epochs $M$ for single flows and $M_2$ for the combination}
\Input{learning rate $\eta$}
\Output{normalizing flow $f_{\text{TS}}$, parameterized by $\mu(u)$, $\lambda(u)$, $c_i(u)$, and $d_i(u)$ where $i\in\{1,2,\dots,n+1\}$}
\newlength{\commentWidth}
\setlength{\commentWidth}{6cm}
\newcommand{\atcp}[1]{\tcp*[f]{\makebox[\commentWidth]{#1\hfill}}}
\Begin{
\sf{
\tcp{Training single flows}
\nl \For(\atcp{recall that we denote $D_{\text{TS}}=D_{n+1}$}){$i\in\{1,2,\dots,n+1\}$}{
\nl \For(\atcp{for loop over epochs}){$j\in\{1,2,\dots,M\}$}{
\nl \ForEach(\atcp{for each data point}){$(u,a)\sim D_i$}{
\nl {$z_0\gets\frac{a-d_i(u)}{\exp\{c_i(u)\}}$}\atcp{elementwise division}
\nl {$L=\log p_z(z_0)-c_i(u)^T\mathbf{1}$}\atcp{$z\sim N(0,I)$}
\nl {$c_i\gets c_i+\eta\times\frac{\partial L}{\partial c_i}$}\\
\nl {$d_i\gets d_i+\eta\times\frac{\partial L}{\partial d_i}$}
}
}
}
\tcp{Training the combination of flows}
\nl \For(\atcp{for loop over epochs}){$j\in\{1,2,\dots,M_2\}$}{
\nl \ForEach(\atcp{for each data point}){$(u,a)\sim D_{\text{TS}}$}{
\nl {$\mu_0\gets \mu(u)$}\\
\nl {$\lambda_0\gets \lambda(u)$}\\
\nl {$c \gets \sum_{i=1}^{n+1}\mu_{0,i}\exp\{c_i(u)\}$}\\
\nl {$d \gets \sum_{i=1}^{n+1}\lambda_{0,i}d_i(u)$}\\
\nl {$z_0\gets\frac{a-d}{c}$}\atcp{elementwise division}\\
\nl {$L\gets\log p_z(z_0)-\mathbf{1}^T\log c$}\atcp{$z\sim N(0, I)$}\\
\nl {$\mu\gets\mu+\eta\times\frac{\partial L}{\partial\mu}$}\\
\nl {$\lambda\gets\lambda+\eta\times\frac{\partial L}{\partial \lambda}$}
}
}
}
}
\end{algorithm}
\begin{algorithm}[t]
\caption{Step Function of Reinforcement Learning}
\label{alg:RL}
\SetAlgoVlined
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\SetKw{KwBy}{by}
\Input{current state $s$, RL policy $\pi(z|s)$}
\Output{action in actual action space $a$}
\newlength{\commentWidthB}
\setlength{\commentWidthB}{9cm}
\newcommand{\atcp}[1]{\tcp*[f]{\makebox[\commentWidthB]{#1\hfill}}}
\Begin{
\sf{
\tcp{$r$ is the last step referred to in the trajectory}
\If{A new episode begins}{\ForEach(\\\atcp{reset last reference in each trajectory}){$\tau\in D_{\text{TS}}$}{$r(\tau)\gets -1$}}
\nl \ForEach{$\tau\in D_{\text{TS}}$}{
\nl \ForEach(\\\atcp{Assume this is the $i$-th step}){$(s_{\text{key}},a,s_{\text{next}})\in\tau$}{
\nl{\If(\\\atcp{The second term is an indicator function}){$(s_0, j_0, \tau_0)$ undefined or $(s_{\text{key}}-s)^2+[i\leq r(\tau)]<(s_0-s)^2+[j_0\leq r(\tau_0)]$}{\nl{$s_0\gets s_{\text{key}}$}\\\nl{$\hat{s}_{\text{next}}\gets s_{\text{next}}$}\\\nl{$j_0\gets i$}\\\nl{$\tau_0\gets \tau$}}}
}
}
\nl {$r(\tau_0)\gets j_0$}\atcp{update last reference for the chosen trajectory}\\
\nl {$u\gets[s, \hat{s}_{\text{next}}]$}\atcp{condition on current state and retrieved next state}\\
\nl {$\mu_0\gets \mu(u)$}\\
\nl {$\lambda_0\gets \lambda(u)$}\\
\nl {$c \gets \sum_{i=1}^{n+1}\mu_{0,i}\exp\{c_i(u)\}$}\\
\nl {$d \gets \sum_{i=1}^{n+1}\lambda_{0,i}d_i(u)$\atcp{get transformation from latent to action space}}\\
\nl {$\text{\fontfamily{cmtt}\selectfont Sample } z_0 \text{\fontfamily{cmtt}\selectfont\ from RL policy } \pi(z|s)$}\\
\nl {$a\gets c\odot z_0+d$}
}
}
\end{algorithm}
\section{Additional Implementation Details}
\label{sec:implementation}
We provide our code in the GitHub repository \url{https://github.com/289371298/CEIP} for reference.
\subsection{CEIP}
\subsubsection{Architecture.} We use slightly different architectures for fetchreach and kitchen/office, because the states and actions in fetchreach have far fewer dimensions than those in the other two experiments, and because the fetchreach dataset is much smaller. Hence, a smaller network is used for fetchreach to prevent overfitting.
\textbf{Fetchreach.} For each single flow, we use a pair of simple multi-layer perceptrons (MLPs), one for $c_i(u)$ and one for $d_i(u)$. Each network has two hidden layers of width $32$. The conditioning input $u$ has $20$ dimensions (with explicit prior) or $10$ dimensions (without explicit prior). For the combination of flows, we use one fully-connected neural net with two hidden layers of width $32$, which outputs both $\mu$ and $\lambda$; $\mu$ has an additional softplus activation and a $10^{-4}$ offset. If not otherwise specified, all activation functions in this section are ReLU.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{pic/arch_new.pdf}
\caption{Illustration of our architecture used for kitchen and office environments.}
\label{fig:arch_illu}
\end{figure}
\textbf{Kitchen and Office.} The architecture for the kitchen and office environments is roughly the same as that for the fetchreach environment, except that we use three hidden layers of width $256$ for $c_i$ and $d_i$ of each single flow, and two hidden layers of width $64$ for $\mu$ and $\lambda$. Also, we apply batch normalization before each ReLU activation. See Fig.~\ref{fig:arch_illu} for an illustration.
\subsubsection{Flow Training.} We use the standard flow training method~\cite{Kobyzev2021NFreview} for training the task-agnostic and task-specific single flows $f_1, \dots, f_{n+1}$, which is to maximize the (empirical) log-likelihood
\begin{equation}
\label{eq:flowloss_app}
\begin{aligned}
\max_{f_i}&\ \mathbb{E}_{(u,a)\sim D_i}\log p_a(a|u),\\
\text{where }\log p_a(a|u)=\log p_z(f_i^{-1}(a; u))&+\log \left|\frac{\partial f_i^{-1}(a; u)}{\partial a}\right|
=\log p_z(f_i^{-1}(a; u))-c_i(u)^T\mathbf{1}, \text{ and }\\
f^{-1}_i(a;u)&=z=\frac{a-d_i(u)}{\exp\{c_i(u)\}}.
\end{aligned}
\end{equation}
Here, $c_i(u)\in\mathbb{R}^q$, $d_i(u)\in\mathbb{R}^q$ are trainable deep nets. The $\exp$ function and division are applied elementwise. We use a standard normal distribution over the latent space, i.e., $p_z = N(0,I)$. Moreover, we use maximization w.r.t.\ $f_i$ to denote maximization w.r.t.\ the parameters of the deep nets $c_i$, $d_i$.
To train the combined flow, we use a similar loss function to Eq.~\eqref{eq:flowloss_app}, i.e.,
\begin{equation}
\max_{f_{\text{TS}}}\ \mathbb{E}_{(u,a)\sim D_{\text{TS}}}\log p_a(a|u),\text{ where }\log p_a(a|u)=\log p_z(f_{\text{TS}}^{-1}(a; u))+\log \left|\frac{\partial f_{\text{TS}}^{-1}(a; u)}{\partial a}\right|.
\end{equation}
Again, $p_z$ is a standard normal distribution. Here, maximization w.r.t.\ $f_\text{TS}$ denotes maximization w.r.t.\ the parameters of the deep nets $\mu$ and $\lambda$ as shown in Fig.~\ref{fig:arch_illu}.
\textbf{Training Hyperparameters.} To train each single flow, we use $1000$ epochs on each cluster of the task-agnostic dataset $D_1, D_2, \dots, D_{n}$ and on the task-specific dataset $D_{n+1}$, with a batchsize of $256$. We use the Adam~\cite{KingmaB2014Adam} optimizer with a learning rate of $0.001$ and gradient clipping at norm $10^{-4}$. For each dataset, we randomly draw $80\%$ of the state-action pairs / transitions (regardless of which trajectory they are in) as the training set and use the rest for validation. We use early stopping, which triggers when the current number of batches fed into the network is greater than $1000$ (fetchreach) or $4000$ (kitchen/office) and the validation loss has not improved during the last $20\%$ of the batches. The model with the lowest loss on the validation set is stored and utilized. Each flow is trained separately and parameters are not shared. Note, we did not optimize the implementation for efficiency, but training can be accelerated via parallelization.
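The following sketch captures our reading of this split and stopping rule; the exact bookkeeping is an assumption for illustration, not a verbatim excerpt of our code.
\begin{verbatim}
# Sketch of the validation split and early-stopping rule described above;
# the exact bookkeeping is our reading of the text, not a verbatim excerpt.
import numpy as np

def split_pairs(pairs, train_frac=0.8, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(pairs))
    cut = int(train_frac * len(pairs))
    return [pairs[i] for i in idx[:cut]], [pairs[i] for i in idx[cut:]]

def should_stop(val_losses, batches_seen, min_batches=4000):
    # stop once enough batches were seen (1000 for fetchreach, 4000 for
    # kitchen/office) and the best validation loss is not in the last 20%
    if batches_seen <= min_batches or not val_losses:
        return False
    best = int(np.argmin(val_losses))
    return best < 0.8 * len(val_losses)
\end{verbatim}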
\subsubsection{Reinforcement Learning.} We use a well-established reliable implementation of RL algorithms, stable-baselines3\footnote{https://stable-baselines3.readthedocs.io/en/master/}, to carry out reinforcement learning. As stable-baselines3 needs a bounded action space, we set the latent (action) space $\mathcal{Z}$ of the RL agent $\pi(z|s)$ to be $[-3,3]$ on each dimension.
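A minimal sketch of this setup follows; \texttt{FlowActionWrapper} is an illustrative name, the stand-in \texttt{flow\_fn} replaces the trained flow plus database lookup, and the snippet assumes a classic-gym-compatible version of stable-baselines3 (observation-only \texttt{reset}).
\begin{verbatim}
# Sketch of the downstream RL setup: the agent acts in Z = [-3, 3]^q and a
# wrapper maps z to a real action via the (frozen) flow before stepping.
# FlowActionWrapper and flow_fn are illustrative; classic gym API assumed.
import gym
import numpy as np
from stable_baselines3 import PPO

class FlowActionWrapper(gym.ActionWrapper):
    def __init__(self, env, flow_fn):
        super().__init__(env)
        q = env.action_space.shape[0]
        self.action_space = gym.spaces.Box(-3.0, 3.0, shape=(q,),
                                           dtype=np.float32)
        self.flow_fn = flow_fn
        self._obs = None

    def reset(self, **kwargs):
        self._obs = self.env.reset(**kwargs)
        return self._obs

    def action(self, z):                      # latent z -> real action
        return self.flow_fn(z, self._obs)

    def step(self, z):
        obs, r, done, info = self.env.step(self.action(z))
        self._obs = obs
        return obs, r, done, info

flow_fn = lambda z, s: np.tanh(z)             # stand-in for f_TS plus lookup
env = FlowActionWrapper(gym.make("Pendulum-v1"), flow_fn)
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=10_000)
\end{verbatim}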
\subsection{PARROT}
PARROT can be seen as a special case of CEIP, where the number of single flows is $1$ and $\mu=1, \lambda=1$. This single flow is trained on all task-agnostic data. The original PARROT does not use an explicit prior or a push-forward technique, which are our contributions in this work; however, these components can be added to PARROT in the same way as they are used in our method. For a fair comparison, PARROT uses exactly the same architecture and training paradigm for a single flow as CEIP.
\subsection{SKiLD}
Like CEIP, SKiLD uses an implicit prior. However, different from the flow-based CEIP, SKiLD uses a VAE-based architecture whose latent space encodes an action sequence called a ``skill,'' and the decoder of the VAE maps latent variables to actual action sequences. In addition, SKiLD uses two implicit priors that take the current state as input and mimic the state-action sequence encoder: one for the entire task-agnostic dataset and one for the task-specific dataset. To utilize both priors, a discriminator that takes the current state as input is trained. This discriminator approximates the confidence of the task-specific prior. Reward shaping in the downstream RL stage is then used to drive the agent back to states similar to those in the task-specific dataset, where the discriminator reports higher confidence for the task-specific prior. The reward shaping also encourages the RL agent to follow a policy similar to the task-agnostic prior when the confidence is low, and similar to the task-specific prior when the confidence is high. SKiLD does not use an explicit prior or the push-forward technique. However, in a similar spirit, the reward-shaping mechanism encourages the agent to visit states similar to those in the task-specific dataset. We follow the settings described by SKiLD~\cite{pertsch2021skild}, except for some minor modifications to better adapt SKiLD to the environments. These modifications are discussed next.
{
\interfootnotelinepenalty=10000
We change the configuration mostly for the fetchreach environment, because skills with $10$ steps are too long for the fetchreach environment with $40$ steps in an episode, and because the number of dimensions of the data and the number of datapoints are much smaller than in the other environments. Therefore, we shorten a skill from $10$ to $3$ steps, and reduce the size of the skill prior and posterior, which are now $3$-layer MLPs of width $32$ instead of the original $5$-layer MLPs of width $256$. Also, as the dataset size decreases, we change the number of epochs. For the skill prior, we use a batchsize of $20$, and train for $7500$ cycles over the task-agnostic dataset (for each cycle, one sub-trajectory of length $3$ is sampled for each trajectory).\footnote{See ``RepeatedDataLoader'' in SKiLD's official repository \url{https://github.com/clvrai/spirl/blob/5cd34db7c5e48137550801bf5ac3f8c452590e2c/spirl/utils/pytorch\_utils.py} and \url{https://github.com/clvrai/spirl/blob/5cd34db7c5e48137550801bf5ac3f8c452590e2c/spirl/train.py} for the meaning of ``cycles.''} For the posterior, we use $30$K training cycles over the task-specific dataset. The discriminator is trained for $300$ epochs, sampling from both the task-agnostic and task-specific datasets. For RL, we use the settings employed for the kitchen environment in the original SKiLD paper, where the hyperparameter $\alpha=5$ is fixed.
}
For the kitchen and office environments, we follow the original paper and use the same architecture: a $5$-layer MLP with width $128$ for the skill prior and posterior, a linear layer and long-short term memory (LSTM) with width $128$ for the encoder, and a $3$-layer MLP with width $32$ for the discriminator. The training paradigm is almost the same as the one in the original paper, except that the cycles over the task-specific dataset are increased due to a decreased dataset size. We also use exactly the same RL settings as the original paper.
\subsection{FIST}
Conceptually, FIST can be viewed as SKiLD combined with an explicit prior. However, FIST uses pure imitation learning, while SKiLD includes a reinforcement learning phase. Also, different from SKiLD, FIST only uses one prior, which is first trained on the task-agnostic dataset and then fine-tuned on the task-specific dataset. To decide which key is ``closest'' to the query during dataset retrieval, FIST learns the distance metric between states contrastively using the InfoNCE loss~\cite{INFONCE}, where the positive sample is the future state (exactly $H$ steps later, where $H$ is the length of a skill) of a state in a dataset, and the negative samples are the future states of other states in the same dataset. This metric is trained on the combined task-agnostic and task-specific data. However, in our experiments we found that using the Euclidean distance as the metric suffices to achieve good results.
For FIST, we mostly follow the settings described in the original paper~\cite{Hakhamaneshi2022FIST}, with the exception of some minor modifications. Similar to SKiLD, on fetchreach we use $3$ steps per skill, and a lighter architecture for the skill prior and posterior networks with $2$ hidden layers of width $32$ instead of the $5$ hidden layers of width $128$ used for the other experiments. We use the settings for the kitchen environment of the original paper for all other experiments. Moreover, the original FIST is occasionally unstable at the beginning of skill prior training in the office environment, due to an excessively large initial loss. To remove this instability, we add gradient clipping at norm $10^{-3}$ during the first $100$ steps.
\section{Additional Details of Experimental Settings}
\label{sec:app1}
In this section, we introduce additional details related to the environment settings and dataset settings for each environment.
\begin{figure}[t]
\vspace{0.2cm}
{
\begingroup
\centering
\begin{minipage}[c]{0.32\linewidth}
\subfigure[Fetchreach]{\includegraphics[height=0.8\linewidth]{pic/fetchreach-01.pdf}\label{fig:fr01}}
\end{minipage}
\begin{minipage}[c]{0.32\linewidth}
\subfigure[Kitchen]{\includegraphics[height=0.8\linewidth]{pic/kitchen-00.pdf}\label{fig:kc00}}
\end{minipage}
\begin{minipage}[c]{0.32\linewidth}
\subfigure[Office]{\includegraphics[height=0.8\linewidth]{pic/office-00.pdf}\label{fig:office00}}
\end{minipage}
\vspace{-0.2cm}
\caption{Illustration of each environment. For fetchreach, the task-agnostic dataset consists of demonstrations which move the gripper along the red arrows, and the task-specific dataset contains demonstrations which move the gripper along the yellow arrows. For the kitchen environment, the agent needs to complete four of the seven tasks marked in the picture in the correct order. For the office environment, the agent needs to put items into the containers as illustrated in the figure, in the correct order.
}
\label{fig:env}
\endgroup
}
\vspace{-0.3cm}
\end{figure}
\subsection{Fetchreach}
\textbf{Environment Settings.} In our version of fetchreach (illustrated in Fig.~\ref{fig:fr01}), we train a robot arm to move its gripper to a given but unknown location as quickly as possible, and to stay there once the goal is reached. The state is $10$-dimensional, with the first three dimensions describing the current location of the gripper. The other dimensions are the openness of the gripper and the current velocity. For each of the $40$ steps, the agent needs to output a $4$-dimensional action $a\in [-1, 1]^{4}$, where the first three dimensions are the direction in which the gripper moves and the fourth is the openness of the gripper (unused in this experiment). The agent receives a reward of $0$ if the Euclidean distance between the gripper and the target is smaller than $0.05$, and $-1$ otherwise. A perfect agent should achieve a reward of around $-10$. The goal for a direction $d$ (e.g., direction $4.5$) is generated by first assigning a direction $d\in[0, 8)$, then selecting the point at Euclidean distance $0.3$ with azimuth $\frac{d\pi}{4}$, and finally applying uniform noise $U[-0.015, 0.015]$ to each of the three dimensions. In order to test the robustness of the algorithms and to increase difficulty, before each episode begins, we first sample a random action from a normal distribution, and then let the agent execute this action for $x$ steps, where $x\sim U[5, 20]$. This greatly increases the variety of the trajectories, as shown in Fig.~\ref{fig:fr00}.
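For illustration, the goal sampling can be sketched as follows; we assume the offset lies in the horizontal plane around the initial gripper position, which the description above does not fully pin down.
\begin{verbatim}
# Hedged sketch of the goal generation described above: azimuth d*pi/4 at
# distance 0.3 plus U[-0.015, 0.015] noise per dimension. We assume the
# offset lies in the horizontal plane around the initial gripper position;
# the vertical placement is not fully specified in the text.
import numpy as np

def sample_goal(gripper_xyz, d, rng=np.random.default_rng()):
    azimuth = d * np.pi / 4.0
    offset = 0.3 * np.array([np.cos(azimuth), np.sin(azimuth), 0.0])
    noise = rng.uniform(-0.015, 0.015, size=3)
    return np.asarray(gripper_xyz) + offset + noise
\end{verbatim}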
\textbf{Dataset Settings.} The dataset is acquired by first training an RL agent with soft actor critic (SAC) which receives the negative current Euclidean distance as a reward until convergence, and then sampling trajectories on the trained RL agent. For each direction in $\{0,1,2,\dots,7\}$, $40$ trajectories ($1600$ steps) are sampled. For each direction in $\{4.5,5.5,6.5,7.5\}$, $4$ trajectories ($160$ steps) are sampled.
\begin{figure}[t]
\vspace{0.2cm}
{
\begingroup
\centering
\begin{minipage}[c]{0.32\linewidth}
\subfigure[No randomization]{\includegraphics[height=0.8\linewidth]{pic/fr03.pdf}}
\end{minipage}
\begin{minipage}[c]{0.32\linewidth}
\subfigure[Random action at each step]{\includegraphics[height=0.8\linewidth]{pic/fr02.pdf}}
\end{minipage}
\begin{minipage}[c]{0.32\linewidth}
\subfigure[Our randomization]{\includegraphics[height=0.8\linewidth]{pic/fetchreach-00.pdf}}
\end{minipage}
\vspace{-0.2cm}
\caption{Illustration of expert trajectories with no start randomization (left), sampling a randomized action for $10$ steps (middle), and our way of randomization (right). Note that our randomization greatly increases the variation of the trajectories.}
\label{fig:fr00}
\endgroup
}
\vspace{-0.3cm}
\end{figure}
\subsection{Kitchen-SKiLD}
\textbf{Environment Settings}. We adopt the same setting as SKiLD~\cite{pertsch2021skild} and FIST~\cite{Hakhamaneshi2022FIST}, where an agent needs to finish four out of seven tasks in the correct order. The tasks are: open the microwave, move the kettle, turn on the light switch, turn on the bottom burner, turn on the top burner, slide the right cabinet, and hinge the left cabinet. The agent needs to complete all four tasks within $280$ timesteps, and a $+1$ reward is given when a task is completed. The state is $60$-dimensional, where the first $9$ dimensions describe the current configuration of the robot, the next $21$ dimensions describe the current object locations (unrelated objects are zeroed out), and the rest are constant and describe the initial location of each object. The action $a\in [-1, 1]^9$ is $9$-dimensional and controls the joints of the arm. Uniform noise in $[-0.1, 0.1]$ is applied to the observation of the robot at every step.\footnote{See SKiLD's official repository \url{https://github.com/kpertsch/d4rl/blob/master/d4rl/kitchen/adept\_envs/franka/robot/franka\_robot.py} for details.}
\textbf{Dataset Settings.} We use exactly the same task-agnostic dataset as SKiLD, which includes $33$ different task sequences with a total of $136950$ state-action pairs generated by `relay policy learning'~\cite{Gupta2019RelayPL}. We choose the first trajectory from the task-specific dataset of SKiLD as our task-specific dataset, which includes $214$ state-action pairs for task A and $262$ state-action pairs for task B. Task A is ``move the kettle, turn on the bottom burner, turn on the top burner, and slide the right cabinet;'' task B is ``open the microwave, turn on the light switch, slide the right cabinet, and hinge the left cabinet.''
\subsection{Kitchen-FIST}
\textbf{Dataset Settings.} We use exactly the same task-agnostic dataset as FIST~\cite{Hakhamaneshi2022FIST}. There are four pairs of task-agnostic and task-specific datasets, which are illustrated in Table~\ref{tab:task}.
\begin{table}[t]
\setlength{\tabcolsep}{4pt}
\centering
\begin{tabular}{cccc}\toprule
Task & Subtask Missing & Target Task & Dataset Size \\\midrule
A & Top Burner & Microwave, Kettle, Top Burner, Light Switch & 66823 / 210 \\
B & Microwave & Microwave, Bottom Burner, Light Switch, Slide Cabinet & 52898 / 200 \\
C & Kettle & Microwave, Kettle, Slide Cabinet, Hinge Cabinet & 53576 / 246 \\
D & Slide Cabinet & Microwave, Kettle, Slide Cabinet, Hinge Cabinet & 45267 / 246 \\ \bottomrule
\end{tabular}
\caption{List of four different settings in Kitchen-FIST. The dataset size format is ``task-agnostic / task-specific.'' The dataset size is counted in state-action pairs.}
\label{tab:task}
\end{table}
\subsection{Office}
\textbf{Environment Settings}. We adopt the settings from SKiLD, where we need to train a robot arm to put three out of seven items into three containers (illustrated in Fig.~\ref{fig:office00}): two trays and one drawer. The robot arm receives a $97$-dimensional state and outputs an $8$-dimensional action which controls the position and angle of the gripper, as well as the continuous gripper actuation. There are $8$ subtasks in the experiments: pick up the first/second item, drop the first/second item in the right place, open the drawer, pick up the third item, drop the third item correctly, and close the drawer. The agent receives a $+1$ reward upon completion of each subtask. An episode lasts at most $350$ steps and ends as soon as all tasks are finished.
\textbf{Dataset Settings.} We use the same task-agnostic dataset and a subset of the task-specific dataset. We use a subset of the task-specific dataset because SKiLD, CEIP, PARROT+TS, and FIST can all solve the problem very well with the whole task-specific dataset, making it hard to compare them. The task-agnostic dataset contains $2417$ trajectories from randomly generated tasks, of which there are $7\times 6\times 5=210$ possibilities, with a total of $456033$ state-action pairs. The task-specific dataset contains $5$ trajectories with $1155$ state-action pairs.
\section{Additional Experimental Results}
\label{sec:extraexp}
This section presents additional experimental results: ablation studies and auxiliary metrics that help to better understand the properties of the different methods. See the beginning of the Appendix for a summary of the key findings. Please also see our supplementary material for sample videos of each trained method on the kitchen and office environments.
\subsection{Abbreviations for Ablation Tests}
In our experiments, we test multiple variants of CEIP and PARROT for a more thorough ablation. To more easily differentiate the variants of both methods with different components, we will use abbreviations as listed in Table~\ref{tab:abbrCEIP} and Table~\ref{tab:abbrPARROT}.
\begin{table}
\setlength{\tabcolsep}{1pt}
\centering
\begin{tabular}{cccc}\toprule
Method & \ \shortstack{Task-specific\\ flow}\ &\ \shortstack{Explicit\\ prior}\ & \ \shortstack{Push-\\forward}\ \\ \midrule
CEIP & & & \\
CEIP+EX & & \checkmark & \\
CEIP+EX+forward & & \checkmark & \checkmark \\
CEIP+TS & \checkmark & & \\
CEIP+TS+EX & \checkmark & \checkmark & \\
CEIP+TS+EX+forward & \checkmark & \checkmark & \checkmark \\
\bottomrule
\end{tabular}
\caption{Abbreviations for variants of CEIP. See Fig.~\ref{fig:fetchreach_plot_ablation_ours_dataset} for the difference between CEIP, CEIP+2way, and CEIP+4way; the latter two only appear in the fetchreach ablation.}
\label{tab:abbrCEIP}
\end{table}
\begin{table}
\setlength{\tabcolsep}{1pt}
\centering
\begin{tabular}{ccccc}\toprule
Method & \shortstack{Use \\ task-agnostic data} & \shortstack{Use \\ task-specific data} & \ \shortstack{Explicit \\ prior}\ & \ \shortstack{Push-\\forward}\ \\ \midrule
PARROT+TA & \checkmark & & & \\
PARROT+TS & & \checkmark & & \\
PARROT+(TS+TA) & \checkmark & \checkmark & & \\
PARROT+TA+EX & \checkmark & & \checkmark & \\
PARROT+TS+EX & & \checkmark & \checkmark & \\
PARROT+(TS+TA)+EX & \checkmark & \checkmark & \checkmark & \\
PARROT+TA+EX+forward & \checkmark & & \checkmark & \checkmark \\
PARROT+TS+EX+forward & &\checkmark &\checkmark &\checkmark \\
PARROT+(TS+TA)+EX+forward & \checkmark & \checkmark& \checkmark & \checkmark \\
PARROT+2way+TS & two most related directions & \checkmark & & \\
PARROT+4way+TS & four most related directions & \checkmark & & \\
PARROT+2way & two most related directions & & & \\
\bottomrule
\end{tabular}
\caption{Abbreviations for variants of PARROT. All variants of PARROT only use a single flow for all data, which is the key difference between CEIP and PARROT with explicit prior. ``2way'' and ``4way'' only appear in the fetchreach environment where there are $8$ directions in the task-agnostic dataset.}
\label{tab:abbrPARROT}
\end{table}
\subsection{Fetchreach}
\label{sec:expFR}
\begin{figure}[t]
\centering
\subfigure[Direction 4.5]{
\begin{minipage}[b]{0.23\linewidth}
\includegraphics[width=\linewidth]{pic/fetchreach/plot-ours-arch-ablation-4.5.pdf}
\end{minipage}
}
\subfigure[Direction 5.5]{
\begin{minipage}[b]{0.23\linewidth}
\includegraphics[width=\linewidth]{pic/fetchreach/plot-ours-arch-ablation-5.5.pdf}
\end{minipage}
}
\subfigure[Direction 6.5]{
\begin{minipage}[b]{0.23\linewidth}
\includegraphics[width=\linewidth]{pic/fetchreach/plot-ours-arch-ablation-6.5.pdf}
\end{minipage}
}
\subfigure[Direction 7.5]{
\begin{minipage}[b]{0.23\linewidth}
\includegraphics[width=\linewidth]{pic/fetchreach/plot-ours-arch-ablation-7.5.pdf}
\end{minipage}
}
\caption{Ablation on architecture and components of our method. We observe that the reward grows more slowly when using the task-specific single flow, explicit prior, and push-forward technique, likely because the training complexity is unnecessarily increased.}
\label{fig:fetchreach_plot_ablation_ours_arch}
\end{figure}
\textbf{Ablation on components of our method.} Fig.~\ref{fig:fetchreach_plot_ablation_ours_arch} shows the ablation study on different components of our method. The results show that using an explicit prior and the push-forward technique slows down the reward growth during RL training when applied in a relatively easy, short-horizon environment. This is likely because these components add unnecessary training complexity in an environment with a relatively easy task, a short horizon, and well-clustered task-agnostic datasets. However, our method with those components still works better than many baselines and performs well given more steps.
\iffalse
\textbf{Ablation on the number of flows in our method}. Fig.~\ref{fig:fetchreach_plot_ablation_ours_dataset} shows the performance of our method using a different number of flows. Within a reasonable range, increasing the number of flows improves the expressivity and consequently results of our model
\fi
\begin{figure}[t]
\centering
\subfigure[Direction 4.5]{
\begin{minipage}[b]{0.23\linewidth}
\includegraphics[width=\linewidth]{pic/fetchreach/plot-PARROT-dataset-ablation-4.5.pdf}
\end{minipage}
}
\subfigure[Direction 5.5]{
\begin{minipage}[b]{0.23\linewidth}
\includegraphics[width=\linewidth]{pic/fetchreach/plot-PARROT-dataset-ablation-5.5.pdf}
\end{minipage}
}
\subfigure[Direction 6.5]{
\begin{minipage}[b]{0.23\linewidth}
\includegraphics[width=\linewidth]{pic/fetchreach/plot-PARROT-dataset-ablation-6.5.pdf}
\end{minipage}
}
\subfigure[Direction 7.5]{
\begin{minipage}[b]{0.23\linewidth}
\includegraphics[width=\linewidth]{pic/fetchreach/plot-PARROT-dataset-ablation-7.5.pdf}
\end{minipage}
}
\caption{Ablation on the data used for PARROT. ``2way'' and ``4way'' mean that we feed the two/four directions in the task-agnostic dataset that are closest to the target direction (e.g., if the target direction is $4.5$, we refer to data from directions $4$ and $5$ as ``2way,'' and data from directions $3,4,5,6$ as ``4way''). Note that PARROT with the task-specific data and ``2way'' data is significantly better than the other variants of PARROT. However, PARROT improves only when such data are selected manually, while our method automatically combines the flows and selects useful priors. Also, PARROT with ``2way'' task-agnostic data but without the task-specific dataset works, but it is unstable, which emphasizes the importance of the task-specific dataset even though it is small.}
\label{fig:fetchreach_plot_ablation_parrot_dataset}
\end{figure}
\textbf{Ablation on components and data relevance in PARROT.} To better understand the properties of PARROT, we ablate the data used when training PARROT (see Fig.~\ref{fig:fetchreach_plot_ablation_parrot_dataset}). We select a subset of the task-agnostic data that is more relevant to the task-specific dataset, and study how task-agnostic data with different levels of relevance to the downstream task affect the results. We also test the effect of the explicit prior and the push-forward technique using task-specific data only. The results can be summarized as follows: 1) using the explicit prior and the push-forward technique slows down the reward growth during RL training when applied in a relatively easy, short-horizon environment; 2) selecting more relevant data for PARROT is an effective way to improve it, which supports our motivation to combine the flows so as to select the most useful prior.
\textbf{Illustration of trajectories.} To validate the effect of the implicit prior, Fig.~\ref{fig:traj} shows the trajectories generated by our method and PARROT without any RL training. We clearly observe that the trajectories generated by PARROT become more accurate when the data are more related (from left to right); this selection is manual for PARROT, whereas CEIP performs it automatically. Our method improves when more flows are used (from right to left), as more flows increase expressivity.
{
\begingroup
\begin{figure}[t]
\begin{minipage}[c]{0.32\linewidth}
\subfigure[PARROT+(TS+TA)]{\includegraphics[width=\linewidth]{pic/app_traj/alone_all_beforetraining.pdf}}
\subfigure[ours]{\includegraphics[width=\linewidth]{pic/app_traj/ours_rep_beforetraining.pdf}}
\end{minipage}
\begin{minipage}[c]{0.32\linewidth}
\subfigure[PARROT+4way+TS]{\includegraphics[width=\linewidth]{pic/app_traj/alone_fourwayrelatedwithDdemo_beforetraining.pdf}}
\subfigure[ours with 4 flows]{\includegraphics[width=\linewidth]{pic/app_traj/ours_fourwayrelated_rep_beforetraining.pdf}}
\end{minipage}
\begin{minipage}[c]{0.32\linewidth}
\subfigure[PARROT+2way+TS]{\includegraphics[width=\linewidth]{pic/app_traj/alone_relatedwithDdemo_beforetraining.pdf}}
\subfigure[ours with 2 flows]{\includegraphics[width=\linewidth]{pic/app_traj/ours_related_rep_beforetraining.pdf}}
\end{minipage}
\begin{minipage}[c]{0.24\linewidth}
\end{minipage}
\caption{Illustration of trajectories generated by our method and PARROT under the direction 4.5 setting in the fetchreach environment \textbf{without any RL training}. Neither method uses the explicit prior or the push-forward technique. Our method does not use the task-specific single flow. For PARROT, 2/4way means the two/four most related directions in the task-agnostic dataset (i.e., directions 4, 5 / 3, 4, 5, 6). For our method, 2/4 flows are trained on the two/four most related directions in the task-agnostic dataset. The orange lines are the task-specific dataset for reference; they all converge at the red star, which is the goal.}
\label{fig:traj}
\end{figure}
\endgroup
}
Fig.~\ref{fig:traj_others} illustrates the trajectories generated by each method after RL training. As shown in the figure, our method exhibits smoother trajectories after RL training, enabling the agent to reach its goal faster. FIST, SKiLD, and na\"ive RL fail to generate trajectories that steadily converge to the goal. Although PARROT+(TS+TA) (PARROT with both task-agnostic and task-specific datasets) struggles at the beginning of RL, the prior enables the agent to reach the goal occasionally; because of this, it learns to rule out the other, infeasible directions. PARROT+TA fails to reach the goal when the starting location is too far away, as it has no information about how to reach the goal.
{
\begingroup
\begin{figure}[t]
\begin{minipage}[c]{0.32\linewidth}
\subfigure[ours]{\includegraphics[width=\linewidth]{pic/app_traj/ours_rep.pdf}}
\subfigure[PARROT+(TS+TA)]{\includegraphics[width=\linewidth]{pic/app_traj/alone_all.pdf}}
\vspace{146pt}
\end{minipage}
\begin{minipage}[c]{0.32\linewidth}
\subfigure[SKiLD]{\includegraphics[width=\linewidth]{pic/app_traj/SKiLD.pdf}}
\subfigure[PARROT+TA]{\includegraphics[width=\linewidth]{pic/app_traj/alone_allwithoutDdemo.pdf}}
\subfigure[naive]{\includegraphics[width=\linewidth]{pic/app_traj/naked.pdf}}
\end{minipage}
\begin{minipage}[c]{0.32\linewidth}
\subfigure[FIST]{\includegraphics[width=\linewidth]{pic/app_traj/FIST.pdf}}
\subfigure[PARROT+TS]{\includegraphics[width=\linewidth]{pic/app_traj/alone_Ddemoonly.pdf}}
\vspace{146pt}
\end{minipage}
\begin{minipage}[c]{0.24\linewidth}
\end{minipage}
\caption{Illustration of trajectories generated by all methods \textbf{after RL training}; similar to Fig.~\ref{fig:traj}, the blue curves are the trajectories, the orange curves are the demonstrations, and the red star is the goal. Neither our method nor PARROT uses the explicit prior or the push-forward technique. Our method does not use the task-specific single flow.}
\label{fig:traj_others}
\end{figure}
\endgroup
}
\subsection{Kitchen and Office}
\label{sec:expKT}
{
\begingroup
\begin{figure}[t]
\begin{minipage}[c]{0.32\linewidth}
\subfigure[Kitchen-SKiLD-A]{\includegraphics[width=\linewidth]{pic/kitchen-SKiLD/plot-ours-ablation-easy.pdf}}
\subfigure[Kitchen-FIST-B]{\includegraphics[width=\linewidth]{pic/kitchen-FIST/plot-ours-ablation-B.pdf}}
\end{minipage}
\begin{minipage}[c]{0.32\linewidth}
\subfigure[Kitchen-SKiLD-B]{\includegraphics[width=\linewidth]{pic/kitchen-SKiLD/plot-ours-ablation-hard.pdf}}
\subfigure[Kitchen-FIST-C]{\includegraphics[width=\linewidth]{pic/kitchen-FIST/plot-ours-ablation-C.pdf}}
\end{minipage}
\begin{minipage}[c]{0.32\linewidth}
\subfigure[Kitchen-FIST-A]{\includegraphics[width=\linewidth]{pic/kitchen-FIST/plot-ours-ablation-A.pdf}}
\centering
\subfigure[Kitchen-FIST-D]{\includegraphics[width=\linewidth]{pic/kitchen-FIST/plot-ours-ablation-D.pdf}}
\end{minipage}
\begin{minipage}[c]{0.24\linewidth}
\end{minipage}
\caption{Ablation on the components of our method in the kitchen environment. In both environments, the presence of an explicit prior greatly improves the results; for Kitchen-FIST, where part of the target task is missing from the task-agnostic dataset, the task-specific flow is also very important.}
\label{fig:kitchen_ablation_ours}
\end{figure}
\endgroup
}
\textbf{Ablation on components of CEIP.} Fig.~\ref{fig:kitchen_ablation_ours} compares the performance of different architectures of our method. We observe that the explicit prior plays a crucial role in both Kitchen-SKiLD and Kitchen-FIST. Moreover, for Kitchen-FIST, where one of the target sub-tasks appears only in the task-specific data, the presence of the task-specific single flow $f_{n+1}$ is crucial for success. We do not find the push-forward technique to help much in this setting.
{
\begingroup
\begin{figure}[t]
\begin{minipage}[c]{0.32\linewidth}
\subfigure[Kitchen-SKiLD-A]{\includegraphics[width=\linewidth]{pic/kitchen-SKiLD/plot-PARROT-ablation-easy.pdf}}
\subfigure[Kitchen-FIST-B]{\includegraphics[width=\linewidth]{pic/kitchen-FIST/plot-PARROT-ablation-B.pdf}}
\end{minipage}
\begin{minipage}[c]{0.32\linewidth}
\subfigure[Kitchen-SKiLD-B]{\includegraphics[width=\linewidth]{pic/kitchen-SKiLD/plot-PARROT-ablation-hard.pdf}}
\subfigure[Kitchen-FIST-C]{\includegraphics[width=\linewidth]{pic/kitchen-FIST/plot-PARROT-ablation-C.pdf}}
\end{minipage}
\begin{minipage}[c]{0.32\linewidth}
\subfigure[Kitchen-FIST-A]{\includegraphics[width=\linewidth]{pic/kitchen-FIST/plot-PARROT-ablation-A.pdf}}
\centering
\subfigure[Kitchen-FIST-D]{\includegraphics[width=\linewidth]{pic/kitchen-FIST/plot-PARROT-ablation-D.pdf}}
\end{minipage}
\caption{Ablation results for PARROT in the kitchen environment. For convenience, we also list CEIP+TS+EX+forward for reference. Note that CEIP+TS+EX+forward outperforms all variants of PARROT. Similar to CEIP, PARROT can be improved by using an explicit prior. Note that in Kitchen-FIST-B, PARROT+TA cannot learn anything, because the very first subtask in the target task sequence is missing from the task-agnostic dataset; PARROT+TA can only learn the subtasks preceding the missing one, of which there are none here.}
\label{fig:kitchen_ablation_PARROT}
\end{figure}
\endgroup
}
\textbf{Ablation on components in PARROT.} Fig.~\ref{fig:kitchen_ablation_PARROT} compares different architectures of PARROT. As one target sub-task is completely missing from the task-agnostic data, PARROT+TA fails as expected. Also note that the explicit prior boosts the results of PARROT, making it comparable to our method if given enough training time.
\textbf{Ablation on the effect of using ground-truth labels.} Table~\ref{tab:GT1} and Table~\ref{tab:GT2} compare the performance obtained with ground-truth labels and with labels acquired by $k$-means in the kitchen environment.\footnote{The office environment has 210 ground-truth labels, which makes training one flow per label impractical.} Since we use $24$ labels in the main paper but not all task-agnostic datasets have $24$ ground-truth labels, we also report results with the ground-truth labels merged or split to exactly $24$ for a fair comparison. For Kitchen-SKiLD, where the number of ground-truth labels is $33$, exactly $9$ labels have no more than $3$ demonstrations each; we merge each of them into the label next to it in the dictionary order of concatenated task names. For Kitchen-FIST, where the number of ground-truth labels is $x<24$, we select the $24-x$ labels with the most demonstrations and divide each of them evenly into two halves; each half becomes a new label. Note that no task information is taken into account when merging or splitting.
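A minimal Python sketch of this label-adjustment procedure; the mapping \texttt{label\_to\_trajs} from a concatenated task name to its list of demonstrations, and the exact tie-breaking when a merge target has itself been merged, are assumptions for illustration:
\begin{verbatim}
def adjust_labels(label_to_trajs, target=24):
    labels = dict(sorted(label_to_trajs.items()))  # dictionary order
    keys = list(labels)
    if len(labels) > target:        # Kitchen-SKiLD: merge small labels
        for k in [k for k in keys if len(labels[k]) <= 3]:
            nxt = keys[(keys.index(k) + 1) % len(keys)]  # adjacent label
            labels[nxt] = labels[nxt] + labels.pop(k)
            keys.remove(k)
    elif len(labels) < target:      # Kitchen-FIST: split largest labels
        n_split = target - len(labels)
        for k in sorted(keys, key=lambda k: len(labels[k]))[-n_split:]:
            trajs = labels.pop(k)
            half = len(trajs) // 2
            labels[k + "_a"], labels[k + "_b"] = trajs[:half], trajs[half:]
    return labels
\end{verbatim}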
For readability, we use some suffixes in Table~\ref{tab:GT1} and Table~\ref{tab:GT2} to differentiate variants of CEIP in the ``label'' column. The meanings of the suffixes are as follows:
\begin{itemize}
\item\textbf{GT24}: Ground-truth labels, but merged or split to form 24 labels;
\item\textbf{GT}: Ground-truth labels; the number of subtasks differs;
\item\textbf{KM}: K-means labels.
\end{itemize}
\begin{table}[t]
\setlength{\tabcolsep}{3pt}
\centering{
\begin{tabular}{cccccccc}\toprule
Label & Method & \shortstack{Kitchen-\\SKiLD-A} & \shortstack{Kitchen-\\SKiLD-B} & \shortstack{Kitchen-\\FIST-A} & \shortstack{Kitchen-\\FIST-B} & \shortstack{Kitchen-\\FIST-C} & \shortstack{Kitchen-\\FIST-D} \\ \midrule
GT & CEIP+TS+EX & \textbf{4} & 3.96 & 3.59 & \textbf{4} & 3.94 & 3.6 \\
GT & CEIP+TS+EX+forward & \textbf{4} & 3.95 & 3 & \textbf{4} & 3.81 & 3.4 \\
GT24 & CEIP+TS+EX & \textbf{4} & \textbf{4} & 3.68 & 3.76 & 3.8 & 3.96 \\
GT24 & CEIP+TS+EX+forward & \textbf{4} & \textbf{4} & 3.24 & 3.75 & 3.85 & 3.9 \\
KM & CEIP+TS+EX & \textbf{4} & 3.81 & 3.44 & 3.8 & \textbf{4} & 3.75 \\
KM & CEIP+TS+EX+forward & \textbf{4} & 3.32 & 3.41 & \textbf{4} & 3.94 & 3.76 \\
\bottomrule
\end{tabular}}
\caption{Ground-truth label and $k$-means label impact for CEIP+TS+EX and CEIP+TS+EX+forward before RL.}
\label{tab:GT1}
\end{table}
\begin{table}[t]
\setlength{\tabcolsep}{3pt}
\centering{
\begin{tabular}{cccccccc}\toprule
Label & Method & \shortstack{Kitchen-\\SKiLD-A} & \shortstack{Kitchen-\\SKiLD-B} & \shortstack{Kitchen-\\FIST-A} & \shortstack{Kitchen-\\FIST-B} & \shortstack{Kitchen-\\FIST-C} & \shortstack{Kitchen-\\FIST-D} \\ \midrule
GT & CEIP+TS+EX & \textbf{4} & 3.87 & 3.93 & 3.8 & 3.94 & 3.71 \\
GT & CEIP+TS+EX+forward & \textbf{4} & 3.87 & 3.9 & 3.74 & 3.96 & 3.93 \\
GT24 & CEIP+TS+EX & \textbf{4} & \textbf{4} & 3.92 & 3.97 & 3.99 & 3.87 \\
GT24 & CEIP+TS+EX+forward & \textbf{4} & \textbf{4} & 3.99 & 3.88 & 3.95 & 3.96 \\
KM & CEIP+TS+EX & \textbf{4} & \textbf{4} & 3.94 & 3.92 & 3.93 & 3.95 \\
KM & CEIP+TS+EX+forward & \textbf{4} & \textbf{4} & 3.95 & 3.89 & 3.92 & 3.94 \\
\bottomrule
\end{tabular}}
\caption{Ground-truth label and $k$-means label impact for CEIP+TS+EX and CEIP+TS+EX+forward after RL.}
\label{tab:GT2}
\end{table}
The results suggest that for Kitchen-SKiLD, ground-truth labels (with both $24$ and $33$ flows) help: CEIP with ground-truth labels works better than CEIP with $k$-means labels before RL (Table~\ref{tab:GT1} shows higher rewards). For Kitchen-FIST, the rewards are similar both before and after RL training, and the precise labeling does not matter.
\textbf{Performance of behavior cloning and replaying demonstrations.} We test behavior cloning and replaying demonstrations (i.e., duplicating the recorded actions regardless of the current state) on the kitchen and office environments to see whether the task-specific dataset already provides an optimal solution for our testbeds. Table~\ref{tab:BCRP} shows the results for vanilla behavior cloning (BC), behavior cloning with explicit prior (BC+EX), with explicit prior and push-forward (BC+EX+forward), and replaying demonstrations (replay). The results show that: 1) behavior cloning and replay are very brittle and cannot directly solve our testbeds; 2) an explicit prior significantly improves the performance of behavior cloning, which supports the validity of our design.
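For reference, replaying a demonstration is completely open-loop; a minimal sketch, assuming a gym-style \texttt{env} and a recorded action sequence (names are illustrative):
\begin{verbatim}
def replay(env, demo_actions):
    env.reset()
    total = 0.0
    for a in demo_actions:    # duplicate actions regardless of state
        _, reward, done, _ = env.step(a)
        total += reward
        if done:
            break
    return total
\end{verbatim}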
\begin{table}[htbp]
\setlength{\tabcolsep}{2pt}
\centering
{
\begin{tabular}{cccccc}\toprule
Environment & BC & BC+EX & BC+EX+forward & Replay & CEIP+TS+EX+forward\\ \midrule
Kitchen-SKiLD-A & $0.02${\scriptsize $\pm 0.04$} & $1.52${\scriptsize $\pm 1.15$} & $2.2${\scriptsize $\pm 0.62$} & $1.0${\scriptsize $\pm 0.82$} & $\mathbf{4.0}${\scriptsize $\pm 0.00$} \\
Kitchen-SKiLD-B & $0.03${\scriptsize$\pm 0.08$} & $1.03${\scriptsize$\pm 0.90$} & $0.8${\scriptsize$\pm 0.75$} & $0.67${\scriptsize$\pm 0.94$} & $\mathbf{3.93}${\scriptsize$\pm 0.08$}\\
Kitchen-FIST-A & $0.67${\scriptsize$\pm 0.76$} & $2.17${\scriptsize$\pm 0.06$} & $3.03${\scriptsize$\pm 0.15$} & $2.33${\scriptsize$\pm 0.47$} & $\mathbf{3.95}${\scriptsize$\pm 0.05$}\\
Kitchen-FIST-B & $0.4${\scriptsize$\pm 0.59$} & $2.13${\scriptsize$\pm 0.47$} & $1.87${\scriptsize$\pm 0.29$} & $0.67${\scriptsize$\pm 0.47$} & $\mathbf{3.89}${\scriptsize$\pm 0.07$}\\
Kitchen-FIST-C & $0.5${\scriptsize$\pm 0.75$} & $2.2${\scriptsize$\pm 1.61$} & $1.9${\scriptsize$\pm 0.96$} & $2.33${\scriptsize$\pm 0.94$} & $\mathbf{3.92}${\scriptsize$\pm 0.06$}\\
Kitchen-FIST-D & $0.67${\scriptsize$\pm 0.39$} & $1.63${\scriptsize$\pm 1.42$} & $2.17${\scriptsize$\pm 1.67$} & $2.33${\scriptsize$\pm 0.94$} & $\mathbf{3.94}${\scriptsize$\pm 0.07$} \\
Office & $0.62${\scriptsize$\pm 0.59$} & $0.53${\scriptsize$\pm 0.42$} & $1.83${\scriptsize$\pm 0.49$} & $4.67${\scriptsize$\pm 0.83$} & $\mathbf{6.33}${\scriptsize$\pm 0.30$} \\
\bottomrule
\end{tabular}
}
\caption{Performance of behavior cloning and replaying demonstrations. For convenience, we also list CEIP+TS+EX+forward for reference.}
\label{tab:BCRP}
\end{table}
\textbf{Robustness of CEIP with respect to the precision of task-specific demonstrations.} We test the robustness of CEIP and FIST with imprecise task-specific demonstrations in the office environment. The original office environment applies uniform noise from $[-0.01, 0.01]$ to each dimension of the starting position of each item in the environment. We increase this noise at test time (the agent never sees the increased noise during imitation learning) and summarize the results in Table~\ref{tab:robust}. The results show that, albeit an improvement upon FIST, CEIP is still not robust to imprecise demonstrations, a limitation we discussed in the limitations section.
\begin{table}[htbp]
\centering{
\begin{tabular}{cccc}\toprule
Noise level & CEIP+TS+EX & CEIP+TS+EX+forward & FIST\\ \midrule
$0.01$ (original) & $4.17$ & $\mathbf{6.33}$ & $5.6$ \\
$0.02$ & $\mathbf{4.20}$ & $4.17$ & $3.8$ \\
$0.05$ & $0.57$ & $\mathbf{0.83}$ & $0.6$ \\
$0.1$ & $0.05$ & $\mathbf{0.1}$ & $0$ \\
$0.2$ & $0.01$ & $\mathbf{0.02}$ & $0$ \\
\bottomrule
\end{tabular}}
\caption{Comparison of the reward for CEIP and FIST when noise increases.}
\label{tab:robust}
\end{table}
\section{Computational Resource Usage}
\label{sec:comp_resource}
All experiments are conducted on an Ubuntu 18.04 server with $72$ Intel Xeon Gold 6254 CPUs @ 3.10GHz and a single NVIDIA RTX 2080Ti GPU. Under these settings, our method and PARROT+TA require around $1.5$--$3.5$ hours for training the implicit prior in the kitchen and office environments, depending on early stopping. FIST requires around $40$ minutes for prior training and $5$ minutes for the other parts. SKiLD requires around $9$ hours for prior training, $6$--$7$ hours for posterior training, and $6$--$7$ hours for discriminator training. PARROT+TS only needs a few minutes. As for reinforcement learning / deployment, our method needs on average $10$ minutes per run on fetchreach and less than $2$ hours on the kitchen environment. For the office environment, we reach a speed of $12$ steps per second (including updates); SKiLD and PARROT reach $20$ steps per second; FIST reaches $25$--$30$ steps per second as it has no RL updates.
\section{Dataset and Algorithm Licenses}
\label{sec:license}
We developed our code on the basis of multiple environment testbeds and algorithm repositories.
\textbf{Environment testbeds.} We adopt fetchreach from the gym package by OpenAI, which has an MIT license. For the kitchen environment, we use a forked version of the d4rl package, which has an Apache-2.0 license. For the office environment, we use a forked version of roboverse, which has an MIT license.
\textbf{Algorithm repositories.} We implement PARROT from scratch, as PARROT is not open-sourced. For SKiLD and FIST, we use their official GitHub repositories. SKiLD has no license, but we have informed the authors and obtained their consent to use the code for academic purposes. FIST has a BSD-3-Clause license.
\section*{Introduction}
\label{introduction}
Electrons injected into chiral molecules like DNA become spin polarized after being transmitted through the molecule~\cite{Goehler2011,Xie2011,Mishra2020,Naaman2019,Waldeck2021}.
Such a spin-filtering effect has been termed ``chirality-induced spin selectivity" (CISS)~\cite{Naaman2012,Michaeli2016,Michaeli2017}.
This is a remarkable effect, since organic molecules do not contain magnetic atoms, which would be obvious candidates for spin-dependent phenomena.
Although CISS is established experimentally, its theoretical understanding is still debated~\cite{Evers2022}.
In many theories, the spin-orbit interaction (SOI) is considered to be the origin of the spin asymmetry~\cite{Guo2012,Gutierrez2013,Guo2014,Matityahu2016,Michaeli2019,Geyer2020,YU2020,SierraBioMol2020,Liu2021,Wolf2022,Michaeli2022}.
However, since the SOI does not break time-reversal symmetry, the appearance of SOI-induced spin-filtering is a non-trivial effect:
Bardarson's theorem \cite{Bardarson2008} imposes a constraint stating that \textit{in time-reversal-symmetric systems with half integer spins, the transmission eigenvalues of the scattering matrix come in degenerate pairs}.
Assuming that this Kramers-type degeneracy carries up and down spins in the same direction, the theorem prohibits spin filtering in systems coupled to two terminals.
However, the theorem forbids spin selectivity through two-terminal time-reversal-symmetric systems only when there is a single orbital channel.
Therefore, several previous theories broke time-reversal symmetry by introducing spin dissipation~\cite{Guo2012,Guo2014,Matityahu2016}, which effectively introduces many terminals.
Another option to overcome Bardarson's theorem, recently formulated explicitly~\cite{YU2020}, is to introduce two-orbital conducting channels.
Bardarson's theorem does not specify which spin states are associated with the doubly-degenerate transmission eigenvalues.
Therefore, spin-filtering is possible if there exist two pairs of doubly-degenerate transmission eigenvalues, in which one pair carries two up spins in one direction and the other pair carries two down spins in the opposite direction.
The origin of this idea is spin filtering brought about by the Rashba SOI in two-terminal quantum point contacts~\cite{Eto2005}, in tubular two-dimensional gases~\cite{Entin-WohlmanPRB2010}, and in quasi-one dimensional quantum wires~\cite{NagaevPRB2014}.
In the context of CISS, the idea appeared implicitly in the models of a particle traveling on~\cite{Michaeli2019,Geyer2020} or along~\cite{Gutierrez2013} the surface of a helical tube, and in the double-helix model with two orbitals residing on different helices~\cite{SierraBioMol2020}.
In a previous paper~\cite{YU2020}, we demonstrated that a $p$-orbital helical atomic chain, a toy model of a single strand of the DNA molecule, can be reduced to an effective two-orbital tight-binding model realizing a two-terminal spin filter for specific parameters, without breaking time-reversal symmetry.
In that paper we mainly focused on an ideal configuration: two pairs of up and down spins propagating in opposite directions.
In the present paper we extend our previous work to a broader range of parameters.
Specifically, we analyze the helical symmetry of our model for all $p$-orbital states.
The consequences of including the orbital degrees of freedom have been discussed before:
(i) Orbital polarization emerges in the $p$-orbital helical atomic chain~\cite{Otsuto2021}; (ii) The sign of the hopping matrix elements in a neighboring $p$-$p$ block is related to the chirality and to the direction of spin-polarization~\cite{Zoellner2020}.
In particular we discuss the consequences of the helical symmetry of the hopping matrix elements connecting neighboring $p$-orbitals.
Since the DNA molecule is complex and it is difficult to derive systematically its effective $p$-orbital tight-binding model, we only account for the constraint imposed by the helical symmetry.
In the following, we use $\hbar=1$.
\section*{Spin filtering in a two-terminal two-orbital time-reversal symmetric conductor}
\label{sec:ttsf}
We begin by briefly explaining the reason why the SOI, which does not break time-reversal symmetry, cannot realize two-terminal spin filtering in a single-orbital conducting channel.
In such a system, there are 2 channels when the spin degree of freedom is accounted for [Fig.~\ref{fig:even_helical} (a)]:
For each spin $\sigma$, there are right- and left-going modes, $|k; \sigma \rangle$ and $|-k; \sigma \rangle$.
Spin filtering occurs if, e.g., the right-going $\downarrow$-spin and left-going $\uparrow$-spin states, $|k;\downarrow \rangle$ and $|-k;\uparrow \rangle$, dominate the transport.
For this purpose, the other two states, $|k;\uparrow \rangle$ and $|-k;\downarrow \rangle$, have to be gapped away.
Note that they are time-reversed of one another, $\hat{\Theta}|k;\sigma \rangle = \sigma |-k; \bar{\sigma} \rangle$, where $\hat{\Theta}=-i \hat{\sigma}_y \hat{K}$ is the time-reversal operator ($\hat{\sigma}_y$ is the Pauli matrix and $\hat{K}$ is the complex conjugate operator)~\cite{JJSakurai1985}.
Here $\bar{\sigma}=\downarrow (\uparrow)$ for $\sigma=\uparrow (\downarrow)$
and the coefficient $\sigma$ has the values $\sigma = +1 (-1)$ for $\sigma=\uparrow (\downarrow)$.
The Hamiltonian which hybridizes time-reversed states is given by,
\begin{align}
\hat{V} = a \hat{c}_{k;\uparrow}^\dagger \hat{c}^{}_{-k;\downarrow} + a^* \hat{c}_{-k;\downarrow}^\dagger \hat{c}^{}_{k;\uparrow} \, , \label{eqn:V}
\end{align}
where $a$ is a complex number.
Since the annihilation operator transforms as $\hat{\Theta} \hat{c}^{}_{k;\sigma} \hat{\Theta}^{-1} = \sigma \hat{c}^{}_{-k; \bar{\sigma}}$ ~\cite{Bernevig2013},
the mixing Hamiltonian is odd under the time reversal operation as $\hat{\Theta} \hat{V} \hat{\Theta}^{-1} = - \hat{V}$.
Consequently, it is not possible to realize spin filtering without breaking time-reversal symmetry in a single-orbital conducting channel.
The situation is different when there are two orbital channels, which we denote as orbital $1$ and orbital $2$ [Fig.~\ref{fig:even_helical} (b)].
In this case one may consider hybridizing the right-going $\uparrow$-spin and the left-going $\downarrow$-spin residing on different orbitals.
Such a Hamiltonian,
\begin{align}
\hat{V}^\prime =& a \, \hat{c}_{k;2, \uparrow}^\dagger \hat{c}^{}_{-k;1, \downarrow} - a \, \hat{c}_{k;1, \uparrow}^\dagger \hat{c}^{}_{-k;2, \downarrow} \nonumber \\
&+ a^* \hat{c}_{-k;1, \downarrow}^\dagger \hat{c}^{}_{k;2, \uparrow} - a^* \hat{c}_{-k;2, \downarrow}^\dagger \hat{c}^{}_{k;1, \uparrow} \, , \label{eqn:Vp}
\end{align}
where $a$ is a complex number, is even under the time-reversal operation, $\hat{\Theta} \hat{V}^\prime \hat{\Theta}^{-1} = \hat{V}^\prime$.
Therefore, the SOI can in principle lead to spin filtering when there are two-orbital channels.
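As a consistency check, these statements follow term by term from the transformation rules quoted above. For the first term of Eq.~(\ref{eqn:V}), antiunitarity of $\hat{\Theta}$ gives
\begin{align}
\hat{\Theta} \left( a \, \hat{c}_{k;\uparrow}^\dagger \hat{c}^{}_{-k;\downarrow} \right) \hat{\Theta}^{-1}
= a^* \, \hat{c}_{-k;\downarrow}^\dagger \left( - \hat{c}^{}_{k;\uparrow} \right)
= - a^* \, \hat{c}_{-k;\downarrow}^\dagger \hat{c}^{}_{k;\uparrow} \, ,
\end{align}
i.e., minus the second term of $\hat{V}$, so that indeed $\hat{\Theta} \hat{V} \hat{\Theta}^{-1} = - \hat{V}$. By contrast, the first term of Eq.~(\ref{eqn:Vp}) maps to $- a^* \hat{c}_{-k;2, \downarrow}^\dagger \hat{c}^{}_{k;1, \uparrow}$, which reproduces the last term of $\hat{V}^\prime$ with its relative minus sign; the remaining terms map into one another analogously.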
In the next section we provide a different argument, based on the scattering matrix of the molecule, and demonstrate that spin filtering by a two-terminal two-orbital setup does not contradict Bardarson's theorem~\cite{Bardarson2008}.
\begin{figure}
\begin{center}
\includegraphics[width=1.0 \columnwidth]{orbital_channel.pdf}
\caption{
(a) Single-orbital case.
For each spin, there are left- and right-going states.
Time-reversed states (TRSs) are indicated by dashed arrows.
The Hamiltonian that mixes up and down spins propagating in opposite directions is odd under the time-reversal operation.
(b) Two-orbital case.
The Hamiltonian that mixes the states connected by solid arrows can be even under the time-reversal operation. }
\label{fig:even_helical}
\end{center}
\end{figure}
\subsection*{Scattering matrix}
\label{sec:Bardarson_T}
Let us consider a chiral molecule attached to left and right leads.
The scattering states in the left ($s=L$) and right ($s=R$) leads are,
\begin{align}
|\psi \rangle^{}_s = |{k} \rangle^{}_s c_s^{\mathrm in} + |-{k} \rangle^{}_s c_s^{\mathrm out} \, .
\label{eqn:scattering_state_}
\end{align}
The scattering matrix ${S}$, which connects the amplitudes of the right-going $|{k} \rangle^{}_s$ and left-going $|-{k} \rangle^{}_s$ states,
\begin{align}
{c}^{\mathrm out} = {S} c^{\mathrm in} \, ,
\;\;\;\;
{c}^{\mathrm out} = \left[ \begin{array}{c} {c}_L^{\mathrm out} \\ {c}_R^{\mathrm out} \end{array} \right] \, ,
\;\;\;\;
c^{\mathrm in}= \left[\begin{array}{c} c_L^{\mathrm in} \\ c_R^{\mathrm in} \end{array} \right]\, ,
\label{eqn:s_def_1}
\end{align}
reads
\begin{align}
{S} = \left[\begin{array}{cc} {r} & {t}' \\ {t} & {r}' \end{array} \right] \, .
\label{eqn:s_matrix}
\end{align}
When there are $N^{}_s$ orbital channels in terminal $s$, the amplitudes $c^{\mathrm in}_{s}$ and ${c}^{\mathrm out}_{s}$ form $2 N^{}_s$-component vectors,
\begin{align}
c_s^{{\mathrm in}} = \left[ \begin{array}{c} c_{1 \uparrow s}^{{\mathrm in} } \\ c_{1 \downarrow s}^{{\mathrm in} } \\ \vdots \\ c_{N_s \uparrow s}^{{\mathrm in} } \\ c_{N_s \downarrow s}^{{\mathrm in} }
\end{array} \right]\ ,
\;\;\;\;
{c}_s^{{\mathrm out}} = \left[ \begin{array}{c} {c}_{1 \downarrow s}^{{\mathrm out} } \\ {c}_{1 \uparrow s}^{{\mathrm out}} \\ \vdots \\ {c}_{N_s \downarrow s}^{{\mathrm out} } \\ {c}_{N_s \uparrow s}^{{\mathrm out}} \end{array} \right] \, .
\label{amp}
\end{align}
For a time-reversal symmetric system, the scattering matrix is self-dual~\cite{Bardarson2008},
\begin{align}
S=({\bm 1}_{2 N^{}_{s}} \otimes \sigma^{}_y) S^T ({\bm 1}_{2 N^{}_{s}} \otimes \sigma^{}_y) \ ,
\label{eqn:BT}
\end{align}
where $S^{T}_{}$ is the transposed scattering matrix (${\bm 1}_{ n}$ is the $n \times n$ unit matrix).
The block-diagonal component of the scattering matrix is the reflection matrix, $r=({\bm 1}_{N^{}_{s}} \otimes \sigma^{}_y) r^T ({\bm 1}_{N^{}_{s}} \otimes \sigma^{}_y)$.
Hence, the reflection amplitude from the state with orbital $\alpha'$ and spin $\sigma'$ into the state with orbital $\alpha$ and spin $\sigma$, $r^{}_{\alpha \sigma,\alpha' \sigma'}$, satisfies,
\begin{align}
r^{}_{\alpha \sigma , \alpha' \sigma'} = \sigma \sigma' \, r^{}_{\alpha' \bar{\sigma}' , \alpha \bar{\sigma}} \, . \label{eqn:sym_ref}
\end{align}
The transmission eigenvalues mentioned above are the eigenvalues of the matrix of transmission probabilities,
\begin{align}
{t}^\dagger {t} = {\bm 1}_{2 N_s}- r^\dagger_{} r \, .
\end{align}
For the single-orbital channel, $N_{s}^{}=1$, the reflection matrix is a $2 \times2$ matrix.
It is diagonal \cite{Bardarson2008}, since by Eq. (\ref{eqn:sym_ref})
\begin{align}
r^{}_{\uparrow , \uparrow} =& r^{}_{\downarrow , \downarrow}=r^{}_0 , \\
r^{}_{\sigma , \bar{\sigma}} =& -r^{}_{\sigma , \bar{\sigma}}=0.
\end{align}
The matrix of transmission probabilities is also diagonal, ${t}^\dagger {t} = (1-|r^{}_0|^2) \sigma^{}_0$.
Therefore, the transmission eigenvalues are degenerate.
Since spin asymmetry is absent, spin filtering is forbidden.
For the two-orbital channel case, $N_{s}^{}=2$, the reflection matrix is a $4 \times 4$ matrix.
A simple example, which satisfies (\ref{eqn:sym_ref}) and is capable of producing spin filtering, is~\cite{YU2020}:
\begin{align}
r &= \left[ \begin{array}{cccc}
0 & 0^{}_{} & 0^{}_{} & r^{}_{1 \uparrow , 2 \downarrow} \\
0 & 0 & r^{}_{1 \downarrow , 2 \uparrow} & 0 \\
0 & r^{}_{2 \uparrow, 1 \downarrow} & 0 & 0 \\
r^{}_{2 \downarrow , 1 \uparrow} & 0 & 0 & 0
\end{array} \right]\nonumber \\
&= \left[ \begin{array}{cccc}
0 & 0^{}_{} & 0^{}_{} & r^{}_{1 \uparrow , 2 \downarrow} \\
0 & 0 & -r^{}_{2 \downarrow , 1 \uparrow} & 0 \\
0 & -r^{}_{1\uparrow,2\downarrow} & 0 & 0 \\
r^{}_{2 \downarrow , 1 \uparrow} & 0 & 0 & 0
\end{array} \right] \ .
\label{eqn:rm_2c}
\end{align}
The matrix (\ref{eqn:rm_2c}) can be rearranged in a block-diagonal form,
$r=\mathrm{diag}(r^{}_+,r^{}_-)$, where
\begin{align}
r^{}_+ = \left[ \begin{array}{cc} 0 & r^{}_{1 \uparrow , 2 \downarrow} \\ r^{}_{2 \downarrow , 1 \uparrow} & 0 \end{array} \right] ,\
r^{}_- = \left[ \begin{array}{cc} 0 & -r^{}_{1 \uparrow , 2 \downarrow} \\ -r^{}_{2 \downarrow , 1 \uparrow} & 0 \end{array} \right] \ . \label{eqn:rp_rm}
\end{align}
The two matrices $r^{}_{+}$ and $r^{}_{-}$ are time-reversed of one another, $r^{}_- = \sigma^{}_y r_+^T \sigma^{}_y$.
The four transmission eigenvalues are the solutions of the characteristic polynomial equation
\begin{align}
\mathrm{det} \{ \Lambda {\bm 1}_{4}- t^\dagger t \} &= ( \mathrm{det} \{ (\Lambda-1) {\bm 1}^{}_{2} + r_\pm^\dagger r^{}_\pm \})^2=0 \ .
\end{align}
They come in pairs of degenerate eigenvalues~\cite{Bardarson2008},
\begin{align}
1-|r_{1 \uparrow, 2 \downarrow}|^2, 1-|r_{1 \uparrow, 2 \downarrow}|^2, 1-|r_{2 \downarrow, 1 \uparrow}|^2, 1-|r_{2 \downarrow, 1 \uparrow}|^2 \, .
\end{align}
Let us examine the spin filtering associated with the reflection matrix (\ref{eqn:rm_2c}).
The spin conductance at the left lead is given by~\cite{YU2020},
\begin{align}
G^{}_{j;LL} = {\mathrm Tr} [ {\bm 1}^{}_2 \otimes \sigma^{}_j ( {\bm 1}^{}_4 - r r^\dagger _{}) ]/(2\pi)\ , \label{eqn:Gj}
\end{align}
where $\sigma^{}_{j}$ ($j=x,y,z$) is the $j$th Pauli matrix.
Inserting Eq. (\ref{eqn:rm_2c}) into Eq.~(\ref{eqn:Gj}) yields a finite spin conductance for the $z$ component of the spin
\begin{align}
G^{}_{z;LL} =( |r^{}_{2 \downarrow, 1 \uparrow}|^2 - |r^{}_{1 \uparrow, 2 \downarrow}|^2 )/\pi \ .
\end{align}
The spin conductances for the other components vanish, $G^{}_{x;LL} =G^{}_{y;LL}=0$.
The charge conductance is obtained from Eq.~(\ref{eqn:Gj}) by replacing the Pauli matrix with the unit matrix $\sigma_0={\bm 1}_2$,
\begin{align}
G^{}_{0;LL} =(2 - |r^{}_{2 \downarrow, 1 \uparrow}|^2 - |r^{}_{1 \uparrow, 2 \downarrow}|^2)/\pi\ .
\end{align}
It follows that the (normalized) spin polarization is
\begin{align}
P^{}_{z;L} = \frac{G^{}_{z;LL}}{G^{}_{0;LL}} = \frac{ |r_{2 \downarrow, 1 \uparrow}|^2 - |r_{1 \uparrow, 2 \downarrow}|^2 }{2 - |r^{}_{2 \downarrow, 1 \uparrow}|^2 - |r^{}_{1 \uparrow, 2 \downarrow}|^2} \ .
\label{eqn:pzL}
\end{align}
Perfect spin-filtering $P^{}_{z;L} =1$ (or $P^{}_{z;L} =-1$) is achieved for
$|r_{2 \downarrow, 1 \uparrow}|^2=1$ (or $|r^{}_{1 \uparrow, 2 \downarrow}|^2=1$). The condition $|r_{2 \downarrow, 1 \uparrow}|^2=1$ corresponds to the Hamiltonian (\ref{eqn:Vp}).
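The algebra above is easily checked numerically; the following minimal Python sketch (with illustrative values of the reflection amplitudes) verifies the self-duality of the reflection block, the pairwise degeneracy of the transmission eigenvalues, and the spin polarization of Eq.~(\ref{eqn:pzL}):
\begin{verbatim}
import numpy as np

r12, r21 = 0.3 * np.exp(0.7j), 1.0     # sample values; |r21| = 1
r = np.zeros((4, 4), dtype=complex)    # basis: (1up, 1dn, 2up, 2dn)
r[0, 3], r[3, 0] = r12, r21            # r_{1up,2dn}, r_{2dn,1up}
r[1, 2], r[2, 1] = -r21, -r12          # fixed by Eq. (sym_ref)

sy = np.array([[0, -1j], [1j, 0]])
U = np.kron(np.eye(2), sy)
assert np.allclose(r, U @ r.T @ U)     # self-duality of the r block

tt = np.eye(4) - r.conj().T @ r        # t^dagger t
print(np.linalg.eigvalsh(tt))          # doubly degenerate pairs

sz = np.kron(np.eye(2), np.diag([1.0, -1.0]))
Gz = np.trace(sz @ (np.eye(4) - r @ r.conj().T)).real
G0 = np.trace(np.eye(4) - r @ r.conj().T).real
print(Gz / G0)                         # P_z = 1 for |r21| = 1
\end{verbatim}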
The specific reflection matrix in the example (\ref{eqn:rm_2c}) demonstrates the possibility of spin filtering in a two-terminal molecule.
However, it is still a non-trivial task to construct a microscopic model realizing the two-orbital spin-filter.
In the next section, following our previous work~\cite{YU2020}, we describe a $p$-orbital helical atomic chain realizing the spin-filtering when it is attached to two terminals.
\section*{A $p$-orbital helical tight-binding model}
\label{sec:effective_hamiltonian}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=1 \columnwidth]{setupDNA_.pdf}
\caption{
(a) Schematic picture of a $p$-orbital helical atomic chain, a toy model for a single strand of a double-stranded DNA.
${\bm R}(\phi^{}_{n})$ is the radius-vector to the $n$th site, within the Frenet-Serret frame [Eq. (\ref{Rphi})],
$\Delta h$ is the pitch, $\Delta\phi=2\pi/N$, and $\phi^{}_{n}=n p \Delta\phi$.
(b) Ladders threaded by a fractional flux $2\pi/N$.
The vertical lines represent the tunneling amplitudes
$\pm p \Delta_{so} \exp(ip \phi_n)$ connecting $\uparrow$- and $\downarrow$-spins on different orbitals at the $n$th rung.
The site index $n$ increases from left to right.
}
\label{fig:setupDNA}
\end{center}
\end{figure}
Here we summarize the construction of the tight-binding Hamiltonian describing the $p$-orbital helical atomic chain shown in Fig.~\ref{fig:setupDNA} (a) (see Appendix B of Ref.~\cite{YU2020}).
The vector from the origin to a point on a continuous helix of radius $R$ and pitch $\Delta h$ is,
\begin{align}
{\bm R}(\phi) = \left [ R \cos ( \phi),R \sin (p \phi), \Delta h \, \phi/(2 \pi) \right] \ ,
\label{Rphi}
\end{align}
where $p=1$ ($p=-1$) for a helix twisted in the right-handed (left-handed) sense.
In the Frenet-Serret frame, the tangent ${\bm t}$ (along the helix), normal ${\bm n}$, and bi-normal ${\bm b}$ unit vectors at the point on the helix are
\begin{align}
&{\bm t}(\phi) = \left [ - \kappa \sin (\phi), p \kappa \cos (\phi), |\tau| \right]\ ,\nonumber\\
&{\bm n}(\phi) = \left [- \cos (\phi), - p \sin (\phi), 0 \right] \ ,\nonumber\\
&{\bm b}(\phi) = {\bm t}(\phi) \times {\bm n}(\phi) = \left [ p |\tau| \sin (\phi), - |\tau| \cos (\phi), p \kappa \right] \ ,
\label{tnb}
\end{align}
where the `normalized' curvature and torsion, $\kappa$ and $\tau$, are
\begin{align}
\kappa = \frac{R}{\sqrt{R^2+[\Delta h/(2 \pi)]^2}} , \;\; \tau = \frac{p \Delta h/(2 \pi)}{\sqrt{R^2+[\Delta h/(2 \pi)]^2}} . \label{eqn:tau}
\end{align}
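These relations can be verified numerically; a minimal Python sketch (with illustrative values of $R$ and $\Delta h$) checks that the frame of Eq.~(\ref{tnb}) is orthonormal and that $\kappa^2+\tau^2=1$:
\begin{verbatim}
import numpy as np

R, dh, p = 1.0, 3.4, 1                 # illustrative values
norm = np.hypot(R, dh / (2 * np.pi))
kappa, tau = R / norm, p * (dh / (2 * np.pi)) / norm
assert np.isclose(kappa**2 + tau**2, 1.0)

phi = 0.7
t = np.array([-kappa*np.sin(phi), p*kappa*np.cos(phi), abs(tau)])
n = np.array([-np.cos(phi), -p*np.sin(phi), 0.0])
b = np.cross(t, n)
frame = np.stack([t, n, b])
assert np.allclose(frame @ frame.T, np.eye(3))   # orthonormal frame
\end{verbatim}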
The position of the $n$th site in the tight-binding scheme is specified by ${\bm R}(\phi_n)$, where the increment of $\phi$ between neighboring sites is $\Delta\phi=2\pi/N$, and $\phi^{}_n = p 2 \pi n/N$.
The tight-binding Hamiltonian of the helical atomic chain is,
\begin{align}
\hat{\mathcal H}^{}_{\mathrm mol} =&\Big (\sum_{n}^{} - \hat{c}_{n+1}^\dagger {\bm J} \otimes \sigma^{}_0 \hat{c}^{}_n + {\mathrm H.c.} \Big ) \nonumber \\ &+ \sum_{n}^{} \epsilon^{}_0 \, \hat{c}_{n}^\dagger \hat{c}^{}_{n} - 2 \Delta^{}_{so} \, \hat{c}_n^\dagger {\bm L} \cdot {\bm S} \hat{c}^{}_n \nonumber \\ &+ \sum_{n}^{} K_{ {\bm t} } \, \hat{c}_n^\dagger [ ( {\bm t}(\phi_n) \cdot {\bm L})^2 - {\bm 1}_3 ] \hat{c}^{}_n \nonumber \\ &+ \Delta \epsilon \, \hat{c}_n^\dagger [ ( {\bm b}(\phi_n) \cdot {\bm L})^2 - ( {\bm n}(\phi_n) \cdot {\bm L})^2 ] \hat{c}^{}_n \ ,
\label{eqn:original_hamiltonian}
\end{align}
where
\begin{align}
\hat{c}^{\dagger}_{n} = \left[ \begin{array}{cccccc} \hat{c}^{\dagger}_{n;p_x \uparrow} & \hat{c}^{\dagger}_{n;p_x \downarrow} & \hat{c}^{\dagger}_{n;p_y \uparrow} & \hat{c}^{\dagger}_{n;p_y \downarrow} & \hat{c}^{\dagger}_{n;p_z \uparrow} & \hat{c}^{\dagger}_{n;p_z \downarrow} \end{array} \right] \ , \label{eqn:vec_c}
\end{align}
is the vector of creation operators: $\hat{c}_{n;o \sigma}^\dagger$ is a creation operator of an electron residing at site $n$ with orbital $o$ and spin $\sigma$.
The first term on the right-hand side of Eq.~(\ref{eqn:original_hamiltonian}) describes the hopping between nearest-neighbor sites, with the hopping amplitude ${\bm J}$ being a $3 \times 3$ matrix.
Since the system is time-reversal symmetric, all the components of the matrix are real
\begin{align}
\hat{\Theta} {\bm J} \hat{\Theta}^{-1} = \hat{K} {\bm J} \hat{K}^{-1} = {\bm J} ,
\end{align}
where we have used $\Theta \hat{c}^{}_{n;\alpha} \Theta^{-1} = -i {\bm 1}_3 \otimes \sigma^{}_y \hat{c}^{}_{n;\alpha}$.
In the second term, $\epsilon_0$ is the on-site energy.
The third term represents the intra-atomic spin-orbit interaction whose strength is denoted $\Delta^{}_{so}$.
Here ${\bm L}= (L_x,L_y,L_z)$ is the vector of the orbital angular-momentum operators,
\begin{align}
L^{}_x =& \left[ \begin{array}{ccc} 0 & 0 & 0 \\ 0 & 0 & -i \\ 0 & i & 0 \end{array} \right], \\
L^{}_y =& \left[ \begin{array}{ccc} 0 & 0 & i \\ 0 & 0 & 0 \\ -i & 0 & 0 \end{array} \right], \\
L^{}_z =& \left[ \begin{array}{ccc} 0 & -i & 0 \\ i & 0 & 0 \\ 0 & 0 & 0 \end{array} \right]
,
\end{align}
and ${\bm S}={\bm \sigma}/2$ is the vector of the spin angular-momentum, with ${\bm \sigma}$ being the vector of the Pauli matrices.
The other terms in the Hamiltonian describe the crystalline fields created by neighboring atoms:
$K_{\bm t}$ is the orbital anisotropy energy along the tangential direction ${\bm t}(\phi_n)$.
$\Delta \epsilon$ is the difference between the orbital anisotropy energies along the normal direction ${\bm n}(\phi_n)$ and the bi-normal direction ${\bm b}(\phi_n)$.
We assume the leading orbital anisotropy energy is the one along the tangential direction, $K_{\bm t}$.
This condition would effectively mimic the situation discussed in the helical tube models~\cite{Michaeli2019,Geyer2020,Gutierrez2013} in a simple manner.
\subsection*{Helical symmetry}
\label{Helical_symmetry}
In the following, we analyze the helical symmetry of the infinite chain.
Although the system is finite in a transport experiment, for a sufficiently long molecule the transport properties are dominated by the bulk electronic states.
The system we consider possesses helical symmetry, i.e., the Hamiltonian is invariant under the screw operation~\cite{Otsuto2021}: a translation by one site combined with a rotation by $p \Delta \phi = p 2 \pi/N$ about the $z$ axis,
\begin{equation}
\hat{D}_z(p \Delta \phi) \hat{T} \hat{\mathcal H}^{}_{\mathrm mol} \hat{T}^{-1} \hat{D}_z^{-1}(p \Delta \phi) = \hat{\mathcal H}^{}_{\mathrm mol} .
\end{equation}
The translation operator $\hat{T}$ shifts the site index of the operators by one,
\begin{align}
\hat{T} \hat{c}_{n;o,\sigma} \hat{T}^{-1} = \hat{c}_{n+1;o,\sigma} \, .
\end{align}
The rotation operator around the $z$ axis is,
\begin{align}
\hat{D}_z(p \Delta \phi) = e^{-i (\hat{L}_z + \hat{S}_z) p \Delta \phi} ,
\end{align}
where
$\hat{L}_z=\sum_n \hat{c}_n^\dagger \left( L_z \otimes {\bm 1}_2 \right) \hat{c}_n$ and
$\hat{S}_z=\sum_n \hat{c}_n^\dagger \left( {\bm 1}_3 \otimes S_z \right) \hat{c}_n$.
Thus, the rotation acts on the operators of Eq.~(\ref{eqn:vec_c}) as
\begin{align}
\hat{D}_z(p \Delta \phi) \hat{c}_n \hat{D}_z^{-1}(p \Delta \phi) = e^{i L_z p \Delta \phi} \otimes e^{i S_z p \Delta \phi} \hat{c}_n \, .
\end{align}
This operation does not change the on-site energy term in the Hamiltonian.
The intra-atomic SOI does not change either, since $[L_z+S_z, {\bm L} \cdot {\bm S}]=0$.
The crystalline field terms in the third and fourth lines of Eq.~(\ref{eqn:original_hamiltonian}) are likewise unchanged.
This can be verified by exploiting the following relations:
\begin{align}
{\bm t}(\phi_{n+1}) \cdot {\bm L} =& e^{-i L_z p\Delta \phi} [ {\bm t}(\phi_{n}) \cdot {\bm L} ] e^{i L_z p\Delta \phi} \, , \\
{\bm n}(\phi_{n+1}) \cdot {\bm L} =& e^{-i L_z p\Delta \phi} [ {\bm n}(\phi_{n}) \cdot {\bm L} ] e^{i L_z p\Delta \phi} \, , \\
{\bm b}(\phi_{n+1}) \cdot {\bm L} =& e^{-i L_z p\Delta \phi} [ {\bm b}(\phi_{n}) \cdot {\bm L} ] e^{i L_z p\Delta \phi} \, .
\end{align}
The first (hopping) term is transformed as,
\begin{align*}
\sum_{n}^{} \hat{c}_{n+1}^\dagger {\bm J} \otimes \sigma^{}_0 \hat{c}^{}_n
\to
\sum_{n}^{} \hat{c}_{n+2}^\dagger e^{-i L_z p \Delta \phi} {\bm J} e^{i L_z p \Delta \phi} \otimes \sigma^{}_0 \hat{c}^{}_{n+1} \, .
\end{align*}
Therefore, the hopping matrix ${\bm J}$ satisfies,
\begin{align}
\bm J =e^{-i L_z p \Delta \phi} {\bm J} e^{i L_z p \Delta \phi} \, . \label{eqn:constraint}
\end{align}
Consequently, the elements of the hopping matrix are parameterized by three parameters, $J$, $\alpha$ and $\varphi$ as,
\begin{align}
{\bm J} = J \left[ \begin{array}{ccc} \alpha \cos \varphi & - \alpha \sin \varphi & 0 \\ \alpha \sin \varphi & \alpha \cos \varphi & 0 \\ 0 & 0 & 1 \end{array} \right] .
\end{align}
The three parameters are real numbers, since the Hamiltonian is time-reversal symmetric.
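The constraint (\ref{eqn:constraint}) on this parameterization can be verified directly; a minimal Python sketch with illustrative parameter values:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

Lz = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]])
J0, alpha, vphi = 1.0, 1.2, 0.3        # illustrative values
J = J0 * np.array([[alpha*np.cos(vphi), -alpha*np.sin(vphi), 0],
                   [alpha*np.sin(vphi),  alpha*np.cos(vphi), 0],
                   [0, 0, 1.0]])
for dphi in (2*np.pi/4, 2*np.pi/10):   # N = 4 and N = 10, p = 1
    assert np.allclose(expm(-1j*Lz*dphi) @ J @ expm(1j*Lz*dphi), J)
\end{verbatim}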
In our previous work~\cite{YU2020}, we demonstrated the two-terminal two-orbital spin filtering for a specific condition: $K_t \to \infty$, $\varphi=-p \Delta \phi$ and $\alpha=1$.
In the next section, we analyze the filtering for various other parameters.
\section*{Band structure}
\label{sec:band_structure}
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=16.4cm]{bandSOI_IJC.pdf}
\caption{Band structures for various parameters:
(a) $\alpha=1$, $\varphi=-\Delta \phi$, $\tau=0$, and $N=4$;
(b) $\alpha=1.2$, $\varphi=-\Delta \phi$, $\tau=0$, and $N=4$;
(c) $\alpha=\sqrt{2}$, $\varphi=\pi/4$, $\tau=0$, and $N=4$;
and (d) $\alpha=\sqrt{2}$, $\varphi=\pi/4$, $\tau=0.48$, and $N=10$.
Other parameters are fixed as $p=1$, $\Delta_{so}=0.4 J$, $\epsilon_0=\Delta \epsilon=0$ and $K_t=7 J$.
The color scheme indicates the $z$ component of the average spin (red for $\uparrow$ spin and blue for $\downarrow$ spin, see the color bar).
}
\label{fig:band}
\end{center}
\end{figure*}
Figure \ref{fig:band} shows the band structure in the reduced zone scheme for various parameters, obtained by imposing the periodic boundary condition $\hat{c}^{}_{MN+n}=\hat{c}^{}_n$ with $M \to \infty$.
The color scheme indicates the $z$ component of the average spin (red for $\uparrow$ spin and blue for $\downarrow$ spin, see the color bar).
In the following, we fix $p=1$ and $\epsilon_0=\Delta \epsilon=0$.
The strength of the SOI is taken as $\Delta^{}_{so}=0.4 J$.
This estimation is based on a band of width $4J \sim 120 \mathrm{meV}$~\cite{Gutierrez2012} and the intra-atomic SOI energy in carbon nanotubes $\Delta_{so} \sim 12 \mathrm{meV}$~\cite{HuertasHernandoPRB2006}.
The crystalline field is taken to be sufficiently large, as $K_t=7J$.
Due to this strong crystalline field along the tangential direction of the helix, $K_t$, there are two energetically split bands.
The lower band is the $\sigma$-band and the upper band is the $\pi$-band.
Panel (a) in Fig.~\ref{fig:band} shows the band structure for $\alpha=1$, $\varphi=-\Delta \phi$, and $\tau=0$, parameters for which the spin filtering is almost ideal.
It essentially recovers our previous result in Ref.~\cite{YU2020}.
As seen, the lower band is spin degenerate.
The upper band can be effectively described by two decoupled ladders threaded by a fractional flux in each rung [Fig.~\ref{fig:setupDNA} (b)]~\cite{YU2020}.
The flux is induced by the intra-atomic SOI and the helical structure.
The Hamiltonians of two decoupled ladders, $\hat{\mathcal H}^{}_+$ and $\hat{\mathcal H}^{}_-$, are~\cite{YU2020},
\begin{align}
\hat{\mathcal H}^{}_\pm =&\sum_{n}^{} ( - J \hat{a}_{n+1;\pm}^\dagger \hat{a}^{}_{n;\pm} +{\mathrm H.c.} ) \nonumber \\ & \pm p \Delta^{}_{so} \, \hat{a}_{n;\pm}^\dagger \left[\begin{array}{cccc} 0 & e^{-i p \phi^{}_n} \\ e^{i p \phi^{}_n} & 0 \end{array} \right] \hat{a}^{}_{n;\pm} \ , \label{eqn:H_pm}
\end{align}
where
\begin{align}
\hat{a}^{\dagger}_{n;+} = \left[ \begin{array}{cc} \hat{a}^{\dagger}_{n;p_x \uparrow} & \hat{a}^{\dagger}_{n;p_z \downarrow} \end{array} \right],
\ \ \hat{a}^{\dagger}_{n;-} = \left[ \begin{array}{cc} \hat{a}^{\dagger}_{n;p_z \uparrow} & \hat{a}^{\dagger}_{n;p_x \downarrow} \end{array} \right] \, ,
\label{eqn:ann_FB}
\end{align}
and $\hat{a}^{}_n = e^{i L_z p \phi_n} \hat{c}^{}_n$.
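The spectra of the decoupled ladders can be obtained by direct diagonalization; a minimal Python sketch (illustrative parameters, $p=1$) builds $\hat{\mathcal H}^{}_+$ of Eq.~(\ref{eqn:H_pm}) on a ring and evaluates $\langle \sigma_z \rangle$ for each eigenstate:
\begin{verbatim}
import numpy as np

Jh, Dso, p, N, M = 1.0, 0.4, 1, 4, 25  # illustrative parameters
L = M * N                              # ring of M*N sites (PBC)
H = np.zeros((2*L, 2*L), dtype=complex)
for nn in range(L):
    phi = p * 2*np.pi * nn / N
    H[2*nn, 2*nn+1] = p * Dso * np.exp(-1j*p*phi)  # rung coupling
    H[2*nn+1, 2*nn] = p * Dso * np.exp(+1j*p*phi)
    m = (nn + 1) % L
    for s in range(2):                 # spin-independent hopping -J
        H[2*m+s, 2*nn+s] += -Jh
        H[2*nn+s, 2*m+s] += -Jh
E, V = np.linalg.eigh(H)
sz = np.sum(np.abs(V[0::2])**2 - np.abs(V[1::2])**2, axis=0)
print(E[:4], sz[:4])                   # states near the band bottom
\end{verbatim}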
The two ladders are time-reversed of one another $\hat{\Theta} \hat{\mathcal H}^{}_+ \hat{\Theta} ^{-1} = \hat{\mathcal H}^{}_-$~\cite{YU2020}, which is reminiscent of the quantum spin Hall system~\cite{Bernevig2013}.
At the boundary of the Brillouin zone, $k=\pm \pi$, i.e., around $E=\pm 2 J \cos(\pi/N)$ (indicated by dotted lines), there are left-going $\uparrow$($\downarrow$)-spin states and right-going $\downarrow$($\uparrow$)-spin states.
The width of the energy window in which the helical states reside is comparable to the intra-atomic SOI, $\Delta^{}_{so}$.
These states are responsible for the spin filtering~\cite{YU2020}.
They are degenerate and, away from the prescribed condition, the degeneracy is lifted:
Panel (b) is drawn for $\alpha=1.2$, which also broadens the width of the bands.
Pairs of up and down spin states propagating in opposite directions are clearly observed around $E=\pm 2 J \cos(\pi/N)$ (dotted lines).
Since we take $\tau=0$, the pitch is zero, $\Delta h=0$, and this ideal situation is realized only hypothetically.
In panel (c), we chose the parameter $\varphi \neq -\Delta \phi$.
In this case the Hamiltonian cannot be separated into two time-reversed ones as in Eqs.~(\ref{eqn:H_pm}).
The deviation from $\varphi = -\Delta \phi$ induces the mixing between $\sigma$ and $\pi$ orbitals.
As seen in the figure, a spin splitting appears in the lower band, induced by the inter-atomic SOI:
it results from the intra-atomic SOI combined with the mixing of the $\sigma$ and $\pi$ orbitals on neighboring atoms due to the curved geometry~\cite{HuertasHernandoPRB2006,VarelaPRB2016}.
The inter-atomic Rashba-like SOI induced in this way is of order $\sim J \Delta_{so}/K_t$, i.e., suppressed by a factor $J/K_t$ compared with the bare intra-atomic SOI.
Panel (d) shows the result for parameters taken to mimic a DNA molecule.
The number of sites per turn, $N=10$, corresponds to the number of base pairs per turn.
The dimensionless torsion is taken to be $\tau=0.48$, as estimated for a B-form DNA: $R=1 \mathrm{nm}$ and $\Delta h=3.4 \mathrm{nm}$~\cite{Sasao2019}.
The finite torsion reduces the energy window of the helical states by a factor of approximately ${\kappa}$, to $\sim \kappa \Delta^{}_{so}$~\cite{YU2020}.
In panel (d), one still finds helical states close to the top and bottom of the upper band.
Although our model realizes a spin current without breaking time-reversal symmetry, it is not sufficient to explain the experimentally observed magnetoconductance~\cite{Xie2011,Mishra2020}.
Earlier papers~\cite{Yang2019,Naaman2020,Yang2020,Yang2020a} argued that the Onsager relations forbid a linear magnetoconductance in chiral molecules which connect a ferromagnet with a normal metal, and attributed the observed non-linear magnetoconductance to electron-electron~\cite{Fransson2019} or electron-phonon~\cite{Fransson2020,Michaeli2022} interactions.
Here we showed that a linear spin conductance can be generated even when time-reversal symmetry is not broken.
However, the full explanation of the experimental observations probably also needs these additional interactions.
\section*{Conclusion}
\label{conclusion}
Chirality-induced spin selectivity, driven by the spin-orbit interaction, has been discussed within the single-particle picture.
We demonstrated the appearance of a spin current in a time-reversal symmetric system when two orbital channels participate in the transport.
We analyzed the helical symmetry of the infinite $p$-orbital helical atomic chain with intra-atomic spin-orbit interaction and a strong crystalline field along the helix introduced in Ref.~\cite{YU2020}.
The helical symmetry imposes constraints on the nearest-neighbor $p$-orbital hopping matrix elements:
they are parameterized by three independent real numbers.
We explored parameters away from the condition analyzed in Ref.~\cite{YU2020}, at which ideal spin filtering is realized.
We demonstrated that, for a wide range of parameters, pairs of up and down spins propagating in opposite directions survive around the top and the bottom of the band.
These helical states in the infinite atomic chain would be responsible for spin filtering in two-terminal transport experiments.
The deviation from the ideal spin-filtering condition would not spoil our previous findings~\cite{YU2020}.
As pointed out in Ref.~\cite{Gutierrez2013}, the two orbitals can reside on the same helix, and thus the intra-atomic SOI is sufficient for spin filtering.
In our simple $p$-orbital helical atomic chain, the typical energy scale of the helical states is approximately the intra-atomic SOI times the curvature of the helix.
The intra-atomic SOI is typically larger than the inter-atomic SOI induced by the mixing between the $\pi$- and $\sigma$-bands, and would thus be a likely candidate for explaining the CISS effect.
\section*{Acknowledgments}
This work was supported by JSPS KAKENHI Grants No. 18KK0385, No. 20H01827 and No. 20H02562.
\begin{shaded}
\noindent\textsf{\textbf{Keywords:} \keywords}
\end{shaded}
\setlength{\bibsep}{0.0cm}
\bibliographystyle{Wiley-chemistry}
\section{Introduction}
We present a study of dissipative effects on neutrino evolution, such as decoherence and relaxation, and of their consequences for neutrino oscillations. These effects arise when neutrinos are treated as an open quantum system~\cite{ben,ben1,workneut}. In this approach, neutrinos are considered as a subsystem that is free to interact with an environment that behaves as a reservoir.~\footnote{Possible sources of violations of the fundamentals of quantum mechanics include the spontaneous evolution of pure states into mixed, decoherent states~\cite{1} induced by interactions with space-time at the Planck scale~\cite{2}, which unavoidably appear in any formulation of a quantum gravity theory. Such sources of decoherence were first analyzed in Ref.~\cite{3}, which considered oscillating systems propagating over large distances and the corresponding damping effects in the usual interferometric pattern characterizing the oscillation phenomenon.}
The decoherence effect is the most commonly studied dissipative effect. In the neutrino oscillation phenomenon, decoherence acts only on the quantum interference, dynamically eliminating the oscillating terms in the oscillation probabilities. This feature has been investigated in a number of previous studies~\cite{dea,lis,fun1,gab,fo,liss,mmy,workneut3}.
The relaxation effect acts in a different way and does not affect the oscillating terms. It changes only the pure mixing terms in the probabilities, driving all averaged conversion probabilities to $1/n$, where $n$ is the number of neutrino families. Consequently, the relaxation effect can change the probability behavior even when the oscillation terms are not important, as in the solar neutrino case \cite{workneut}.
The relaxation effect can be confused with the decoherence effect, in particular in those cases where quantum coherence is averaged out in neutrino oscillations. In Ref.~\cite{fo}, the authors analyzed the quantum decoherence effect with solar and KamLAND neutrinos. However, for solar neutrinos the decoherence effect could be investigated only through a model-dependent approach because, in general, the quantum coherence is averaged out for solar neutrinos and only relaxation effects can be investigated.
There are experimental bounds on dissipative effects, and we will compare concrete bounds obtained from experimental data analyses found in the literature. All these limits were obtained for neutrino propagation in vacuum and in the two-neutrino approximation. For example, in Ref.~\cite{workneut3}, the analysis was made considering the MINOS experiment. There, the decoherence parameter has an upper limit given by $\gamma<9.11 \times 10^{-23}$ GeV at $95\%$ C.L., and this result agrees with the upper limit found in Ref.~\cite{lis}, $\gamma<4.10 \times 10^{-23}$ GeV at $95 \%$ C.L., which was obtained for the atmospheric neutrino case.
A very interesting upper limit was introduced in Ref.~\cite{fo}, obtained in a model-dependent approach that constrains the decoherence effect using solar neutrinos. It was found that the decoherence parameter is limited to $\gamma<0.64\times 10^{-24}$ GeV at $95\%$ C.L. As is known, the matter effect is important in this case, and we will address this issue later in this article. In Ref.~\cite{balieiro}, an analysis using only reactor neutrinos found a different bound on the decoherence effect, $\gamma<{6.8\times 10^{-22}}$ eV at $95\%$ C.L. All bounds presented above can be found in Table \ref{table1}.\footnote{Following the arguments of the present article, the decoherence effect can be described by one parameter and the relaxation effect by another. However, in the case of three-neutrino oscillations there are three different decoherence parameters and two different relaxation parameters. As we can see in Ref.~\cite{workneut2}, the decoherence parameters describe the quantum effect between specific families, and therefore the decoherence bound for accelerator or atmospheric neutrinos can be different from the one for reactor neutrinos.}
In general, bounds on dissipative parameters come from requiring $e^{-\gamma x}\lesssim 1$, since this is the kind of damping term which appears in the oscillation probabilities. This can be checked to work reasonably well for all the limits presented above for terrestrial experiments, with a typical baseline $x=10^{20}\sim10^{22}$ GeV$^{-1}$ ($20\sim 2000$ km).
However, for the numbers presented in Ref.~\cite{fo}, using the bound $\gamma< 0.64\times 10^{-24}$ GeV, the exponential term tends strongly to $1$.
As will become clear in this work, the model-dependent approach used in Ref.~\cite{fo} also constrains the relaxation effect, with $\gamma_{relax.}<10^{-25}$ GeV at $95\%$ C.L. For solar neutrinos $x=10^{26}$ GeV$^{-1}$, and the exponential term in this case forces the survival probability for solar neutrinos to a unique constant value equal to $1/2$. This result would spoil the usual solution for solar neutrinos. In our model, the constraint on $\gamma$ is expected to be two orders of magnitude smaller \cite{escrever}.
In the particular case investigated in Ref.~\cite{fo}, where this limit was obtained in a model-dependent approach, the exponential argument depends on other oscillation parameters, necessarily including the neutrino energy, and this makes the bound on $\gamma$ suitable only in that situation.
In the model-independent approach that we introduce in this work, the damping term does not depend on any oscillation parameters, and the addition of any energy dependence to $\gamma$ would be an ansatz, as in Refs. \cite{lis,fo,workneut3}. Besides, in our model the damping term for solar neutrinos does not describe the decoherence effect, but only the relaxation effect. In fact, following the definitions that we present in this work, the bound found in Ref. \cite{fo} can be called a decoherence bound only because it is proportional to the relaxation effect, which is, in fact, the only dissipative effect that remains after averaging out the solar-neutrino oscillations. Furthermore, our model respects the usual bound condition $(e^{-x \gamma} \lesssim 1)$ for the damping terms in the neutrino probabilities.
Our analysis considers these two non-standard effects. We analyze the propagation in vacuum and in matter. We show that, with a careful application of open quantum system theory, it is possible to write the probabilities in vacuum and in constant matter in a similar way, which is not an obvious result in this context. From this result, we analyze the situation where the neutrino evolution satisfies the adiabatic limit, and we analyze solar neutrinos in the two-neutrino approximation to show that the decoherence effect cannot, in general, be bounded using this neutrino source~\cite{farzan}. We discuss the current results \cite{fo, bur}, where solar neutrinos were used to put limits on the decoherence effect through a model-dependent approach. We argue how and why these models are not general, and we reinterpret these constraints.
We conclude this work by arguing that the decoherence limit in the channel $\nu_{e}\rightarrow \nu_{\mu}$ can be different from the limit obtained in Ref.~\cite{fo}. A limit on decoherence parameters can be obtained using a model-independent approach, studying neutrinos from sources other than the Sun.
\begin{center}
\begin{table}[!t]
\caption{\label{table1}Upper limits on decoherence parameters at $95\%$ C.L., obtained from accelerator, atmospheric, reactor and solar experiments, respectively. These bounds assume that the decoherence parameters are energy independent.}
\vspace{0.5 cm}
\hspace{0.5cm}\begin{tabular}{||c||c||c||}
\hline
\hline
Channel & Upper limit on $\gamma$ (GeV) & Baseline / Energy \\
\hline
$P(\nu_\mu \nu_\mu)$ & $9.11 \times 10^{-23}$~\cite{workneut3}& $\sim 730$ km $/3$ GeV \\
\hline
$P(\nu_\mu \nu_\mu)$ & $4.10 \times 10^{-23}$~\cite{lis} & $\lesssim 10^{4}$ km $/ 10^{3}$ GeV\\
\hline
$P(\bar{\nu}_e\bar{\nu}_e)$ & $6.8\times 10^{-22}$~\cite{balieiro} & $\sim 200$ km $/5$ MeV\\
\hline
$P(\nu_e\nu_e)$ & $0.64\times 10^{-24}$~\cite{fo} & $\sim 10^{8}$ km $/ 2$ MeV \\
\hline
\hline
\end{tabular}
\end{table}
\end{center}
\vspace{-0.5 cm}
\section{Neutrinos as an Open Quantum System}
In the open quantum system approach, a global state composed of a subsystem of interest and an environment must be defined. As the environment in this approach is a quantum reservoir, it interacts with the subsystem of interest as a whole.
The subsystem of interest can be represented by states $S$ associated with the Hilbert space $\mathbbm{H}_{S}$, while the quantum reservoir can be represented by states $R$ associated with the Hilbert space $\mathbbm{H}_{R}$. These are the fundamental definitions of the two different sets of quantum states. The subsystem of interest may be composed of more than one Hilbert space, one associated with each element added to the usual quantum description of a system; for instance, when the matter potential is added to the mass Hamiltonian of neutrino oscillations in vacuum.
The tensor product of these spaces forms the total Hilbert space, or global state space, $\mathbbm{H}_{G}=\mathbbm{H}_{S}\otimes\mathbbm{H}_{R}$. This means that we can write a global state as~\cite{len, pet}
\begin{equation}
\rho_{G} = \rho_{S}\otimes \omega_{R}\,,
\label{i}
\end{equation}
where $\rho_{S}$ is the subsystem of interest state, and $\omega_{R}$ is the reservoir state. The system evolution is obtained using the following transformation:
\begin{equation}
\rho_{G}(t) = U(\rho_{S}\otimes \omega_{R})U^{\dag},
\label{ii}
\end{equation}
such that $U=\exp[-i H_{tot} t]$ is the unitary operator, and the time evolution is governed by the total Hamiltonian, defined as $H_{tot}=H_{S}+H_{R}+H_{int}$, where $H_{S}$ is the subsystem of interest Hamiltonian, $H_{R}$ is the reservoir Hamiltonian, and $H_{int}$ is the interaction Hamiltonian between the reservoir and the subsystem of interest.
The subsystem of interest changes its characteristics in time due to its internal dynamics and the interaction with the reservoir~\cite{len, pet}. On the other hand, as the reservoir state does not change in time, its dynamics is not important. The dynamics of the subsystem of interest is then obtained by taking the trace over the reservoir states in Eq. (\ref{ii})~\cite{lin,dav,dum,kra}, i.e.,
\begin{equation}
\rho_{S}(0)\rightarrow \rho_{S}(t)=\Lambda \rho_{S}(0)=\mathrm{Tr}_{R}\, U(\rho_{S}\otimes \omega_{R})U^{\dag},
\label{iii}
\end{equation}
where $\Lambda$ is a dynamical map. Eq. (\ref{iii}) is known as the reduced dynamics of $S$. Solving the partial trace in Eq. (\ref{iii}), we can rewrite this relation as
\begin{equation}
\Lambda \rho_{S}(0)=\sum_{\alpha} W_{\alpha}\rho_{S}(0) W^{\dag}_{\alpha},
\label{iv}
\end{equation}
where $W_{\alpha}\in \mathbbm{H}_{S}$ and $\sum_{\alpha} W_{\alpha}W^{\dag}_{\alpha}=\mathbbm 1$~\cite{kra}. In order to evolve the state, this map must satisfy the complete positivity constraint. Moreover, we need a family of linear maps which satisfy the semigroup properties \cite{lin,dav,kra}. From this, we can obtain a dynamical generator, which can be written as
\begin{equation}
\frac{d\rho_{\nu}(t)}{dt}=-i[H_{S},\rho_{\nu}(t)]+D[\rho_{\nu}(t)]\,.
\label{v}
\end{equation}
This equation has been studied in the literature, and more information about it and its properties can be found in Refs.~\cite{len,pet,lin,dav,dum,kra,joo,uri}. It is called the Lindblad Master Equation, and it is composed of a usual Hamiltonian term and a non-Hamiltonian one, which gives origin to the dissipative effects. The dissipator in Eq. (\ref{v}) is defined as
\begin{equation}
D[\rho_{\nu}] =\frac{1}{2}\sum_{k=1}^{N^{2}-1}\bigg(\Big[V_{k},\rho_{\nu} V_{k}^{\dag}\Big]+\Big[V_{k}\rho_{\nu},V_{k}^{\dag}\Big]\bigg) \,,
\label{vi}
\end{equation}
where the $V_{k}$ are dissipative operators which act only on the $N$-dimensional space $\mathbbm{H}_{S}$. The trace of $\rho_{\nu}$ is preserved only if $\sum_{k} V^{\dag}_{k} V_{k} =\mathbbm 1$ is satisfied. The $V_{k}$ operators arise from the interaction of the subsystem of interest with the environment. The propagation through Eq. (\ref{v}) maps an initial density matrix into a new density matrix~\cite{ben}. The evolution is completely positive, transforming pure states into mixed states due to the dissipative effects \cite{len,lin,dav,dum,kra}. The Von Neumann entropy of the subsystem of interest, $S=-Tr[\rho_{\nu}\ln \rho_{\nu}]$, must increase in time, and this is guaranteed if we impose $V^{\dag}_{k}=V_{k}$ \cite{nar}.
Let us start by considering only two neutrino families; the relation between the mass and flavor bases in vacuum is given by \cite{moh,kim}
\begin{equation}
\rho_{m}=U^{\dag}\rho_{f}U\,,
\label{vii}
\end{equation}
where $\rho_{m}$ is written in mass basis, $\rho_{f}$ is written in flavor basis and $U$ is the usual $2\times 2$ unitary mixing matrix.
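Explicitly, in the standard form commonly adopted for two families (the convention assumed in the worked expressions added below),
\begin{equation}
U=\left(\begin{array}{ c c }
\cos\theta & \sin\theta\\
-\sin\theta & \cos\theta \end{array} \right)\,,
\end{equation}
where $\theta$ is the mixing angle.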
The transformation in Eq. (\ref{vii}) can be used to write Eq. (\ref{v}) in the flavor basis or in any other basis, since any unitary transformation of $V_{k}$, i.e., $A V_{k}A^{\dag}$ with $A A^{\dag}=\mathbbm 1$, leads to a new matrix of the form:
\begin{equation}
V'_{k} = A V_{k}A^{\dag}= A \left(\begin{array}{ c c }
V_{11} & V_{12}\\
V^{*}_{12} & V_{22} \end{array} \right) A^{\dag} =
\left(\begin{array}{ c c }
V'_{11} & V'_{12}\\
V'^{*}_{12} & V'_{22} \end{array} \right)\,,
\label{viiextra}
\end{equation}
where the new dissipator can be reparametrized such that it has the same form as the old dissipation operator.
Expanding Eqs.~(\ref{v}) and (\ref{vi}) in the $SU(2)$ basis matrices, we can write Eq. (\ref{v}) as:
\begin{equation}
\frac{d}{dx}\rho_{\mu}(x)\sigma_{\mu}=2\epsilon_{ijk}H_{i}\rho_{j}(x)\sigma_{\mu}\delta_{\mu k}+D_{\mu\nu}\rho_{\nu}(x)\sigma_{\mu}\,,
\label{viii}
\end{equation}
with $D_{\mu 0} =D_{0\nu}=0$ to ensure probability conservation. The matrix $D_{m n}$ can be parametrized as
\begin{equation}
D_{mn} = -
\left(\begin{array}{ c c c}
\gamma_{1} & \alpha & \beta\\
\alpha & \gamma_{2} & \delta\\
\beta & \delta & \gamma_{3} \end{array} \right)\,,
\label{ix}
\end{equation}
where complete positivity constrains each parameter in the following form:
\begin{eqnarray*}
2R & \equiv & \gamma_{1}+\gamma_{2}-\gamma_{3} \geq 0;
\quad\mbox{ }\quad RS-\alpha^{2} \geq 0;\nonumber\\
2S & \equiv & \gamma_{1}+\gamma_{3}-\gamma_{2} \geq 0;
\quad\mbox{ }\quad RT-\beta^{2}\geq 0;\nonumber\\
2T &\equiv & \gamma_{2}+\gamma_{3}-\gamma_{1} \geq 0;
\quad\mbox{ }\quad ST-\delta^{2} \geq 0\,;
\label{iv.xx}
\end{eqnarray*}
\begin{equation}
RST \geq 2\alpha\beta\delta+T\delta^{2}+S\beta^{2}+R\alpha^{2}\,.
\label{x}
\end{equation}
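For the diagonal dissipators used below, where $\alpha=\beta=\delta=0$ and $\gamma_{1}=\gamma_{2}$, these conditions reduce to
\begin{equation}
0\leq\gamma_{3}\leq 2\gamma_{1}\,,
\end{equation}
so the relaxation parameter is non-negative and bounded from above by twice the decoherence parameter.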
When we remove the reservoir Hamiltonian, $H_{R}$, and the interaction Hamiltonian, $H_{int}$, the quantum evolution returns to the usual form, and Eq. (\ref{v}) reduces to the well-known quantum Liouville equation.
\subsection{The Subsystem of Interest}
Our subsystem of interest will be the neutrinos. As is well known, many experiments give evidence that neutrinos have mass and mixing, as defined in Eq. (\ref{vii}), such that flavor oscillations can occur \cite{moh,kim}.
Neutrinos propagate in vacuum or in matter. In both situations it is possible to evolve neutrinos as an open quantum system through direct application of Eqs.~(\ref{v}) and (\ref{vi}). However, it is important to take into account under which circumstances these equations were developed and how the subsystem of interest was defined. The definition of neutrinos as a subsystem of interest can thus change in each case.
We can use prior knowledge of the Hamiltonian in standard quantum mechanics to define this general subsystem of interest $S$. As we have seen, the total Hamiltonian in the open quantum system approach can be defined as $H_{tot}=H_{S}+H_{R}+H_{int}$. In this case, $H_{S}$ is the usual Hamiltonian of the closed approach. Then, the most general subsystem of interest is the physical object described by the basis in which $H_{S}$ is diagonal.
\subsection{Quantum Dissipator and the Effects in $S$}
It is possible to study how each entry of the matrix in Eq. (\ref{ix}) changes the neutrino probabilities~\cite{workneut}. For simplicity, we will work with only two models for the quantum dissipator: one with a single new parameter, which describes the decoherence effect, and another with two different new parameters, which describe the decoherence and relaxation effects.
The most usual dissipator is obtained by imposing energy conservation on the subsystem of interest $S$. This constraint is expressed by the commutation relation $[H_{S}, V_{k}]=0$. This dissipator adds only decoherence to the subsystem of interest $S$ and is given by
\begin{equation}
D_{mn}=-diag\{\gamma_{1},\gamma_{1},0\}
\label{xi}
\end{equation}
where, in this case, $\gamma_{1}=\gamma_{2}$ and all other parameters vanish. This statement uniquely defines a particular interaction between the subsystem of interest $S$ and the reservoir.
Therefore, the energy conservation constraint on the subsystem of interest $S$ is obtained only if the commutation relation $[H_{S}, V_{k}]=0$ is satisfied, and the consequence is a quantum dissipator with only one parameter, $\gamma_{1}$, which describes the decoherence effect. In other words, the dynamical evolution is purely decoherent when this specific constraint is applied, and no other dissipative effect is present.
To include the relaxation effect we need to violate the above constraint. As the subsystem of interest is free to interact with the reservoir, the energy flux can fluctuate, and the energy conservation condition imposed on the subsystem of interest may not be satisfied. In this case, the matrix in Eq. (\ref{ix}) can assume its complete form. However, as the matrix in Eq. (\ref{ix}) needs to be positive, all off-diagonal parameters must be smaller than the diagonal ones. Thus, only the diagonal parameters must necessarily be present in the case of new physics. For simplicity, we will disregard all off-diagonal elements.
By assuming $[H_{S},V_{k}]\neq0$, a non null $D_{33}$ parameter can be included in the dissipator in Eq. (\ref{xi}) and then a new quantum dissipator can be written as
\begin{equation}
D_{mn}=-diag\{\gamma_{1},\gamma_{1},\gamma_{3}\}\,,
\label{xii}
\end{equation}
where $\gamma_{1}$ continues describing the decoherence effect and $\gamma_{3}$ describes the relaxation effect.
\subsection{Dissipation in Other Specific Subsystems of Interest $S'$}
\label{s}
The quantum dissipator written in Eq.~(\ref{vi}) can be defined in many different ways for neutrinos propagating in vacuum or in matter with constant density, but it can have the same form in both cases. It is easy to prove this statement, since we can always write $H_{S}$ in Eq. (\ref{ii}) as diagonal both in vacuum and in matter propagation. However, the parameter values in the operator $V_{k}$ are different in each case.
In the presence of matter, the transformation between the effective mass basis and the flavor basis can be written by changing $\rho_{m}\rightarrow \tilde{\rho}_{m}$ and $U\rightarrow \tilde{U}$, where $\tilde{U}$ is composed of effective mixing angles \cite{kim, moh}. This transformation does not bring anything new to the quantum evolution equation (\ref{v}), and it can again be parametrized as in Eq. (\ref{viii}), with a $\tilde{D}_{mn}$ that has the same form as the $D_{mn}$ given by Eq. (\ref{ix}).
In the usual situation of matter propagation we can define $H_{S}=H_{osc}+H_{mat}$, and then the interaction constraints between a specific subsystem of interest $S'$ and the reservoir can be imposed in different ways. It is thus possible to define a specific subsystem of interest $S'$ that has a commutation relation with a particular $V_{k}$. While $H_{S}$ defines the most general subsystem of interest $S$, either $H_{osc}$ or $H_{mat}$ could be used to define other specific subsystems of interest $S'$.
If we assume, for instance, that $[H_{osc},V_{k}]=0$, energy conservation holds when the propagation is in vacuum and only decoherence can act during the propagation. However, in the same case, when the propagation is in matter, $H_{S}\neq H_{osc}$, and this constraint therefore no longer ensures energy conservation in the subsystem of interest $S$: we have the situation where $[H_{S},V_{k}]\neq0$. Thus, both the relaxation and decoherence effects may act during the propagation.
Therefore, when one defines $H_{S}$ and its relation to the $V_{k}$ operators, all the dissipative effects are determined. A consequence of the definition of the subsystem of interest $S$ from $H_{S}$ can be summarized as follows: if the subsystem of interest $S$ has its energy conserved, then $[H_{S},V_{k}]=0$ and the dissipator has the form of Eq. (\ref{xi}); in this case we are dealing with decoherence effects only. If not, then $[H_{S},V_{k}]\neq0$ and the dissipator can be written in its more general form, Eq. (\ref{xii}); in this case both decoherence and relaxation effects take place during the neutrino evolution.
The difference between the decoherence and relaxation effects was discussed in this section. We will now apply this formalism to neutrino oscillations in vacuum and in the constant matter case, in order to eliminate any confusion between these two dissipative effects.
\section{Propagation in Vacuum and in Constant Matter Density}
With the Lindblad Master Equation we can study many dissipative effects in neutrino oscillations. Decoherence is the most usual dissipative effect~\cite{fo,dea,gab,fun1,lis,liss,mmy,dan}, but it is not the only one, as we have seen in the previous section. In particular, we are going to study how the decoherence and relaxation effects act on the state during its propagation and how these dissipative effects change the oscillation probabilities.
In general, we can calculate the evolution using the dissipator in Eq.~(\ref{xii}); the evolution with the dissipator of Eq.~(\ref{xi}) is obtained by simply setting $\gamma_{3}=0$. The oscillation Hamiltonian in vacuum and in matter is taken in its diagonal form. In vacuum, $H_{S}$ is usually written in the mass basis as $H_{S}=diag\{E_{1},E_{2}\}$, and when the oscillation occurs in constant matter, it is possible to write the Hamiltonian as $H_{S}=diag\{\tilde{E}_{1},\tilde{E}_{2}\}$ in the effective mass basis. Note that we have defined two different subsystems of interest $S$, one for neutrinos in vacuum and another for neutrinos in constant matter, but both $H_{S}$ are diagonal.
We are going to use the approximations $E_{i}=E + m^{2}_{i}/2E$ and $\tilde{E}_{i}=E +\tilde{m}^{2}_{i}/2E$. Eq.~(\ref{viii}) can then be written as
\begin{equation}
\left(\begin{array}{ c }
\dot{\rho}_{1}(x) \\
\dot{\rho}_{2}(x) \\
\dot{\rho}_{3}(x) \end{array} \right) =
\left(\begin{array}{ c c c}
-\gamma_{1} & -\Delta & 0\\
\Delta & -\gamma_{1} & 0\\
0 & 0 & -\gamma_{3} \end{array} \right)\left(\begin{array}{ c }
{\rho}_{1}(x) \\
{\rho}_{2}(x) \\
{\rho}_{3}(x) \end{array} \right)\,,
\label{xiii}
\end{equation}
where $\Delta=\Delta m^{2}/2 E$. If the propagation is in matter, we invoke the effective quantities, $\Delta\rightarrow \tilde{\Delta}=\Delta \tilde{m}^{2}/2E$, $\gamma_{i} \rightarrow \tilde{\gamma}_{i}$ following Eq. (\ref{viiextra}), and $\rho_{i} \rightarrow \tilde{\rho}_{i}$. Of course, this changes nothing from the point of view of the solution of the equation, and from now on we will not mention this similarity again. Further, the component $\rho_{0}$ obeys a trivial differential equation, $\dot{\rho}_{0}(x)=0$, whose solution is $\rho_{0}(x)= \rho_{0}(0)$, which in two-neutrino oscillations means $\rho_{0}(x)=1/2$.
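Componentwise, Eq.~(\ref{xiii}) describes a damped rotation in the $(\rho_{1},\rho_{2})$ plane with frequency $\Delta$ and envelope $e^{-\gamma_{1}x}$, together with the independent exponential decay
\begin{equation}
\rho_{3}(x)=e^{-\gamma_{3}x}\rho_{3}(0)\,,
\end{equation}
so $\gamma_{1}$ damps the coherences while $\gamma_{3}$ damps the population imbalance.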
In compact form, Eq. (\ref{xiii}) reads
\begin{equation}
\dot{R}(t)= \mathbbm H R(t) \,,
\label{xiv}
\end{equation}
where the eigenvalues of $\mathbbm H$ are $\lambda_{0}=-\gamma_{3}$, $\lambda_{1}=-\gamma_{1} - i \Delta$ and $\lambda_{2}=-\gamma_{1} + i \Delta$. For each eigenvalue it is possible to obtain a corresponding eigenvector; the eigenvectors $\textbf{u}_{0}$, $\textbf{u}_{1}$, $\textbf{u}_{2}$ compose the matrix $\mathbbm A =[\textbf{u}_{0}, \textbf{u}_{1}, \textbf{u}_{2}]$ that diagonalizes the matrix $\mathbbm H$ through the similarity transformation $\mathbbm A^{\dag} \mathbbm H \mathbbm A$. The solution of Eq.~(\ref{xiv}) is given by
\begin{equation}
R(x)= \mathbbm M(x) R(0) \,,
\label{xv}
\end{equation}
where $\mathbbm M(x)$ is obtained making
\begin{equation}
\mathbbm M(x)= \mathbbm A . diag\{e^{\lambda_{0}x},e^{\lambda_{1}x},e^{\lambda_{2}x} \} . \mathbbm A^{\dag} \,.
\label{xvi}
\end{equation}
Furthermore, it is useful to write the propagated state, which in this case is given by
\begin{equation}
\rho(x) = \left(\begin{array}{ c c }
\rho_{0}(x)+\rho_{3}(x) & \rho_{1}(x)- i \rho_{2}(x)\\
\rho_{1}(x)+ i \rho_{2}(x) & \rho_{0}(x)-\rho_{3}(x)\, \end{array} \right)\,.
\label{xvii}
\end{equation}
From Eq. (\ref{xiii}), one can see that, for an initial flavor state $\nu_{\alpha}$, the propagated state in the mass basis is written as
\begin{equation}
\rho(x)=\left(\begin{array}{c c}
\frac{1}{2}+\frac{1}{2}e^{-\gamma_{3} x}\cos 2\theta & \frac{1}{2}e^{-(\gamma_{1}-i\Delta)x}\sin 2\theta \\
\frac{1}{2}e^{-(\gamma_{1}+i\Delta)x}\sin 2\theta & \frac{1}{2}-\frac{1}{2}e^{-\gamma_{3} x}\cos 2\theta \\\end{array} \right)\,,
\label{xviii}
\end{equation}
where it is possible to identify two unusual behaviors. The off-diagonal entries are called coherence elements, and they carry a damping term that eliminates the quantum coherence during the propagation. This is the exact definition of the decoherence effect, and we can clearly see that such an effect is associated with the parameter $\gamma_1$. The diagonal elements in Eq.~(\ref{xviii}) are known as population elements, and they are related to the quantum probabilities of obtaining the eigenvalue $E_{1}$ or $E_{2}$ of the observable $H_{S}$.
In the absence of dissipative effects, the observable is diagonal in the mass basis and the diagonal elements of the state are independent of the distance, but in the state of Eq.~(\ref{xviii}) the population elements change during the propagation. This dissipative effect implies that neutrinos may change their flavor without using the oscillation mechanism. As the asymptotic state is a complete mixture, the effect of $\gamma_{3}$ in the diagonal elements is called the relaxation effect.
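A direct consequence of Eq.~(\ref{xviii}) is the loss of purity: using the parametrization of Eq.~(\ref{xvii}), one finds
\begin{equation}
\mathrm{Tr}\left[\rho^{2}(x)\right]=\frac{1}{2}\left(1+e^{-2\gamma_{3}x}\cos^{2}2\theta+e^{-2\gamma_{1}x}\sin^{2}2\theta\right)\,,
\end{equation}
which decreases from $1$ at $x=0$ to $1/2$, the value of the maximally mixed state.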
The flavor oscillation probabilities can be obtained from Eq.~(\ref{vii}), and the $\rho^{f}_{11}$ element is the survival probability, which is written as
\begin{equation}
P_{\nu_{\alpha}\rightarrow\nu_{\alpha}} = \frac{1}{2}\bigg[1+e^{-\gamma_{3} x}\cos^{2}2\theta+ e^{-\gamma_{1} x}\sin^{2}2\theta\cos\left(\Delta x\right)\bigg]\,.
\label{xix}
\end{equation}
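Explicitly, rotating the state of Eq.~(\ref{xviii}) to the flavor basis with the mixing matrix gives
\begin{equation}
\rho^{f}_{11}(x)=\rho_{0}(x)+\rho_{3}(x)\cos 2\theta+\rho_{1}(x)\sin 2\theta\,,
\end{equation}
and inserting $\rho_{0}=1/2$, $\rho_{3}(x)=\frac{1}{2}e^{-\gamma_{3}x}\cos 2\theta$ and $\rho_{1}(x)=\frac{1}{2}e^{-\gamma_{1}x}\sin 2\theta\cos(\Delta x)$ reproduces Eq.~(\ref{xix}).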
In Eq. (\ref{xix}), the asymptotic probability, $x\rightarrow \infty$, goes to maximal statistical mixing, $ P_{\nu_{\alpha}\rightarrow\nu_{\alpha}}=1/2$, and this happens for any mixing angle. Thus, in this approach, the neutrino may change its flavor without resorting to the oscillatory mechanism~\cite{ben, workneut}. In fact, while the decoherence effect, through the $\gamma_{1}$ parameter, eliminates the oscillation term, the relaxation effect, through the $\gamma_{3}$ parameter, eliminates the term in the probability that depends only on the mixing.
When the propagation is performed with the dissipator given in Eq.~(\ref{xi}), we obtain some important differences. In this case, $\mathbbm{H}$ has only two non-trivial eigenvalues, equal to the $\lambda_{1}$ and $\lambda_{2}$ derived before. The matrix $\mathbbm{M}(x)$ then changes to
\begin{equation}
\mathbbm{M}(x) = \mathbbm{A} .diag \{1,e^{\lambda_{1} x},e^{\lambda_{2} x}\}. \mathbbm{A}^{\dag}\,,
\label{xx}
\end{equation}
and consequently, the state is written as
\begin{equation}
\rho(x)=\left(\begin{array}{c c}
\frac{1}{2}+\frac{1}{2}\cos 2\theta & \frac{1}{2}e^{-(\gamma_{1}-i\Delta)x}\sin 2\theta \\
\frac{1}{2}e^{-(\gamma_{1}+i\Delta)x}\sin 2\theta & \frac{1}{2}-\frac{1}{2}\cos 2\theta \\\end{array} \right)\,.
\label{xxi}
\end{equation}
In the state above, only the decoherence effect is present, and only the coherence elements are eliminated during the propagation. In this case, the survival oscillation probability is written as
\begin{equation}
P_{\nu_{\alpha}\rightarrow\nu_{\alpha}} = 1 - \frac{1}{2}\sin^{2}(2\theta)\Big[1-e^{-\gamma_{1} x}\cos(\Delta x)\Big]\,.
\label{xxii}
\end{equation}
This probability was discussed in Refs.~\cite{ben,workneut,lis} only in the vacuum approach, but we are showing that, when the open quantum system approach is applied carefully, a similar probability is obtained for the propagation in matter as well.
Thus, when there is energy conservation in the subsystem of interest, $[H_{S},V_{k}]=0$, the asymptotic probability, $x\rightarrow \infty$, still depends on the mixing angle as
\begin{equation}
P_{\nu_{\alpha}\rightarrow\nu_{\alpha}} = 1 - \frac{1}{2}\sin^{2}(2\theta)\,.
\label{xxiiextra}
\end{equation}
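As a minimal numerical check (a sketch with illustrative parameter values only, not a fit to data), the asymptotic behaviors of Eqs.~(\ref{xix}) and (\ref{xxii}) can be evaluated directly:
\begin{verbatim}
import numpy as np

theta = 0.59             # mixing angle in rad (illustrative)
g1, g3 = 1e-23, 1e-24    # decoherence / relaxation parameters in GeV (illustrative)
Delta = 9.4e-21          # Delta m^2 / 2E in GeV (illustrative)

def P_relax(x):
    """Survival probability of Eq. (xix): decoherence plus relaxation."""
    return 0.5 * (1 + np.exp(-g3 * x) * np.cos(2 * theta)**2
                  + np.exp(-g1 * x) * np.sin(2 * theta)**2 * np.cos(Delta * x))

def P_deco(x):
    """Survival probability of Eq. (xxii): pure decoherence."""
    return 1 - 0.5 * np.sin(2 * theta)**2 * (1 - np.exp(-g1 * x) * np.cos(Delta * x))

x_far = 1e26                              # solar-like baseline in GeV^{-1}
print(P_relax(x_far))                     # -> 0.5, independent of theta
print(P_deco(x_far))                      # -> 1 - 0.5*sin^2(2*theta) ~ 0.57
\end{verbatim}
The first value illustrates the complete flavor mixing induced by relaxation, while the second retains the mixing-angle dependence of Eq.~(\ref{xxiiextra}).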
In this approach, the dynamics is generated through Eq.~(\ref{ii}), and it depends on how the subsystem of interest interacts with the environment, following the constraint $[H_{S},V_{k}]=0$ or $[H_{S},V_{k}]\neq0$.
From a mathematical point of view, when we consider neutrinos as an open quantum system and take into account the considerations explored in this section, one can see that there are no significant differences in deriving the quantum evolution in vacuum or in constant matter. This result is trivial in the closed approach, but it is not trivial in the open approach. In fact, the similarity between these two propagation conditions in the open approach holds only when the reservoir interacts with the subsystem of interest $S$ defined here through the mass states in vacuum propagation, or through the effective mass states in matter propagation. Otherwise, there will be no similarity between the vacuum and matter propagation \cite{ben1, fo}.
\section{Neutrinos in Non-Uniform Matter}
In many situations the neutrino propagation occurs where the matter density is not constant. We are going to assume neutrino evolution in non-constant matter only in situations where the adiabatic limit is valid~\cite{kim,moh}. Thus, the results obtained in this situation are similar to those obtained for propagation in constant matter. The main focus now is to understand which dissipative effects act on neutrinos when the source is far away from the Earth. Solar neutrinos are the prime example that we want to study.
Using the same point of view as in the previous section, we can write a diagonal Hamiltonian in the effective mass basis. We start with the quantum dissipator written in Eq.~(\ref{xii}). Thus, we have to solve the same evolution equation given in Eq.~(\ref{xiii}), but on the right-hand side the elements of the first matrix are now distance dependent as well. Eq.~(\ref{xiv}) is thus written as
\begin{equation}
\dot{R}(x)= \mathbbm H(x) R(x) \,,
\label{xxiii}
\end{equation}
and it has a solution similar to Eq. (\ref{xv}), but with $\mathbbm{M}(x)$ proportional to
\begin{equation}
\mathbbm M(x)\propto diag\{e^{\int^{R_{\odot}}_{r}\lambda_{0}(x)dx},e^{\int^{R_{\odot}}_{r}\lambda_{1}(x)dx},e^{\int^{R_{\odot}}_{r}\lambda_{2}(x)dx} \} \,,
\label{xxiv}
\end{equation}
where $r$ and $R_{\odot}$ are the creation and detection points, respectively. As $\mathbbm{A}$ is defined in the same way as in the previous section, the $\lambda_{i}(x)$ have the same form as the $\lambda_{i}$ defined after Eq. (\ref{xiv}), but here $\Delta \rightarrow \tilde \Delta (x)$ and $\gamma_{i} \rightarrow \tilde{\gamma}_{i}$ may depend on the distance. Even for $\lambda_{0}$ the distance dependence may exist~\cite{fo}.
Notice that energy conservation is expressed by $[H_{S}, V_{k}]=0$, but in general the $H_{S}$ of vacuum propagation is different from the $H_{S}$ of matter propagation. Consequently, when one imposes energy conservation in matter propagation, there is no energy conservation in vacuum propagation, and vice-versa. On the other hand, it is possible to obtain a model where energy conservation is always kept, even when $H_{S}$ in vacuum and in matter propagation are different. In this case, the dissipative quantum operator acquires a distance dependence, such that $V_{k}$ changes to $V_{k}(x)$, and can be written as
\begin{equation}
V_{k}=V_{k}(x)=\left(\begin{array}{c c}
2\sqrt{\gamma_{1}}\cos[\Theta(x)] & \sqrt{\gamma_{1}}\sin[\Theta(x)] \\
\sqrt{\gamma_{1}}\sin[\Theta(x)]& 0 \\\end{array} \right)\,,
\label{pi}
\end{equation}
where $\Theta(x)=2(\theta-\tilde{\theta}(x))$ and the effective angle is given by
\begin{align}
\tilde{\theta}(x)&=\frac{1}{2}\arcsin \left(\sqrt{\frac{\Delta^{2} \sin^{2}[2\theta]}{(\Delta \cos[2\theta]-A(x))^{2}+\Delta^{2} \sin^{2}[2\theta]}}\right) \,.
\label{pii}
\end{align}
In the vacuum case the off-diagonal elements are null and the element $\{V_{k}(x)\}_{11}=2\sqrt{\gamma_{1}}$, such that the quantum dissipator in Eq. (\ref{xi}) is not changed. Assuming the adiabatic limit or a constant matter density, we can rewrite the evolution from the mass basis into the effective mass basis, in which the matter potential is taken into account. In this case, the dissipation operator $V_{k}(x)$ in vacuum changes to $\tilde{V}_{k}(x)=\tilde{U}^{\dag} U V_{k} U^{\dag}\tilde{U}$ in matter propagation, such that it is written as
\begin{equation}
\tilde{V}_{k}(x)=\left(\begin{array}{c c}
2\sqrt{\gamma_{1}}\cos^{2}[\Theta(x)] &0 \\
0& -2\sqrt{\gamma_{1}}\sin^{2}[\Theta(x)]\\\end{array} \right)\,,
\label{piii}
\end{equation}
and the dissipator in Eq. (\ref{xi}) continues unchanged as well.
Thus, disregarding models where the operator differs from Eq. (\ref{pi}) by a unitary matrix, this is the unique model where the energy conservation constraints in matter propagation and in vacuum propagation are satisfied simultaneously. This occurs because energy conservation in matter propagation is given by $[\tilde{H}_{S}(x), \tilde{V}_{k}(x)]=0$, and this result is valid for any choice of the matter potential.
So, as mentioned before, if we want the evolution to be purely decoherent, i.e., if energy conservation, $[\tilde{H}_{S}(x), \tilde{V}_{k}(x)]=0$, is to be satisfied during the propagation even when the matter density varies, we must have a dissipation operator like the one in Eq. (\ref{pi}), because it takes into account how much the matter effect can change it.
Returning to the evolution given by Eq. (\ref{xxiii}), the evolved state is written as
\begin{equation}
\tilde{\rho}_{m}(x)=\left(\begin{array}{c c}
\frac{1}{2}+\frac{1}{2}e^{-\Gamma}\cos 2\tilde{\theta} & \frac{1}{2}e^{-\Gamma_{1}}\sin 2\tilde{\theta} \\
\frac{1}{2}e^{-\Gamma^{*}_{1}}\sin 2\tilde{\theta} & \frac{1}{2}-\frac{1}{2}e^{-\Gamma}\cos 2\tilde{\theta} \\\end{array} \right)\,,
\label{xxv}
\end{equation}
where we have defined
\begin{equation}
\Gamma=\int^{R_{\odot}}_{r}\tilde{\gamma}_{3}(x)dx\,
\label{xxvi}
\end{equation}
and
\begin{equation}
\Gamma_{1}=\int^{R_{\odot}}_{r}\tilde{\gamma}_{1}(x)dx - i \int^{R_{\odot}}_{r}\tilde{\Delta}(x)dx \,,
\label{xxvii}
\end{equation}
where $\tilde{\gamma}_{1}(x)=\gamma_{1}$ if we consider the dissipation operator in Eq. (\ref{pi}).
In general, the second term in Eq.~(\ref{xxvii}) gives rise to fast oscillations in the off-diagonal elements, which are usually averaged out. Thus, the state takes the following form:
\begin{equation}
\tilde{\rho}_{m}(x)=\left(\begin{array}{c c}
\frac{1}{2}+\frac{1}{2}e^{-\Gamma}\cos 2\tilde{\theta} & 0 \\
0 & \frac{1}{2}-\frac{1}{2}e^{-\Gamma}\cos 2\tilde{\theta} \\\end{array} \right)\,,
\label{xxviii}
\end{equation}
from which we conclude that, in general, we cannot obtain information about the decoherence effect in this situation.
To obtain the usual adiabatic probability we use the fact that the effective mixing angle changes during the neutrino propagation, so the mixing angle at the detection point must be different. We define the initial mass state from Eq.~(\ref{vii}), where, at the creation point, we use the effective mixing angle written as $\tilde{\theta}$. Then, we can change the representation by applying another mixing matrix with another mixing angle. Defining the angle at the detection point as $\tilde{\theta}_{d}$, we have
\begin{equation}
\rho_{f}(x)=U_{d}\tilde{\rho}_{m}(x)U^{\dag}_{d}\,,
\label{xxix}
\end{equation}
where $U_{d}$ is the usual mixing matrix, but with mixing angle $\tilde{\theta}_{d}$. The adiabatic survival probability, $\rho^{f}_{11}(x)$, is then given by
\begin{equation}
P^{adiab.}_{\nu_{e}\rightarrow\nu{e}}=\frac{1}{2}+\frac{1}{2}e^{-\Gamma}\cos2\tilde{\theta}\cos2\tilde{\theta}_{d}\,.
\label{xxx}
\end{equation}
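Explicitly, since the coherences in Eq.~(\ref{xxviii}) vanish, only the diagonal terms survive the rotation in Eq.~(\ref{xxix}),
\begin{equation}
\rho^{f}_{11}=\cos^{2}\tilde{\theta}_{d}\,\tilde{\rho}_{11}+\sin^{2}\tilde{\theta}_{d}\,\tilde{\rho}_{22}=\frac{1}{2}+\frac{1}{2}e^{-\Gamma}\cos 2\tilde{\theta}\left(\cos^{2}\tilde{\theta}_{d}-\sin^{2}\tilde{\theta}_{d}\right)\,,
\end{equation}
which is exactly Eq.~(\ref{xxx}).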
In the survival probability above, if $\Gamma=0$, we recover the usual survival probability in the adiabatic limit \cite{kim, moh}. The dissipation operator in Eq. (\ref{pi}) is obtained when the energy constraint $[H_{S},V_{k}]=0$ is imposed, and hence only the decoherence effect, described by $\tilde{\gamma}_{1}$ through the dissipator of Eq.~(\ref{xi}), is present. However, the most general state for solar neutrinos does not keep $\tilde{\gamma}_{1}$ in its description, and we can therefore conclude that quantum decoherence cannot be limited by solar neutrinos in general. On the other hand, as only $\tilde{\gamma}_{3}$ remains in the state (\ref{xxviii}) and in the probability (\ref{xxx}), in general only the relaxation effect can be limited when one considers solar neutrinos.
Now we analyze a situation mentioned in subsection \ref{s}, which is, in fact, the same supposition that the authors of Ref. \cite{fo} used to put a limit on the decoherence effect using solar neutrinos.
We thus assume that neutrinos propagate in matter in a situation where the adiabatic limit is satisfied. As usual, the Hamiltonian is $H_{S}=H_{osc}+H_{mat}$, where $H_{osc}$ is the oscillation Hamiltonian in vacuum and $H_{mat}$ is the matter potential. In addition, we assume energy conservation under two different conditions: one in which we suppose energy conservation only for the vacuum piece, $[H_{osc},\bar{V}_{k}]=0$, and another in which we assume energy conservation only for the matter potential piece, $[H_{mat},V'_{k}]= 0$. Note that $\bar{V}_{k}$ and $V'_{k}$ follow the definition given in Eq. (\ref{viiextra}), and both are different from the $V_{k}$ that may commute with $H_{S}$.
These two situations attempt to isolate the decoherence effect. In the first one, the neutrino state defined in vacuum can be changed by the decoherence effect even while it propagates inside the Sun, for instance. In the second one, it is possible to study the decoherence effect in the solar environment, where it modifies the matter effect through a dissipative phenomenon.
As the energy conservation constraint on the subsystem of interest was assumed regardless of the medium the neutrino travels through, in both situations the quantum dissipator used in the propagation equation (\ref{v}) is given by Eq. (\ref{xi}). However, as we have mentioned, this quantum dissipator includes only the quantum decoherence effect in the propagation. So, for the quantum evolution in both situations, Eq.~(\ref{xxiii}) holds with $\mathbbm{H}$ now given by
\begin{equation}
\mathbbm{H}=\left(
\begin{array}{ccc}
-\gamma_{1} & -\Delta -A \cos 2\theta & 0 \\
\Delta +A \cos2 \theta & -\gamma_{1} & -A \sin2 \theta \\
0 & A \sin 2 \theta & 0
\end{array}
\right)\,,
\label{xxxi}
\end{equation}
where $\gamma_{1}$ comes from the $D_{mn}$ in Eq. (\ref{ix}) in both cases, and in the equation above $\mathbbm{H}$ is written in the mass basis representation.
The characteristic polynomial of the above matrix has a complicated solution, but if we consider $\gamma_{1}$ small enough to be treated as a perturbation, we obtain, in the first-order approximation, the following eigenvalues:
\begin{eqnarray}
\lambda_{0}=-\gamma_{1}\frac{A^{2}}{\Delta^{2}}\sin^{2} 2\tilde{\theta};\nonumber\\
\lambda_{1}=-\gamma_{1}+\gamma_{1}\frac{A^{2}}{\Delta^{2}}\sin^{2}2\tilde{\theta}-i\tilde{\Delta};\nonumber\\
\lambda_{2}=-\gamma_{1}+\gamma_{1}\frac{A^{2}}{\Delta^{2}}\sin^{2}2\tilde{\theta}+i\tilde{\Delta}.
\label{xxxii}
\end{eqnarray}
where $A = \sqrt{2} G_{F} n_{e}$. For the sake of simplicity, we can rewrite $\mathbbm{H}$ in the effective mass basis, such that we get
\begin{equation}
\mathbbm{H}=\left(
\begin{array}{ccc}
-\tilde{\gamma}_{1} & -\tilde{\Delta} & 0 \\
\tilde{\Delta} & -\tilde{\gamma}_{1} & 0 \\
0 & 0 & -\tilde{\gamma}_{3}
\end{array}
\right)\,,
\label{xxxiii}
\end{equation}
with $\tilde{\gamma}_{3}=\gamma_{1} A^{2}\sin^{2}2\tilde{\theta}/\Delta^{2}$ and $\tilde{\gamma}_{1}=\gamma_{1}-\tilde{\gamma}_{3}$. From the $\mathbbm{H}$ given by Eq.~(\ref{xxxiii}) we obtain the same state as in Eq.~(\ref{xxv}), where $\Gamma_{1}$ is now defined by $\tilde{\gamma}_{1}$ and $\Gamma$ by $\tilde{\gamma}_{3}$. By the same arguments as before, the $\Gamma_{1}$ term averages out and we obtain the state in Eq. (\ref{xxviii}). The interpretation is similar to the previous one: the $\Gamma_{1}$ term is not important, and only the relaxation effect, $\Gamma \propto \tilde{\gamma}_{3}$, may change the probability.
In these two situations the constraints are $[H_{S},\bar{V}_{k}]\neq 0$ and $[H_{S},V'_{k}]\neq 0$. Thus, we could expect that the results for these different constraints, $[H_{osc},\bar{V}_{k}]=0$ and $[H_{mat},V'_{k}]= 0$, are obtained by an evolution using the dissipator in Eq. (\ref{xii}), as we have seen in subsection \ref{s}. Moreover, this result shows that there is no way to separate the subsystem of interest $S$ into pieces which may or may not interact with the environment; here, since $[H_{S},\bar{V}_{k}]\neq 0$ and $[H_{S},V'_{k}]\neq 0$, the relaxation effect appears naturally.
The decoherence and relaxation effects for propagation in matter may have magnitudes different from those of the vacuum case. However, independently of whether we assume $[H_{osc},\bar{V}_{k}]=0$ or $[H_{mat},V'_{k}]= 0$, we obtain the same result for the dissipative effects. This looks like a problem, because we cannot differentiate between these dissipative models in the solar neutrino case, for example.
In particular, in the case where $[H_{osc},\bar{V}_{k}]=0$, Eq. (\ref{xxxiii}) shows that the relaxation effect is proportional to the decoherence effect for neutrinos propagating in vacuum (the same occurs for the case $[H_{mat},V'_{k}]= 0$ \cite{bur}). This was the result obtained in Ref. \cite{fo}, and thus, from this model-dependent approach, the decoherence effect in vacuum, $\gamma_{1}$, was limited by the authors of Ref. \cite{fo}. Besides, the $\bar{V}_{k}$ used there is written in our notation as
\begin{equation}
\bar{V}_{k}=\left(\begin{array}{c c}
2\sqrt{\gamma_{1}} &0 \\
0& 0\\\end{array} \right)\,.
\label{piv}
\end{equation}
which is different from the $V_{k}(x)$ given in Eq. (\ref{pi}), where the matter potential becomes important and energy conservation is always satisfied, even when the propagation is through non-constant matter.
In Ref. \cite{bur} the authors built a microscopic model for the interaction between neutrinos and the solar environment and arrived at a dynamical equation similar to Eq. (\ref{xxxii}), but there the dissipative effect appears as a consequence of this microscopic model, in which $[H_{mat},V'_{k}]= 0$ is satisfied. The dynamics obtained in Ref. \cite{fo} was also obtained by the authors of Ref. \cite{bur}; even though the purposes of the two studies differ, they of course arrive at the same probability as well.
Therefore, the result of this last example is interesting because it has a non-trivial interpretation. There is no reliable limit in the literature for the decoherence effect in the channel $\nu_{e}\rightarrow \nu_{\mu}$ obtained from a model-independent approach. There exist only limits on the relaxation and decoherence effects in the case of the particular model-dependent approach used in Ref. \cite{fo} in the two-neutrino approximation. Thus, other analyses using a general model-independent approach can be performed with neutrinos from other sources, where the constraint $[H_{S},V_{k}]=0$ can without any doubt be satisfied and the decoherence effect be limited.
\section{Comments and Conclusion}
The quantum dissipator in Eq.~(\ref{xi}) is related to decoherence effects, while the quantum dissipator in Eq.~(\ref{xii}) is related to decoherence plus relaxation effects. We explicitly relate decoherence effects to a quantum dissipator that conserves the energy of the subsystem of interest, a condition that is fulfilled if $[H_{S},V_{k}]=0$. If this condition is violated, we relate the quantum dissipator to relaxation effects as well. We introduced the unique form in which this condition is satisfied at all points of the evolution, since $H_{S}$ is the Hamiltonian that governs the evolution in the usual approach. This means that $H_{S}$ is composed of the mass and interaction Hamiltonians in matter propagation, and only of the mass Hamiltonian in the case of vacuum propagation.
We emphasized the differences and similarities between the $\mathbbm{H}$ eigenvalues obtained when we used the dissipators in Eqs.~(\ref{xi}) and~(\ref{xii}). We clearly see when the relaxation effect is present in the model and how the behavior of the states changes in the situations with and without relaxation effects. We discussed the neutrino evolution in vacuum and in matter with constant density and pointed out how these situations can receive similar treatments in the open quantum system formalism. We showed that, in general, the probabilities in vacuum and in constant matter can be written in similar ways, which is not an obvious result in this approach. It is interesting to note that, with the model developed in this article, we do not need to use any approximation method to obtain the probabilities in any of the cases. This is different from what can be found in the literature \cite{ben1,fo, bur}.
We also analyzed the situation where the matter density is not constant. We obtained the dissipation operator in Eq. (\ref{pi}) that conserves energy during the neutrino propagation through a variable matter density. We showed that the decoherence effect in our model-independent analysis cannot be limited in situations where experiments can no longer access the oscillation term in the probabilities, as is the case when the source is very far away from the detection point. On the other hand, the relaxation effect may still be tested and limited in such situations. Nevertheless, as was done in Ref. \cite{fo} through a model-dependent approach, it is possible to limit the decoherence effect in this case, because there the decoherence effect is connected in some way with the relaxation effect. However, as we have pointed out, the relaxation and decoherence effects are different phenomena, and each imprints a different behavior on the neutrinos.
We identified some ambiguities in the definition of decoherence effects present in the literature~\cite{fo}, where there is no clear distinction between decoherence and relaxation effects. In our understanding, the term {\it decoherence} is often used to describe a combined effect of decoherence and relaxation when the neutrino evolves in a medium with variable density. We described what a dissipative model with only quantum decoherence effects would look like for propagation in matter with non-constant density. From the dissipative operator obtained in Eq. (\ref{pi}), it was possible to see why the decoherence effect was limited in Ref. \cite{fo} and mentioned in Ref. \cite{bur}. In fact, in those cases there could not exist a decoherence effect only, but another effect related in some way to the decoherence effect, because the dissipative operator used by these references, Eq. (\ref{piv}), violates the condition $[H_{S},V_{k}]=0$ when neutrinos propagate in constant or non-constant matter.
Comparing our approach with the ones found in the literature, it is possible to conclude that it avoids all ambiguities about which kind of dissipative effect is acting on the neutrinos. As stated before, the limit on the decoherence effect should be obtained through experiments that access the oscillation pattern in the flavor neutrino probabilities, like KamLAND~\cite{balieiro}, for instance. The result of Ref.~\cite{fo} can be interpreted as an upper limit on the decoherence effect which comes, in fact, from the restriction on the relaxation effect, since both effects are connected in that model-dependent analysis. Our model-independent approach is able to put bounds on all dissipative effects in a direct way.
\section{Acknowledgments}
We would like to thank CNPq and FAPESP for financial support.
R.L.N.O. is grateful to L. Ostrar. and J.A.B. Coelho for instructive discussions, and thanks the São Paulo Research Foundation (FAPESP) for the support of grants 2012/00857-6 and 2013/11651-2.
\section{Introduction}
\label{sec:intro}
Decarbonising the power sector plays a key role in Europe’s ambition to be the first climate-neutral continent by 2050~\cite{EU2019, EU2020}. For the required reduction of carbon dioxide emissions in the electricity system, it is crucial to make detailed and transparent emission data available to consumers, regulators, and other stakeholders~\cite{Hamels2021}. Regulators, for instance, need to keep track of the average emission intensity of the electricity mix in their domain to monitor the fulfillment of given climate targets. Meanwhile, dynamic grid electricity emission measures with a high temporal resolution provide a signal for smart energy systems with storage or demand side flexibility. Such systems, ranging from data centers or urban districts to charging networks for electric vehicles, rely on dynamic grid electricity emission signals to optimise their operational schedule aiming for a minimum carbon footprint~\cite{Clauss2019,Alavijeh2020,Google2021,RMI2021}.
Since actual comprehensive emission measurements are not yet available~\cite{climatetrace}, the development of emissions in the power sector is typically tracked through the calculation of total emissions based on emission factors (EF). The EF is the quotient that relates the amount of a pollutant (e.g., carbon dioxide emissions) released into the atmosphere to an activity (e.g., production of one MWh of electricity) associated with the release of that pollutant~\cite{EEA2021}.
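Schematically,
\begin{equation}
\mathrm{EF}=\frac{\text{pollutant released}}{\text{activity}}\,,
\end{equation}
with, for the power sector, units of e.g. tCO$_2$ per MWh of electricity produced.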
One of the most prominent applications is the determination of emissions for reporting under the United Nations Framework Convention on Climate Change (UNFCCC) \cite{1992UNITEDNATIONS}. Under this convention, all participating countries must prepare and submit a National Inventory Report for their greenhouse gas emissions, based on one of three methodological approaches.
Tier~1 uses default EFs provided in the IPCC guidelines. The method can be combined with spatially explicit activity data from remote sensing, such as satellite data. Tier~2 generally uses the same methodological approach as tier~1, but applies EFs that are country specific, i.e., more appropriate for the climate regions and systems used in that country. At tier~3, more complex methods are used, including measurements and analytical models that address national circumstances~\cite{Eggleston20062006Inventories}.
In life-cycle assessments (LCA), similar approaches are used for assessing the carbon intensity of energy and material inputs to a product, process or service.
In LCA, it is common to use an annual mean value that refers to the average emissions caused by the production of one MWh of final energy~\cite{Turconi2013}. In many studies, this emission value is denoted as the grid-based carbon intensity (CI) of a given country. In most cases of grid-based CIs, the annual emissions are divided by the annual electricity production, which leads to a static average emission value per year. This approach to evaluating emissions from different sectors is similar to the tier~2 approach of the IPCC guidelines~\cite{Astudillo2017LifeOpportunities,Itten2012LifeGrid}.
Several studies have shown that there are significant variations in the electricity mix from month to month, and even from hour to hour within a day~\cite{Tranberg2019Real-timeMarkets}. These variations are caused by fluctuations in the contribution of both conventional and renewable power generation. Consequently, the emissions associated with the use of electricity also vary across time~\cite{Spork2015IncreasingFactors,Noussan2018PerformanceItaly,Kopsakangas-Savolainen2017Hourly-basedEmissions,Vuarnoz2018TemporalGrid,Marrasso2019ElectricBasis}. This shows that reporting schemes based on aggregated emissions and power generation are not suitable to provide emission intensity signals on hourly time scales. However, this temporally resolved information is crucial if the operation of flexible systems, ranging from charging stations for electric vehicles and heat pumps to data centers, is to be optimized with respect to their associated carbon dioxide emissions~\cite{RMI2021}.
The provision of a widely applicable and transparent carbon intensity signal is facilitated by the incorporation of publicly available, consistent and consolidated data, as well as by the use of a transparent and easy-to-use method for determining CO$_2$ intensities~\cite{Hamels2021}. Currently, regulations as well as scientific research in the area of energy system analysis often use standard EFs that are neither technology nor country specific. These standard EFs do not sufficiently fulfill either of the criteria of applicability and transparency. Often, the data sources used are not openly available, and the underlying calculations are hard to replicate due to insufficient documentation. In addition, the methods used vary from country to country, making it difficult to compare the resulting EFs or CIs.
In this study, we introduce a consistent and flexible framework to calculate emission factors based on per unit generation and emission data. Through this bottom-up approach, emissions can be assessed from unit up to country level on different time scales, depending on the spatial and temporal resolution of the given generation data. The derived values for the average carbon intensity of electricity generation for European countries are compared to results calculated through a top-down approach based on national statistics~\cite{EEA2021}. The accuracy of the methods as well as the need of further data consolidation is discussed. The key novelty of this study is the development and assessment of such a modular bottom-up approach for the calculation of emission factors based on publicly available data only. Through the provisioning of all code and secondary data, the calculation framework is transparent and accessible for modifications and extensions, providing a solid foundation for future studies.
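Schematically, and with notation introduced here only for illustration ($u$ runs over generation units in region $c$, $P_{u}(t)$ is the per-unit generation in hour $t$, and $\mathrm{EF}_{u}$ is the unit-level emission factor), such a bottom-up carbon intensity reads
\begin{equation}
\mathrm{CI}_{c}(t)=\frac{\sum_{u\in c}\mathrm{EF}_{u}\,P_{u}(t)}{\sum_{u\in c}P_{u}(t)}\,,
\end{equation}
and aggregating the hourly values over a year recovers a static average comparable to the top-down value.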
The remainder of this article is structured as follows. Section~\ref{sec:back_and_lit} provides an overview of different calculation schemes and applications of emission factors. In Section~\ref{sec:data}, the data sources underlying this study are reviewed, followed by the presentation of the bottom-up and top-down method in Section~\ref{sec:method}. Results are presented in Section~\ref{sec:application}, with a subsequent discussion in Section~\ref{sec:discussion}. Section~\ref{sec:conclusion} concludes this article.
\section{Research background}
\label{sec:back_and_lit}
A brief literature overview of the calculation and application of emission factors is given in the following. This is complemented by a summary in Table~\ref{tab:literature_review} of the different methods used to model EFs and the resulting CIs, including use cases from the literature.
\begin{table*}[ht]
\begin{tabular}{@{}p{3.3cm}p{3.6cm}p{3cm}llp{3.5cm}@{}}
\toprule
Source & EF calculation method & EF properties & \multicolumn{2}{c}{CI resolution} & Application \\
& & & Temporal & Spatial & \\ \midrule
\multicolumn{2}{l}{Calculation-based EF} & & & & \\ \midrule
Braeuer et al. \cite{Braeuer2020ComparingGermany} & Empirical emission \& generation data & Tech. specific, direct emissions & hourly & Germany & Battery storage dispatch optimization \\
Staffell et al. \cite{Staffell2017MeasuringElectricity} & Carbon content of fuel & Tech. specific, direct emissions & yearly & Great Britain & Decarbonisation progress measurement \\
Hein et al. \cite{Hein2020Agorameter-Dokumentation} & Carbon content of fuel & Tech. specific, direct emissions & yearly & Germany & Emission visualisation \\ \midrule
\multicolumn{2}{l}{Literature-based EF} & & & & \\ \midrule
Spork et al. \cite{Spork2015IncreasingFactors} & Literature-based (IEA) & Tech. specific, direct emissions & hourly & Spain & Company dispatch optimization \\
Marrasso et al. \cite{Marrasso2019ElectricBasis} & Literature-based (ISPRA) & Tech. specific, direct emissions & hourly & Italy & Decarbonisation progress measurement \\
Noussan et al. \cite{Noussan2018PerformanceItaly} & Literature-based (ISPRA) & Country specific, direct emissions & hourly & Italy & Decarbonisation progress measurement \\
Dixit et al. \cite{Dixit2014CalculatingSectors} & Literature-based & Country specific, direct emissions & yearly & United States & Decarbonisation progress measurement \\
Vuarnoz et al. \cite{Vuarnoz2018TemporalGrid} & Literature-based (LCA) & Tech. specific, direct and indirect emissions & hourly & Switzerland & LCA application \\
Moro and Lonza \cite{Moro2018ElectricityVehicles} & Literature-based (LCA) & Tech. specific, direct and indirect emissions & yearly & Europe & Electric vehicle charging \\
Kopsakangas-Savolainen et al. \cite{Kopsakangas-Savolainen2017Hourly-basedEmissions} & Literature-based & Tech. specific, direct emissions & hourly & Finland & Household consumption optimization \\
Arciniegas and Hittinger \cite{Arciniegas2018TradeoffsOperation} & Literature-based & Country specific, direct emissions & hourly & United States & Battery storage dispatch optimization \\
Ang and Su \cite{Ang2016CarbonAnalysis} & Literature-based & Country specific, direct emissions & yearly & World & Decarbonisation progress measurement \\
Tranberg et al. \cite{Tranberg2019Real-timeMarkets} & Literature-based (LCA) & Tech. specific, direct and indirect emissions & yearly & Europe & Electricity market carbon accounting \\
\bottomrule
\end{tabular}
\caption{Summary of the methods used for calculating EFs and the use cases presented in the literature.}
\label{tab:literature_review}
\end{table*}
\subsection{Calculation of emission factors}
Most approaches to calculating emission factors follow either a process-based (LCA) scheme or a balancing method in which total emissions and total generated electricity are used.
The process-based LCA method derives the resulting emissions from a systematic analysis of the underlying processes and infrastructure. In LCA studies, total emissions are usually classified as either direct emissions (e.g., from the combustion of the fuel) or indirect emissions (e.g., related to upstream fuel supply, resources, construction of the power plant, etc.)~\cite{Turconi2013LifeLimitations}. Many LCA studies can be found for different power generation technologies. In~\cite{Odeh2008LifePlants}, for instance, the authors derive an EF of 882\,gCO$_2$/kWh for coal-fired power plants associated with the direct emissions resulting from the fuel combustion.
In~\cite{Turconi2013LifeLimitations}, the authors examine 167 LCA studies of electricity generation for various technologies. One key finding of the analysis is the broad range of literature values for emission factors for different generation technologies, showing a strong dependence on parameters such as location, efficiency, fuel quality, or type of use. With respect to direct emissions, the emission factors range from 660 to 1\,050\,gCO$_2$/kWh for hard coal-fired power plants, whereas for lignite power plants, values between 800 and 1\,300\,gCO$_2$/kWh can be found. For gas power plants, the emission factors vary between 380 and 1\,000\,gCO$_2$/kWh.
The balancing approach aims to determine the emission factor via substance and energy flows. The EFs for energy systems are often based on data for the power generation as well as the amount of fuel used to generate the respective power. Furthermore, a fuel-specific emission factor is required; these factors can be obtained from chemical fuel analyses. In this method, the fuel quantity is multiplied by the fuel-specific EF and divided by the respective energy yield. The resulting value represents the technology-specific EF. For instance, in~\cite{Hein2020Agorameter-Dokumentation}, data from the Federal Environment Agency of Germany~\cite{Umweltbundesamt2021Entwicklung2020} is used to determine a technology-specific EF via substance flows. These factors are combined with hourly generation data to provide a CO$_2$ signal of the German electricity mix with hourly resolution. However, since the input data is often not available in a technology-specific manner, in many cases only country-specific values can be determined.
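As a minimal illustration of this balancing calculation, consider the following sketch; all input values are hypothetical and only serve to show the arithmetic:
\begin{verbatim}
# Minimal sketch of the balancing approach (all input values hypothetical).
fuel_input_mwh_th = 2_500_000    # thermal energy of the fuel used (MWh_th)
fuel_ef_g_per_kwh_th = 340       # fuel-specific EF from chemical analysis
electricity_mwh = 1_000_000      # net electricity generated (MWh)

total_co2_g = fuel_input_mwh_th * 1_000 * fuel_ef_g_per_kwh_th
ef_tech = total_co2_g / (electricity_mwh * 1_000)   # gCO2/kWh_el
print(f"technology-specific EF: {ef_tech:.0f} gCO2/kWh")   # -> 850
\end{verbatim}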
In~\cite{Braeuer2020ComparingGermany}, two approaches for calculating a dynamic grid emission factor are compared. In contrast to other studies, which generally rely on reported annual power generation and emissions, the authors pair plant emission data from the European Union Emissions Trading System (EU~ETS) with generation per unit data from \mbox{ENTSO-E} to calculate power plant specific emission factors. However, the study is limited to Germany, and the resulting EFs are not evaluated for consistency and verification based on further data sources.
\subsection{Application of emission factors}
Technology-specific EFs are widely used in the literature for various applications. In~\cite{Marrasso2019ElectricBasis}, Marrasso et al. present performance indicators for country-wide power systems that are based on annual technology-specific EFs for the Italian power system. In contrast, Noussan et al.~\cite{Noussan2018PerformanceItaly} calculate similar performance indicators for the Italian power system, but use yearly aggregated country-level EFs. In both studies, emission factors are taken from the literature, and hourly CO$_2$ emission signals are derived.
In~\cite{Vuarnoz2018TemporalGrid}, EFs per technology are used to calculate the hourly greenhouse gas emissions of the national electricity supply mix in Switzerland and neighboring countries. The applied per technology EFs are based on the life-cycle inventory data of the ecoinvent 2.2 database and include transport as well as distribution losses.
In~\cite{Dixit2014CalculatingSectors}, the authors review current methods for calculating the primary energy use and the carbon emissions associated with electricity consumption. For calculating the carbon emissions, they use different technology-specific emission factors provided by the U.S. Energy Information Administration~\cite{USEIA}.
The “electricityMap” project displays hourly emission factors associated with electricity generation for various countries~\cite{electricitymap}. The calculation for the European region is based on generation per production type data from various sources, predominantly \mbox{ENTSO-E}. These generation time series are multiplied by carbon emission intensities mostly taken from the IPCC (2014) Fifth Assessment Report~\cite{electricitymapgithub,IPCC2014}. Also, a consumption-based carbon intensity of electricity is derived, which makes use of a tracing approach to take into account imports through the power grid~\cite{Tranberg2019Real-timeMarkets,deChalendar2019,Schaefer2020}.
\section{Used data and preparation}
\label{sec:data}
In general, data on power generation has to be associated with data representing the corresponding emissions in order to calculate carbon intensities and emission factors. For the different methods presented here, generation as well as emission data sets on different scales from multiple sources are used. On the generation side, these are production time series for individual generation units (Section~\ref{sec:entsoe_unit}), production time series per production type and country (Section~\ref{sec:entsoe_prod}), and consolidated aggregated yearly production per type and country (Section~\ref{sec:entsoe_sheet}), all from \mbox{ENTSO-E}, as well as reported energy balances per country from Eurostat (Section~\ref{sec:balance}). The included emission data sets comprise yearly emissions for individual power plants from the EU~ETS scheme (Section~\ref{sec:euets}) and reported annual emissions for individual countries as part of the UNFCCC framework (Section~\ref{sec:unfccc}). Table~\ref{tab:used_data} gives an overview of the used data sets and their properties.
It should be emphasized that both generation and emission data sets are not consistent across different sources and across different scales. For instance, aggregated per unit generation time series do not match per production type time series; aggregated production type time series from \mbox{ENTSO-E} do not match the reported consolidated annual values in the \mbox{ENTSO-E} fact sheet, which in turn do not accord with the energy balances published by Eurostat. Analogously, annually reported emissions per sector and country differ from aggregated reported emissions per power plant. All these inconsistencies originate from different reporting schemes, coverage and definitions, misallocations, or gaps and errors in the data. Providing consistent open data sets for generation and emissions on different temporal and spatial scales, based on the sources reviewed below, is a much-needed endeavor, but beyond the scope of this study. Consequently, in our approach we aim to incorporate different data sets in separate parts of the method to reduce uncertainties originating from the inconsistencies outlined above. In the following, the data sets applied here and the data processing are described in further detail.
\begin{table*}[]
\begin{tabular}{@{}p{3.5cm}llp{1.3cm}p{1.9cm}p{4.3cm}l}
\toprule
Name & Type & Unit & \multicolumn{2}{c}{Resolution} & Description & Reference \\
& & & Temporal & Spatial & & \\\midrule
\mbox{ENTSO-E} generation per block-unit & Generation & MW & 15\,min to hourly & Power plant unit & Net electricity production time series for large power plants & \cite{2020ENTSO-EUnit}\\
\mbox{ENTSO-E} generation per type & Generation & MW & 15\,min to hourly & Country & Net electricity production time series & \cite{2020ENTSO-EType} \\
Eurostat energy balance & Generation & ktoe & yearly & Country & Gross electricity production and associated energy input & \cite{EnergyEurostat} \\
EU ETS Data (EUTL) & Emissions & t CO$_2$ & yearly & Power plant & Reported emissions in the EU ETS mechanism & \cite{EUROPALog}\\
UNFCCC & Emissions & t CO$_2$ & yearly & Country & Reported emissions for different sectors in all countries to the UNFCCC & \cite{NationalAgency} \\ \bottomrule
\end{tabular}
\caption{Overview of the data sets used and their properties}
\label{tab:used_data}
\end{table*}
\subsection{ENTSO-E per generation unit data}
\label{sec:entsoe_unit}
The data set ``Actual Generation per Generation Unit'' provided by~\mbox{ENTSO-E} contains dispatch time series for large power plant units located in the \mbox{ENTSO-E} area~\cite{2020ENTSO-EUnit}. The generation data is published five days after the operational period. Since this data set was only established in 2015, no critical review of it is available in the literature yet. The time series have been corrected for obviously erroneous entries and duplicates, but we did not fill gaps due to the heterogeneous and often irregular nature of the individual dispatch time series. In a final step, all generation time series were converted to the same temporal resolution of one hour.
\subsection{ENTSO-E Statistical Factsheets}
\label{sec:entsoe_sheet}
The annually published ``Statistical Factsheet'' from~\mbox{ENTSO-E} contains, among other things, yearly aggregated values for electricity load and generation for European countries~\cite{ENTSO-E2018}. The fact sheet states that the data has been consolidated by taking into account national statistical resources, but no further details about the correction process are given.
\subsection{\mbox{ENTSO-E} per production type data}
\label{sec:entsoe_prod}
The data set ``Actual Generation per Production Type'' provided by \mbox{ENTSO-E} contains power generation time series for all countries in the \mbox{ENTSO-E} area for different production types~\cite{2020ENTSO-EType}. The temporal resolution of the data ranges, depending on the country, from 15 minutes to one hour. The completeness and consistency of the data varies across countries as well as across generation types~\cite{Hirth2018}. For instance, for Germany the aggregated generation time series from \mbox{ENTSO-E} for gas only covers around 50\,\% of the corresponding generation reported by the German Working Group on Energy Balances~\cite{AGEnergiebilanzene.V.2020Stromerzeugung2020}. We have cleaned the data and checked for gaps and duplicates. Gaps of up to two hours were filled by linear interpolation, and duplicates were removed. In a final step, all generation time series were converted to the same temporal resolution of one hour. Although \mbox{ENTSO-E} classifies more than 20 different generation types, we have reduced the technologies to twelve types for reasons of applicability. The corresponding assignment of the \mbox{ENTSO-E} generation technologies to the selected categories is shown in Table~\ref{tab:technologies}; a minimal processing sketch is given after the table.
\begin{table}[h]
\begin{tabular}{p{2.2cm}p{5.5cm}}
\toprule
Technology & Subsumed ENTSO-E technologies \\
\midrule
hard\_coal & Fossil hard coal \\
lignite & Fossil brown coal/lignite \\
gas & Fossil gas \\
other\_fossil & Fossil coal-derived gas, fossil peat, other, fossil oil, fossil oil shale \\
nuclear & Nuclear \\
biomass & Biomass \\
waste & Waste \\
other\_renewable & Geothermal, marine, other renewable \\
hydro & Hydro pumped storage, hydro run-of-river and poundage, hydro water reservoir \\
solar & Solar \\
wind\_onshore & Wind onshore \\
wind\_offshore & Wind offshore \\ \bottomrule
\end{tabular}
\caption{Mapping for generation technologies from \mbox{ENTSO-E} to the classification used in this study.}
\label{tab:technologies}
\end{table}
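The following minimal sketch illustrates these preparation steps in Python with pandas; the column names and the exact \mbox{ENTSO-E} type labels are illustrative and do not reproduce the published code verbatim:
\begin{verbatim}
import pandas as pd

# Illustrative sketch of the time series preparation: drop duplicates,
# unify the resolution to one hour, fill short gaps linearly, and map
# ENTSO-E production types onto the categories of the mapping table.
TECH_MAP = {
    "Fossil Hard coal": "hard_coal",
    "Fossil Brown coal/Lignite": "lignite",
    "Fossil Gas": "gas",
    "Fossil Oil": "other_fossil",
    # ... remaining assignments as listed in the mapping table
}

def prepare_series(df: pd.DataFrame) -> pd.DataFrame:
    df = df[~df.index.duplicated(keep="first")].sort_index()
    df = df.resample("1h").mean()                  # 15/60 min -> hourly
    df = df.interpolate(method="linear", limit=2)  # fill gaps <= 2 hours
    df = df.rename(columns=TECH_MAP)
    return df.T.groupby(level=0).sum().T           # aggregate categories
\end{verbatim}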
\subsection{Eurostat energy balances}
\label{sec:balance}
The energy balances published by Eurostat provide an overview of energy products and their flow in the economy~\cite{EnergyEurostat}. This accounting framework for energy products allows studying the total amount of energy extracted from the environment, traded, transformed and used by the different European countries. The balance sheet also represents the relative contribution of each energy carrier (fuel, product). For the method presented in this study, we incorporate data for the energy input quantities used for power generation, as well as the resulting gross electricity generation. In contrast to the data provided by \mbox{ENTSO-E}, Eurostat subdivides the energy transformation input into different energy carriers. For example, \mbox{ENTSO-E} distinguishes between lignite and hard coal, whereas Eurostat lists nine different coal types (anthracite, coking coal, other bituminous coal, sub-bituminous coal, lignite, coke oven coke, gas coke, coal tar, brown coal briquettes) in their energy balance. The resulting electricity output is given by Eurostat as the gross electricity production, whereas \mbox{ENTSO-E} reports the net production, i.e., subtracting the power plant's own consumption.
\subsection{EU emissions trading system}
\label{sec:euets}
For determining emissions at the power plant level, we use data from the EU~ETS~\cite{EUAction}. For the electricity sector, the EU~ETS register records all emissions from the listed individual power plants. The central information is represented by the verified emissions, which refer to the number of CO$_2$ certificates (amount of emitted CO$_2$) needed by an installation during the review period. For our method, the so-called free allocations (certificates distributed to installations free of charge) are also taken into account. Free certificates are granted to installations when they provide products that are transferred to a sector that is not covered by the EU~ETS system, such as heat for private households~\cite{EuropeanCommission2015EUHandbook}.
\subsection{National emissions reported to the UNFCCC}
\label{sec:unfccc}
The UNFCCC and the Kyoto Protocol specify and regulate emissions reporting for all participating countries. The corresponding annual reports contain data for all emissions in individual countries and sectors. The accuracy of the emission data depends on the method used in the respective countries. It should be noted that the reporting guidelines allow the individual countries some degree of freedom with respect to the application of different methods for calculating emissions attributed to different sectors. In general, the corresponding reporting schemes are biased towards overestimating emissions if less elaborate calculation methodologies are applied~\cite{Eggleston20062006Inventories}. The individual reports provide information about the underlying method for each country's data. For the method presented in this study, yearly emissions per country from the public electricity and heat sector are used.
\section{Methodology}
\label{sec:method}
In the following, we present two methodological approaches to assess the carbon emission intensity of electricity generation in European countries. First, the bottom-up method, based on emission data for individual power plants and time series for electricity generation, is introduced in Section~\ref{sec:bottomupmethod}. The top-down method, which uses energy balances and reported annual emissions per country, is reviewed in Section~\ref{sec:topdownmethod}.
\subsection{Bottom-up method}
\label{sec:bottomupmethod}
The key outcomes of the bottom-up method are yearly aggregated CO$_2$ EFs of electricity generation per technology and country. It is based on individual emission and generation data per power plant. The method consists of four main steps involving two data sources, as visualized in Figure~\ref{fig:Bottom-up_process}. In the first step, two data sets are matched, one containing production time series (\mbox{ENTSO-E} generation per unit), and one stating yearly CO$_2$ emissions reported as part of the EU~ETS scheme (Section~\ref{matching}). In a second step, the share of CO$_2$ emissions which can be allocated to heat generation is estimated, as further specified in Section~\ref{heat}. In a subsequent step, the EFs per power plant are calculated based on emission and production data from the matched data sets, as explained in Section~\ref{EF_pp}. In a final step, a representative power plant sample for each technology and country is chosen to determine their respective EFs (Section~\ref{EF_ct}).
The method can be applied for each country separately. For convenience, we omit an explicit index for the country in our presentation. If a temporal index $t$ is given, the quantity is defined for the hour $t$, whereas quantities without a temporal index are defined for the year under consideration. If not otherwise stated, all data discussed in the following refers to the year 2018.
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{Bottom-up_method.png}
\caption{Process illustration of the bottom-up method}
\label{fig:Bottom-up_process}
\end{figure}
\subsubsection{Matching records in \mbox{ENTSO-E} and EUTL data sets}\label{matching}
To derive individual EFs for power plants, we combine two data sets. Hourly production time series for large power plants are published by \mbox{ENTSO-E} through the transparency portal~\cite{2020ENTSO-EUnit}. In the following, $G_{t}(\alpha)$ denotes the per unit generation of the power plant with index $\alpha$ in hour $t$. Carbon dioxide emission data for power plants is published through reports from the European Union Transaction Log (EUTL), which contains all transactions between accounts from the EU~ETS mechanism~\cite{EUROPALog}. This data yields the emissions $CO_2(\alpha)$ associated with electricity generation from power plant $\alpha$ for the given year. In order to associate the annual generation $G(\alpha)=\sum_{t}G_{t}(\alpha)$ with the corresponding emissions $CO_{2}(\alpha)$, we matched the \mbox{ENTSO-E} energy identification codes (EIC) with the EU~ETS plant identifier (EUTL-ID). This matching procedure is non-trivial, since entries from both data sets use different name formats or even different names for the same power plants. Using other power plant databases in combination with manual searches resulted in a matching list containing 853 entries with a total installed capacity of 296.19\,GW (232 coal units with 87\,GW, 452 gas units with 153.78\,GW, and 131 lignite units with 45.17\,GW)~\cite{gemwiki,JRCPPDB,Gotzens2019,OPSD2020}. The matching between the two data sets leads to certain data inaccuracies. First, it is not possible to match all generation units from the \mbox{ENTSO-E} data set to an installation listed in the EUTL data set. In total, 907 out of 1759 generation units are not matched, representing 258.90\,GW of installed capacity. In addition, several individual generation units in the \mbox{ENTSO-E} data set are listed under a single location name in the EUTL. In these cases, we allocated the emissions to the units according to their installed capacities.
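A strongly simplified sketch of this matching step is given below; the actual procedure additionally used several power plant databases and manual searches, and the column names are assumptions made for illustration:
\begin{verbatim}
import difflib
import pandas as pd

# Simplified illustration of the ENTSO-E <-> EUTL matching via fuzzy
# name comparison (assumed columns; the real matching also used
# auxiliary power plant databases and manual checks).
def match_names(entsoe: pd.DataFrame, eutl: pd.DataFrame,
                cutoff: float = 0.85) -> dict:
    eutl_names = eutl["installation_name"].str.lower().tolist()
    matches = {}
    for eic, name in zip(entsoe["eic"], entsoe["plant_name"].str.lower()):
        hits = difflib.get_close_matches(name, eutl_names, n=1,
                                         cutoff=cutoff)
        if hits:
            row = eutl["installation_name"].str.lower() == hits[0]
            matches[eic] = eutl.loc[row, "eutl_id"].iloc[0]
    return matches
\end{verbatim}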
\subsubsection{Determining the heat generation}\label{heat}
The reported emissions $CO_{2}(\alpha)$ do not discriminate between emissions associated with electricity and heat generation. In order to take the heat extraction from power plants into account when calculating EFs, the amount of CO$_2$ which can be allocated to the heat export has to be estimated. Here, we approximate the emissions that can be allocated to heat production using the emission allowances allocated free of charge in the EU~ETS system. Based on the allocation quantity in 2018, a free allocation covering 50\,\% of the emission allowances for heat production is assumed, which is consistent with~\cite{EuropeanCommission2015EUHandbook}. For 2018, subtracting twice the free allowances from the reported emissions thus yields the annual emissions ${CO_{2}}^{\mathrm{el}}(\alpha)$ which are associated with the electricity generation from the power plant~$\alpha$.
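Denoting the freely allocated allowances of plant $\alpha$ by $FA(\alpha)$ (a symbol introduced here only for illustration), this correction reads
\[
{CO_{2}}^{\mathrm{el}}(\alpha)=CO_{2}(\alpha)-2\,FA(\alpha)\,.
\]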
\subsubsection{Emission factor per power plant}\label{EF_pp}
The annual average emission factor $EF(\alpha)$ for a power plant~$\alpha$ is calculated as the ratio of the corresponding emissions per power plant, ${CO_{2}}^{\mathrm{el}}(\alpha)$, to the total generation per year and power plant, $G(\alpha)=\sum_t G_{t}(\alpha)$,
\begin{equation}\label{EF_y}
EF(\alpha)= \frac{{CO_2}^{\mathrm{el}}(\alpha)}
{G(\alpha)}~.
\end{equation}
After calculating the individual values $EF(\alpha)$, a plausibility check was performed for each power plant. The EF was only included in the final list if it was within a plausibility range based on EFs calculated by the German Environment Agency~\cite{Hein2020Agorameter-Dokumentation}, with a buffer of $\pm50\,\%$. The resulting upper and lower EF limits are shown in Table~\ref{tab:EF_plausibility}; a minimal sketch of the filter follows the table. If the calculated power plant EF is outside this plausibility range, we assume potential issues with the underlying data (errors, gaps, misreporting) and omit the corresponding power plant from further calculations.
\begin{table}[]
\begin{tabular}{@{}p{2cm}p{1.5cm}p{2cm}p{1.5cm}@{}}
\toprule
 & \multicolumn{3}{c}{Emission factor (gCO$_2$/kWh)} \\
Fuel type & Low & Literature & High \\ \midrule
Gas & 185 & 370 & 555 \\
Coal & 410 & 820 & 1\,230 \\
Lignite & 545 & 1\,090 & 1\,635 \\
Other fossil & 750 & 1\,500 & 2\,250 \\
\bottomrule
\end{tabular}
\caption{Upper and lower limits for plausibility check of the EF calculation; the literature value is taken from the German Environment Agency~\cite{Hein2020Agorameter-Dokumentation}.}
\label{tab:EF_plausibility}
\end{table}
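As an illustration, the plausibility filter can be expressed in a few lines (limits as in Table~\ref{tab:EF_plausibility}; the code is a sketch, not the published implementation):
\begin{verbatim}
# Plausibility filter sketch; limits in gCO2/kWh from the table above.
PLAUSIBLE = {
    "gas": (185, 555),
    "hard_coal": (410, 1230),
    "lignite": (545, 1635),
    "other_fossil": (750, 2250),
}

def is_plausible(ef: float, tech: str) -> bool:
    low, high = PLAUSIBLE[tech]
    return low <= ef <= high

# e.g. is_plausible(871.05, "hard_coal") -> True
\end{verbatim}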
As a result, 554 European generation units and their individual average annual emission factors were considered valid. Based on the \mbox{ENTSO-E} time series, these power plants supplied 673\,TWh of electricity to the grid in Europe, which represents roughly 50\,\% of the conventional generation in 2018 reported in~\cite{ENTSO-E2018}. The resulting range of EFs per technology is shown in Figure~\ref{fig:CO2_technology}. For gas, coal and lignite, most entries cluster around the average value. For the few non-specific power plants (other fossil), the variance of the calculated EFs is considerably higher. This indicates a variety of underlying technologies, including, for instance, oil or waste power plants.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{_CO2_intensity_by_technology.png}
\caption{Average EF per technology and distribution of per unit $EF(\alpha)$ over the entire data set. The points represent the annual average emission factor $EF(\alpha)$ for an individual power plant $\alpha$. The bars represent the average EF per technology in the entire data set.}
\label{fig:CO2_technology}
\end{figure}
\subsubsection{Emission factors per country and technology}\label{EF_ct}
For each country, we partition the set of power plants $\mathcal{G}$ into four sets $\mathcal{G}(\mathrm{tech})$ corresponding to the different technology categories. To obtain country-specific EFs per technology, the ratio of all emissions associated with electricity generation from a certain technology class to the corresponding generation is calculated:
\begin{equation}\label{EF_i_tech}
EF(\mathrm{tech})=\frac{
\sum_{\alpha\,\in\,\mathcal{G}(\mathrm{tech})}{CO_{2}}^{\mathrm{el}}(\alpha)
}
{
\sum_{\alpha\,\in\,\mathcal{G}(\mathrm{tech})}G(\alpha)
}
\end{equation}
It should be noted that we calculate these per technology emission factors here for countries, but since the underlying quantities are given for individual power plants with a known geographic location, analogous calculations can be performed on other spatial scales as well.
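In terms of code, the aggregation in Eq.~(\ref{EF_i_tech}) is a short grouped ratio; the following sketch assumes a table of validated plants with illustrative column names:
\begin{verbatim}
import pandas as pd

# Sketch of the country- and technology-specific EFs: the grouped sum
# of electricity-related emissions divided by the grouped sum of
# generation (assumed columns: country, tech, co2_el_g, gen_kwh).
def ef_per_tech(plants: pd.DataFrame) -> pd.Series:
    grouped = plants.groupby(["country", "tech"])
    return grouped["co2_el_g"].sum() / grouped["gen_kwh"].sum()  # gCO2/kWh
\end{verbatim}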
Both emission factors $EF(\alpha)$ and $EF(\mathrm{tech})$ are calculated as yearly averages. To calculate an emission signal for the generation mix in an individual hour, these factors have to be multiplied with the corresponding hourly generation. Since the time series for the generation per production type can be shown to have a higher coverage, we use the per technology emission factors $EF(\mathrm{tech})$ in the following to calculate such hourly emission signals as well as the average carbon intensity per country. However, pooling generation capacities from the same technology category smooths out the heterogeneity in the emission intensity and dispatch of individual power plants. To estimate this loss of information, we compare the hourly carbon intensity per country associated with the power plant dispatch time series $G_{t}(\alpha)$ and the emission factors $EF(\alpha)$ on the one hand, and $EF(\mathrm{tech})$ on the other hand:
\begin{align}
\tilde{CI}^{\mathrm{plant}}_t &= \frac{\sum_{\alpha}G_{t}(\alpha)EF(\alpha)
}{\sum_{\alpha}G_{t}(\alpha)
}~,
\\
\tilde{CI}^{\mathrm{tech}}_t &= \frac{\sum_{\mathrm{tech}}G_t(\mathrm{tech})EF(\mathrm{tech})
}{
\sum_{\mathrm{tech}}G_t(\mathrm{tech})
}~.
\end{align}
Here, we use
\begin{equation}
G_t(\mathrm{tech})=\sum_{\alpha\,\in\,\mathcal{G}(\mathrm{tech})}G_t(\alpha)~.
\end{equation}
Figure~\ref{fig:deviation_CO2_unit_tech} shows the relative deviation $\left(\tilde{CI}^{\mathrm{tech}}-\tilde{CI}^{\mathrm{plant}}\right)/\tilde{CI}^{\mathrm{tech}}$ of the resulting carbon intensity in gCO$_2$/kWh when using technology-specific EFs versus the original unit-specific EFs. One point in this figure represents the resulting carbon intensity for one hour of the year 2018, based on the generation and emissions of the power plants in the validated data set. Note that this calculation was done based on per unit generation time series from the matched data, so the resulting values do not correspond to the carbon intensity of the entire generation mix, since, among other things, renewable generation is missing. The results shown in Figure~\ref{fig:deviation_CO2_unit_tech} indicate that for the given data set, the deviations range between $-2\,\%$ and $+1.5\,\%$, without a clear bias towards over- or underestimation. Therefore, we conclude that the loss of information originating from using per production type emission factors $EF(\mathrm{tech})$ instead of per power plant emission factors $EF(\alpha)$ is negligible for the calculation of the per country hourly carbon intensity of electricity generation.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{_CO2_intensity_deviation_tech_vs_unit.png}
\caption{Relative deviation $\left(\tilde{CI}^{\mathrm{tech}}-\tilde{CI}^{\mathrm{plant}}\right)/\tilde{CI}^{\mathrm{tech}}$ between the hourly carbon intensity based on unit specific and technology specific EFs, respectively. Each dot corresponds to a value for one hour, with the deviation on the y-axis and the technology-based carbon intensity $\tilde{CI}^{\mathrm{tech}}$ on the x-axis. The carbon intensities have been calculated over all power plants from the validated matched data set.}
\label{fig:deviation_CO2_unit_tech}
\end{figure}
The per technology emission factors $EF(\mathrm{tech})$ can be used in combination with hourly per technology generation time series to provide an hourly carbon intensity signal. Such time series are given by \mbox{ENTSO-E} per country, denoted in the following by $G^{\mathrm{type}}_t(\mathrm{tech})$. This data set has a higher coverage compared to the per unit generation time series, of which only a subset is included in the calculation of the per power plant emission factors $EF(\alpha)$ due to the matching procedure. On the other hand, for most countries and technologies, the aggregated generation time series $\sum_t G^{\mathrm{type}}_t(\mathrm{tech})$ yields a lower annual generation than the consolidated values reported by \mbox{ENTSO-E} in the Statistical Factsheet. In order to implement a first-order correction of the time series, we thus scaled the hourly generation to yield the same reported annual aggregate per country and technology. If the aggregated time series was larger than the corresponding value in the Factsheet, we kept the generation values unchanged.
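A minimal sketch of this first-order correction, with illustrative variable names, could look as follows:
\begin{verbatim}
import pandas as pd

# Scale hourly per technology generation so that annual sums match the
# consolidated Factsheet values; series that already exceed the
# Factsheet value are left unchanged (scaling factor clipped at 1).
def scale_to_factsheet(gen: pd.DataFrame,
                       factsheet: pd.Series) -> pd.DataFrame:
    factors = (factsheet / gen.sum()).clip(lower=1.0)
    return gen * factors
\end{verbatim}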
In order to assess the representativeness of the per technology emission factors $EF(\mathrm{tech})$, we compared the aggregated underlying per power plant generation with the corresponding aggregated (scaled) per technology generation data,
\begin{equation}\label{eq:coverage}
\mathrm{Coverage(tech)}=\frac{
\sum_t\sum_{\alpha\,\in\,\mathcal{G}(\mathrm{tech})}G_t(\alpha)
}
{\sum_t G_t^{\mathrm{type}}(\mathrm{tech})}~.
\end{equation}
Table~\ref{tab:EF_per_country_tech} displays the calculated emission factors $EF(\mathrm{tech})$ per technology for each country, and gives an overview of the underlying data. We assume an emission factor to be representative for a country and a technology if the corresponding coverage as calculated in Eq.~(\ref{eq:coverage}) is larger than $25\,\%$. In Table~\ref{tab:EF_per_country_tech}, we list for each country and technology the number of power plants and their aggregated capacity as contained in the \mbox{ENTSO-E} per unit generation time series data set, the subset which we have matched to power plants from the EUTL, and the final validated matched data set which passed the plausibility check described in Section~\ref{EF_pp}.
It should be emphasized that the coverage of the matched power plants can be extended in the future, facilitated by the publication of all code as well as the matching list used for the results in this study, and the public availability of all underlying data.
\begin{table*}[]
\begin{tabular}{@{}llrrrrrrrr@{}}
\toprule
& & & & \multicolumn{6}{c}{Data set analysis} \\
Country & Technology & EF {(}gCO$_2$/kWh{)} & Coverage {(}\%{)} & \multicolumn{2}{c}{Plausible} & \multicolumn{2}{c}{Matched} & \multicolumn{2}{c}{ENTSO-E} \\
& & & & Cap. {(}GW{)} & Count & Cap. {(}GW{)} & Count & Cap. {(}GW{)} & Count \\
\midrule
AT & gas & 288.71 & 68.0 & 3.508 & 9 & 3.902 & 12 & 3.902 & 12 \\
AT & hard\_coal & 884.07 & 39.0 & 0.2 & 1 & 0.682 & 3 & 0.682 & 3 \\
BE & gas & 389.64 & 64.0 & 3.936 & 17 & 4.506 & 19 & 4.646 & 20 \\
CZ & hard\_coal & 985.55 & 36.0 & 0.561 & 3 & 0.748 & 4 & 0.748 & 4 \\
CZ & lignite & 928.3 & 58.0 & 4.119 & 18 & 5.772 & 25 & 4.494 & 21 \\
DE & gas & 334.09 & 17.0 & 8.427 & 21 & 17.977 & 58 & 16.972 & 53 \\
DE & hard\_coal & 871.05 & 85.0 & 17.724 & 38 & 24.395 & 51 & 22.794 & 48 \\
DE & lignite & 1125.56 & 96.0 & 19.238 & 34 & 19.59 & 35 & 19.59 & 35 \\
DE & other\_fossil & 1619.69 & 50.0 & 0.825 & 3 & 0.925 & 4 & 2.341 & 11 \\
DK & gas & 329.78 & 10.0 & 0.108 & 1 & 0.108 & 1 & 0.108 & 1 \\
DK & hard\_coal & 775.84 & 69.0 & 2.151 & 6 & 2.895 & 8 & 2.523 & 7 \\
EE & other\_fossil & 1057.97 & 109.0 & 1.971 & 11 & 2.251 & 13 & 2.251 & 13 \\
ES & gas & 386.51 & 48.0 & 20.686 & 61 & 24.671 & 72 & 24.671 & 72 \\
ES & hard\_coal & 975.78 & 84.0 & 6.744 & 17 & 9.535 & 26 & 9.535 & 26 \\
FI & hard\_coal & 674.45 & 69.0 & 1.571 & 9 & 2.274 & 12 & 2.274 & 12 \\
FI & other\_fossil & 759.81 & 13.0 & 0.12 & 1 & 0.12 & 1 & 1.641 & 10 \\
FR & gas & 396.98 & 49.0 & 4.881 & 11 & 6.12 & 18 & 6.521 & 19 \\
FR & hard\_coal & 834.94 & 97.0 & 2.93 & 5 & 2.93 & 5 & 2.93 & 5 \\
GB & gas & 467.99 & 56.0 & 23.875 & 47 & 32.981 & 77 & 34.515 & 84 \\
GB & hard\_coal & 1103.22 & 39.0 & 7.718 & 15 & 12.981 & 25 & 13.223 & 27 \\
GR & gas & 332.13 & 71.0 & 3.589 & 11 & 4.899 & 14 & 4.899 & 14 \\
GR & lignite & 1401.71 & 90.0 & 3.359 & 12 & 3.905 & 14 & 3.905 & 14 \\
HU & gas & 371.92 & 72.0 & 1.761 & 10 & 1.761 & 10 & 1.761 & 10 \\
HU & lignite & 1355.62 & 81.0 & 0.613 & 3 & 0.613 & 3 & 0.613 & 3 \\
IE & gas & 347.42 & 60.0 & 2.476 & 7 & 3.73 & 16 & 3.73 & 16 \\
IE & hard\_coal & 1032.2 & 89.0 & 0.855 & 3 & 0.855 & 3 & 0.855 & 3 \\
IT & gas & 387.74 & 50.0 & 21.204 & 46 & 27.572 & 69 & 28.462 & 72 \\
IT & hard\_coal & 997.67 & 59.0 & 5.081 & 15 & 6.926 & 18 & 6.926 & 18 \\
NL & gas & 353.88 & 38.0 & 7.336 & 18 & 12.707 & 34 & 13.371 & 37 \\
NL & hard\_coal & 948.56 & 43.0 & 1.362 & 2 & 4.662 & 6 & 4.662 & 6 \\
PL & gas & 370.31 & 10.0 & 0.332 & 2 & 0.457 & 3 & 1.572 & 5 \\
PL & hard\_coal & 942.41 & 56.0 & 12.056 & 52 & 14.901 & 56 & 19.861 & 82 \\
PL & lignite & 1158.65 & 78.0 & 5.576 & 13 & 7.064 & 19 & 8.678 & 26 \\
PT & gas & 430.33 & 42.0 & 2.838 & 7 & 3.828 & 10 & 3.828 & 10 \\
PT & hard\_coal & 546.44 & 27.0 & 0.576 & 2 & 1.756 & 6 & 1.756 & 6 \\
RO & gas & 315.03 & 17.0 & 0.353 & 3 & 3.428 & 21 & 3.428 & 21 \\
RO & hard\_coal & 1157.44 & 72.0 & 0.743 & 4 & 1.133 & 6 & 1.133 & 6 \\
RO & lignite & 1000.76 & 88.0 & 3.068 & 11 & 4.011 & 14 & 4.011 & 14 \\
SK & hard\_coal & 903.09 & 65.0 & 0.22 & 2 & 0.22 & 2 & 0.22 & 2 \\
SK & lignite & 1295.55 & 87.0 & 0.22 & 2 & 0.33 & 3 & 0.33 & 3 \\ \midrule
Sum & & & & 205.16 & 554 & 280.37 & 797 & 290.61 & 852 \\ \bottomrule
\end{tabular}
\caption{Technology-specific emission factors $EF(\mathrm{tech})$ per country determined via the bottom-up method. The coverage of the underlying per unit generation time series from matched power plants is defined in Eq.~(\ref{eq:coverage}). A coverage value of more than $100\,\%$ for EE is due to misreported values in the \mbox{ENTSO-E} Statistical Factsheet. The total capacity per country and the number of generation units are given for the entries of the original per unit generation data set, the matched entries, and the validated matched entries selected through the plausibility criterion shown in Table~\ref{tab:EF_plausibility}.}
\label{tab:EF_per_country_tech}
\end{table*}
\subsubsection{Average emission intensity per country}
The carbon intensity per country is calculated as
\begin{equation}
CI=\frac{\sum_{\mathrm{tech}}\sum_t \left(G^{\mathrm{type}}_t(\mathrm{tech})\cdot EF(\mathrm{tech})\right)}
{\sum_t\sum_{\mathrm{tech}}G^{\mathrm{type}}_t(\mathrm{tech})}~,
\end{equation}
using the scaled per generation type time series $G^{\mathrm{type}}_t(\mathrm{tech})$ and the per technology emission factors $EF(\mathrm{tech})$ as calculated in the last section. If no representative emission factor is available for a country and technology pair, we use a weighted average value for this technology, based on the EFs given for the other countries. The resulting CI for each country is displayed in Table~\ref{tab:CI_countries}, where these results are compared with corresponding values calculated by the top-down method presented in the next section.
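Numerically, this amounts to a generation-weighted average of the technology EFs; a short sketch with illustrative names, and non-fossil technologies carrying an EF of zero, is:
\begin{verbatim}
import pandas as pd

# Annual average CI per country: gen holds the scaled hourly generation
# (one column per technology), ef the per technology EFs in gCO2/kWh.
# Missing fossil EFs would first be replaced by the weighted average
# fallback described in the text; non-fossil EFs are set to zero.
def country_ci(gen: pd.DataFrame, ef: pd.Series) -> float:
    ef = ef.reindex(gen.columns).fillna(0.0)
    return float((gen * ef).sum().sum() / gen.sum().sum())
\end{verbatim}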
\subsection{Top-down method}
\label{sec:topdownmethod}
The top-down method makes use of nationally reported data on electricity and heat generation as well as the associated emissions. This approach is applied by the European Environment Agency (EEA), which publishes average annual values for the carbon dioxide intensity of electricity generation on country and EU level~\cite{EEA2021}. The input data sets are the energy balance sheets published by Eurostat~\cite{EnergyEurostat} for the electricity and heat generation, and the emissions reported to the UNFCCC~\cite{NationalAgency} for the associated emissions. The derivation of the national carbon intensity factors involves four steps, which are visualized in Figure~\ref{fig:Top-down_process}. The input data is always given for a specific country and year. Thus, for convenience, we omit the corresponding indices in the notation used in the following description of the process steps.
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{Top-down_method.png}
\caption{Process illustration of the top-down method}
\label{fig:Top-down_process}
\end{figure}
\subsubsection{CO$_2$ emissions and energy balance}
The input values $CO_2(MAP)$ for the emissions per country and year are given by the total emissions from main activity producers (MAP) with the activities public electricity generation, public combined heat and power generation, and public heat generation, as reported to the UNFCCC (sector 1.A.1.a). These reported values do not include life-cycle greenhouse gas emissions, and therefore assume zero CO$_2$ emissions from nuclear and renewable power generation. Also, biomass-related emissions are not reported in the energy sector according to the UNFCCC Reporting Guidelines, but are associated with the Land Use, Land Use Change and Forestry (LULUCF) sector. Accordingly, corresponding emissions are not included in the input data here~\cite{EEA2021}.
In order to associate the reported emissions $CO_2(MAP)$ with electricity and heat generation, input data from the energy balance sheet \emph{nrg\_bal\_c}, published by Eurostat, were used. The main input variables are the energy transformation input values $TI(p,s)$ for main activity producers ($p=MAP$) and auto producers ($p=AP$), distinguished by the sectors electricity only ($s=E$), heat only ($s=H$) and combined heat and power ($s=CHP$). Main activity producers generate electricity and heat for sale to third parties as their primary activity, whereas auto producers generate electricity and heat for their own use as a supporting activity for their main activity. The input values $TI(p,s)$ have been aggregated over the fuel types solid fossil fuel, oil and petroleum products (excl. biofuel), natural gas, manufactured gases, peat and peat products, oil shale and oil sands, and non-renewable waste. In addition, the resulting gross electricity production $GE(p,s)$ for $p=AP,MAP$ and $s=E,CHP$, and the derived heat $dh(p,s)$ for $p=AP,MAP$ and $s=H,CHP$ enter into the calculation.
\subsubsection{CO$_2$ emission intensity calculation}
The CO$_2$ emission intensity of electricity generation $CI$ is calculated as the ratio of all CO$_2$ emissions from electricity generation to the total electricity generation. For the given input data, this apparently simple definition involves certain methodological choices:
\begin{itemize}
\item Auto producers: The reported emissions $CO_2(MAP)$ do not include emissions from auto producers, so the calculation of the carbon intensity $CI$ has to be limited to main activity producers only, or the auto producers' emissions have to be estimated based on their share in the energy balance. We follow the latter approach and include auto producers in the calculation of the carbon intensity $CI$.
\item Heat generation: The reported emissions $CO_2(MAP)$ include emissions from heat generation. Consequently, the corresponding share of emissions has to be estimated based on the energy balance and then be subtracted, which in particular involves some approximations done with respect to the combined heat and power sector.
\end{itemize}
\paragraph{Main activity producers}
As stated above, the reported emissions $CO_2(MAP)$ represent not only the emissions associated with electricity generation, but also include emissions from public heat generation. The share of emissions associated with electricity generation is estimated based on the corresponding share of the energy transformation input in the energy balance:
\begin{equation}\label{eq:CO2_MAP}
{CO_2}^{\mathrm{el}}(MAP)=CO_2(MAP)\cdot\frac{TI^{\mathrm{el}}(MAP)}{TI(MAP)}~,
\end{equation}
with
\begin{align}
\label{eq:TI_def_1}
TI(MAP) &=\sum_{s=E,CHP,H}TI(MAP,s)~,\\
\label{eq:TI_def_2}
TI^{\mathrm{el}}(MAP) &=TI(MAP,E)\nonumber\\
&\phantom{=}+TI(MAP,CHP)-\frac{dh(MAP,CHP)}{0.9}~.
\end{align}
The denominator in Eq.~(\ref{eq:CO2_MAP}) contains the sum of the transformation input from the three relevant sectors for main activity producers as calculated in Eq.~(\ref{eq:TI_def_1}). The numerator contains the transformation input from electricity and combined heat and power generation. In a later step, the CO$_2$ emissions that can be attributed to heat generation have to be subtracted. These are estimated from the derived heat of the sector, assuming a typical boiler efficiency of $90\,\%$ for the heat production~\cite{Li2009NOxStaging}. Note that the EEA uses a slightly different variant of Eq.~(\ref{eq:TI_def_2}), including the additional term $TI(MAP,H)-dh(MAP,H)/0.9$ in the numerator. This term first adds the transformation input for heat-only generation, and then subtracts the corresponding value based on the derived heat from this sector, also with an assumed efficiency of $90\,\%$. Since both approaches have been shown to yield very similar results, we use the simpler version displayed in Eq.~(\ref{eq:TI_def_2}).
\paragraph{Auto producers}
Since the emissions reported to the UNFCCC do not include emissions from auto producers, these emissions are estimated based on the ratio between the electricity-related energy transformation input for auto producers and for main activity producers, respectively:
\begin{equation}
\frac{{CO_2}^{\mathrm{el}}(AP)}{{CO_{2}}^{\mathrm{el}}(MAP)}
=
\frac{TI^{\mathrm{el}}(AP)}{TI^{\mathrm{el}}(MAP)}~.
\end{equation}
The energy transformation input $TI^{\mathrm{el}}(AP)$ for auto producers is calculated analogously to the one for main activity producers in Eq.~(\ref{eq:TI_def_2}).
\paragraph{Carbon emission intensity}
To derive the carbon emission intensity of electricity generation, the aggregated emissions ${CO_2}^{\mathrm{el}}(MAP)$ and ${CO_2}^{\mathrm{el}}(AP)$ from the main activity and auto producers have to be divided by the corresponding electricity generation. Following the approach by the EEA, this generation is given by the sum of the gross electricity production $GE(p,s)$ for $p=MAP,AP$ and $s=E,CHP$ from the Eurostat energy balance sheet. In addition, we transform the gross electricity production to the net electricity production assuming a self-consumption of 6\,\%~\cite{AGEnergiebilanzene.V.2020Stromerzeugung2020} for all power plants:
\begin{equation}
CI=
\frac{{CO_2}^{\mathrm{el}}(MAP)+{CO_2}^{\mathrm{el}}(AP)}
{\sum_{p=MAP,AP}\sum_{s=E,CHP}GE(p,s)\cdot 0.94}~.
\end{equation}
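The complete top-down chain can be condensed into a few lines; the following sketch implements the equations above with dictionary inputs keyed by producer and sector (the naming is ours, and consistent units for emissions and generation are assumed):
\begin{verbatim}
# Sketch of the top-down CI: ti, dh, ge are dicts keyed by
# (producer, sector), e.g. ("MAP", "CHP"); co2_map are the reported
# UNFCCC emissions of the main activity producers.
def top_down_ci(co2_map, ti, dh, ge):
    def ti_el(p):                       # electricity-related input
        return ti[(p, "E")] + ti[(p, "CHP")] - dh[(p, "CHP")] / 0.9
    ti_map = sum(ti[("MAP", s)] for s in ("E", "CHP", "H"))
    co2_el_map = co2_map * ti_el("MAP") / ti_map
    co2_el_ap = co2_el_map * ti_el("AP") / ti_el("MAP")
    gross = sum(ge[(p, s)] for p in ("MAP", "AP") for s in ("E", "CHP"))
    return (co2_el_map + co2_el_ap) / (gross * 0.94)  # gross -> net
\end{verbatim}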
\section{Results}
\label{sec:application}
In the following, we present the results of applying the top-down and the bottom-up method to the emission and generation data sets for the year 2018.
\subsection{Carbon intensity of countries}
\label{sec:country_CI}
Table~\ref{tab:CI_countries} shows the CI per country for the year 2018 based on the top-down and the bottom-up method, respectively. Both methods yield comparatively low CIs for Sweden, Norway and France due to their high shares of nuclear and hydro power generation. On the other end of the scale, Greece, Cyprus, Poland and Estonia have high CIs associated with their fossil-fuel based generation mix~\cite{ENTSO-E2018}. The table shows that although the top-down and the bottom-up method in general yield similar results, significant differences occur for some countries. There is no overall bias towards over- or underestimation of one method relative to the other; the deviations differ in magnitude and sign. The CI of Italy, for instance, is $43\,\%$ higher with the bottom-up method compared to the top-down approach, whereas for Portugal, it is $30\,\%$ lower. This indicates that these differences are not the result of a single systematic methodological choice, but rather are connected to multiple country-dependent causes (see Section~\ref{sec:discussion}).
\begin{table}[t]
\begin{tabular}{lrrr}
\toprule
Country & CI bottom-up & CI top-down & Diff. \\
& (gCO$_2$/kWh) & (gCO$_2$/kWh) & (gCO$_2$/kWh) \\
\midrule
SE & 40.90 & 13.29 & 27.60 \\
NO & 13.84 & 19.45 & $-5.60$ \\
FR & 35.48 & 56.57 & $-21.09$ \\
LT & 109.49 & 60.45 & 49.05 \\
AT & 110.71 & 106.16 & 4.55 \\
FI & 144.92 & 115.63 & 29.29 \\
SK & 208.56 & 144.24 & 64.33 \\
LV & 221.80 & 148.24 & 73.56 \\
DK & 222.01 & 200.42 & 21.59 \\
BE & 182.04 & 218.56 & $-36.52$ \\
GB & 276.12 & 256.56 & 19.56 \\
SI & 336.93 & 260.85 & 76.08 \\
IT & 376.20 & 262.45 & 113.75 \\
HU & 312.76 & 265.61 & 47.15 \\
ES & 326.53 & 291.10 & 35.44 \\
RO & 302.41 & 308.48 & $-6.07$ \\
PT & 229.44 & 326.01 & $-96.57$ \\
IE & 341.23 & 369.52 & $-28.29$ \\
DE & 426.93 & 423.38 & 3.55 \\
BG & 522.62 & 449.79 & 72.83 \\
NL & 400.54 & 465.48 & $-64.94$ \\
CZ & 453.15 & 470.43 & $-17.28$ \\
GR & 667.18 & 691.17 & $-23.99$ \\
CY & 717.68 & 703.98 & 13.70 \\
PL & 834.26 & 833.30 & 0.96 \\
EE & 894.98 & 943.07 & $-48.09$ \\ \bottomrule
\end{tabular}
\caption{Carbon intensity of electricity generation for EU countries for the year 2018. Bottom-up CIs are based on per unit emission and generation data, and per technology annual generation (see~Section~\ref{sec:bottomupmethod}), whereas top-down CIs are calculated using nationally reported energy balances and emissions (see~Section~\ref{sec:topdownmethod}).}
\label{tab:CI_countries}
\end{table}
\subsection{Dynamic carbon intensity signal}
\label{sec:CO2_signal}
Publicly available data sets often contain only information about the average carbon intensity of electricity generation in a given country. Such a static value neglects the temporal variability of the generation mix. In contrast, per technology emission factors allow this variability to be taken into account, leading to a dynamic assessment of the carbon intensity of power generation at a certain point in time. To illustrate this difference, the hourly generation mix and the associated carbon intensity in Germany and Poland for the period from December 4 to December 24, 2018 are depicted in Figures~\ref{fig:CO2_signal_DE} and~\ref{fig:CO2_signal_PL}. The variability of the generation mix is reflected by the hourly carbon intensity of power generation $CI_t$ based on per technology emission factors,
\begin{equation}
CI_t=\frac{
\sum_{\mathrm{tech}}G^{\mathrm{type}}_{t}(\mathrm{tech})\cdot EF(\mathrm{tech})}
{
\sum_{\mathrm{tech}}G^{\mathrm{type}}_{t}(\mathrm{tech})}~.
\end{equation}
Here, we have assigned zero CO$_2$ emissions to all non-fossil generation technologies. Figures~\ref{fig:CO2_signal_DE} and~\ref{fig:CO2_signal_PL} indicate that the resulting hourly emission intensity signal $CI_t$ often differs significantly from the average per country carbon intensity $CI$.
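The hourly signal is computed analogously to the annual average shown earlier, only without summing over time; a minimal sketch with the same illustrative names reads:
\begin{verbatim}
import pandas as pd

# Hourly CI signal CI_t in gCO2/kWh: one value per hour, obtained from
# the hourly generation mix and the per technology EFs (zero for
# non-fossil technologies).
def hourly_ci(gen: pd.DataFrame, ef: pd.Series) -> pd.Series:
    ef = ef.reindex(gen.columns).fillna(0.0)
    return (gen * ef).sum(axis=1) / gen.sum(axis=1)
\end{verbatim}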
\begin{figure*}[h]
\centering
\includegraphics[width=0.85\textwidth]{CO2_signal_example_DE.png}
\caption{German generation mix in hourly resolution from December 4 until December 24, 2018. The dynamic hourly carbon intensity of the generation mix is shown as a black line, compared to the static annual average carbon intensity as a dashed line.}
\label{fig:CO2_signal_DE}
\end{figure*}
\begin{figure*}[h]
\centering
\includegraphics[width=0.85\textwidth]{CO2_signal_example_PL.png}
\caption{Polish generation mix in hourly resolution from December 4 until December 24, 2018. The dynamic hourly carbon intensity of the generation mix is shown as a black line, compared to the static annual average carbon intensity as a dashed line.}
\label{fig:CO2_signal_PL}
\end{figure*}
\subsection{Carbon intensity duration curve}
\label{sec:CO2_durationcurve}
In order to evaluate the variability of the carbon intensity of power generation across the year, carbon intensity duration curves for 26 European countries are shown in Figure~\ref{fig:CO2_duration_curve}. For each country, the hourly carbon intensity signal is sorted in descending order. The variability differs considerably between the countries, depending on the composition and usage of the national power generation fleet. Sweden, Norway and France, for instance, show a very flat and low carbon intensity curve caused by their high and constant use of nuclear and hydro power. In contrast, Greece has a high as well as strongly varying CO$_2$ emission signal due to a power generation mix consisting of roughly one third each of lignite, fossil gas and renewable energy~\cite{ENTSO-E2018}. For Poland, the low share of low-carbon generation causes the CI to remain high despite some variability around the average value. For countries with a significant share of intermittent renewable generation like Germany and Denmark (the share of generation from wind and solar is approximately $25\,\%$ for Germany and $52\,\%$ for Denmark), the associated variability in the generation mix translates into a wide range of carbon intensity values.
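Given the hourly signal $CI_t$, such a duration curve is obtained by simply sorting the values; a sketch with matplotlib (the function and axis labels are ours) could look as follows:
\begin{verbatim}
import matplotlib.pyplot as plt

# Carbon intensity duration curve: sort the hourly CI signal of one
# country in descending order and plot it against the sorted hours.
def plot_duration_curve(ci_t, label):
    sorted_ci = ci_t.sort_values(ascending=False).reset_index(drop=True)
    plt.plot(sorted_ci, label=label)
    plt.xlabel("hours of the year (sorted)")
    plt.ylabel("carbon intensity (gCO2/kWh)")
    plt.legend()
\end{verbatim}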
\begin{figure*}[h!]
\centering
\includegraphics[width=0.85\textwidth]{CO2_duration_curve.png}
\caption{Carbon intensity duration curves for European countries for 2018. The hourly carbon intensity of electricity generation was calculated using the bottom-up method (see Section~\ref{sec:bottomupmethod}).}
\label{fig:CO2_duration_curve}
\end{figure*}
\section{Discussion}
\label{sec:discussion}
\subsection{Comparison of top-down and bottom-up carbon intensity results}
Since each calculation of the carbon intensity of electricity generation depends on the underlying assumptions and definitions, data sets, and specific methodological choices, there is no single ``correct'' result to compare other approaches against. In order to assess the influence of different factors in the calculation process, we thus focus on the comparison between the CIs resulting from the top-down and the bottom-up method presented in Sections~\ref{sec:bottomupmethod} and~\ref{sec:topdownmethod}. While the top-down method only yields static average annual values, the bottom-up method combines per technology emission factors based on per unit data with generation time series to derive dynamic hourly carbon intensities. Table~\ref{tab:CI_countries} shows that the resulting average yearly CIs differ, depending on the method and country. Since these differences vary in magnitude and sign, it stands to reason that multiple underlying causes are involved.
A first possible origin of the discrepancy could be erroneous per unit emission factors in the bottom-up method for some individual power plants. We have excluded implausible emission factors from apparent outliers (see Figure~\ref{fig:CO2_technology}), but for a further analysis, plant-specific emission factors based on technical properties and fuel types would have to be evaluated. Although this is beyond the scope of our work, the suggested bottom-up method would allow the integration of corresponding results from future studies. A further uncertainty is the representativeness of the matched power plants for the generation technology categories of a given country. Although Figure~\ref{fig:deviation_CO2_unit_tech} indicates a minor influence of using per technology EFs compared to per unit emission factors, generation units not contained or not matched in the underlying EUTL and \mbox{ENTSO-E} data sets could affect the calculations. In most cases, we expect these influences to be minor due to the already significant coverage of our data set (see Table~\ref{tab:EF_per_country_tech}), but an extension of the matching data set or estimating the emissions of units that are not represented could provide a more accurate assessment.
Using the bottom-up method, the annual average CI of a country is calculated through multiplication of hourly per technology generation time series with per country and technology emission factors. Although these time series take into account consolidated data from \mbox{ENTSO-E}, the aggregated values differ considerably from the nationally reported electricity generation as published by Eurostat. The inconsistencies are related to different coverages and categorizations in the two data sets, originating from unequal reporting structures (for a discussion focusing on Germany, see~\cite{Schumacher2015}). For Italy, for example, it has been observed that most ``other'' generation reported in the \mbox{ENTSO-E} generation time series represents fossil gas~\cite{emberdata}. Since the associated ``other\_fossil'' category in the bottom-up method (see Table~\ref{tab:technologies}) has a higher emission factor than fossil gas, the resulting CI is higher compared to the one calculated through the top-down method. Resolving this and related issues would involve establishing a consolidated data set of hourly per country and technology generation time series, based on a comprehensive assessment of nationally and internationally published data and the underlying reporting methodologies. Such an endeavor is beyond the scope of this study, but, as in the case of per unit emission factors, future revised generation time series can easily be integrated into the published bottom-up method.
As discussed in Section~\ref{sec:topdownmethod}, the top-down method employs energy flows as published by Eurostat to estimate the share of the total reported emissions associated with electricity generation~\cite{EEA2021}. This process explicitly incorporates auto producers, for which the production of electricity is not the principal activity. These units are generally not included in the per unit generation data from \mbox{ENTSO-E}, so this sector is underrepresented in the emission factors contained in the bottom-up method. France, for example, has a share of $26\,\%$ of electricity-related energy input associated with auto producers, which could be the cause of the higher CI calculated through the top-down method compared to the bottom-up method. This discrepancy could be resolved by either implementing corrections in the bottom-up emission factors, or by removing auto producers from the top-down method. As for the consolidation of the generation data, a detailed per country analysis is necessary to account for this influence.
\subsection{Relation to results from the literature}
The German think tank Agora Energiewende employs per technology emission factors from the Federal Environment Agency of Germany to calculate a close to real-time CI signal for the German electricity mix~\cite{Hein2020Agorameter-Dokumentation}. This approach is similar to the bottom-up approach of the present study, combining per technology emission factors with generation time series. Both methods consider only direct emissions, and imports and exports are not included. Our method, however, allows a more flexible application to other countries, since the necessary data sets are available for all European countries. Also, the calculation of per technology emission factors is integrated in our method, whereas~\cite{Hein2020Agorameter-Dokumentation} makes use of data from other sources~\cite{Umweltbundesamt2021Entwicklung2020}. It should be noted that for 2018, these literature values are similar to the ones presented in this study ($1\,090$, $820$, and $370$\,gCO$_2$/kWh for lignite, hard coal, and gas in~\cite{Hein2020Agorameter-Dokumentation}, compared to $1\,126$, $871$, and $334$\,gCO$_2$/kWh for Germany in Table~\ref{tab:EF_per_country_tech}).
The company Tomorrow publishes real-time signals for the carbon intensity of electricity generation and consumption for European countries through the ``electricityMap''~\cite{electricitymap}. While historical data and the provisioning of an API are a fee-based offer that constitutes the business model of the company, the electricityMap itself is open source and integrates contributions from the community~\cite{electricitymapgithub}. The generation-based CI is based on time series for per country and technology generation from various sources, merged with emission factors from the literature. These factors take into account emissions from the whole life cycle of the plant, in contrast to the direct emissions used in this study. The main source for these factors, according to~\cite{electricitymapgithub}, is the IPCC Fifth Assessment Report from 2014~\cite{IPCC2014}. Compared with the factors derived in this study, the literature values have a wider scope than direct emissions only, but do not discriminate between different countries. This neglects considerable country-wise differences in the per technology emission factors, as shown in Table~\ref{tab:EF_per_country_tech} (for instance, $1\,126$ gCO$_2$/kWh for lignite in Germany compared to $1\,402$ gCO$_2$/kWh for lignite in Greece).
In~\cite{Vuarnoz2018TemporalGrid}, the hourly greenhouse gas emissions of the national electricity supply mix in Switzerland are analyzed, taking into account imports as well as exports with neighboring countries. For their analysis, the authors use CIs based on LCA studies. For the neighboring countries, a uniform emission factor is assumed, neglecting hourly variations of the CI. As we have shown in Figure~\ref{fig:CO2_duration_curve}, these variations can be considerable, and neglecting them leads to imprecise results.
Following an approach similar to the bottom-up method presented in this study, the authors of~\cite{Braeuer2020ComparingGermany} derive a CI signal based on power plant emission and generation data. Their analysis is limited to the German energy system, without a detailed discussion of possible inconsistencies in the underlying data or of the treatment of heat generation.
\section{Summary and conclusions}
\label{sec:conclusion}
Transparent and comprehensible emission factors address the increasing need for dynamic grid carbon intensity signals for low-carbon system operation and emission accounting~\cite{Hamels2021}. In this contribution, we introduce a bottom-up framework to calculate per country and per technology direct emission factors for European countries, based on publicly available per unit generation and emission data. The resulting emission factors are merged with generation data in order to calculate the hourly carbon intensity of electricity generation. A comparison with results from a top-down approach based on emission and generation data from national statistics shows the feasibility of this approach, but also indicates the necessity of further consolidation of the underlying input data. In the proposed method, these extensions can be included on different levels, ranging from individual power plants to national generation time series or correction factors that account for unrepresented generation or emissions. The use of publicly available data, the publication of all code and auxiliary data, and the modularity of the approach make it easy to build on the presented work in a flexible way.
\section*{Data availability}
The complete data processing and emission factor model is implemented in Python using the development environment Jupyter Notebook \cite{Kluyver2016jupyter}. This facilitates transparent and accessible publication of the code, allowing other users to extend and adapt the method and integrate further data sources. The analysis and processing of the emission factors was also done in Jupyter Notebook. All code is available on GitHub \cite{co2emissionsfactorsgithub}, with the processed input data set published on Zenodo \cite{unnewehr_jan_frederick_2021_5336486}.
\section*{Acknowledgements}
We thank Dave Jones and the team from Ember for supporting the work. Support with data processing and visualization from Paul Reggentin is acknowledged. We also thank Dr. Kevin Hausmann and Hauke Hermann for fruitful discussions.
\bibliographystyle{elsarticle-num}
\biboptions{sort}
\section{Introduction}
Early works suggested that the Galactic stellar halo formed in a dissipative collapse of a single protogalactic gas cloud \citep{els}, but in today's prevalent cosmology, galaxies grow hierarchically by accreting smaller structure \citep{white1978, diemand2008, springel2008, klypin2011}.
The archeological record of these galactic building blocks is especially well retained in stellar halos, in part because of their long relaxation times.
Upon accretion, smaller galaxies completely disrupt within several orbital periods \citep{helmi1999b}, and become a part of the smooth halo.
Still, global halo properties contain information on the original systems.
For example, the total mass of a halo is related to the number of massive accretion events \citep{bj2005, cooper2010}, the slope of the outer density profile increases with the current accretion rate \citep{diemer2014}, while the presence of a break in the density profile is indicative of a quiet merger history at late times \citep{deason2013}.
Therefore, when studying the stellar halo of a galaxy, we are in fact analyzing a whole population of accreted systems.
Modern works suggest that there are two distinct processes that contribute to the buildup of a stellar halo: most of the halo stars have been \emph{accreted} from smaller galaxies, but a fraction has been formed \emph{in situ} inside of the main galaxy \citep{zolotov2009, font2011, cooper2015}.
The in-situ component can contain stars formed in the initial gas collapse \citep{samland2003} and/or stars formed in the disk, which have subsequently been kicked out and placed on halo orbits \citep{purcell2010}.
Throughout this work we use the term in-situ halo to address both of these origin scenarios.
Numerical simulations show that the total number of halo stars formed in situ depends on the details of the formation history, but in general, their contribution decreases with distance from the galactic center \citep[e.g.,][]{zolotov2009, cook2016}.
For Milky Way-like galaxies, $10-30$\% of halo stars in the Solar neighborhood are expected to have formed in situ \citep{zolotov2009}.
A thorough understanding of stellar halos relies on disentangling their accreted and in-situ components.
Recent accretion events are still coherent in configuration space, and have been observed throughout the Local Universe.
The first evidence of accretion came from studies of globular clusters \citep{sz}, but in recent years deep photometric surveys on large scales have revealed a plethora of systems undergoing tidal disruption in the Milky Way halo, with progenitors ranging from dwarf galaxies \citep[e.g.,][]{ibata1994, belokurov2007a, belokurov2007b, juric2008, bonaca2012a} to disrupted globular clusters \citep[e.g.,][]{rockosi2002, grillmair2006, grillmair2009, bonaca2012b, bernard2016}.
A complete list of tidal structures identified in the Milky Way has been recently compiled by \citet{grillmair2016}.
Extragalactic surveys of low surface brightness features indicate that ongoing accretion is common in present-day galaxies of all masses \citep{ibata2001, ferguson2002, martinez-delgado2010, martinez-delgado2012, romanowsky2012, crnojevic2016}.
Evidence of past accretion events remains even after they no longer stand out as overdensities in configuration space.
For example, \citet{bell2008} measured that the spatial distribution of halo stars in the Milky Way is highly structured, at the degree expected by models where the entire halo was formed by accretion of dwarf galaxies.
Expanding this approach to clustering in the space of both positions and radial velocities, \citet{janesh2016} found that the amount of halo structure grows with distance from the Galactic center.
However, phase-space substructure associated with mergers has been discovered even in the Solar neighborhood \citep{helmi1999,smith2009,helmi2017}.
While there are multiple avenues for identifying accreted stars in a halo, isolating an in-situ component has been more challenging.
Early observations of globular clusters and individual stars indicated that the inner halo is more metal-rich, has a metallicity gradient, and is slightly prograde, while objects in the outer halo are more metal-poor and retrograde with respect to the Galactic disk \citep[e.g.,][]{sz}.
This has been interpreted as evidence that the inner halo is formed in situ, while the outer halo is accreted.
With the advent of wide-field sky surveys, these findings of a dual stellar halo have been confirmed using much larger samples of halo stars \citep{carollo2007, carollo2010, beers2012}.
Adding further evidence for a presence of an in-situ component, \citet{schlaufman2012} ruled out accretion as the main origin of stars in the inner halo due to their lack of spatial coherence with metallicity.
Most recently, \citet{deason2017} measured a small rotation signal among the old halo stars, which is consistent with the halo having a minor in-situ component.
Even though there is abundant evidence that both in-situ and accreted stars are present in the Milky Way halo, their contributions have not yet been properly accounted for.
A straightforward way to distinguish between these two origin scenarios would be to directly compare halo stars in the Milky Way to a simulated halo, where the origin of every star is known.
This comparison is most easily performed by matching stars on their orbital properties, but precise observations of halo stars that would allow such a match have so far been limited to a distance of a few hundred pc from the Sun -- a volume poorly resolved in hydrodynamical simulations of large galaxies such as the Milky Way.
Fortunately, major improvements recently occurred on both the observational and theoretical fronts.
The \emph{Gaia} mission \citep{perryman2001} has increased the available observed volume by an order of magnitude.
Furthermore, \emph{Gaia} measurements are much more precise than previously available data, whose role in establishing the presence of a dual halo drew some criticism \citep{schonrich2011, schonrich2014}.
On the theory side, \citet{wetzel2016} recently presented the Latte high-resolution cosmological zoom-in simulation of a Milky Way-mass galaxy.
We leverage the joint powers of the new \emph{Gaia} data set and the Latte simulation to reveal the origin of the stellar halo in the Solar neighborhood.
This paper is organized as follows: we start by introducing our observational data that combine the first year of \emph{Gaia} astrometry with ground-based radial velocity measurements in \S\ref{sec:data}.
Once we have compiled a data set that has 6D information for stars in the Solar neighborhood, we split the sample into a disk and a halo component in \S\ref{sec:halostars} and analyze their chemical abundances and orbital properties.
We interpret these observations in terms of a simple toy model, as well as using the Latte cosmological hydrodynamic simulation of a Milky Way-like galaxy in \S\ref{sec:origin}, where we also propose a formation scenario for kinematically selected halo stars close to the Sun.
Section \S\ref{sec:discussion} discusses broader implications of our findings, which are summarized in \S\ref{sec:summary}.
\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth]{toomre.pdf}
\caption{(Left) Toomre diagram of stars in the Solar neighborhood, from a combined catalog of \emph{Gaia}--TGAS proper motions and parallaxes, and RAVE-on radial velocities, thus covering the full 6D phase space.
We kinematically divide the sample into a disk and a halo component.
The halo stars are defined as having $|V-V_{LSR}|>220$\,km/s, and the dividing line is shown in black.
(Right) Positions of TGAS--RAVE-on stars with a measured metallicity in the Toomre diagram.
The color-coding corresponds to the average metallicity of stars in densely populated regions of the diagram, and individual metallicities otherwise.
Interestingly, many halo stars are metal-rich, with $\rm[Fe/H]>-1$.}
\label{fig:toomre}
\end{center}
\end{figure*}
\section{Data}
\label{sec:data}
Studying orbital properties in a sample of stars requires the knowledge of their positions in 6D phase space.
Currently, \emph{Gaia} Data Release 1 (DR1) provides the largest and most precise 5D dataset for stars in the Solar neighborhood, which we describe in \S\ref{gaia}.
We complement this data with radial velocities from ground-based spectroscopic surveys whose targets overlap with \emph{Gaia} (\S\ref{rvsurveys}).
Finally, we describe our sample selection in \S\ref{sample}.
\subsection{Gaia}
\label{gaia}
\emph{Gaia} is a space-based mission that will map the Galaxy over the next several years \citep{perryman2001}.
The first data from the mission was released in September 2016, and contains not only positions of all \emph{Gaia} sources ($G<20$), but also positions, parallaxes and proper motions for $\sim$2~million of the brightest stars in the sky \citep{gaiadr1, gaiamission}.
Obtaining the 5D measurements after just a year of \emph{Gaia}'s operation was made possible by referencing the positions measured with Hipparcos \citep{michalik2015}.
The faintest stars observed by Hipparcos \citep{hipparcos, vleeuwen2007} and released as a part of Tycho~II catalog have $V\sim12$ \citep{hog2000}, which limits the size of the 5D sample in \emph{Gaia} DR1 to $\approx2$ million stars.
The joint solution, known as the Tycho--Gaia Astrometric Solution \citep[TGAS,][]{gaiaastrometry}, attains proper-motion and parallax precision comparable to that of Hipparcos (typical uncertainties are 0.3\;mas in positions and parallaxes, and 1\;mas/yr in proper motions), but already for a sample that is more than an order of magnitude larger, making TGAS an unrivaled dataset for precision exploration of the Galactic phase space.
\subsection{Spectroscopic surveys}
\label{rvsurveys}
\emph{Gaia} is measuring radial velocities for $\sim150,000$ stars brighter than $G<16$ \citep{gaiamission}, but the first spectroscopic data will become available only in the second data release.
Thus, we completed the phase-space information of TGAS sources by using radial velocities from ground-based spectroscopic surveys.
We used two distinct spectroscopic datasets, from the RAVE and APOGEE projects, and provide an overview below.
The Radial Velocity Experiment \citep[RAVE,][]{steinmetz2006} is a spectroscopic survey of the southern sky, and its magnitude range $9<I<12$ is well matched to TGAS.
The latest data release, RAVE DR5 \citep{kunder2017}, contains $\sim450,000$ unique radial velocity measurements.
Since RAVE avoided targeting regions of low galactic latitude, the actual overlap with TGAS is $\sim250,000$ stars -- the largest of any spectroscopic survey.
The survey was performed at the UK Schmidt telescope with the 6dF multi-object spectrograph \citep{6df}, in the wavelength range $8410-8795\,\rm\AA$ at a medium resolution of $R\sim7,500$, so the typical velocity uncertainty is $\sim2$\,km/s.
Abundances of up to seven chemical elements are available for a subset of high signal-to-noise spectra.
\citet{casey2016} reanalyzed the RAVE DR5 spectra in a data-driven fashion with The Cannon \citep{ness2015}, providing de-noised measurements of stellar parameters and chemical abundances in the RAVE-on catalog.
In particular, typical uncertainty in RAVE-on abundances is 0.07\;dex, which is at least 0.1\;dex better than precision achieved using the standard spectroscopic pipeline.
Therefore, we opted to use RAVE-on chemical abundances, focusing on metallicities, [Fe/H], and $\alpha$-element abundances.
The Apache Point Observatory Galactic Evolution Experiment (APOGEE) is one of the programs in the Sloan Digital Sky Survey III \citep{majewski2015, sdss3}, which acquired $\sim500,000$ infrared spectra for $\sim150,000$ stars brighter than $H\sim12.2$ \citep{holtzman2015}.
To capitalize on the infrared wavelength coverage, APOGEE mainly targeted the disk plane, but several high latitude fields are included as well \citep{zasowski2013}.
Its higher resolution, $R\sim22,500$, provides more precise abundances for a larger number of elements \citep[e.g.,][]{ness2015}.
APOGEE targets are preferentially fainter than stars targeted by RAVE, so its overlap with TGAS is limited to a few thousand stars.
APOGEE and RAVE have different footprints, targeting strategy, and the wavelength window observed, so despite the smaller sample size, we found APOGEE to be a useful dataset for validating conclusions reached by analyzing the larger RAVE sample.
\subsection{Sample selection}
\label{sample}
After matching \emph{Gaia}--TGAS to the spectroscopic surveys, we increase the quality of the sample by excluding stars with large observational uncertainties.
However, the overlap between TGAS and spectroscopic surveys is limited, and the number density of halo stars in the Solar neighborhood is low.
To ensure that we have a sizeable halo sample, we chose to use very generous cuts on observational uncertainties and propagate them when interpreting our results, rather than restricting our sample size by more stringent cuts.
In particular, we included stars with radial velocity uncertainties smaller than 20\,km/s, and relative errors in proper motions and parallaxes smaller than 1.
In addition, we removed all stars with a negative parallax, to simplify the conversion to their distance.
These criteria select 159,352 stars for the TGAS--RAVE-on dataset, and 14,658 stars for the TGAS--APOGEE sample.
The spatial distribution of stars in our sample is entirely determined by the joint selection function of TGAS and the spectroscopic surveys, as we performed no additional spatial selection.
\emph{Gaia} is an all-sky survey, but since data are still being acquired, the completeness of the TGAS catalog varies across the sky.
Ground-based spectroscopic surveys have geographically restricted target lists, in addition to their adopted targeting strategies.
This results in a spatially non-uniform sample, whose distribution in distances we provide in Appendix~\ref{sec:distances}.
On the other hand, \citet{wojno2016} have shown that the RAVE survey is both chemically and kinematically unbiased.
Thus, focusing on kinematic properties of the sample will result in robust conclusions.
\section{Properties of the local halo stars}
\label{sec:halostars}
In this section we analyze the properties of halo stars in a sample of bright stars, $V\lesssim12$, within 3\;kpc from the Sun, that have positions, proper motions and parallaxes in the TGAS catalog, and radial velocities from either RAVE or APOGEE.
This sample, though spatially incomplete due to survey selection functions, is kinematically unbiased.
We define a kinematically-selected halo in \S~\ref{sec:sample}, and present its chemical and orbital properties in \S~\ref{sec:chem} and \S~\ref{sec:l}, respectively.
\subsection{Defining a local sample of halo stars}
\label{sec:sample}
Access to the full 6D phase space information allows us to calculate Galactocentric velocities for all of the stars in the sample.
We summarize the kinematic properties of the sample with a Toomre diagram (Figure~\ref{fig:toomre}), where the Galactocentric $Y$ component of the velocity vector, $V_Y$ (which points in the direction of the disk rotation), is on the x axis, while the perpendicular Toomre component, $\sqrt{V_X^2+V_Z^2}$, is on the y axis.
This space has been widely used to identify major components of the Galaxy: the thin and thick disks, and the halo \citep[e.g.,][]{venn2004}.
Disk stars dominate a large overdensity at $V_Y\approx220$\;km/s, which corresponds to the circular velocity of the Local Standard of Rest (LSR, $V_{LSR}$).
The density of stars (left panel of Figure~\ref{fig:toomre}) decreases smoothly for velocities progressively more different from $V_{LSR}$, extending all the way to retrograde orbits ($V_Y<0$).
We distinguish between the disk and the halo following \citet{ns2010}: halo stars are identified with a velocity cut $|V-V_{LSR}|>220$\;km/s, where $V_{LSR} = (0,220,0)$\;km/s in the Galactocentric Cartesian coordinates.
The dividing line between the components is marked with a black line in Figure~\ref{fig:toomre}, and both components are labeled in the left panel.
The halo definition employed here is more conservative than similar cuts adopted by previous studies; e.g., \citet{ns2010} defined the halo as stars with velocities that satisfy $|V-V_{LSR}|>180$\;km/s.
For example, \citet{sb2009} have shown that the velocity distribution of a Galactic thick disk can be asymmetric, in which case the region $180<|V-V_{LSR}|<220$\;km/s could still contain many thick disk stars.
A higher velocity cut ensures that the contamination of our halo sample with thick disk stars is minimized.
In total, we identified 1,376 halo and 157,976 disk stars, with the halo constituting $\sim1\%$ of our sample.
This is in line with the expectations from number count studies in large-scale surveys \citep[e.g.,][]{juric2008}, although we do not expect an exact match, as TGAS is not volume complete.
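The kinematic selection itself is straightforward to reproduce; a minimal Python sketch (with placeholder velocity arrays standing in for the observed sample, not part of any published pipeline) could read:
\begin{verbatim}
import numpy as np

# Placeholder Galactocentric Cartesian velocities (km/s)
rng = np.random.default_rng(0)
vx, vy, vz = rng.normal(0., 150., size=(3, 1000))

v = np.column_stack([vx, vy, vz])
V_LSR = np.array([0., 220., 0.])   # Local Standard of Rest (km/s)

# Halo stars: |V - V_LSR| > 220 km/s; the rest we call disk
is_halo = np.linalg.norm(v - V_LSR, axis=1) > 220.
is_disk = ~is_halo

# Coordinates of the Toomre diagram
toomre_x = vy                      # V_Y
toomre_y = np.hypot(vx, vz)        # sqrt(V_X^2 + V_Z^2)
\end{verbatim}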
\subsection{Chemical composition}
\label{sec:chem}
In this section we study the chemical composition of the Solar neighborhood stars observed by both \emph{Gaia}--TGAS and RAVE.
The signal-to-noise ratio of 142,086 RAVE spectra was high enough to allow a measurement of metallicity [Fe/H].
Alpha-element abundances, [$\alpha$/Fe], were obtained for a subset of 56,259 stars.
The right panel of Figure~\ref{fig:toomre} shows the average metallicity in densely populated velocity bins of the Toomre diagram, while the points in the lower-density regions are individually color-coded by $\rm[Fe/H]$.
As expected, the halo is more metal poor than the disk \citep[e.g.,][]{gilmore1989, ivezic2008}.
Within the disk itself, there is a smooth decrease in metallicity further from the $V_{LSR}$, starting from [Fe/H]$\sim0$ in the thin disk region, $(V_Y, V_{XZ})=(220,0)$\;km/s, to $\rm[Fe/H]\sim-0.5$ in the thick disk region, $(V_Y, V_{XZ})=(100,100)$\;km/s.
Surprisingly, however, there are many stars with thick disk-like metallicities found in the halo region of the Toomre diagram, and some of them are on very retrograde orbits.
Figure~\ref{fig:mdf} (top) shows the metallicity distribution for the two kinematic components identified above: the disk in red and the halo in blue.
The disk is more metal rich than the halo, and peaks at the approximately solar metallicity, $\rm[Fe/H]=0$.
The halo is more metal poor, and exhibits a peak at $\rm[Fe/H]\sim-1.6$, typical of the inner halo \citep[e.g.,][]{allende-prieto2006}.
However, the metallicity distribution of the halo has an additional peak at the metal-rich end, centered on $\rm[Fe/H]\sim-0.5$ and extending out to the super-solar values.
To corroborate the existence of metal-rich stars on halo orbits, we also show the metallicity distribution function for TGAS stars observed with APOGEE at the bottom of Figure~\ref{fig:mdf}.
The disk--halo decomposition for the APOGEE sample was performed in the identical manner to that of RAVE-on.
The metallicity distributions between the two surveys are similar: the disk stars are metal-rich, while the halo has a wide distribution ranging from $\rm[Fe/H]\sim-2.5$ to $\rm[Fe/H]\sim0$.
A bimodality is present in the metallicity distribution of APOGEE halo stars, although it is less prominent than in the RAVE-on sample due to the smaller sample size.
The dip separating the two peaks occurs at a slightly lower metallicity in the RAVE-on halo sample, $\rm[Fe/H]\approx-1.1$, than in the APOGEE sample, $\rm[Fe/H]\approx-0.8$.
In what follows, we compromise between these two values, and split the halo sample at $\rm[Fe/H]=-1$, into a metal-rich ($\rm[Fe/H]>-1$) and a metal-poor component ($\rm[Fe/H]\leq-1$).
\begin{figure}
\begin{center}
\includegraphics[width=0.9\columnwidth]{mdf.pdf}
\caption{Metallicity distribution function of the Solar neighborhood in TGAS and RAVE-on catalogs on the top, and TGAS and APOGEE at the bottom.
Kinematically-selected disk stars are shown in red, while the halo distribution is plotted in blue.
In both samples, there is a population of metal-rich halo stars, with $\approx50\%$ of stars having $\rm[Fe/H]>-1$ (marked with a vertical dashed line).}
\label{fig:mdf}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.9\columnwidth]{afeh.pdf}
\caption{Chemical abundance pattern, [$\alpha$/Fe] vs [Fe/H], for TGAS--RAVE-on sample on the top and TGAS--APOGEE at the bottom.
The pattern for disk stars is shown as a red-colored Hess diagram (logarithmically stretched), while the halo stars are shown individually as blue points.
In both surveys, the metal-poor halo is $\alpha$-enhanced, while the metal-rich halo follows the abundance pattern of the disk.}
\label{fig:afeh}
\end{center}
\end{figure}
Chemical abundances have been used to discern different components of the Galaxy \citep[e.g.,][]{gilmore1989}.
The abundance space of [$\alpha$/Fe] vs [Fe/H] is particularly useful in tracing the origin of individual stars \citep[e.g.,][]{lee2015}.
Figure~\ref{fig:afeh} shows this space for RAVE-on spectra on the top, and APOGEE on the bottom.
The disk distribution is shown as a red density map, while the less numerous halo stars are shown individually in blue.
Similarly to the overall metallicity distribution function, RAVE-on and APOGEE surveys are in a qualitative agreement in terms of the more detailed chemical abundance patterns as well.
At low metallicities, the halo is $\alpha$-enhanced, but at high metallicities its [$\alpha$/Fe] declines, following the disk abundance pattern, both in terms of the mean [$\alpha$/Fe] and its range at a fixed [Fe/H].
In particular, thick disk stars have higher $\alpha$-element abundances at a given metallicity \citep[e.g.,][]{nidever2014}, and follow a separate sequence visible in the more precise APOGEE data ([Fe/H]$\sim-0.2$, [$\alpha$/Fe]$\sim0.2$), while the thin disk is in general more metal-rich and $\alpha$-poor.
Metal-rich halo stars in both samples span this range of high- and low-$\alpha$ abundances.
At lower metallicities, \citet{ns2010} reported that halo stars follow two separate [$\alpha$/Fe] sequences, with the high-$\alpha$ stars being on predominantly prograde orbits, whereas the low-$\alpha$ stars are mostly retrograde.
We do not resolve the two sequences, or see any correlation between the orbital properties and [$\alpha$/Fe] at a fixed [Fe/H].
However, the RAVE-on abundances are fairly uncertain (typical uncertainty is $\sim0.07$\;dex, marked by a black cross on the top left in Figure~\ref{fig:afeh}), so the sequences seen in the higher-resolution data from \citet{ns2010} would not be resolved in this data set.
In the APOGEE sample, where the typical uncertainty is smaller, $\sim0.03$\;dex, the number of halo stars is too small to unambiguously identify multiple $[\alpha/\rm Fe]$ sequences.
The evidence presented so far shows that the stellar halo in the Solar neighborhood has a distinct metal-poor and a metal-rich component.
Given that the metal-rich component follows the abundance pattern of the disk, we discuss the possible contamination by the thick disk in Appendix~\ref{sec:tdcontamination}, and rule out the possibility that a sizeable fraction of the metal-rich halo is attributable to the canonical thick disk.
In the next section, we proceed to characterize the orbital properties of the two halo components.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{ltheta_lines.pdf}
\caption{Orientation of angular momenta with respect to the Galactocentric $Z$ axis for different Galactic components: the disk in red, metal-poor halo in dark blue and metal-rich halo in light blue.
The angular momenta of disk stars are aligned with the $Z$ axis, while those of halo stars are more uniformly distributed.
There is an excess of metal-rich halo stars on prograde orbits, $\theta_L>90^\circ$, compared to the metal-poor halo orbital orientations.}
\label{fig:ltheta}
\end{center}
\end{figure}
\subsection{Angular momenta}
\label{sec:l}
Stellar orbits can be classified in terms of their integrals of motion; however, calculating these requires knowledge of the underlying gravitational potential \citep{bt2008}.
Furthermore, in a realistic Galactic environment, some stars are on chaotic orbits \citep[e.g.,][]{price-whelan2016}, where the integrals of motion do not exist.
On the other hand, any star with a measured position in a 6D phase space has a well defined angular momentum.
In this section we use angular momenta as empirical diagnostics of stellar orbits, and focus in particular on the orientation of the angular momentum vector with respect to the Galactocentric $Z$ axis, quantified by the angle
\begin{equation}
\theta_L \equiv \arccos(L_Z/|\vec{L}|)
\label{eq:thetal}
\end{equation}
where $L_Z$ is the $Z$ component of the angular momentum, and $|\vec{L}|$ its magnitude.
$L_Z$, and hence $\theta_L$, are conserved quantities in static, axisymmetric potentials, such as that of the Milky Way disk.
This has already been utilized to identify coherent structures in the phase space of local halo stars \citep[e.g.,][]{helmi1999, smith2009}.
In the adopted coordinate system, the disk orientation angle is $\theta_L=180^\circ$, so that prograde orbits are those with $\theta_L>90^\circ$ and retrograde have $\theta_L<90^\circ$.
We show the distribution of angular momentum orientations $\theta_L$ for the identified Galactic components in Figure~\ref{fig:ltheta}.
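For concreteness, Equation~\ref{eq:thetal} can be evaluated directly from the 6D phase-space coordinates; a minimal Python sketch (with placeholder position and velocity arrays, assuming a Galactocentric Cartesian frame) is:
\begin{verbatim}
import numpy as np

# Placeholder Galactocentric positions (kpc) and velocities (km/s)
rng = np.random.default_rng(0)
pos = rng.normal(0., 5., size=(1000, 3))
vel = rng.normal(0., 100., size=(1000, 3))

# Specific angular momentum L = r x v
L = np.cross(pos, vel)
Lz = L[:, 2]
Lmag = np.linalg.norm(L, axis=1)

# Orientation with respect to the Galactocentric Z axis (degrees);
# in the adopted convention, disk-like orbits have theta_L ~ 180 deg
theta_L = np.degrees(np.arccos(Lz / Lmag))
\end{verbatim}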
\begin{figure}
\begin{center}
\includegraphics[width=0.9\columnwidth]{toy_model.pdf}
\caption{Toy model for the phase space of the solar neighborhood.
The model consists of a halo (blue) and a disk component (red), with their positions drawn directly from the TGAS--RAVE-on sample, and kinematics from the velocity ellipsoids of \citet{bensby2003}.
Top panel shows the model in the Toomre diagram.
The black line is the employed demarcation between the halo and the disk, which does a fairly good job of separating the two in the toy model as well.
The middle panel shows the orientation of angular momenta in the model, with each component shown as a shaded histogram, and a model total with a black line.
Toy model angular momenta successfully reproduce the angular momentum orientations observed in the Milky Way (gray dashed line).
Kinematically selecting the halo in the model (bottom panel, shaded histogram) produces a distribution in excellent agreement with the distribution of metal-poor halo stars in the Milky Way (dark blue line).
The metal-rich halo in the Milky Way (light blue line) is inconsistent with being a part of an isotropic halo studied in this toy model.}
\label{fig:toy}
\end{center}
\end{figure}
As expected, most of the disk stars are indeed on orbits in the disk plane with $V_Z\approx0$, and have $\theta_L\approx180^\circ$ (red histogram in Figure~\ref{fig:ltheta}).
Angular momenta of both halo components span the entire range of $0^\circ<\theta_L<180^\circ$, but in detail, their distributions are significantly different from each other.
The metal-poor halo has almost a flat distribution as a function of $\theta_L$, with a slight depression at very prograde angles (dark blue histogram in Figure~\ref{fig:ltheta}).
The metal-rich halo is predominantly prograde, but also has a long tail to retrograde orbits (light blue histogram in Figure~\ref{fig:ltheta}).
In the next section, we explore the origin of these distributions in terms of a toy model for the kinematic distribution of the Galaxy, as well as in comparison to a hydrodynamical simulation of a Milky Way-like galaxy.
\section{Origin of halo stars in the Solar neighborhood}
\label{sec:origin}
In the previous section, we presented a metal-rich component of the stellar halo in the solar neighborhood that is preferentially on prograde orbits with respect to the Galactic disk.
In this section, we test whether the observed orientations of these angular momenta can be understood in the context of a simple toy model (\S\ref{sec:toymodel}), as well as in comparison to a solar-like neighborhood in a cosmological hydrodynamical simulation (\S\ref{sec:latte}).
Finally, we study the formation paths of the simulated stellar halo to provide a possible origin for this metal-rich halo component in \S\ref{sec:formation}.
\subsection{Toy model}
\label{sec:toymodel}
To construct a toy model for our TGAS--RAVE-on sample, we assign stars to one of the three main components of the Milky Way galaxy: a thin disk, a thick disk, or a halo \citep[e.g.,][]{bhg2016}.
The model is defined by the number of stars in each component, their spatial and kinematic properties.
To set the number of halo stars in the toy model, we assumed that the halo is isotropic.
In that case, an equal number of halo stars are on prograde and retrograde orbits.
Since we expect no contamination from the disk on retrograde orbits, we set the total size of the halo in the toy model to be twice the number of retrograde stars in our Milky Way sample.
For the remaining disk stars, we vary the ratio of thin to thick disk stars to best match the distribution of prograde orbits.
Once the number of stars in each component had been determined, we proceeded to assign them their phase space coordinates.
Our sample is spatially confined to within only several kpc from the Sun (see Appendix~\ref{sec:distances}), so we see no differences in the spatial distributions of the kinematically defined components from \S\ref{sample}.
This allowed us to take the spatial distribution of stars in our sample and randomly assign each of them to a component in the toy model, thus ensuring that the spatial selection function of both TGAS and RAVE-on is properly reproduced in the model.
For each star in the model, we drew a 3D velocity from its component's velocity ellipsoid, as measured by \citet{bensby2003} on a smaller sample of local stars with Hipparcos parallaxes and proper motions, which accounts for the asymmetric drift.
With positions and velocities in place, we calculated the angular momenta and their orientation angles with respect to the $Z$ axis, $\theta_L$, for all stars in the toy model.
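A minimal Python sketch of this sampling step follows; the velocity dispersions and asymmetric drifts below are order-of-magnitude placeholders (the toy model itself uses the values measured by \citealt{bensby2003}):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Placeholder velocity ellipsoids: (sigma_U, sigma_V, sigma_W) in km/s
# and asymmetric drift v_asym (km/s) relative to the LSR
components = {
    "thin disk":  dict(sigma=(35., 20., 16.),  v_asym=-15.,  n=70000),
    "thick disk": dict(sigma=(67., 38., 35.),  v_asym=-46.,  n=20000),
    "halo":       dict(sigma=(160., 90., 90.), v_asym=-220., n=2000),
}

V_LSR = 220.   # km/s, rotation velocity of the LSR

velocities = {}
for name, c in components.items():
    u = rng.normal(0., c["sigma"][0], c["n"])
    v = rng.normal(V_LSR + c["v_asym"], c["sigma"][1], c["n"])
    w = rng.normal(0., c["sigma"][2], c["n"])
    velocities[name] = np.column_stack([u, v, w])
\end{verbatim}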
The properties of our toy model are summarized in Figure~\ref{fig:toy}.
The top panel shows the components of the model in the Toomre diagram, with the disk stars in red, halo stars in blue, and the black line delineating our kinematic boundary between the halo and the disk.
The velocity ellipsoids of these components overlap, so our kinematic definition of a halo produces a sample which is likely neither pure nor complete.
This is illustrated in the toy model by several thick disk stars that enter the halo selection box at $(V_Y, V_{XZ}) \simeq (100,200)$\;km/s, and also halo stars on prograde orbits, which fall outside of the halo selection.
As no simple kinematic cut will completely separate the different components, we opted to emphasize the purity of our halo sample.
Based on the toy model, we estimate that the fraction of interloping disk stars in a kinematically defined halo is only $\sim10\%$, but this in turn reduces the completeness of the halo sample to $75\%$.
For a comparison, \citet{ns2010} defined a halo that is $\sim90\%$ complete, but only $\sim55\%$ pure.
\begin{figure}
\begin{center}
\includegraphics[width=0.9\columnwidth]{latte_mwcomp.pdf}
\caption{Stars from the Latte cosmological zoom-in baryonic simulation of a Milky Way-mass galaxy, from the FIRE project, observed identically to stars in the Solar neighborhood.
Top panel shows positions of stars in the Toomre diagram, color-coded by metallicity.
We identify disk and halo stars in Latte using a cut in this diagram (black line), analogous to a cut used for the Milky Way sample.
Metallicity distributions of Latte stars are in the middle, with disk being shaded red, and halo blue.
Bottom panel shows the orientations of stars' angular momenta in Latte, with disk in red, metal-poor halo in dark blue, and metal-rich halo in light blue.
Properties of Latte stars are remarkably similar to the distributions of stars in our Milky Way sample (reproduced as empty histograms on the middle and bottom panel for direct comparison).
}
\label{fig:latte}
\end{center}
\end{figure}
The middle panel of Figure~\ref{fig:toy} shows the orientation of angular momenta, $\theta_L$, for the components of the toy model, with halo in blue and disk in red.
As expected, the majority of disk stars are moving in the disk plane, and the distribution is sharply peaked at $\theta_L=180^\circ$.
The nearly isotropic halo of the \citet{bensby2003} model has a flat distribution in $\theta_L$.
The sum of the two components is represented with a thick black line, which compares favorably to the distribution of $\theta_L$ observed in the Milky Way, and shown in dashed gray.
The agreement between the toy model and the Milky Way is particularly good at very prograde and very retrograde orbits.
The abundance of stars on only slightly prograde orbits ($\theta_L\approx100^\circ$) is somewhat underestimated in the toy model, indicating that the transition between a disk and a halo component in the Milky Way is more gradual than what can be reproduced by a simple model featuring only two disks and an isotropic halo.
The bottom panel of Figure~\ref{fig:toy} compares the modeled halo and the observed one.
For a fair comparison, in this panel we only consider model stars that would satisfy our kinematic halo selection, which are a combination of some disk and the majority of halo stars, as discussed above.
The distribution of $\theta_L$ for these kinematically selected halo stars from the toy model is shown as a shaded histogram.
While the intrinsic distribution of an isotropic halo is flat (middle panel), kinematic selection introduces a suppression at the most prograde orbits (bottom panel).
Kinematic selection excludes halo stars with $|V-V_{LSR}|\leq220$\;km/s, all of which are on prograde orbits, which is manifested as a depression at $\theta_L\gtrsim90^\circ$.
The magnitude of this depression exactly matches the distribution of metal-poor halo stars in the Milky Way, overplotted with a dark blue line in the bottom panel of Figure~\ref{fig:toy}.
This suggests that the metal-poor halo in the Solar neighborhood is intrinsically isotropic, and that at least some stars more metal-poor than $\rm[Fe/H]\leq-1$, which are kinematically consistent with the disk, are in fact a part of the metal-poor halo.
The distribution of metal-rich halo stars in the Milky Way, shown as light blue line in the bottom panel of Figure~\ref{fig:toy}, shows the opposite behavior of an excess at prograde orbits and is significantly different from the toy model prediction for an isotropic halo.
A simple toy model successfully explains the bulk properties of our sample: most of the stars are in a rotating disk, with a minority in an isotropic halo, which maps well to the metal-poor halo stars identified in our TGAS--RAVE-on sample.
This toy model also points out that the metal-rich halo stars are inconsistent with either of its components.
Next, we analyze the origin of such a population using a hydrodynamical simulation.
\subsection{The Latte simulation}
\label{sec:latte}
The Latte simulation, first presented in \citet{wetzel2016}, is a simulation from the Feedback In Realistic Environments (FIRE)\footnote{The FIRE project website is \url{http://fire.northwestern.edu}} project \citep{hopkins2014FIRE}. Latte is fully cosmological, with a baryonic mass resolution of $7000\,M_{\sun}$ and spatial softening/smoothing of $1\,\mathrm{pc}$ for gas and $4\,\mathrm{pc}$ for stars.
The simulation uses the standard FIRE-2 implementation of gas cooling, star formation, stellar feedback, and metal enrichment as described in \citet{hopkins2017}, including the numerical terms for turbulent diffusion of metals in gas as described therein\footnote{
The original simulation presented in \citet{wetzel2016} did not include terms for turbulent metal diffusion.
Here, we analyze a re-siumulation that includes those terms (all other parameters unchanged), which creates a more realistic metallicity distribution function in both the host galaxy (Wetzel et al., in prep.) and its satellites (Escala et al., in prep.).
As explored in \citet{Su2016} and \citet{hopkins2017}, the inclusion of turbulent metal diffusion has no systematic effect in any gross galaxy properties (including the average metallicity), as we also checked for all analyses in this paper.
}, and the GIZMO code \citep{hopkins2015}. FIRE simulations have been used to study galaxies from ultra-faint dwarf galaxies \citep{wheeler2015} to small groups of galaxies \citep{feldmann2016}; FIRE simulations successfully reproduce observed internal properties of dwarf galaxies \citep{chan:fire.dwarf.cusps, elbadry2016}, thin/thick disk structure and both radial and vertical disk metallicity gradients in a Milky Way-like galaxy \citep{ma2016}, star-formation histories of dwarf and massive galaxies \citep{hopkins2014, sparre:sfmainsequence}, global galaxy scaling relations \citep{hopkins2014, ma:mass.metallicity, feldmann2016}, and satellite populations around Milky Way-mass galaxies \citep{wetzel2016}.
To construct a sample of star particles in Latte analogous to the TGAS--RAVE-on sample, we first aligned the simulation coordinate system with the disk, and then selected stars in a 3\;kpc sphere located at a distance of 8.3\;kpc from the center of the galaxy.
We classified star particles as either disk or halo using a kinematic cut in the Toomre diagram similar to the one employed for stars in the Milky Way (\S\ref{sec:sample}).
Because the circular velocity in Latte is slightly different from the Galactic value, the definition of the local standard of rest is also different, but we keep the same conservative measure for the dispersion of 220\;km/s in the disk of Latte.
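A minimal sketch of this sample construction (with a placeholder array standing in for the star-particle coordinates of the snapshot; the actual analysis operates on the Latte data products) could read:
\begin{verbatim}
import numpy as np

# Placeholder star-particle positions (kpc), assumed to be centered
# on the galaxy and rotated so that the disk lies in the X-Y plane
rng = np.random.default_rng(0)
pos = rng.normal(0., 20., size=(100000, 3))

r_solar = 8.3   # kpc, galactocentric distance of the mock "Sun"
d_max = 3.0     # kpc, radius of the local sphere

# Place the mock solar neighborhood on the X axis
center = np.array([r_solar, 0., 0.])
in_neighborhood = np.linalg.norm(pos - center, axis=1) < d_max
sample = pos[in_neighborhood]
\end{verbatim}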
The Toomre diagram for the sample in Latte is shown in the top panel of Figure~\ref{fig:latte}.
Qualitatively, it is similar to that of the Milky Way (Figure~\ref{fig:toomre}), with most of the stars rotating in the disk at $\sim235$\;km/s, and the density of stars smoothly decreasing away from the local standard of rest.
Quantitatively, the halo fraction in Latte is an order of magnitude higher at 10\%, and, compared to the Milky Way's, its kinematic space is more structured.
This is partly because the Latte sample does not suffer from selection effects, so it effectively extends to larger distances, where the halo constitutes a larger mass fraction.
Any kinematic structures are hence better sampled and more readily observable in Latte.
Additionally, at least some of the structure present in the Milky Way sample is smoothed by the observational uncertainties \citep[see, e.g.,][]{sanderson2015}, which are not present in Latte.
Stars in the Toomre diagram of Figure~\ref{fig:latte} are color-coded by metallicity and show trends similar to those observed in the Milky Way.
For a more quantitative analysis, we show metallicity distribution of Latte's disk and halo components in the central panel of Figure~\ref{fig:latte}.
To facilitate comparison, we also include metallicity distributions of the Milky Way components as empty histograms.
The Latte halo is more metal poor than its disk, and although there is no bimodality in the halo metallicity, the whole distribution is as wide as observed in the Milky Way, extending from [Fe/H]$\lesssim-2$ to [Fe/H]$\simeq0$.
This agreement, in addition to $\rm[\alpha/Fe]$ abundance trends recovered in simulated disks (Wetzel et al., in prep.), demonstrates that the physics included in the FIRE simulations captures the most important ingredients for chemical evolution in galaxies.
Following our analysis of the Milky Way sample, we proceed to divide the Latte halo into a metal-rich, [Fe/H]$>-1$, and a metal-poor, [Fe/H]$\leq-1$, component.
The ratio of metal-rich to metal-poor halo stars in Latte is not quite the same as in the Milky Way, but this does not seriously impede our goal of qualitatively understanding differences between the two populations.
Finally, we analyze orbital properties of stars in Latte by showing the orientation of their angular momenta with respect to the $Z$ axis, $\theta_L$, as solid histograms in the bottom panel of Figure~\ref{fig:latte}.
The angular momenta of Latte disk stars are well aligned with the $Z$ axis, with $\theta_L\simeq180^\circ$, similar to disk stars in the Milky Way (shown in Figure~\ref{fig:latte} as an empty histogram).
The Latte halo shows a flatter distribution of $\theta_L$, but there is still an excess of metal-rich stars on prograde orbits (histogram shaded light blue) with respect to the metal-poor halo stars (dark blue).
Overall, Latte stars have similar kinematic, chemical and orbital properties to stars observed in the Milky Way, suggesting that it provides a reasonable analogue.
We next trace Latte stars back to their birth location and suggest a possible scenario for the formation of halo stars in the Solar neighborhood.
\subsection{Formation paths of the Latte stellar halo}
\label{sec:formation}
We define the formation distance of a star particle in Latte as its physical (not comoving) distance from the center of the primary (Milky Way-like) galaxy at the time it formed, and we inspect this formation distance as a function of the star's current ($z = 0$) age in Figure~\ref{fig:dform}.
Stars with disk kinematics (as defined by our Toomre-diagram cut in Figure~\ref{fig:latte}) are shown in red, while those identified as halo are blue.
The suppression of stars that formed at solar distances $\sim$7\;Gyr ago coincides with the time of the last significant merger ($z\sim0.7$).
This event brought significant gas to the center of the galaxy, which switched the star-forming conditions from those producing mainly stars on halo-like orbits (more than 95\% of halo stars are older than 6\;Gyr) to the orderly production of disk stars (more than 90\% of disk stars were formed in the last 6\;Gyr).
The formation distances of the disk and halo components are equally dichotomous: most of the disk stars were formed close to their present-day distance from the galactic center ($5-11$\;kpc, shaded gray in Figure~\ref{fig:dform}), while halo stars originated from the extremes of the central 1\;kpc to the virial radius and beyond (for reference, the virial radius 10\;Gyr ago was $\approx70\;$kpc).
Overall, only $\sim20$\% of Latte stars on halo orbits were formed inside of their present-day radial distance range, indicating that radial redistribution is an important phenomenon sculpting the inner Galaxy.
We discuss the implications of the halo component undergoing radial migration/heating in \S\ref{sec:migration}.
\begin{figure}
\begin{center}
\includegraphics[width=0.9\columnwidth]{latte_dform2.pdf}
\caption{Distance from the host galaxy at the time of formation (in physical kpc), as a function of current stellar age, for star particles currently in the solar neighborhood (gray shaded region) in the Latte simulation.
Blue circles are kinematically identified as halo (with metal-rich stars plotted in light blue, and metal-poor stars in dark blue), while red circles are kinematically consistent with the disk.
The galaxy experienced a significant merger 7\;Gyr ago, so most stars formed at that time come from the inner 1\;kpc.
We define a star as being accreted if it formed more than 20\;kpc from the host galaxy (horizontal black line).
Otherwise, we define its formation as in situ (in the disk).
Current disk stars formed in situ and at later times, while halo stars formed early.
Furthermore, the halo contains a population of both accreted and in situ stars, with the accreted component being more metal-poor.}
\label{fig:dform}
\end{center}
\end{figure}
The formation distance of a star particle can also be used as a rudimentary diagnostic of its formation mechanism.
From Figure~\ref{fig:dform} we see that almost none of the disk stars were formed beyond 20\;kpc (indicated with a horizontal black line), so we adopt this distance as a delimiter between the in-situ and accreted origin for stars in Latte.
Most of the accreted stars are classified as halo, and were formed inside dwarf galaxies merging with the host galaxy.
The tracks in the space of formation distance and age (Figure~\ref{fig:dform}) delineate the orbits of these dwarf galaxies.
All of the satellites are disrupted once they get within the central 20\;kpc, and for most of them this happens on the first approach.
In the process, they bring in gas which fuels in-situ star formation.
At early times, most of the stars that formed in situ also become a part of the halo, while the last significant merger starts the onset of the in-situ formation of the disk.
Although satellite accretion onto the host galaxy continues after the last significant merger, none of the accreted stars reach the solar circle, so effectively all of the halo stars in the solar neighborhood were formed prior to the last significant merger.
This is in line with findings of \citet{zolotov2009}, who showed that late-time accretion predominantly builds outer parts of the stellar halo.
In summary, Latte stars born at late times formed in the disk, while old stars in Latte are predominantly part of the halo.
One third of the halo in the solar neighborhood was accreted from the infalling satellites, while the majority of stars formed in situ in the inner galaxy and migrated to the solar circle.
In this simplified depiction for the origin of the stellar halo, we do not account for the unlikely possibility that stars that formed within 20\;kpc still could have been bound to a satellite galaxy, and hence accreted, nor do we distinguish between different modes of in-situ halo formation (e.g., through a dissipative collapse; \citealt{samland2003}, or with stars being heated from the disk; \citealt{purcell2010}).
These are not serious shortcomings, because satellites that entered the inner 20\;kpc were disrupted soon thereafter, so any newly-formed stars would have been only loosely bound to the satellite at the time.
Furthermore, given that the median formation distance of the in-situ halo is $\sim4$\;kpc, we expect that dissipative collapse only marginally contributes to the census of halo stars at the solar circle, but ultimately we draw conclusions that are insensitive to the particulars of their origin.
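In code, this classification reduces to a single threshold; a minimal Python sketch (placeholder arrays, illustrating only the adopted 20\;kpc delimiter) is:
\begin{verbatim}
import numpy as np

# Placeholder formation distances (physical kpc) of halo star
# particles in the solar neighborhood
rng = np.random.default_rng(0)
d_form = rng.uniform(0., 100., size=50000)

d_split = 20.                 # kpc, in-situ vs. accreted delimiter
accreted = d_form > d_split
in_situ = ~accreted

f_in_situ = in_situ.mean()    # fraction of halo stars formed in situ
\end{verbatim}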
\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth]{latte_fin.pdf}
\caption{(\emph{Left}) Average in-situ fraction of Latte halo stars in the solar neighborhood as a function of angular momentum orientation.
Metal-poor halo stars (dark blue) have a $\approx40\%$ probability of being formed in situ regardless of their angular momentum.
The metal-rich halo (light blue) is more likely to have formed in situ, and this probability increases for stars on orbits aligned with the disk ($\theta_L>90^\circ$).
(\emph{Right}) Average in-situ fraction as a function of metallicity and current angular momentum orientation angle.
Whether or not a star was accreted depends only weakly on its current orbital properties, but it correlates well with its metallicity.
The in-situ fraction varies smoothly between the metal-rich end, where all stars formed in situ, and the metal-poor end, where all stars were accreted.
Thus, in the inner halo, a star's metallicity is a much better indicator of its origin (the probability that it was formed in situ) than its current kinematics.
}
\label{fig:facc}
\end{center}
\end{figure*}
In Figure~\ref{fig:facc} we explore how the fraction of halo stars formed in situ depends on the angular momentum orientation ($\theta_L$, left) and a combination of angular momentum and metallicity (right).
On average, 40\% of metal-poor halo stars have formed in situ (dark blue line in the left panel of Figure~\ref{fig:facc}).
Metal-rich halo stars are more likely to have formed in situ (light blue line), and this probability increases to 90\% for stars whose orbits are aligned with the disk.
Interestingly, when we investigate the dependence of the in-situ fraction simultaneously as a function of metallicity and angular momentum orientation (right panel of Figure~\ref{fig:facc}), we find only a weak dependence on the current angular momentum orientation of a star, but a strong correlation with metallicity.
All of the lowest metallicity stars, [Fe/H]$\lesssim-2.5$, were accreted (yellow), and all of the metal-rich halo stars, [Fe/H]$\gtrsim-0.5$, formed in situ (purple).
Given the similarities between the global chemical and orbital properties in Latte and the Milky Way, this result suggests that the metal-rich halo component identified in the TGAS--RAVE-on sample formed in the inner Galaxy, but was driven to the Solar circle through subsequent evolution.
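The binned in-situ fractions of Figure~\ref{fig:facc} amount to a ratio of two-dimensional histograms; a possible Python sketch (with placeholder arrays in place of the simulated halo stars) is:
\begin{verbatim}
import numpy as np

# Placeholder halo-star properties
rng = np.random.default_rng(0)
feh = rng.uniform(-3., 0., size=5000)        # [Fe/H]
theta_L = rng.uniform(0., 180., size=5000)   # degrees
in_situ = rng.random(5000) < 0.5             # origin flag

bins = [np.linspace(-3., 0., 13), np.linspace(0., 180., 10)]

# Average in-situ fraction per ([Fe/H], theta_L) bin: an
# in-situ-weighted histogram divided by the raw counts
n_in_situ, _, _ = np.histogram2d(feh, theta_L, bins=bins,
                                 weights=in_situ.astype(float))
n_all, _, _ = np.histogram2d(feh, theta_L, bins=bins)
with np.errstate(invalid="ignore"):
    f_in_situ = n_in_situ / n_all
\end{verbatim}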
Ages of individual stars are another important diagnostic of their origin, so we now explore correlations between metallicity and age for stars in the Solar neighborhood, as predicted by the Latte simulation.
Figure~\ref{fig:ages} shows how the metallicity of stars in Latte depends on their age for disk (red), in-situ halo (light blue) and accreted halo (dark blue).
Shaded regions correspond to the 16th to 84th percentile in the distribution of metallicities for stars of a given age.
In general, metallicity increases with time, however, accretion of a significant amount of the low metallicity gas during the last significant merger 7\;Gyr ago is evident as a decrease in the metallicity of stars formed in situ immediately following this event.
Comparing the different populations in the simulation, we note that the metallicities of halo stars formed in situ closely follow the evolution of disk stars, while the accreted halo is more metal poor at all ages.
The bifurcation in the metallicity tracks for the in-situ and accreted halo is a prediction of the Latte simulation, which, if confirmed observationally, can be used to directly differentiate between the accreted and in-situ halo stars.
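The shaded percentile bands of Figure~\ref{fig:ages} can be computed with simple binned statistics; a minimal sketch (placeholder age and metallicity arrays for a single component) is:
\begin{verbatim}
import numpy as np

# Placeholder ages (Gyr) and metallicities for one component
rng = np.random.default_rng(0)
age = rng.uniform(0., 13., size=20000)
feh = rng.normal(-1., 0.5, size=20000)

age_bins = np.linspace(0., 13., 27)
lo, hi = [], []
for a0, a1 in zip(age_bins[:-1], age_bins[1:]):
    sel = (age >= a0) & (age < a1)
    p16, p84 = np.percentile(feh[sel], [16, 84])
    lo.append(p16)
    hi.append(p84)
# (lo, hi) trace the 16th-84th percentile metallicity band vs. age
\end{verbatim}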
\begin{figure}
\begin{center}
\includegraphics[width=0.9\columnwidth]{latte_ages.pdf}
\caption{Metallicity range (16--84 percentile) of star particles in the Latte simulation as a function of their age for three structural components (defined as in Figures~\ref{fig:latte}--\ref{fig:dform}) identified at the solar circle: disk (red), in-situ halo (light blue) and accreted halo (dark blue).
The in-situ halo follows the metallicity evolution of the disk, while accreted halo stars of the same age are consistently more metal poor.
This prediction can be directly tested once stellar ages are available for the \emph{Gaia} stars.}
\label{fig:ages}
\end{center}
\end{figure}
\section{Discussion}
\label{sec:discussion}
We have presented evidence that a substantial fraction of the stellar halo in the solar neighborhood is metal-rich, and we now put this result in the context of previous investigations (\S~\ref{sec:previous}).
Based on its chemical abundances and orbital properties, we inferred that the metal-rich halo has been formed in situ, so we discuss mechanisms that heat stars originally formed in the disk to halo-like orbits in \S~\ref{sec:diskheating}.
We conclude the section by outlining a test of the in-situ formation scenario for the metal-rich halo component with stellar ages in \S~\ref{sec:ages}.
\subsection{Previous evidence for a metal-rich halo}
\label{sec:previous}
Large-scale spectroscopic surveys have mapped the distribution of stellar metallicity in the Milky Way \citep[e.g.,][]{ivezic2008}.
Some of these stars have distance estimates, and can therefore be spatially identified with either the disk or the halo.
Relevant for this discussion, halo stars in the outer Galaxy, $R_{G}>15$\;kpc, are very metal-poor, with median $\rm[Fe/H]\sim-2.2$, while in the inner Galaxy, the typical metallicity of a halo star is $\rm[Fe/H]\sim-1.6$ \citep[e.g.,][]{carollo2007, dejong2010}, which agrees well with the metal-poor halo component identified in our sample.
Also similar to our findings, metallicities of the inner halo extend to the solar value, even though these lie in the tail of the metallicity distribution \citep[e.g.,][]{allendeprieto2006}, rather than forming an additional component like the one visible in Figure~\ref{fig:mdf}.
However, employing only spatial information makes it hard to distinguish between the halo and the thick disk, especially at high metallicity.
The difference between the thick disk and the halo is more pronounced in kinematics, so \citet{sheffield2012} selected outliers from the disk velocity field to study the halo.
They identified eight metal-rich halo stars, whose $\alpha$-abundances are consistent with the disk, and suggested these have been kicked out of the disk.
Still, none of these stars have kinematics that could rule out a thick disk interpretation at a $3\;\sigma$ level.
To date, only one metal-rich star has a high probability of being a halo star \citep{hawkins2015}.
This star has high velocity in the Galactic rest frame, $V_{GSR}\simeq430$\;km/s, and is on an eccentric orbit, $e=0.72$, that reaches $\sim30$\;kpc above the Galactic plane, but has a metallicity $\rm[Fe/H]=-0.18$, so \citet{hawkins2015} concluded it has been ejected from the thick disk.
This star is a valuable indicator of the processes governing the assembly of the Galaxy, but being a single star, it cannot establish how ubiquitous these processes are.
\subsection{Disk heating mechanisms}
\label{sec:diskheating}
The excess of metal-rich halo stars on prograde orbits indicates they originate from within the Milky Way disk, but it is unclear at what Galactocentric distance these stars were formed.
The bulk of similarly selected stars in Latte originated from the inner Galaxy, implying that a degree of radial migration and/or heating occurred.
Alternatively, these stars could be runaway stars -- stars kicked out of the disk at the Solar radius during dynamical processes not captured within the Latte simulation.
In this section we explore implications of these opposing disk heating mechanisms, and assess how plausible they are in explaining metal-rich halo stars detected in RAVE-on--TGAS.
\subsubsection{Runaway stars}
\label{sec:runaway}
Runaway stars are young stars that were formed in the disk and ejected from their birthplace \citep{blaauw1961}.
Some of them have been found in the halo \citep[e.g.,][]{conlon1990}, so it is sensible to test whether any of our metal-rich halo stars are in fact runaways.
Having formed in the disk recently, runaways should have high metallicities, and \citet{bromley2009} have already suggested that solar-metallicity stars reported 5\;kpc away from the Galactic plane \citep{ivezic2008} could be runaway stars.
The metallicity distribution of our metal-rich halo peaks at $\rm[Fe/H]\approx-0.5$, so most of them are probably not runaway stars.
However, the high-metallicity tail of our halo sample extends to super-solar values, so we test whether any of those stars are consistent with being runaways.
In addition to being metal-rich, runaway stars typically have an early spectral type, so to test the runaway origin of the metal-rich halo, we analyze the fraction of runaways expected in a disk population as a function of temperature.
Due to difficulties in the spectroscopic analysis of hot stars, the RAVE-on catalog preferentially contains cool stars, with more than 80\% of the sample being K stars.
The expected fraction of runaway stars drops sharply with decreasing temperature: from 40\% for O stars and 5\% for B stars to $\approx2\%$ for A stars, with negligible contributions from later spectral types \citep{blaauw1961, gies1986, bromley2009, perets2012}.
There are no OB stars in our halo sample, and only three A stars.
These all have a super-solar metallicity, $\rm[Fe/H]>0$, so they are prime candidates for runaway stars.
However, they constitute only 0.5\% of the metal-rich halo sample, so we can safely conclude that runaway stars are a minor component of the observed metal-rich stellar halo.
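The arithmetic behind this conclusion is simply a weighting of the literature runaway fractions by the spectral-type mix of the sample. A minimal sketch (the mix below is illustrative, apart from the A-star fraction quoted above; the fractions per spectral type are those cited in the text):
\begin{verbatim}
runaway_fraction = {'O': 0.40, 'B': 0.05, 'A': 0.02,
                    'F': 0.0, 'G': 0.0, 'K': 0.0}   # later types negligible
sample_mix = {'O': 0.0, 'B': 0.0, 'A': 0.005,       # no OB stars, 3 A stars
              'F': 0.05, 'G': 0.10, 'K': 0.845}     # assumed, K-dominated
expected = sum(runaway_fraction[t]*sample_mix[t] for t in sample_mix)
print(f"expected runaway contamination: {expected:.4%}")   # ~0.01%
\end{verbatim}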
\subsubsection{Radial migration}
\label{sec:migration}
In the classical picture of radial migration, spiral structures in the disk radially scatter nearby stars up to several kpc \citep{sellwood2002}.
Stars that have migrated outwards are typically more metal-rich than their neighbors, so radial migration explains why some stars in the Solar neighborhood (possibly including the Sun!) are more metal-rich than both the surrounding stellar population and the local interstellar medium \citep{wielen1996}.
Idealized simulations of disk formation have found radial migration to be a key component in explaining numerous observables, such as the spread in the age--metallicity relation \citep{roskar2008} or the disk morphology and its abundance patterns \citep{schonrich2009}.
Such studies have also shown that stars migrating outwards reach larger heights above the disk plane \citep[e.g.,][]{schonrich2009, loebman2011}.
This scenario can explain, at least in part, the formation of the thick disk \citep[e.g.,][]{wilson2011}, so it is also conceivable that the metal-rich stars we identified on halo-like orbits are an endpoint of this process.
However, subsequent numerical works have found that outward migrators do not attain larger scale heights from the disk plane \citep{minchev2012, vera-ciro2014}, thus casting doubt on the idea of forming the thick disk and a metal-rich halo through radial migration.
The above results on radial migration are based on simulations initialized to match the orderly morphology of stellar disks observed at the present day and do not capture the range of dynamical conditions present in the cosmological simulations of galaxy formation \citep[e.g.,][]{agertz2009}.
In particular, the metal-rich halo in the Latte simulation was formed at early cosmic times ($z \gtrsim 1$, or age $\gtrsim 7$ Gyr), before the formation of the thin disk \citep{ma2016} while the host galaxy was still actively accreting.
Accretion from the inter-galactic medium and merging satellites brought in a lot of gas to the galactic center (evident as star-forming tracks in Figure~\ref{fig:dform}), which fueled additional in-situ star formation.
\citet{elbadry2016} showed that these conditions create two mechanisms for radially displacing stars from their birth location.
First, some stars are formed during gas outflows, so their initial orbits can be eccentric and have large apocenters.
Second, the combination of inflowing gas accretion and gas outflows driven by stellar feedback produce strong fluctuations in the underlying gravitational potential.
Such fluctuations have already been shown to change the distribution of dark matter, in particular, in generating a cored density profile \citep[e.g.,][]{pontzen2012, brooks2014, dicintio2014, chan:fire.dwarf.cusps}, but \citet{elbadry2016} further showed that stellar orbits are affected as well, ultimately becoming heated to a more isotropic distribution.
This mechanism is most efficient in relatively shallow potential wells of dwarf galaxies.
While the Milky Way-mass host galaxy in Latte is too massive to exhibit such behavior at $z \sim 0$, its progenitor had a significantly lower stellar mass (and was more gas-rich) while the metal-rich halo was forming, so we suggest that a similar process drove the radial migration to the Solar circle that we see here, consistent with the conclusions of \citet{ma2017}.
We will investigate this behavior in Latte in more detail in future work.
Radial migration, driven by large-scale motions in the Milky Way progenitor, could explain the origin of metal-rich stars on halo-like orbits in the Solar neighborhood.
If these stars truly originate from the inner Galaxy, then they not only illustrate an important dynamical mechanism shaping the Galaxy, but are also a unique window into star formation in the early Milky Way.
Future analysis and data from the \emph{Gaia} mission will allow us to test this hypothesis, which we briefly discuss in the next section.
\subsection{Inferring the formation scenario of the stellar halo with stellar ages}
\label{sec:ages}
In \S\ref{sec:formation}, we showed that the metal-rich halo simulated in Latte was chemically enriched in a fashion similar to the Latte disk.
When compared at a fixed age, these components have higher metallicity than the metal-poor halo at all times (see Figure~\ref{fig:ages}).
We can directly test this prediction by dating the stars in our TGAS--RAVE-on sample.
Unfortunately, stellar ages are not obtained easily \citep[for a recent review, see][]{soderblom2010}.
A number of observables that correlate with age have been identified, such as stellar rotation \citep{barnes2007}, chromospheric activity \citep{mamajek2008}, or surface abundances \citep{ness2016}, but none of these empirical relations are applicable to all of the field stars.
Models of stellar evolution can relate the position of any star in the Hertzsprung--Russell diagram (HRD) and its internal structure to its age.
The latter is inferred from asteroseismic studies of stellar pulsations, which have so far been employed to date a few dozen well-observed stars \citep[e.g.,][]{keplerages}.
In the coming decade, asteroseismic dating will be expanded, but will still be limited to the brightest stars \citep{tess, plato}.
We expect the HRD age dating to be more easily applied to a larger sample of stars, and discuss it in more detail below.
Coeval stellar populations are routinely dated by comparison of their tracks in the HRD to theoretical isochrones \citep[e.g.,][]{sandage1970, chaboyer1998, dotter2007}, but isochrone dating of field stars is less straightforward.
Intrinsically, without the HRD positions of coeval companions, age estimates of field stars are very uncertain in evolutionary stages which keep stars at an approximately constant position in the HRD, such as the main sequence phase.
In addition, precisely measuring stellar distances, which are required to put a star on the HRD, as opposed to merely on a color-magnitude diagram, is observationally challenging.
However, if distances are known, stellar ages can be measured for stars in pre- or post-main sequence evolutionary stages.
\citet{gcs} measured ages and other intrinsic stellar parameters for thousands of nearby field stars by obtaining their absolute magnitudes from Hipparcos parallaxes, effective temperatures and metallicities from follow-up spectroscopy, and then reading off the age by interpolating theoretical isochrones in this three-dimensional space.
TGAS has already increased the sample of stars with known distances by an order of magnitude, and several groups are modeling the multi-band stellar photometry (and including spectroscopy when available) to provide constraints on their ages.
In such a procedure, ages of red giants are measured with a precision of $1-3$\;Gyr when only photometric data is available.
Including spectroscopically derived stellar parameters reduces the uncertainty in recovered ages to 1\;Gyr (P.~Cargile, private communication).
Most of the halo stars in our sample are giants, so if the bifurcation in the age--metallicity relation of halo stars exists at the level suggested by the simulation studied here, we will soon be able to detect it observationally.
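To illustrate the last step of such a procedure, the following sketch (in Python with \texttt{scipy}) interpolates a precomputed isochrone table in the three-dimensional space of $T_{\rm eff}$, $\rm[Fe/H]$ and absolute magnitude, in the spirit of \citet{gcs}; the file name and the stellar parameters below are hypothetical.
\begin{verbatim}
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# hypothetical table, columns: Teff [K], [Fe/H] [dex], M_V [mag], age [Gyr]
iso_table = np.loadtxt('isochrone_grid.txt')
age_of = LinearNDInterpolator(iso_table[:, :3], iso_table[:, 3])

teff, feh = 4800.0, -0.5              # from spectroscopy (illustrative)
parallax_mas, vmag = 5.0, 9.0         # from TGAS and photometry
m_v = vmag - 5.0*np.log10(100.0/parallax_mas)   # absolute magnitude
print(age_of(teff, feh, m_v))         # interpolated age in Gyr
\end{verbatim}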
\section{Summary}
\label{sec:summary}
The goal of this work was to investigate the origin of halo stars in the Solar neighborhood.
We analyzed a sample of stars with full 6D phase-space information, obtained using the first year of \emph{Gaia} data combined with observations from ground-based spectroscopic surveys.
The halo was defined by requiring a relative velocity with respect to the Local Standard of Rest larger than 220\;km/s.
Metal-poor stars, $\rm[Fe/H]<-1$, comprise approximately half of this kinematically defined halo sample.
The other half of the sample are stars more metal-rich than expected for the inner halo ($\rm[Fe/H]>-1$), and whose metallicity and $\alpha$-element abundances are traditionally associated with the disk population.
We built a toy model of the Solar neighborhood to show that orbits of the metal-poor halo stars are isotropically distributed, while the metal-rich halo is preferentially aligned with the Galactic disk.
To uncover the origin of these two halo components, we performed an identical analysis on a sample of stars selected in a Solar-like neighborhood of the Latte cosmological hydrodynamic simulation.
A significant fraction of the simulated halo stars are also metal-rich and on prograde orbits with respect to the disk, and most of them were formed in situ in the main galaxy.
In general, we found that the origin of halo stars is well correlated with stellar metallicity, with metal-poor stars having been accreted and metal-rich stars having formed in situ, and that it has little dependence on their kinematics.
Additionally, the majority of metal-rich halo stars in Latte were formed in the inner galaxy, and migrated several kpc outwards, indicating that radial redistribution actively operates in Milky Way-like galaxies.
In this model, the in-situ component of the stellar halo is more metal-rich than the accreted component at a fixed age, a hypothesis easily verifiable with forthcoming data.
We have presented for the first time a large population of metal-rich, $\rm[Fe/H]>-1$, stars on halo-like orbits, and inferred that they have formed in situ.
These conclusions arise in part from a remarkable agreement between the precise observational data from the \emph{Gaia} mission and the Latte simulation of a Milky Way-like galaxy.
Direct comparison of observed and simulated galaxies at this level of detail will be a powerful tool in studying the Galaxy in the \emph{Gaia} era.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{distance_hist_raveon.pdf}
\caption{Distribution of heliocentric distances in the TGAS--RAVE-on sample, split by kinematically identified components.
The disk stars (red) are in general closer than the halo (blue), but we see no difference in the distances of the metal-rich (light blue) and metal-poor halo component (dark blue).
Orbital differences observed between the metal-rich and the metal-poor halo do not originate from a difference in their spatial distributions.
}
\label{fig:distance}
\end{center}
\end{figure}
\vspace{0.5cm}
\emph{Acknowledgments:}
It is a pleasure to thank Andy Casey for providing a match of the RAVE-on catalog to TGAS, Yuan-Sen Ting for matching the APOGEE catalog to TGAS, and Kim Venn, Rosy Wyse, Warren Brown, and Elena D'Onghia for insightful comments that shaped the progression of this project.
This work has made use of the following Python packages: \texttt{matplotlib} \citep{mpl}, \texttt{numpy} \citep{numpy}, \texttt{scipy} \citep{scipy}, \texttt{Astropy} \citep{astropy} and \texttt{gala} \citep{gala}.
This paper was written in part at the 2016 NYC Gaia Sprint, hosted by the Center for Computational Astrophysics at the Simons Foundation in New York City.
AB was supported by an Institute for Theory and Computation Fellowship.
CC acknowledges support from the Packard Foundation.
AW was supported by a Caltech-Carnegie Fellowship, in part through the Moore Center for Theoretical Cosmology and Physics at Caltech, and by NASA through grant HST-GO-14734 from STScI.
DK was supported by NSF grant AST-1412153 and a Cottrell Scholar Award from the Research Corporation for Science Advancement.
This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{http://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{http://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement.
Funding for RAVE has been provided by: the Australian Astronomical Observatory; the Leibniz-Institut fuer Astrophysik Potsdam (AIP); the Australian National University; the Australian Research Council; the French National Research Agency; the German Research Foundation (SPP 1177 and SFB 881); the European Research Council (ERC-StG 240271 Galactica); the Istituto Nazionale di Astrofisica at Padova; The Johns Hopkins University; the National Science Foundation of the USA (AST-0908326); the W. M. Keck foundation; the Macquarie University; the Netherlands Research School for Astronomy; the Natural Sciences and Engineering Research Council of Canada; the Slovenian Research Agency; the Swiss National Science Foundation; the Science \& Technology Facilities Council of the UK; Opticon; Strasbourg Observatory; and the Universities of Groningen, Heidelberg and Sydney.
The RAVE web site is at \url{https://www.rave-survey.org}.
Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is \url{www.sdss.org}.
SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrof\'isica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, Lawrence Berkeley National Laboratory, Leibniz Institut f\"ur Astrophysik Potsdam (AIP), Max-Planck-Institut f\"ur Astronomie (MPIA Heidelberg), Max-Planck-Institut f\"ur Astrophysik (MPA Garching), Max-Planck-Institut f\"ur Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observat\'ario Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Aut\'onoma de M\'exico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.
\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth]{tdcontamination.pdf}
\caption{(Left) Probability contours of toy model thick disk stars in the Toomre diagram in whole steps of standard deviation, $\sigma$ (orange lines).
All halo stars from our RAVE-on--TGAS sample (metal-rich in light blue circles and metal-poor in dark blue squares) lie outside of the $3\;\sigma$ thick disk contour, but some are consistent with the thick disk at a $4\;\sigma$ level.
(Right) Probability for stars, identified in RAVE-on--TGAS as part of the halo, of actually being a part of the thick disk.
Lines show cumulative fractions of halo stars as a function of this probability, with light blue for the metal-rich and dark blue for the metal-poor halo stars.
Only a small fraction of both halo components is expected to be a misclassified part of the thick disk (20\% of the metal-rich and 5\% of the metal-poor halo have a thick disk probability larger than 1\%, marked with a black vertical line).}
\label{fig:tdcont}
\end{center}
\end{figure*}
\bibliographystyle{apj}
\section{Introduction}
\label{results}
Let $(M^n,g)$ be a connected Riemannian (= $g$ is positive definite) or pseudo-Riemannian manifold of dimension $n\ge 2$. We say that a metric $\bar g$ on $M^n$ is \emph{geodesically equivalent} to $g$, if every geodesic of $g$ is a (reparametrized) geodesic of $\bar g$. We say that they are \emph{affine equivalent}, if their Levi-Civita connections coincide.
The first examples of
geodesically equivalent metrics are due to Lagrange \cite{lagrange}.
He observed that the radial projection $f(x,y,z)= \left(-\frac{x}{z},- \frac{y}{z}, -1\right)$ takes geodesics of the half-sphere
$S^2:=\{(x,y,z)\in \mathbb{R}^3: \ \ x^2+y^2+z^2=1, \ z<0\}$ to the geodesics of the plane $ E^2:=\{(x,y,z)\in \mathbb{R}^3: \ \ z=-1\}$, since the geodesics of both metrics are intersections of 2-planes containing the point $(0,0,0)$ with the respective surface. Later, Beltrami \cite{Beltrami2,Beltrami3} generalized the example to metrics of constant negative curvature, and to pseudo-Riemannian metrics of constant curvature. In the example of Lagrange, he replaced the half-sphere by a half of
one of the hyperboloids $H_\pm^2:=\{(x,y,z)\in \mathbb{R}^3: \ \ x^2+y^2-z^2=\pm 1\}, $ with the restriction of the Lorentz metric $dx^2+dy^2-dz^2$ to it. Then, the geodesics of the metric are again intersections of 2-planes containing the point $(0,0,0)$ with the surface, and, therefore, the radial projection sends them to straight lines of the appropriate plane.
Though the examples of Lagrange and Beltrami are two-dimensional, one can easily generalize them to every dimension (for Riemannian metrics, this was done already in \cite{Beltrami2}) and to every signature.
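Lagrange's observation is easy to check numerically. The following sketch (in Python; the plane cutting out the great circle is chosen arbitrarily) samples a great circle on the half-sphere $z<0$, applies the radial projection $f$, and verifies that the image points are collinear up to rounding errors.
\begin{verbatim}
import numpy as np

n = np.array([1.0, 2.0, 3.0]); n /= np.linalg.norm(n)        # normal of the
e1 = np.cross(n, [1.0, 0.0, 0.0]); e1 /= np.linalg.norm(e1)  # plane through 0
e2 = np.cross(n, e1)
t = np.linspace(0.0, 2.0*np.pi, 500)
circle = np.outer(np.cos(t), e1) + np.outer(np.sin(t), e2)   # great circle
pts = circle[circle[:, 2] < -0.1]          # keep the part with z < 0
proj = -pts[:, :2]/pts[:, 2:3]             # f(x,y,z) = (-x/z, -y/z) in z = -1
v = proj - proj[0]                         # 2D cross products vanish
print(np.abs(v[:, 0]*v[-1, 1] - v[:, 1]*v[-1, 0]).max())   # ~1e-14: collinear
\end{verbatim}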
Since the time of Hermann Weyl, geodesically equivalent metrics have been actively discussed in the realm of general relativity theory. The context of general relativity poses the following restrictions: the dimension is $4$, the metrics are pseudo-Riemannian of Lorentz signature $(-,+,+,+)$ or $(+,-,-,-)$, and sometimes the metrics satisfy additional assumptions, such as one or both metrics being Ricci-flat ($R_{ij}=0$), or Einstein ($R_{ij}= \tfrac{R}{4}g_{ij}$), or, more generally, satisfying the Einstein equation $R_{ij}-\tfrac{R}{2}g_{ij} =T_{ij}$ with a `physically interesting' stress-energy tensor $T_{ij}$.
Let us explain (using a slightly naive language)
one of the possible motivations for this interest. Suppose we would like to understand the structure of the space-time in a certain part of the universe. We assume that this part is far enough away that we can only use telescopes (in particular, we cannot
send a space ship there). We still assume that the telescopes can see sufficiently many objects in this part of the universe. Then, if the relativistic effects are not negligible (which happens, for example, if
the objects in this
part of space-time are sufficiently fast
or if this region of the universe is big enough),
we obtain, as a rule, the world lines of the objects as unparameterized curves.
Indeed, local coordinates on a 4-manifold are 4 smooth functions on the manifold such that their differentials are linearly independent.
Now, for every freely falling object in this part of the universe such that it can be registered by telescopes, each telescope at every moment of time
gives us two such functions,
namely the spherical coordinates $\phi$ and $\theta$ (latitude and longitude)
of the direction from which the light reflected
from the object arrives at the telescope (in a naive language, the telescope `sees' the direction where the object lies), see the picture below. Since we have two telescopes, altogether we have 4 functions of $t$, \ $(\phi_1(t), \theta_1(t),\phi_2(t), \theta_2(t))$, that we consider to be
the world line (i.e., geodesic) of the object in the coordinate system $(\phi_1, \theta_1,\phi_2, \theta_2)$. If we see sufficiently many objects, we have sufficiently many geodesics.
Of course, we cannot get lightlike or spacelike geodesics by this procedure. In the best case, we can reconstruct (numerically) sufficiently many geodesics, in the sense that their velocity vectors are dense in a certain open subset of $TM$. See also the discussion in \cite{Hall2007}.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{manyobservers_new}
\end{figure}
Now, as a rule, we cannot get the natural parameter (=proper time) of an object. Indeed, if the relativistic effects are not negligible,
the proper time of the object is not our own time $t$, i.e., the curve
$(\phi_1(t), \theta_1(t),\phi_2(t), \theta_2(t))$ is a reparameterized geodesic only. If we cannot observe a periodic process on an object (note that astronomical objects on which we can register a periodic process, for example pulsars, are very rare) or find any other way to measure the proper time of the object,
we cannot obtain the proper time of the objects by astronomic observations (see also the discussion in \cite{Gibbons}).
In view of this discussion, the following two problems (Problem \ref{1} and Problem \ref{2} below)
in the theory of geodesically equivalent metrics are interesting for general relativity:
\begin{Prob} \label{1} How to reconstruct a metric by its unparameterized geodesics?
\end{Prob}
The general setting is as follows: we have
a family of smooth curves $\gamma(t;\alpha)$ in $U\subseteq \mathbb{R}^4$ depending on a 6-dimensional\footnote{Locally, the set of unparameterized
geodesics of an $n-$dimensional manifold has the structure of a manifold of dimension $2(n-1)$.}
parameter $\alpha=(\alpha_1,...,\alpha_6)$;
we assume that the family is sufficiently big (we formalize `sufficiently big' in the beginning of Section \ref{Problem11}).
We need to find a metric $g$ such that for every fixed
$\alpha$ the curve
$\gamma(t;\alpha)$ is a reparameterized geodesic of $g$.
Mathematically, the problem has sense in every dimension and for every signature of the metric.
In dimension 2, versions of this question were considered by S. Lie \cite{Lie} and R. Liouville \cite{Liouville}, and were also discussed by Veblen and Thomas \cite{thomas,veblen23,veblen26} and
Eisenhart \cite{eisenhart23} at the beginning of the 20th century. In the realm of general relativity, the problem was explicitly stated by J. Ehlers et al.\ \cite{Ehlers},
where it was said that {\it ``We reject clocks as basic tools for setting up the space-time geometry and propose ... freely falling particles instead. We wish to show how the full space-time geometry can be synthesized ... . Not only the measurement of length but also that of time then appears as a derived operation.''}
This problem can be naturally divided in two subproblems.
\begin{Subprob} \label{1.1} Given a family of curves $\gamma(t; \alpha)$, how to understand whether these curves are reparameterized geodesics of a certain affine connection? How to reconstruct this connection effectively?
\end{Subprob}
We will say that a metric $g$ {\it lies in the projective class} of a certain symmetric affine connection $\Gamma=\Gamma_{jk}^i$, if every geodesic of $g$ is a reparameterized geodesic of $\Gamma$.
\begin{Subprob} \label{1.2} Given an affine connection $\Gamma=\Gamma_{jk}^i$,
how to understand whether there exists a metric $g$ in the projective class of $\Gamma$? How to reconstruct this metric effectively?
\end{Subprob}
Both subproblems were actively discussed in the literature. In dimension 2, the answer to Subproblem \ref{1.1} is classical and was known already to Sophus Lie; given a family of curves one constructs a second-order ODE $y''(x) = f(x, y(x), y'(x))$; the curves $\gamma(t; \alpha)$ are reparameterized geodesics of a certain connection if and only if the right hand side of the ODE is a polynomial of degree 3 in $y'(x)$, $$f(x , y(x), y'(x))= A(x, y(x)) + B(x, y(x)) y'(x)+ C(x, y(x)) \left(y'(x)\right)^2+ D(x, y(x)) \left(y'(x)\right)^3.$$
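This criterion is straightforward to test symbolically: the right hand side is a polynomial of degree $\le 3$ in $y'$ if and only if its fourth derivative with respect to $y'$ vanishes identically. A minimal sketch with \texttt{sympy} (the coefficient functions $A,\dots,D$ are kept generic):
\begin{verbatim}
import sympy as sp

x, y, p = sp.symbols('x y p')            # p stands for y'(x)
A, B, C, D = (sp.Function(s)(x, y) for s in 'ABCD')

f_good = A + B*p + C*p**2 + D*p**3       # cubic in y': projective
f_bad = f_good + p**4                    # quartic term: not projective

for f in (f_good, f_bad):
    print(sp.simplify(sp.diff(f, p, 4)) == 0)
# expected output: True, False
\end{verbatim}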
The answer in the multidimensional case can be obtained using the same idea as in dimension $2$; we give it in Section \ref{Problem11}.
The second subproblem is more complicated and remains largely open. In dimension 2, the subproblem was considered in the
recent paper \cite{BDE} of Bryant et al: given an affine connection,
they construct a system of differential invariants
that vanish if and only if there exists a metric (in a neighborhood of almost every point) in the projective class of this connection. The invariants are very complicated and are of very high orders.
In theory, one can also obtain a similar answer in every dimension. Indeed, by \cite{eastwood},
in every dimension the existence of a metric in a projective class is equivalent to the existence of a nontrivial solution of a certain overdetermined system of linear PDE in the Cauchy-Frobenius form (i.e., the system is of first order and all derivatives of the unknown functions
are explicit (linear) expressions in the unknown functions). Given an overdetermined system of PDE in the Cauchy-Frobenius form, one can always, in theory, construct a system of differential invariants that vanish if and only if the system admits a nontrivial solution (in a neighborhood of almost every point). An effective construction of these differential invariants could be very complicated. The results of \cite{BDE} show that this is indeed the case in dimension 2. It is hard to predict whether the system of differential invariants is easier in the multidimensional case (normally, multidimensional cases are harder
than low-dimensional ones;
but sometimes overdetermined systems are easier to analyse in
higher dimensions, because they can have a higher degree of overdetermination).
In the present paper, in Section \ref{Problem12} we give an algorithmic answer to Subproblem \ref{1.2} under the additional assumption that the metric $g$ we are looking for is Ricci-flat and the projective class satisfies a certain nondegeneracy assumption, i.e., in the situation most interesting from the viewpoint of general relativity. In Section \ref{Problem21},
we also discuss the case of an arbitrary metric: we show that also in this case one can algorithmically reconstruct the metric from its projective class
under a certain nondegeneracy assumption on the projective class, though in this case the nondegeneracy assumption is harder to check.
\begin{Rem}
Of course it is important in what form the geodesics $\gamma(t; \alpha)$ are given. Below, it will be clear what information we need from $ \gamma(t; \alpha)$ for our algorithm to work. If the geodesics are given numerically (which is the case if they come from astronomic observations), this information can be extracted without difficulties.
\end{Rem}
\begin{Prob} \label{2} In what situations is the reconstruction of a metric by its unparameterized geodesics unique (up to multiplication of the metric by a constant)?
\end{Prob}
The example of Lagrange/Beltrami above shows that in certain situations the reconstruction is not unique: the geodesics of every
metric of constant curvature are straight lines, i.e., the geodesics of the standard flat metric,
in a certain coordinate system.
Constant curvature metrics are not the only metrics that admit nontrivial
geodesic equivalence. For example, as was shown by Dini, the following two metrics on $U^2\subseteq \mathbb{R}^2$ are geodesically equivalent
\begin{equation} \label{dini} g= (X(x) - Y(y))(dx^2 +dy^2) \ \textrm{and} \
\left(\frac{1}{Y(y)} - \frac{1}{X(x)}\right)\left(\frac{dx^2}{X(x)} +\frac{dy^2}{Y(y)}\right),\end{equation}
where $X$ and $Y$ are arbitrary (smooth) functions of the indicated variables
such that the formulas \eqref{dini} correspond to metrics
(i.e., $0\ne X\ne Y \ne 0$ for all $(x,y)\in U^2$).
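The geodesic equivalence of the metrics \eqref{dini} can be verified symbolically via Levi-Civita's formula \eqref{bar} below: the Christoffel symbols of geodesically equivalent metrics differ by $\delta^a_b\phi_c + \delta^a_c\phi_b$ with $\phi = \tfrac{1}{2(n+1)}\log\left|\det \bar g/\det g\right|$. A sketch in \texttt{sympy} (expected output: \texttt{True}):
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y')
X, Y = sp.Function('X')(x), sp.Function('Y')(y)
coords = (x, y)

def christoffel(g):
    ginv, n = g.inv(), g.shape[0]
    return [[[sp.simplify(sum(ginv[a, d]*(g[d, b].diff(coords[c])
                                          + g[d, c].diff(coords[b])
                                          - g[b, c].diff(coords[d]))
                              for d in range(n))/2)
              for c in range(n)] for b in range(n)] for a in range(n)]

g = sp.diag(X - Y, X - Y)
gbar = sp.diag((1/Y - 1/X)/X, (1/Y - 1/X)/Y)
G, Gb = christoffel(g), christoffel(gbar)

phi = sp.log(gbar.det()/g.det())/6       # 1/(2(n+1)) with n = 2
dphi = [phi.diff(c) for c in coords]
print(all(sp.simplify(Gb[a][b][c] - G[a][b][c]
                      - (a == b)*dphi[c] - (a == c)*dphi[b]) == 0
          for a in range(2) for b in range(2) for c in range(2)))
\end{verbatim}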
This example was generalized to all dimensions by Levi-Civita: from his results it follows that the following two 4-dimensional metrics are geodesically equivalent:
\begin{equation}\label{LC1} \begin{array}{ccr} g &=& (X_0(x_0)- X_1(x_1))(X_0(x_0)- X_2(x_2))(X_0(x_0)- X_3(x_3)) dx_0^2 \\ &+&
(X_0(x_0)- X_1(x_1))(X_1(x_1)- X_2(x_2))( X_1(x_1)- X_3(x_3))dx_1^2 \\ &+&
(X_0(x_0)- X_2(x_2))(X_1(x_1)- X_2(x_2))( X_2(x_2)- X_3(x_3))dx_2^2 \\ & + & (X_0(x_0)- X_3(x_3))(X_1(x_1)- X_3(x_3))( X_2(x_2)- X_3(x_3))dx_3^2 \end{array} \end{equation}
\begin{equation} \label{LC2} \begin{array}{ccr} \bar g &=& \frac{1}{X_0(x_0)} \frac{1}{X_0(x_0)X_1(x_1)X_2(x_2)X_3(x_3)} (X_0(x_0)- X_1(x_1))(X_0(x_0)- X_2(x_2))(X_0(x_0)- X_3(x_3)) dx_0^2 \\ &+& \frac{1}{X_1(x_1)} \frac{1}{X_0(x_0)X_1(x_1)X_2(x_2)X_3(x_3)}
(X_0(x_0)- X_1(x_1))(X_1(x_1)- X_2(x_2))( X_1(x_1)- X_3(x_3))dx_1^2 \\ &+& \frac{1}{X_2(x_2)} \frac{1}{X_0(x_0)X_1(x_1)X_2(x_2)X_3(x_3)}
(X_0(x_0)- X_2(x_2))(X_1(x_1)- X_2(x_2))( X_2(x_2)- X_3(x_3))dx_2^2 \\ & + & \frac{1}{X_3(x_3)} \frac{1}{X_0(x_0)X_1(x_1)X_2(x_2)X_3(x_3)}(X_0(x_0)- X_3(x_3))(X_1(x_1)- X_3(x_3))( X_2(x_2)- X_3(x_3))dx_3^2. \end{array} \end{equation}
Here $(x_0,...,x_3)$ are local coordinates and
the functions $X_i$ depend on the indicated variables and are such that the metrics have sense.
In view of this,
in the realm of general relativity, Problem \ref{2} can
be naturally divided into two subproblems.
We call a metric $g$ {\it geodesically rigid,}
if every metric $\bar g$ geodesically equivalent to $g$ is proportional to $g$.
\begin{Subprob} \label{2.1} What metrics `interesting' for general relativity are { geodesically rigid}?
\end{Subprob}
\begin{Subprob} \label{2.2}
Construct all pairs of nonproportional geodesically equivalent metrics.
\end{Subprob}
Let us comment on these subproblems. The part of Subproblem \ref{2.1} that is hard or even impossible to formalize is the word ``interesting''. Instead of formalizing this notion, let us give a few results in this direction.
Probably the metrics that are most interesting in the context of general relativity are
Ricci-flat nonflat metrics. As it was shown by A. Z. Petrov in \cite{Petrov1} (see also \cite{Hall2007} and \cite{einstein}),
\noindent{\it 4-dimensional Ricci-flat nonflat metrics of Lorentz signature can not be geodesically equivalent, unless they are affinely equivalent}
\noindent (recall that two metrics are {\it affinely equivalent} if their Levi-Civita connections coincide; affinely equivalent Ricci-flat 4-dimensional metrics are completely understood).
This is one of the results for which Petrov was awarded the Lenin prize, the most important scientific award of the Soviet Union, in 1972.
Recently, the answer of Petrov was generalized in \cite{einstein} (see also \cite{Hall2010}): it was shown that
\noindent{\it if $g$ and $\bar g$ are geodesically equivalent metrics on a $4-$dimensional manifold, and $g$ is Einstein and of nonconstant curvature, then the metrics are affinely equivalent}.
\noindent Let us also give an example of a metric that is important for general relativity and that is not geodesically rigid. This is the so-called Friedmann-Lema\^{\i}tre-Robertson-Walker (FLRW) metric
\begin{equation} \label{RW1} g = -dt^2 + R(t)^2\, \frac{dx^2 + dy^2 + dz^2}{1 + \tfrac{\kappa}{4} (x^2 + y^2 + z^2)}, \qquad \kappa = +1;\ 0;\ -1,\end{equation}
where $R = R(t)$ is a real function (the scale factor) of the `cosmic time' $t$. The metric is not geodesically rigid. Indeed, for every constant $c$ such that the formula below makes sense, the metric \begin{equation} \label{RW2} \bar g = \frac{-1}{(R(t)^2+c)^2} dt^2 + \frac{R(t)^2}{c(R(t)^2+c)}\, \frac{dx^2 + dy^2 + dz^2}{1 + \tfrac{\kappa}{4} (x^2 + y^2 + z^2)}
\end{equation} is geodesically equivalent to $g$ (one can see this directly, as was done for example in \cite{Nurowski} or \cite{Hall2008}; see also the discussion in \cite{Gibbons}. Actually, the pair of geodesically equivalent metrics (\ref{RW1},\ref{RW2}) is a special case of the geodesically equivalent metrics of Levi-Civita \cite{Levi-Civita}).
For certain functions $R$, the metric \eqref{RW1} is the main ingredient of the so-called Standard Model of modern cosmology, and is of course very interesting for general relativity.
The metrics listed above, i.e., Einstein metrics and FLRW metrics, are without any doubt interesting for general relativity. Of course, there are other metrics that could be interesting for general relativity, and we consider it very important
to understand what `interesting' metrics are geodesically rigid. In the present paper, in Section \ref{Problem21},
we prove that
{ \it almost every 4-dimensional metric is geodesically rigid.}
\noindent Let us explain what we mean by `almost every'. Our result is local, so we will work in a small neighborhood $U\subset \mathbb{R}^4$
with fixed coordinates $(x_1,...,x_4)$. We consider a metric $g$ as the mapping $g:U\to \mathbb{R}^{\tfrac{n(n+1)}{2}}= \mathbb{R}^{10}$; the space $ \mathbb{R}^{\tfrac{n(n+1)}{2}}$ should be viewed as the space of symmetric $n\times n$-matrices. On the space of metrics (viewed as mappings) we consider the standard uniform $C^{2}-$topology: the metric $g$ is $\varepsilon-$close to the metric $\bar g$ in this topology, if
the components of $g$ and their first and second derivatives are $\varepsilon-$close to that of $\bar g$.
In the present paper, we prove that
\noindent {\it for any metric $g$ and every $\varepsilon >0$ there exists a metric $\hat g$ such that $\hat g$ is $\varepsilon$-close to $g$ in the $C^2-$sense, and such that $\hat g$ is geodesically rigid. Moreover, there exists $\varepsilon'>0$ such that
every metric that is $\varepsilon'-$ close to $\hat g$ in the $C^2-$sense is also geodesically rigid. }
\noindent The result is also true in all dimensions $\ge 4$; the proof is essentially the same.
Concerning the lower dimensions, the result is true in dimension 3,
if we replace the uniform $C^2-$topology by the uniform $C^3$-topology. The proof (not given here) is based on the same idea. In dimension 2, the result is again
true, if we replace the uniform $C^2-$topology by the uniform $C^8$-topology.
This result was expected, at least if we replace $C^2-$topology by $C^\infty$-topology. Indeed, by Sinjukov \cite{sinjukov} and Eastwood et al \cite{eastwood}, the existence of a metric geodesically equivalent to a given one is equivalent to the existence of a nontrivial solution of a certain linear system of partial
differential equations in the Cauchy-Frobenius form \eqref{prol}, whose coefficients are certain invariant
expressions in the components of the given metrics and their derivatives.
It is known that the existence of a solution
of such a system is equivalent to certain differential conditions on the coefficients, that is, on
the entries of the metrics. If there exists at least one metric that is geodesically rigid,
then the differential conditions are not identically fulfilled, and
almost every (in the $C^\infty-$ sense) metric is geodesically rigid.
Now, the existence of geodesically rigid metrics in dimensions $n\ge 3$ is well known (at least since Sinjukov \cite{sinjukov54}). The existence of geodesically rigid metrics in dimension $n=2$ is more tricky; it follows from Kruglikov \cite{kruglikov2008}, where all the above-mentioned differential conditions were constructed. So, in a certain sense, our result improves the $C^\infty-$closeness (which should be clear to experts, though we did not find a place where it is written) to $C^2-$closeness.
Let us now comment on Subproblem \ref{2.2}. First of all, the problem is very classical, and was explicitly asked by E. Beltrami\footnote{Italian original from \cite{Beltrami}:
La seconda $\dots$ generalizzazione $\dots$ del nostro problema, vale a dire: riportare i punti di una superficie sopra un'altra superficie in modo che alle linee geodetiche della prima corrispondano linee geodetiche della seconda. (``The second generalization of our problem, namely: to map the points of a surface onto another surface in such a way that the geodesic lines of the first correspond to geodesic lines of the second.'')} in \cite{Beltrami}.
In the Riemannian case, it was solved by Dini in dimension 2 and Levi-Civita in all dimensions.
More precisely, Dini has shown that locally, in a neighborhood of almost every point of a two-dimensional manifold,
every two geodesically equivalent metrics are given by the form \eqref{dini} in a certain coordinate system. Levi-Civita has generalized this result to every dimension, we recall his result in Section \ref{Problem22}.
Unfortunately, the proofs of Dini and Levi-Civita require that the (1,1)-tensor $g^{i\ell} \bar g_{\ell j}$ is semi-simple (i.e., has no Jordan blocks), and that all its eigenvalues are real. If one of the metrics is Riemannian, this condition is fulfilled automatically.
Examples show the existence of geodesically equivalent pseudo-Riemannian
metrics such that the (1,1)-tensor $g^{i\ell} \bar g_{\ell j}$ is not semisimple and/or its eigenvalues are not real. Such examples exist already in dimension 2: as was shown\footnote{As was explained in \cite{pucacco}, an essential part of the result can be attributed to Darboux
\cite{Darboux}.} in \cite{pucacco}, the metrics from every column of the table
\begin{tabular}{|c||c|c|c|}\hline & \textrm{Liouville case} & \textrm{Complex-Liouville case} & \textrm{Jordan-block case}\\ \hline \hline
$g$ & $(X(x)-Y(y))(dx^2 -dy^2)$ & $\Im(h)dxdy$ & $\left( 1+{x} Y'(y)\right)dxdy $
\\ \hline $ \bar g$ &$ \left( \frac{1}{Y(y)}-\frac{1}{X(x)}\right) \left( \frac{dx^2}{X(x)} - \frac{dy^2}{Y(y)} \right)$&
\begin{minipage}{.3\textwidth}$-\left(\frac{\Im(h)}{\Im(h)^2 +\Re(h)^2}\right)^2dx^2 \\ +2\frac{\Re(h) \Im(h)}{ (\Im(h)^2 +\Re(h)^2)^2} dx dy \\ + \left(\frac{\Im(h)}{\Im(h)^2 +\Re(h)^2}\right)^2dy^2 $
\end{minipage} & \begin{minipage}{.3\textwidth}$ \frac{1+{x} Y'(y)}{Y(y)^4} \bigl(- 2Y(y) dxdy\\
+ (1+{x} Y'(y))dy^2\bigr)$\end{minipage}\\ \hline
\end{tabular}
are geodesically equivalent (we assume that the functions $X$ and $Y$ depend on the indicated variables only, and that the function $h$ is a holomorphic function of the complex variable $z=x + i \cdot y$). Moreover, every pair of 2-dimensional
geodesically equivalent pseudo-Riemannian metrics has this form in a neighborhood of almost every point in a certain coordinate system.
By direct calculations we see that the (1,1)-tensor $g^{i\ell} \bar g_{\ell j}$ for these metrics
is semisimple with two real eigenvalues in the Liouville case (we also see that the form of the metrics is very similar to \eqref{dini}, the only difference is the signature), has two complex-conjugated eigenvalues in the Complex-Liouville case, and is not semisimple in the Jordan-block case.
Actually, certain authors consider Subproblem \ref{2.2} to be solved as well; the solution is attributed to Aminova \cite{Aminova}. Unfortunately, the author of the present paper does not understand her result, and has certain doubts that it is correct.
More precisely, in view of \cite[Theorem 1.1]{Aminova}
and the formulas \cite[(1.17),(1.18)]{Aminova} for $k=1$, $n=4$ and all $\varepsilon$s equal to $+1$,
the following two metrics $g$ and $\bar g$, given by the matrices below (where $\omega$ is an arbitrary function of the variable $x_4$),
$$ \left[ \begin{array}{cccc} 0 & 0 & 0 & 3\,x_3+3\,\omega(x_4) \\
0 & 0 & 1 & 2\,x_2 \\
0 & 1 & 0 & x_1 \\
3\,x_3+3\,\omega(x_4) & 2\,x_2 & x_1 & 4\,x_1 x_2 \end{array} \right], $$
$$ \left[ \begin{array}{cccc} 0 & 0 & 0 & \frac{3\,(x_3+\omega(x_4))}{x_4^5} \\[2mm]
0 & 0 & \frac{2}{x_4^5} & \frac{-3\,x_3-3\,\omega(x_4)+2\,x_2 x_4}{x_4^6} \\[2mm]
0 & \frac{2}{x_4^5} & -\frac{1}{x_4^6} & \frac{3\,x_3+3\,\omega(x_4)-2\,x_2 x_4+x_1 x_4^2}{x_4^7} \\[2mm]
\frac{3\,(x_3+\omega(x_4))}{x_4^5} & \frac{-3\,x_3-3\,\omega(x_4)+2\,x_2 x_4}{x_4^6} & \frac{3\,x_3+3\,\omega(x_4)-2\,x_2 x_4+x_1 x_4^2}{x_4^7} & \frac{\left(-3\,x_3-3\,\omega(x_4)+2\,x_2 x_4\right)\left(2\,x_1 x_4^2+3\,x_3+3\,\omega(x_4)-2\,x_2 x_4\right)}{x_4^8} \end{array} \right] $$
should be geodesically equivalent, though they are not (which can be checked by direct calculations).
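For the reader who wishes to reproduce this direct calculation, the following \texttt{sympy} sketch (with $\omega\equiv 0$ for simplicity) tests Levi-Civita's condition \eqref{bar} below at a generic point: if the metrics were geodesically equivalent, the difference $T^a_{bc}$ of their Christoffel symbols would equal $\delta^a_b\phi_c + \delta^a_c\phi_b$ with $\phi_c = \tfrac{1}{n+1}T^a_{ac}$; in agreement with the claim above, the printed defect does not vanish.
\begin{verbatim}
import sympy as sp

x1, x2, x3, x4 = xs = sp.symbols('x1 x2 x3 x4')

def christoffel(g):
    ginv, n = g.inv(), g.shape[0]
    return [[[sum(ginv[a, d]*(g[d, b].diff(xs[c]) + g[d, c].diff(xs[b])
                              - g[b, c].diff(xs[d])) for d in range(n))/2
              for c in range(n)] for b in range(n)] for a in range(n)]

g = sp.Matrix([[0, 0, 0, 3*x3], [0, 0, 1, 2*x2],
               [0, 1, 0, x1], [3*x3, 2*x2, x1, 4*x1*x2]])
q = -3*x3 + 2*x2*x4                  # recurring combination (omega = 0)
gbar = sp.Matrix([
    [0, 0, 0, 3*x3/x4**5],
    [0, 0, 2/x4**5, q/x4**6],
    [0, 2/x4**5, -1/x4**6, (-q + x1*x4**2)/x4**7],
    [3*x3/x4**5, q/x4**6, (-q + x1*x4**2)/x4**7,
     q*(2*x1*x4**2 - q)/x4**8]])

G, Gb = christoffel(g), christoffel(gbar)
pt = {x1: 1, x2: 2, x3: 3, x4: 5}    # a generic point
T = [[[(Gb[a][b][c] - G[a][b][c]).subs(pt) for c in range(4)]
      for b in range(4)] for a in range(4)]
phi = [sum(T[a][a][c] for a in range(4))/5 for c in range(4)]
print(max(abs(sp.N(T[a][b][c] - (a == b)*phi[c] - (a == c)*phi[b]))
          for a in range(4) for b in range(4) for c in range(4)))
\end{verbatim}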
Note that the metrics above have signature $(2,2)$, so they are not that interesting for general relativity. In the case of Lorentz signature, the theorem of Aminova seems to be correct, but still it is very complicated to extract the precise formulas from her works.
Note also that, according to \cite{Aminova}, in the case of Lorentz signature, geodesically equivalent metrics were described by Petrov \cite{Petrov49} in dimension 3, by Golikov \cite{Golikov} in dimension 4, and by Kruchkovich \cite{Kruchkovich} in all dimensions. Of these papers, we were able to find (and to check) only the paper of Petrov \cite{Petrov49}.
{\it In the present paper, we combine the recent
results of \cite{splitting} and the above-mentioned results of \cite{pucacco} and \cite{Petrov49} to
give an easy algorithm for obtaining a list of all possible pairs of
geodesically equivalent 4-dimensional metrics $g, \bar g$ of Lorentz signature.}
More precisely, we explain (following \cite{splitting}) that every such pair can be obtained by applying the explicit
gluing construction from Theorem \ref{thm3} to building blocks, and provide explicit formulas for all possible building blocks. One can easily obtain a complete list of metrics by this algorithm. There exist
three possible three-dimensional building blocks, three possible two-dimensional ones, and one possible one-dimensional one,
so altogether there exist 10 normal forms for geodesically equivalent (nonproportional)
metrics of Lorentz signature. The normal forms are given
by explicit formulas and allow certain freedom, such as an (almost) arbitrary choice of functions of one variable, constants, or metrics on two- or three-dimensional disks. We also explain the (only) difficulty in applying this algorithm in higher dimensions.
\section{ Problem \ref{1}: How to reconstruct a metric by its unparameterized geodesics. }\label{Problem1}
\subsection{ Subproblem \ref{1.1}: how to reconstruct a connection by unparameterized geodesics, and when it is possible. } \label{Problem11}
We will work in arbitrary dimension $n\ge2$, in a small neighborhood $U\subset \mathbb{R}^n$.
We assume that we are given a family of smooth
curves $\gamma(t; \alpha)$. We assume that the family is sufficiently big in the sense that
at any point $x_0\in U$ the set of vectors
$$\Omega_{x_0}:= \{ \xi \in T_{x_0}U\mid \textrm{ there exists $\alpha$ and $t_0$ such that $\tfrac{d}{dt} \big(\gamma(t;\alpha)\big)_{|t=t_0} $
is proportional to $\xi$} \}$$ contains an open
subset of $T_{x_0}U$. We put $\Omega= \bigcup_{x\in U}\Omega_x$. We will call a pair $(t_0; \alpha)$ \ {\it $x_0-$admissible}, if
$\tfrac{d}{dt} \big(\gamma(t;\alpha)\big)_{|t=t_0} \in \Omega_{x_0}$.
We need to understand whether there exists a symmetric
affine connection $\Gamma$ such that every curve $\gamma(t;\alpha)$ is a reparameterized geodesic of $\Gamma$, and construct this connection if it exists.
It is well known (at least since the time of Levi-Civita \cite{Levi-Civita}) that,
in local coordinates, every geodesic $\gamma:I\to U$, $\gamma:t\mapsto \gamma^i(t) \in U\subset \mathbb{R}^n$
of a symmetric affine connection $\Gamma$ is given in terms of arbitrary parameter $t$ as solution of
\begin{equation} \label{arb} \frac{d^2 \gamma ^a}{dt^2}+ \Gamma_{bc}^a\frac{d\gamma^b}{dt}\frac{d\gamma^c}{dt} = f\left(\frac{d\gamma}{dt}\right)\frac{d\gamma^a}{dt}\end{equation}
for a certain function $f$ of the velocity. The better-known version of this formula assumes that the parameter is affine (we denote it by ``$s$'') and reads
\begin{equation} \label{nat} \frac{d^2\gamma^a}{ds^2}+ \Gamma_{bc}^a\frac{d\gamma^b}{ds}\frac{d\gamma^c}{ds} = 0; \end{equation}
it is easy to check that the change of the parameter $s \longrightarrow t$ transforms \eqref{nat} into \eqref{arb}.
For further use, let us note that if we linearly
change the parameter $t$ of a curve $\gamma(t; \alpha)$ (by putting $t= \const \cdot t_{new}$),
the left hand side of \eqref{arb} is multiplied by $\const^2$ implying that the function $f$ should be homogeneous of degree 1:
$f(\const\cdot \xi)= \const\cdot f(\xi)$ for every $\xi$ (such that $\xi \in \Omega$). This allows us to
assume without loss of generality that for every $x$ the
subset $\Omega_x\subseteq T_xU$ contains a cone over a nonempty open subset.
Let us now take a point $x_0\in U$. For every $x_0-$admissible $(t_0; \alpha)$, we
view the equations \eqref{arb} as a system of equations
on the entries of $\Gamma(x_0)$ and on the function $f_{|\Omega_{x_0}}$; the coefficients in this system come from known data $\left(\frac{d\gamma(t;\alpha)}{dt}\right)_{|t=t_0}$, $\left(\frac{d^2\gamma(t;\alpha)}{dt^2}\right)_{|t=t_0}$. Since we have infinitely many
$x_0-$ admissible $(t;\alpha)$'s, we have an infinite system of equations. Let us show that if this system of equations is solvable, then the solution is unique up to a certain `gauge' freedom.
Let us first describe the gauge freedom: we consider two connections $\Gamma$ and $\bar \Gamma$ related by Levi-Civita's formula
\begin{equation} \label{bar}
\Gamma_{bc}^a= \bar \Gamma_{bc}^a - \delta_b^a\phi_c - \delta_c^a\phi_b,
\end{equation}
where $\phi = \phi_i $ is
a one form. Suppose the curve $\gamma$ satisfies the equation \eqref{arb} with a certain function $f$.
Substituting $\Gamma$ given by \eqref{bar} in the left hand side of \eqref{arb} and using
$$
(\delta_b^a\phi_c + \delta_c^a\phi_b) \frac{d\gamma^b}{dt}\frac{d\gamma^c}{dt}=
2 \left( \frac{d\gamma^b}{dt} \phi_b \right) \frac{d\gamma^a}{dt} ,
$$
we obtain that the same curve $\gamma$ satisfies the equation \eqref{arb} with respect to the connection $\bar \Gamma$ and the function \begin{equation} \label{barf}
\bar f(v):= f(v) + 2 \left(v^b \phi_b \right) .\end{equation}
Thus, if $(\Gamma, f)$ is a solution of \eqref{arb}, then for every $1-$form $\phi$
the pair
$\left(\bar \Gamma, \bar f \right)$ given by (\ref{bar},\ref{barf}) is also a solution.
Let us show that up to this gauge freedom the connection $\Gamma$ and the function $f$ are unique.
We again work at one point $x_0\in U$ and
again view \eqref{arb} as equations on $(\Gamma,f)$. Suppose
we have two solutions $(\Gamma,f)$ and $(\bar \Gamma,\bar f)$.
We subtract one equation from the other to obtain
\begin{equation} \label{tilde}
\tilde \Gamma_{bc}^a v^b v^c = \tilde f(v)v^a,
\end{equation}
where $\tilde \Gamma= \bar \Gamma -\Gamma$, $\tilde f= \bar f- f$. This equation is fulfilled for all
vectors $v= v^a$ lying in the open nonempty subset $ \Omega_{x_0}\subset T_{x_0}U$. Since the mapping
$\sigma: (u, v) \mapsto \tilde \Gamma_{bc}^a u^b v^c$ is linear in each of $u$ and $v$, it satisfies the parallelogram
equality
\begin{equation} \label{par} 0=\sigma(u +v, u+v) + \sigma(u -v, u-v) - 2\sigma(u, u) - 2\sigma(v,v).
\end{equation}
Combining \eqref{par} with \eqref{tilde}, we obtain
\begin{equation} \label{tildef1} \begin{array}{cl} 0&=\tilde f(u+v)(v+ u) + \tilde f(u -v)(u- v) - 2\tilde f(u) u - 2\tilde f(v) v \\
&= (\tilde f(u+v) + \tilde f(u -v) - 2\tilde f(u))u +
(\tilde f(u+v) - \tilde f(u -v) - 2\tilde f(v))v .\end{array}\end{equation}
Taking $u$ and $v$ to be linearly independent, we obtain
\begin{equation} \label{tildef2} \left\{\begin{array}{c} \tilde f(u+v) + \tilde f(u -v) - 2\tilde f(u)=0 \\
\tilde f(u+v) - \tilde f(u -v) - 2\tilde f(v)=0\end{array} \right. \end{equation}
implying $\tilde f(u+ v)= \tilde f(u)+ \tilde f(v)$.
As we explained above,
the functions $f, \bar f$, and, therefore, $\tilde f$, also satisfy
$\const \cdot \tilde f(v) = \tilde f(\const \cdot v)$. Then,
the restriction of $\tilde f$ to a certain nonempty open subset $\Omega'_{x_0} \subset \Omega_{x_0} \subset T_{x_0}U$ is linear, i.e., is given by
$\tilde f(v)= 2 \phi_a v^a$ for a certain 1-form $\phi=\phi_a$ and for all $v$ from $\Omega'_{x_0}$.
Then, the connection
$$\hat \Gamma^a_{bc} := \bar \Gamma^{a}_{bc} - \phi_b \delta^a_c - \phi_c \delta^a_b$$
has the property that for
every $(t_0;\alpha)$ such that $\left(\frac{d \gamma ^a}{dt}\right)_{|t=t_0}\in \Omega'_{x_0}$ the corresponding $\gamma(t;\alpha)$ satisfies (at $t=t_0$) the equation
$$\frac{d^2 \gamma ^a}{dt^2}+ \Gamma_{bc}^a\frac{d\gamma^b}{dt}\frac{d\gamma^c}{dt} = \frac{d^2 \gamma ^a}{dt^2}+ \hat \Gamma_{bc}^a\frac{d\gamma^b}{dt}\frac{d\gamma^c}{dt} \ \ \left(\Longleftrightarrow \ \Gamma_{bc}^a\frac{d\gamma^b}{dt}\frac{d\gamma^c}{dt} = \hat \Gamma_{bc}^a\frac{d\gamma^b}{dt}\frac{d\gamma^c}{dt}\right), $$ implying $\Gamma = \hat \Gamma$; hence $\Gamma$ and $\bar \Gamma$ are related as in \eqref{bar}, and $f$ and $\bar f$ as in \eqref{barf}.
Finally, the connection $\Gamma$ and the function $f$, if they exist, are uniquely determined by the unparameterized curves $\gamma(t; \alpha)$
up to the gauge freedom \begin{equation}\label{gauge}
\Gamma_{bc}^a\mapsto \Gamma_{bc}^a + \delta_b^a\phi_c + \delta_c^a\phi_b, \ \ f\mapsto f + 2 \phi.\end{equation}
\begin{Rem} \label{f=phi} If the function $f$ is linear, i.e., if $f(\xi)=
2\phi_b \xi^b$ for a certain $1-$form $\phi$, then, up to the gauge freedom, we can take $f\equiv 0$.
Moreover, by putting $f\equiv 0$ we exhaust the gauge freedom.\end{Rem}
Let us now explain how to
reconstruct the pair $(\Gamma, f)$ up to the gauge freedom. We give an algorithm for doing this.
The algorithm also makes it possible to understand whether such a pair $(\Gamma, f)$ exists:
we will see that in order to uniquely reconstruct the (possible)
entries $\Gamma(x_0)^i_{jk}$ of the connection at a point $x_0$, we need only finitely many $\gamma(t; \alpha)$ passing through this point. Such a pair $(\Gamma, f)$ exists if, for all $x_0$, the
entries of $\Gamma(x_0)^i_{jk}$ do not depend on the $x_0-$admissible $(t_0; \alpha)$ we used to construct $\Gamma(x_0)^i_{jk}$.
We will work at
a point $x_0$; our goal is to reconstruct the components $\Gamma(x_0)_{jk}^i$. We take an $x_0$-admissible
$(t_0; \alpha)$ such that the first component
$\left(\tfrac{d\gamma^1}{dt}\right)_{|t=t_0} \ne 0$. For this geodesic $\gamma(t;\alpha)$,
we rewrite the equation \eqref{arb} at $t=t_0$
in the following form:
\begin{equation} \label{rec}
\textrm{
$ \begin{array}{rcl} f\left(\tfrac{d\gamma}{dt}\right) &=& \big(\tfrac{d^2\gamma^1}{d^2t} + \Gamma_{ab}^1\tfrac{d\gamma^a}{dt} \tfrac{d\gamma^b}{dt} \big)/\tfrac{d\gamma^1}{dt} \\
\tfrac{d\gamma^2}{dt}\, \Gamma_{ab}^1\tfrac{d\gamma^a}{dt} \tfrac{d\gamma^b}{dt} -
\tfrac{d\gamma^1}{dt}\, \Gamma_{ab}^2\tfrac{d\gamma^a}{dt} \tfrac{d\gamma^b}{dt} &=& \tfrac{d^2\gamma^2}{d^2t} \, \tfrac{d\gamma^1}{dt} - \tfrac{d\gamma^2}{dt}\, \tfrac{d^2\gamma^1}{d^2t} \\
&\vdots& \\
\tfrac{d\gamma^n}{dt}\, \Gamma_{ab}^1\tfrac{d\gamma^a}{dt} \tfrac{d\gamma^b}{dt} -
\tfrac{d\gamma^1}{dt}\, \Gamma_{ab}^n\tfrac{d\gamma^a}{dt} \tfrac{d\gamma^b}{dt} &=& \tfrac{d^2\gamma^n}{d^2t} \, \tfrac{d\gamma^1}{dt} - \tfrac{d\gamma^n}{dt}\, \tfrac{d^2\gamma^1}{d^2t}. \end{array}$}\end{equation}
The first equation of \eqref{rec} is equivalent to the equation of \eqref{arb} for $a=1$
solved with respect to $f\left(\tfrac{d\gamma}{dt}\right)$. We obtain the second, third, etc. equations of \eqref{rec} by substituting the first equation of \eqref{rec} in the equations of \eqref{arb} corresponding to $a=2,3,\textrm{etc.}$
We now consider the subsystem of \eqref{rec} containing the second, third, etc.\ equations of \eqref{rec}.
We see that this subsystem does not contain the function $f$. Then, for every $x_0$-admissible $(t_0, \alpha)$, it is a linear (inhomogeneous) system on the components $\Gamma(x_0)_{jk}^i.$
We take a sufficiently big number $N$ and substitute $N$ \
$x_0-$admissible generic $(t_0;\alpha)$'s in this subsystem.
\begin{Rem} If $n=4$, it is sufficient to take $N= 12$.
We understand the word {\it `generic'} in the following sense: any $n$ of the pairs $(t_0, \alpha)$ give corresponding velocity vectors $\left(\tfrac{d\gamma}{dt}\right)_{|t=t_0}$ that are linearly independent. \end{Rem}
At every point $x_0$, we thus obtain an inhomogeneous linear system of equations on the $\frac{n^2(n+1)}{2}$ unknowns $\Gamma(x_0)^i_{jk}$.
{\it If the system has no solution (at least at one point $x_0$),
there exists no connection
whose (reparameterized) geodesics are the curves $\gamma(t; \alpha)$.
}
If a solution exists at all points, it is unique up to the gauge freedom \eqref{bar}.
Indeed, a solution of the last $n-1$ equations of \eqref{rec} also gives us the value of $f$ by the first equation of \eqref{rec}, so the gauge freedom in the equations \eqref{rec} is the same as in the equations \eqref{arb}. Thus, a solution, if it exists,
gives us the only candidate (up to the gauge freedom) for the entries $\Gamma(x_0)_{jk}^i$ at every point $x_0$ such that its geodesics are the (reparameterized) curves $\gamma(t; \alpha)$.
Assume now that at every point $x_0$, a solution $\Gamma(x_0)_{jk}^i$ exists.
In order to construct the entries $\Gamma(x_0)_{jk}^i$ (up to the gauge freedom),
we used $N$ \ $x_0-$admissible curves.
In order to understand whether all the curves $\gamma(t; \alpha)$ are reparameterized geodesics of
$\Gamma$, we need to substitute all the
curves $\gamma(t; \alpha)$ in the equation \eqref{arb} and check
whether it is fulfilled; for this purpose, it is natural to rewrite the equation \eqref{arb} in the $f-$free form $$\left(\frac{d^2 \gamma ^a}{dt^2}+ \Gamma_{bc}^a\frac{d\gamma^b}{dt}\frac{d\gamma^c}{dt}\right) \wedge \frac{d\gamma}{dt}=0.$$
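For illustration, here is a minimal numerical sketch of this reconstruction at a single point (in Python with \texttt{numpy}): each admissible curve contributes the $f$-free equations of \eqref{rec}, and the resulting linear system is solved in the least-squares sense. In the demonstration, the synthetic data are generated from a randomly chosen connection; the residual vanishes up to rounding, and for $n=4$ the rank is $\tfrac{n^2(n+1)}{2}-n=36$, reflecting the gauge freedom \eqref{gauge}.
\begin{verbatim}
import numpy as np

def reconstruct_connection(samples, n=4):
    # samples: pairs (v, a) of velocity and acceleration of admissible
    # curves at x0.  Returns the minimal-norm solution of the f-free
    # equations of (rec); the true Gamma is determined only up to (gauge).
    idx = {}
    for i in range(n):
        for j in range(n):
            for k in range(j, n):
                idx[(i, j, k)] = len(idx)     # unknowns Gamma^i_{jk}, j <= k
    rows, rhs = [], []
    for v, a in samples:
        for i in range(1, n):                 # needs v[0] != 0, as in (rec)
            row = np.zeros(len(idx))
            for b in range(n):
                for c in range(b, n):
                    w = v[b]*v[c]*(1 if b == c else 2)
                    row[idx[(0, b, c)]] += v[i]*w   # + v^i Gamma^1_{bc} v^b v^c
                    row[idx[(i, b, c)]] -= v[0]*w   # - v^1 Gamma^i_{bc} v^b v^c
            rows.append(row)
            rhs.append(a[i]*v[0] - a[0]*v[i])
    A, b = np.array(rows), np.array(rhs)
    sol, _, rank, _ = np.linalg.lstsq(A, b, rcond=None)
    return sol, np.linalg.norm(A @ sol - b), rank

rng = np.random.default_rng(0)
n = 4
Gamma = rng.normal(size=(n, n, n))
Gamma = (Gamma + Gamma.transpose(0, 2, 1))/2      # symmetric in b, c
samples = []
for _ in range(12):                               # N = 12 generic curves
    v = rng.normal(size=n)
    v[0] = v[0] if abs(v[0]) > 0.1 else 1.0       # ensure dgamma^1/dt != 0
    a = -np.einsum('ibc,b,c->i', Gamma, v, v)     # affine parameter: f = 0
    samples.append((v, a))
sol, residual, rank = reconstruct_connection(samples)
print(residual, rank)                             # ~1e-13 and 36
\end{verbatim}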
\subsection{ Subproblem \ref{1.2}:
given an affine connection $\Gamma=\Gamma_{jk}^i$,
how to understand whether there exists a metric $g$ in the projective class of $\Gamma$? How to reconstruct this metric effectively?}
\subsubsection{ General theory. } \label{genth}
We are given a symmetric affine connection $\Gamma_{jk}^i$ on $M^n$; we need to understand whether there exists a metric in the projective class of $\Gamma$. In this section we recall (following \cite{bryant,eastwood}) the general approach to this question: the existence of a metric in the projective class is equivalent to the existence of a nondegenerate
solution of a certain system of linear
PDE in the Cauchy-Frobenius form, and, in theory, there exists an algorithmic way to establish the existence of such solutions.
\begin{Th}[\cite{eastwood}, see also references therein] $g$ lies in the projective class of
a connection $\Gamma_{jk}^i$ if and only if
$\sigma^{ab}:= g^{ab} \cdot \det(g)^{1/(n+1)} $ is a solution of
\begin{equation} \label{east} {\left(\nabla_a\sigma^{bc} \right)
- \tfrac{1}{n+1}\left(\nabla_i\sigma^{i b}\delta^c_a + \nabla_i\sigma^{i c}\delta^b_a\right) =0.} \end{equation}
Here $\sigma^{ab}:= { g^{ab}} \cdot { \det(g)^{1/(n+1)}}$ should be understood as an element of ${ S^2M } \otimes { (\Lambda_n)^{2/(n+1)}} M$. In particular,
$\nabla_a\sigma^{bc}= \underbrace{\frac{\partial }{\partial x^a} \sigma^{bc} + \Gamma^{b}_{a{d}}\sigma^{{d}c} + \Gamma^{c}_{{d}a}\sigma^{b{d}}}_{\textrm{\tiny Usual covariant derivative}} - \underbrace{ \frac{2}{n+1} \Gamma^{{d}}_{{d}a }\, \sigma^{bc}}_{\textrm{ \tiny addition coming from volume form}}$
\end{Th}
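As a quick sanity check of this theorem (a sympy sketch of ours: we take $\Gamma$ to be the Levi-Civita connection of a chosen metric $g$, in which case the weighted derivative $\nabla_a\sigma^{bc}$ vanishes identically, so \eqref{east} holds trivially):
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y'); X = [x, y]; n = 2
g = sp.Matrix([[1, 0], [0, sp.exp(2*x)]])       # a hyperbolic 2D metric
ginv = g.inv()

# Christoffel symbols Gam[i][j][k] = Gamma^i_{jk} of g
Gam = [[[sum(ginv[i, l]*(sp.diff(g[l, j], X[k]) + sp.diff(g[l, k], X[j])
        - sp.diff(g[j, k], X[l])) for l in range(n))/2
        for k in range(n)] for j in range(n)] for i in range(n)]

sigma = g.det()**sp.Rational(1, n + 1) * ginv

def nabla(a, b, c):   # weighted covariant derivative of sigma, cf. above
    return sp.simplify(sp.diff(sigma[b, c], X[a])
        + sum(Gam[b][a][d]*sigma[d, c] + Gam[c][d][a]*sigma[b, d]
              for d in range(n))
        - sp.Rational(2, n + 1)*sum(Gam[d][d][a] for d in range(n))*sigma[b, c])

print(all(nabla(a, b, c) == 0
          for a in range(n) for b in range(n) for c in range(n)))   # True
\end{verbatim}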
The equations \eqref{east} form a system of $\left(\frac{n^2(n+1)}{2}- n\right)$ linear PDEs of the first order in the $\frac{n(n+1)}{2}$ unknown components of $\sigma$.
The two-dimensional version of these equations was essentially known to R. Liouville \cite{Liouville}: instead of working with $\sigma^{ab}:= { g^{ab}} \cdot { \det(g)^{1/(n+1)}}$, he worked with $a_{ij}=\tfrac{1}{\det(\bar g)^{2/3}}\bar g_{ij}$; in dimension $2$ the entries of
$\sigma^{ij}$ and $a_{ij}$ are linearly related. The 2-dimensional analog of the equations \eqref{east} is then the Liouville system of 4 PDEs of the first order
\begin{equation}
\label{liouveq} \begin{array}{rcl}
\frac {\partial a_{11}}{\partial x} +2\,K_0\, a_{12} -\tfrac{2}{3}\,K_1\, a_{11} &=&0 \\[1mm]
2\,\frac {\partial a_{12}}{\partial x} +\frac {\partial a_{11}}{\partial y} +2\,K_0\, a_{22} +\tfrac{2}{3}\,K_1\, a_{12} -\tfrac{4}{3}\,K_2\, a_{11} &=&0 \\[1mm]
\frac {\partial a_{22}}{\partial x} +2\,\frac {\partial a_{12}}{\partial y} +\tfrac{4}{3}\,K_1\, a_{22} -\tfrac{2}{3}\,K_2\, a_{12} -2\,K_3\, a_{11} &=&0 \\[1mm]
\frac {\partial a_{22}}{\partial y} +\tfrac{2}{3}\,K_2\, a_{22} -2\,K_3\, a_{12} &=&0,
\end{array}
\end{equation}
where $K_0 :=-\Gamma^2_{11}$, $K_1:= \Gamma^1_{11}-2\Gamma^2_{12}$, $
K_2:= -\Gamma^2_{22}+2\Gamma^1_{12}$, $K_3:= \Gamma^1_{22}$.
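As a sanity check of \eqref{liouveq} (a sympy sketch of ours; the metric is a throwaway choice), one can verify that $a_{ij}=\tfrac{1}{\det(g)^{2/3}}g_{ij}$ solves the Liouville system when the coefficients $K_0,...,K_3$ are computed from the Levi-Civita connection of $g$:
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y'); X = [x, y]
g = sp.Matrix([[1, 0], [0, sp.exp(2*x)]])       # any 2D metric will do
ginv = g.inv()

Gam = [[[sum(ginv[i, l]*(sp.diff(g[l, j], X[k]) + sp.diff(g[l, k], X[j])
        - sp.diff(g[j, k], X[l])) for l in range(2))/2
        for k in range(2)] for j in range(2)] for i in range(2)]

K0 = -Gam[1][0][0]
K1 = Gam[0][0][0] - 2*Gam[1][0][1]
K2 = -Gam[1][1][1] + 2*Gam[0][0][1]
K3 = Gam[0][1][1]

a = g / g.det()**sp.Rational(2, 3)
a11, a12, a22 = a[0, 0], a[0, 1], a[1, 1]
r = sp.Rational

eqs = [sp.diff(a11, x) + 2*K0*a12 - r(2, 3)*K1*a11,
       2*sp.diff(a12, x) + sp.diff(a11, y) + 2*K0*a22
           + r(2, 3)*K1*a12 - r(4, 3)*K2*a11,
       sp.diff(a22, x) + 2*sp.diff(a12, y) + r(4, 3)*K1*a22
           - r(2, 3)*K2*a12 - 2*K3*a11,
       sp.diff(a22, y) + r(2, 3)*K2*a22 - 2*K3*a12]
print([sp.simplify(e) for e in eqs])            # [0, 0, 0, 0]
\end{verbatim}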
\begin{Rem}
One sees that the gauge freedom \eqref{gauge} does not affect the coefficients $K_0,...,K_3$ of the equation \eqref{liouveq}. One can check by direct calculation that this is also true in all dimensions: the
gauge freedom \eqref{gauge} does not change the equations \eqref{east}. \end{Rem}
The PDE-system \eqref{east} can be prolonged (see \cite{eastwood}) to the system
\begin{equation} \label{prol} \left\{\begin{array}{rcl}
{\nabla_a} \sigma^{bc}&=&{ \delta_a{}^b}{\mu^c}
+{ \delta_a{}^c}{\mu^b}\\
{\nabla_a}\mu^b&=&{ \delta_a{}^b}{\rho}
-{ \frac{1}{n}}{P_{ac}}\sigma^{bc}+{\frac{1}{n}}{W_{ac}{}^b{}_d}\sigma^{cd}\\
{\nabla_a}\rho&=&{ -\frac{2}{n}}{P_{ab}}\mu^b+
{\frac{4}{n}}{Y_{abc}}\sigma^{bc}
\end{array}\right. \end{equation}
where $P$ is the symmetrized Ricci tensor, $Y$ the Cotton-York tensor and
$W_{ab}{}^c{}_d$ the projective Weyl tensor for the connection $\Gamma$.
\begin{Rem} Here we use another index convention for the projective Weyl tensor than in Section \ref{Problem12} of our paper. This convention is the same as in \cite{eastwood} and is standard in the so-called tractor calculus; we refer to \cite{eastwood} for precise formulas. In Section \ref{Problem12} we will explain the convention used there
by giving the formula for the Weyl tensor.
\end{Rem}
The system \eqref{prol} is a linear system of PDEs of the first order
in the unknown functions $\sigma^{bc}$, $\mu^b$, $\rho$. Moreover,
all derivatives of the unknowns are expressed as functions of the unknowns, i.e., the system is in the Cauchy-Frobenius form.
One can understand this system geometrically
as a connection on the projective tractor bundle ${\mathcal{E}}^{(BC)}
={\mathcal{E}}^{(bc)}(-2)+{\mathcal{E}}^b(-2)+{\mathcal{E}}(-2),$ see \cite{eastwood} for details.
The solutions of the system are then
parallel sections of the connection; there exists an algorithmic way to understand whether a certain connection admits a nontrivial parallel section. In the two-dimensional case, the algorithm was carried out for certain projectively homogeneous
connections in \cite{bryant}; for an arbitrary two-dimensional connection, the algorithm was carried out in \cite{BDE}, and the answer (i.e., the differential conditions on the $K_i$ whose vanishing implies the existence of a nontrivial solution) appears to be very complicated.
In theory, one can carry out this algorithm in every dimension; it is clearly a nontrivial task. In the next section we will show that, under the additional assumption that
the sought metric is Ricci-flat, there exists a trick that simplifies the algorithm.
\subsubsection{ The case $n=4$, $g$ is Ricci-flat.} \label{Problem12}
Let us now assume that we know the geodesics of a nonflat Ricci-flat metric. That is, we know a certain $\Gamma$ such that for a certain $\phi_a$, which we do not know,
$\bar \Gamma_{bc}^a := \Gamma_{bc}^a + \delta_b^a\phi_c + \delta_c^a\phi_b$ is the Levi-Civita connection of a certain nonflat
Ricci-flat metric which we again do not know. Our goal is to find this metric (which we call $\bar g$). By the above mentioned results of Petrov \cite{Petrov49}, Hall et al. \cite{Hall2007,Hall2010}, and Kiosak et al. \cite{einstein}, the metric is unique up to multiplication by a constant; the goal of this section is to explain how to find it algorithmically. The algorithm works under a certain additional (generic) condition on the connection $\Gamma$.
We consider the projective Weyl tensor introduced in \cite{Weyl} (not to be confused with the conformal Weyl tensor)
\begin{equation} \label{weyltensor} {W^i}_{jk\ell}:={R^i}_{jk\ell}-\tfrac{1}{n-1}\left({\delta^{i}_\ell} \, R_{jk}-{\delta^{i}_k}\, R_{j\ell}\right) \end{equation}
(in our convention $R_{jk}= {R^a}_{jka}$, so that ${W^a}_{jka}= 0$).
Weyl has shown that the projective Weyl tensor does not depend on the choice of connection within the projective class: if the connections $\Gamma $ and $\bar \Gamma$ are related by the formula \eqref{bar}, then their projective Weyl tensors coincide.
Now, from the formula \eqref{weyltensor}, we know that, if the sought $\bar g$ is Ricci-flat, its projective Weyl tensor coincides with the Riemann tensor $ {\bar R^i}_{{ \ jk\ell}}$ of $\bar g$. Thus, if we know the projective class of the Ricci-flat
metric $\bar g$, we know its Riemann tensor.
Then, the metric $\bar g$ must satisfy the following system of
equations due to the symmetries of the Riemann tensor:
\begin{equation} \label{equations} \left\{ \begin{array}{cc} \bar g_{ia} {W^a}_{jkm} + \bar g_{ja} {W^a}_{ikm} = 0 \\
\bar g_{ia} {W^a}_{jkm} - \bar g_{ka} {W^a}_{mij}=0\end{array}\right. \end{equation}
The first portion of the equations \eqref{equations} is due to the symmetry
($\bar R_{{ij}km}= -\bar R_{{ji}km}$), and the second portion is due to the symmetry $(\bar R_{{km}ij}= \bar R_{ij{km}})$ of the curvature tensor of $\bar g$.
We see that for every point $x_0\in U$, \eqref{equations} is a system of linear equations on $\bar g(x_0)_{ij}$.
The number of equations (around 100) is much bigger than the number of unknowns (which is 10). It is therefore expected that a generic projective Weyl tensor ${W^i}_{jkl}$ admits at most a one-dimensional space of solutions (by assumption, our $W$ admits an at least one-dimensional space of solutions). The expectation is true, as the following classical
result shows
\begin{Th}[\cite{Petrov1,Hall83,rendall,book,mcintosh}]
Let ${W^i}_{jk\ell} $ be a tensor in $\mathbb{R}^4$ such that it is skew-symmetric with respect to $k,l$ and
such that its traces ${W^a}_{ak\ell}$ and ${W^a}_{ja\ell}$ vanish. Assume that for all 1-forms
$\xi_i\ne 0$ we have ${W^a}_{jk\ell} \xi_a\ne 0.$ Then, the equations \eqref{equations} have no more than one-dimensional space of solutions.
\end{Th}
Let us comment on the condition ${W^a}_{jk\ell} \xi_a\ne 0.$ In this context, for every fixed pair of indices $k,\ell$, ${W^i}_{j \ast \ast}$ can be viewed as an $n\times n$ matrix; and the condition ${W^a}_{j\ast \ast } \xi_a= 0$ means that the matrix has a nontrivial kernel (in particular, it is degenerate). Now, the condition ${W^a}_{jk\ell} \xi_a= 0$ means that for all indices $k, \ell$ the kernels of the
$n\times n$ matrices ${W^a}_{jk\ell}$ have nontrivial intersection. {\it Thus, it is a very restrictive condition on $W$, and, therefore, on $\Gamma$. }
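Assembling \eqref{equations} is completely mechanical; the following numpy sketch (ours; the array layout is an assumption of the sketch, with \texttt{W[a,j,k,m]}$={W^a}_{jkm}$) builds the matrix of the system on the 10 unknowns $\bar g_{(ij)}$ and computes the dimension of its solution space. For a randomly generated $W$ this dimension is $0$; for the projective Weyl tensor of a Ricci-flat metric satisfying the assumption of the theorem above it is exactly $1$.
\begin{verbatim}
import numpy as np

def assemble(W, n=4):
    # rows of (equations) on the unknowns g_{(ij)}
    pairs = [(i, j) for i in range(n) for j in range(i, n)]
    col = lambda i, j: pairs.index((min(i, j), max(i, j)))
    rows = []
    for i in range(n):
        for j in range(n):
            for k in range(n):
                for m in range(n):
                    r1 = np.zeros(len(pairs)); r2 = np.zeros(len(pairs))
                    for a in range(n):
                        r1[col(i, a)] += W[a, j, k, m]   # g_{ia} W^a_{jkm}
                        r1[col(j, a)] += W[a, i, k, m]   # + g_{ja} W^a_{ikm}
                        r2[col(i, a)] += W[a, j, k, m]   # g_{ia} W^a_{jkm}
                        r2[col(k, a)] -= W[a, m, i, j]   # - g_{ka} W^a_{mij}
                    rows += [r1, r2]
    return np.array(rows)

W = np.random.default_rng(1).normal(size=(4, 4, 4, 4))
M = assemble(W)
print(len(M[0]) - np.linalg.matrix_rank(M))      # 0 for a generic W
\end{verbatim}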
This result shows that, under the assumptions that for all
$\xi_i\ne 0$ we have ${W^a}_{jk\ell} \xi_a\ne 0, $ we can reconstruct the conformal class of the metric $\bar g$ by solving the system of linear equations \eqref{equations}. This can be done algorithmically. Then, we also know the conformal
class of $\sigma$
in \eqref{east}, i.e., we know that $\sigma$ is of the form
\begin{equation} \label{ansatz} \sigma^{ij}= e^{\lambda} a^{ij},\end{equation} where $ a^{ij}$ is known and comes from the solution of the linear
system \eqref{equations}, and the function
$\lambda$ is unknown. Substituting the ansatz \eqref{ansatz}
in the system \eqref{east}, we obtain an inhomogeneous system of linear equations on the components $\tfrac{\partial\lambda}{\partial x^i}$.
Direct calculations show that this system has at most one solution; since we assumed the existence of the metric in the projective class, one can always solve this system
and obtain all $\tfrac{\partial\lambda}{\partial x^i}$. Finally, we can obtain the function $\lambda$, and, therefore, the metric $\bar g$, by integration.
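Explicitly, since $\nabla_a(e^{\lambda} a^{bc})=e^{\lambda}\left(\lambda_a a^{bc}+\nabla_a a^{bc}\right)$ with $\lambda_a:=\tfrac{\partial\lambda}{\partial x^a}$, dividing \eqref{east} by $e^{\lambda}$ gives
$$
\lambda_a a^{bc} - \tfrac{1}{n+1}\left(\lambda_i a^{ib}\delta^c_a + \lambda_i a^{ic}\delta^b_a\right)
= -\nabla_a a^{bc} + \tfrac{1}{n+1}\left(\nabla_i a^{ib}\delta^c_a + \nabla_i a^{ic}\delta^b_a\right),
$$
which is manifestly an inhomogeneous linear system on the components $\lambda_a$ whose coefficients are built from the known tensor $a^{ij}$.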
Let us note that in all steps we assumed that a Ricci-flat
metric $\bar g$ exists in the given projective class.
But the algorithm also gives us an algorithmic check of whether such a metric exists: one should go along the steps of the algorithm and look whether something goes wrong.
For example, the system \eqref{equations} could have no nondegenerate
solution (i.e., every solution $\bar g_{ij} $
of \eqref{equations} could have zero determinant). Then, no Ricci-flat
metric $\bar g$ exists in our
projective class.
If the system \eqref{equations} has a nondegenerate
solution, then, after plugging the ansatz \eqref{ansatz} in \eqref{east}, we obtain a system of inhomogeneous linear
equations on $\tfrac{\partial\lambda}{\partial x^i}$. This system may have no solution at all (the number of equations is much bigger than the number of unknowns; besides, the system is inhomogeneous),
or the $1$-form $\tfrac{\partial\lambda}{\partial x^i}dx^i$
may fail to be closed. In this case, no Ricci-flat
metric $\bar g$ exists in our
projective class.
Finally, if the system \eqref{equations} has a nondegenerate
solution, if we can solve the system of linear equations
we obtain after plugging the ansatz \eqref{ansatz} in \eqref{east}, and if the solution $\tfrac{\partial\lambda}{\partial x^i}$
satisfies the closedness condition $ \tfrac{\partial }{\partial x^k} \tfrac{\partial\lambda}{\partial x^i}=\tfrac{\partial }{\partial x^i} \tfrac{\partial\lambda}{\partial x^k}$, then we do obtain a
metric $g_{ij}$ in the projective class. The metric need not be Ricci-flat, though.
\begin{Rem} \label{ssylka0} In Section \ref{Problem21}, we show that
one can reconstruct almost every (4-dimensional)
metric from its projective class, see Remark \ref{ssylka} there. In the case of an arbitrary metric, the nondegeneracy assumption on the projective class is more complicated, and it is harder to check.
\end{Rem}
\section{ Problem 2: In what situations is the reconstruction of the metric by the unparameterized
geodesics unique (up to the multiplication of the metric by a constant)?}\label{Problem2}
\subsection{ For generic 4-dimensional metric, the reconstruction of the metric by the unparameterized
geodesics is unique. } \label{Problem21}
Let us first construct one geodesically rigid metric in dimension $n=4$.
Using the formula \eqref{weyltensor}, by short tensor calculations we see that the metric $g_{ij}$ must satisfy the equation
\begin{equation} \label{9}
n g^{a (i} W^{j)}_{\ \ \, a kl} =g^{a b }W_{\ \ \, a b [l}^{(i} \delta_{k]}^{j)},
\end{equation} where $n=4$, the brackets ``$[ \ ]$'' denote the skew-symmetrization without division, and the brackets ``$( \ )$'' denote the symmetrization without division.
\begin{Rem}
Actually, the equation (\ref{9}) is a part of the curvature of the tractor connection \eqref{prol}; in this context, it was obtained in \cite{eastwood}.
\end{Rem}
We take a 4-dimensional metric $\bar g$ such that at the point $x_0$ it is given by the identity matrix
$$
\begin{pmatrix} 1 &&&\\ &1&&\\ &&1&\\ &&&1\end{pmatrix},
$$
and such that its curvature tensor (with lowered indexes) $R_{ijkl} $
at the point $x_0$ is given by
\begin{equation} \label{riem}
R_{ijkl}= h_{ik}h_{jl}- h_{il}h_{jk}+ H_{ik}H_{jl}- H_{il}H_{jk},
\end{equation}
where the entries at $x_0$ of the $(0,2)-$tensors $h$ and $H$ are given by the diagonal matrices
$$
h= \left[ \begin {array}{cccc} 1&&&\\ \noalign{\medskip}&2&&
\\ \noalign{\medskip}&&-1&\\ \noalign{\medskip}&&&0\end {array}
\right] \ , \
\ \ H= \left[ \begin {array}{cccc} 0&&&\\ \noalign{\medskip}&0&&
\\ \noalign{\medskip}&&1&\\ \noalign{\medskip}&&&1\end {array}
\right].
$$
Such a metric $\bar g$ exists by \cite[Theorem 1.12.2]{Gilkey2001}
(see also \cite[Theorem 1.1]{Brozos}),
since the tensor \eqref{riem} satisfies all symmetries of the curvature tensor.
Every metric $ g $ geodesically equivalent to $\bar g$ has the same projective
Weyl tensor as $\bar g$. We view the equation \eqref{9} as the system of homogeneous linear equations on the components of $g$;
every metric $g$ geodesically equivalent to
$\bar g$ satisfies this system of equations (with the same coefficients $W$!).
At the point $x_0$,
this is a system on $10$ unknowns $g(x_0)^{ij}$.
Since the system is symmetric in $i,j $ and
skew-symmetric in $k,l$, the system contains $60$ equations (actually fewer, because of certain hidden
symmetries). By direct calculation, we see that the rank of this system is $9$. Indeed, it has at least one nontrivial solution, namely $\bar g(x_0)^{ij}$, so its rank is at most $9$. One can easily
find $9$ linearly independent
equations of this system (so the rank is at least 9), namely the equations corresponding to the following indices $(i,j,k,l)$:
\begin{center}
\begin{tabular}{|c||c|}\hline
$(i, j, k, l)$ & $\textrm{equation}$ \\\hline\hline
$(1, 1, 2, 1)$ & $-12 g^{1 2} = 0$ \\
$(1, 1, 3, 1)$ & $2 g^{1 3} = 0$ \\
$(1, 1, 4, 1) $& $ 2 g^{1 4} = 0 $\\
$(2, 1, 2, 1) $ & $ 5 g^{1 1}-6 g^{2 2}+g^{3 3} = 0$\\
$ (2, 1, 4, 1)$ & $ g^{2 4} = 0 $\\
$ (2, 2, 3, 2)$ & $ 8g^{2 3} = 0$ \\
$ (3, 1, 3, 1)$ & $-4g^{1 1}+g^{3 3}+4g^{2 2}-g^{4 4} = 0$\\
$ (3, 1, 4, 1) $& $ 2g^{3 4} = 0$ \\
$ (3, 2, 3, 2) $ & $-6g^{2 2}+4g^{3 3}+3g^{1 1}-g^{4 4} = 0.$
\\ \hline
\end{tabular} \end{center}
We see that the equations in the table are linearly independent.
Thus, at the point $x_0$, the set of solutions of this system is 1-dimensional, implying that
every metric $g$, geodesically equivalent to $\bar g$, is proportional to $\bar g$.
Let us show that at every point in a small neighborhood of $x_0$, the system \eqref{9} also has rank 9. Indeed, the rank of a matrix is the biggest dimension of a nondegenerate square submatrix and is therefore a lower semi-continuous (integer valued) function, i.e., the rank of this system is
at least 9 at every point of a small neighborhood of $x_0$. Now, at every point the components $\bar g^{ij}$ give us
a nontrivial solution, so the rank cannot be bigger than $9$. Thus, in a small neighborhood of $x_0$,
every metric $g$ geodesically equivalent to $\bar g$ is conformally equivalent to $\bar g$. Now, by
Weyl \cite{Weyl}, two conformally equivalent 4-dimensional metrics are proportional. Then, the metric $\bar g$ is geodesically rigid.
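The rank computation above is easy to reproduce numerically. The following numpy sketch (ours) builds $R$ from \eqref{riem}, forms $W$ via \eqref{weyltensor} (at $x_0$ the metric is the identity, so raising the first index changes nothing numerically), assembles the system \eqref{9} and computes its rank; one can also check that the single equation with indices $(1,1,2,1)$ reads $-12\,g^{12}=0$, as in the table.
\begin{verbatim}
import numpy as np

n = 4
h = np.diag([1., 2., -1., 0.])
H = np.diag([0., 0., 1., 1.])

R = (np.einsum('ik,jl->ijkl', h, h) - np.einsum('il,jk->ijkl', h, h)
   + np.einsum('ik,jl->ijkl', H, H) - np.einsum('il,jk->ijkl', H, H))
Ric = np.einsum('ajka->jk', R)                  # R_{jk} = R^a_{jka}
d = np.eye(n)
W = R - (np.einsum('il,jk->ijkl', d, Ric)
       - np.einsum('ik,jl->ijkl', d, Ric)) / (n - 1)

pairs = [(i, j) for i in range(n) for j in range(i, n)]
col = lambda i, j: pairs.index((min(i, j), max(i, j)))

rows = []
for i in range(n):
    for j in range(n):
        for k in range(n):
            for l in range(n):
                row = np.zeros(len(pairs))
                for a in range(n):
                    row[col(a, i)] += n * W[j, a, k, l]  # n g^{a(i}W^{j)}_{akl}
                    row[col(a, j)] += n * W[i, a, k, l]
                    for b in range(n):                   # minus the RHS of (9)
                        row[col(a, b)] -= (W[i, a, b, l]*d[j, k]
                                         + W[j, a, b, l]*d[i, k]
                                         - W[i, a, b, k]*d[j, l]
                                         - W[j, a, b, k]*d[i, l])
                rows.append(row)
print(np.linalg.matrix_rank(np.array(rows)))    # 9, as computed above
\end{verbatim}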
Now let $\tilde g$ be an arbitrary metric in a small neighborhood of $x_0$. We consider the metric
$$
g_{{t}}:= (1- t)\tilde g + t\bar g.
$$
The system \eqref{9} constructed for this metric has rank $9$ for $t$ lying in a small interval around
$1$. Since the coefficients of the system are algebraic expressions in $t$ whose
coefficients are algebraic expressions in
the components of $\bar g$, $\tilde g$
and their first and second derivatives, for almost all $t$ the system \eqref{9} constructed for the
metric $g_{{t}}$ has rank $9$. We take $t$ close to $0$ such that
the metric $g_t$ is $\varepsilon-$close to $\tilde
g$ and such that the system \eqref{9} constructed for the
metric $g_{{t}}$ has rank $9$. As we explained above,
this metric is geodesically rigid. Every metric $\hat g$ that is $C^2-$ close to $g_t$ is also geodesically rigid, since
the entries of $W$ for $\hat g$
are algebraic expressions in the components of $\hat g$ and its first and second derivatives. Hence, the
coefficients in the system \eqref{9} constructed for $\hat g$
are close to those of the system \eqref{9} constructed for $g_t$, implying that this system
also has rank 9, and hence that the metric $\hat g$ is geodesically rigid as well.
{\it Thus, for every 4-dimensional metric $\tilde g$ and for any $\varepsilon>0$ there exists a metric $g_t$
that is $\varepsilon-$ close in the $C^2-$sense to $\tilde g$ and $\varepsilon'>0$ such that all metrics
$\varepsilon'-$ close in the $C^2-$sense to $g_t$ are geodesically rigid. }
\begin{Rem}
As we mentioned in the introduction, a similar proof can be done for all dimensions $n\ge 4$. For dimensions 2 and 3, the proof does not work anymore, since the system \eqref{9} has corank at least $2$ for all metrics $g$ (one can prove it using the methods of \cite[\S2.3.2]{KioMat2010}). One can still modify the proof by replacing the system \eqref{9} by another projectively invariant
system of equations. This other projectively invariant
system of equations requires higher derivatives of the components of $g$ though.
In dimension 3, one can construct (using the curvature of the tractor connection \eqref{prol}, see also \cite{Nurowski2}) such a
projectively invariant system whose coefficients
depend on the components of the metric and its first, second and third derivatives. Therefore, for
every 3-dimensional local metric $\tilde g$ and for any $\varepsilon>0$ there exists a metric $g_t$
that is $\varepsilon$-close in the $C^3$-sense to $\tilde g$ and $\varepsilon'>0$ such that all metrics
$\varepsilon'$-close in the $C^3$-sense to $g_t$ are geodesically rigid. Now, in dimension 2, the construction of the projectively invariant system is much more involved (see \cite{BDE}) and requires 8 derivatives of the components of the metric.
\end{Rem}
\begin{Rem} \label{ssylka} We also see that the projective class of almost every (in the $C^2-$sense)
4-dimensional metric determines its conformal class uniquely: one can find the conformal class by solving the system \eqref{9}. Then, one can proceed along the
algorithm from Section \ref{Problem12} and understand whether there exists a metric in the projective class, and find it.
\end{Rem}
\subsection{ Normal forms for pairs of geodesically equivalent 4-dimensional metrics such that one of them has Lorentz signature.}
\subsubsection{ Splitting and gluing constructions from \cite{splitting}.} \label{Problem22}
Given two metrics $g$ and $\bar g$ on the same manifold, we consider the $(1,1)-$tensor $L=L(g,\bar g)$ defined by \begin{equation} \label{L}
L_j^i := \left(\frac{\det(\bar g)}{\det(g)}\right)^{\frac{1}{n+1}} \bar g^{ik}
g_{kj},\end{equation}
where ${\bar g}^{ik}$ is the contravariant inverse of
${\bar g}_{ik}$.
\begin{Rem}If $n$ is even, the tensor $L$ is always well defined. If $n$ is odd, the ratio ${\det(\bar g)}/{\det(g)}$ may be negative, and the formula \eqref{L} may make no sense. In this case, we replace $\bar g$ by $-\bar g$, making the ratio ${\det(\bar g)}/{\det(g)}$ positive and $L$ well defined. In the cases of interest in our context, $g$ and $\bar g$ have the same signature, and the problem with the sign does not appear at all. \end{Rem}
\begin{Rem} The tensor $L^i_j $ defined in \eqref{L} is essentially the same as
the tensor
introduced by Sinjukov (see equations (32, 34) on page 134 of the book \cite{sinjukov}, and also
Theorem 4 on page 135), which is often denoted by $a_{ij}$ in the related literature. More precisely, $L^i _j =a_{\ell j} g^{\ell i}$. It is also closely related to $\sigma$ from \S \ref{genth}: $\bar g$ is geodesically equivalent to $g$, if and only if
$\bar \sigma^{ab} := {L^a}_{\ell}g^{\ell b}\cdot \det(g)^{1/(n+1)}$ is a solution of \eqref{east}.
\end{Rem}
The simplified version of the {\it gluing construction} does the following.
Consider two manifolds $M_1$ and $M_2$ with pairs of geodesically equivalent metrics $h_1\sim \bar h_1$ on $M_1$ and $h_2\sim \bar h_2$ on $M_2$. Assume that the corresponding $(1,1)$-tensor fields $L_1=L(h_1, \bar h_1)$ and $L_2=L(h_2, \bar h_2)$
have no common eigenvalues in the sense that
for any two points $x\in M_1$, $y\in M_2$ we have
$$
\textrm{Spectrum}\, L_1(x) \cap \textrm{Spectrum}\, L_2(y) =\varnothing.
$$
Then one can naturally construct a pair of geodesically equivalent metrics $g\sim \bar g$ on the direct product $M=M_1 \times M_2$.
These new metrics $g$ and $\bar g$ differ from the direct product metrics $h_1 + h_2$ and $\bar h_1 + \bar h_2$ on $M_1\times M_2$ and are given by the following formulas involving $L_1$ and $L_2$:
we denote by $\chi_i$, $i=1,2$, the characteristic polynomial of $L_i$: $\chi_i= \det(t\cdot {\bf 1} - L_i)$. We treat the $(1,1)-$tensors $L_i$ as linear operators acting on $TM_i$.
A polynomial $f(L)$ in $L$ is then the $(1,1)$-tensor of the form $f(L)=a_0(x) \cdot \mathrm{Id} + a_1(x) L + a_2(x) L^2 + \cdots + a_m(x) L^m$.
For two tangent vectors $$u= (\underbrace{u_1}_{\in TM_1}, \underbrace{u_2}_{\in TM_2})\, , \ \ v=( \underbrace{v_1}_{\in TM_1}, \underbrace{v_2}_{\in TM_2}) \in TM $$
we put \begin{eqnarray} g(u,v) & = & h_1\left( \chi_2(L_1)( u_1), v_1\right) + h_2\left(\chi_1(L_2)(u_2), v_2\right), \label{hh1} \\
\bar g(u,v) & = & \frac{1}{\chi_2(0)}\bar h_1\left( \chi_2(L_1) (u_1), v_1\right) + \frac{1}{\chi_1(0)}\bar h_2\left(\chi_1(L_2)(u_2), v_2\right). \label{bh1}
\end{eqnarray}
The corresponding $(1,1)-$tensor
$L=L(g,\bar g)$ is the direct sum of $L_1$ and $L_2$ in the natural sense:
for every
$$
v= (\underbrace{v_1}_{\in T_{x}M_1}, \underbrace{v_2}_{\in T_{y}M_2}){\in T_{(x,y)}(M_1\times M_2)} \ \textrm{ \ we have\ } \ L(v)= \left(L_1(v_1), L_2(v_2)\right).$$
It might be convenient to understand the formulas (\ref{hh1}, \ref{bh1}) in matrix notation: we consider the coordinate system
$(x^1,...,x^r,y^{r+1},...,y^{n})$ on $M$ such that $x-$coordinates are coordinates on $M_1$ and $y-$coordinates are coordinates on $M_2$.
Then, in this coordinate system, the matrices of $g$ and $\bar g$ have the block diagonal form
\begin{equation}\label{matg}
g =\begin{pmatrix} h_1 \chi_2(L_1) & 0 \\ 0 & h_2 \chi_1(L_2)\end{pmatrix}\ , \ \ \bar g =\begin{pmatrix} \frac{1}{\chi_2(0)} \bar h_1 \chi_2(L_1) & 0 \\ 0 & \frac{1}{\chi_1(0)} \bar h_2 \chi_1(L_2)\end{pmatrix}.
\end{equation}
\begin{Th}[Gluing Lemma from \cite{splitting}] \label{thm3}
If $h_1$ is geodesically equivalent to $ \bar h_1$, and $h_2$ is geodesically equivalent to $ \bar h_2$, then the metrics $g,\bar g$ given by {\rm (\ref{hh1}, \ref{bh1})} are geodesically equivalent too.
\end{Th}
The {\it splitting construction} is the inverse operation. We will not describe it completely (and refer to \cite{splitting}); we will use its following corollary explained in \cite[\S 2.1]{splitting}:
{\it Every pair of
geodesically equivalent metrics $h$ and $\bar h$ in a neighborhood of almost every point
can be obtained (up to a coordinate change) by applying the gluing construction to building blocks. }
By a {\it building block} we understand an open neighborhood $U\subset \mathbb{R}^m $ with a pair of geodesically equivalent metrics $h\sim \bar h$ such that at every point the tensor $L$ given by \eqref{L} has only one real eigenvalue, or two complex-conjugate
eigenvalues, and such that the geometric multiplicity of the eigenvalue is constant on $U$.
\begin{Rem} The Riemannian version of the splitting/gluing constructions was known before, see for example \cite[Lemma 2]{archive} and \cite[\S\S2.2, 2.3]{bifurcations}. \end{Rem}
\begin{Ex} \label{bb1}
In the definition of a building block, we allow the dimension $m=1$.
Then the interval $I\subset \mathbb{R}^1$ with the two
geodesically equivalent metrics $ h=dx^2$ and $\bar h = X(x)dx^2$ (where the function $X$ never vanishes) forms a building block. Actually, up to a coordinate change,
$(I, h, \bar h)$ is the only 1-dimensional building block.
\end{Ex}
\begin{Ex} \label{bb2}
All possible examples of two-dimensional building blocks can be extracted from the table of 2-dimensional geodesically equivalent metrics from the introduction. The metrics from the first column of the table do not correspond to a building block, since the tensor $L$ for these metrics has two different eigenvalues, $X(x)$ and $Y(y)$. But the metrics from the second and the third columns do correspond to the building block, since the tensors $L$ for these metrics are given by the matrices
$$
\left[ \begin {array}{cc} {\Re(h)}&{ \Im(h)}\\ \noalign{\medskip}-{
\Im(h)}&{ \Re(h)}\end {array} \right] \ , \ \ \left[ \begin {array}{cc} Y \left( x_{{2}} \right) &0
\\ \noalign{\medskip}1+x_{{1}}{\frac {d}{dx_{{2}}}}Y \left( x_{{2}}
\right) &Y \left( x_{{2}} \right) \end {array} \right].
$$
Of course, in every dimension, in particular in dimension two, there exists a trivial building block
$(U, h, \bar h = \const \cdot h)$; the tensor $L$ for this metric is a multiple
of $\delta^i_j$.
From the results of \cite{pucacco} it follows that every two-dimensional building block has one of these three forms.
\end{Ex}
The formulas for the 3-dimensional building block can be obtained using Petrov \cite{Petrov49} and Eisenhart \cite{eisenhart38}; we will give them later. From linear algebra it follows that if the metrics $g$, $\bar g$ have Lorentz signature, then 4-dimensional building blocks are not possible (except for the trivial block corresponding to proportional metrics $g\sim \bar g:= \const \cdot g$), since in the Lorentz signature a $g$-selfadjoint $(1,1)$-tensor $L$ cannot have a Jordan block of dimension $\ge 4$ with real eigenvalue, or a Jordan block of dimension $\ge 2$ with complex eigenvalue.
\begin{Ex}[Dini formulas \eqref{dini} follow from splitting-gluing constructions.] \label{diniformulas}
We consider the two 1-dimensional building blocks
$$
\left(I_1, h_1=dx^2, \bar h_1 = \frac{1}{X(x)^2}dx^2\right) \ \textrm{and} \
\left(I_2, h_2=-dy^2, \bar h_2 = -\frac{1}{Y(y)^2}dy^2\right).
$$
We assume that $X(x)>Y(y)$ for all $(x,y)$.
The corresponding tensors $L_1$ and $L_2$ (we view them as $1\times 1$-matrices) and their characteristic polynomials are
$$
L_1= (X(x))\ ; \ \ L_2=(Y(y)) \ ; \ \ \chi_1(t)= t - X(x) \ ; \ \ \chi_2(t)= t - Y(y).
$$
We see that the metrics $h_1$, $h_2$ satisfy the assumptions in Theorem \ref{thm3}.
Plugging these data in the formulas \eqref{matg}, we obtain geodesically equivalent metrics $g$ and $\bar g$
given by the matrices
$$
g =\begin{pmatrix} X(x)- Y(y) & \\ & X(x)- Y(y)\end{pmatrix} \ , \ \ \bar g =
\begin{pmatrix} \tfrac{X(x)- Y(y)}{X(x)^2Y(y)} & \\ & \tfrac{X(x)- Y(y)}{X(x)Y(y)^2}\end{pmatrix}.
$$
We see that these metrics are precisely the Dini metrics \eqref{dini}.
For further use let us note that the tensor \eqref{L} for these metrics is given by $ L= \begin{pmatrix} X(x) & \\ & Y(y)\end{pmatrix}.$
\end{Ex}
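One can let sympy confirm this last claim (a sketch of ours; we treat the values of $X(x)$ and $Y(y)$ as positive symbols so that the root in \eqref{L} simplifies cleanly):
\begin{verbatim}
import sympy as sp

X, Y = sp.symbols('X Y', positive=True)    # stand-ins for X(x), Y(y)

g    = sp.diag(X - Y, X - Y)
gbar = sp.diag((X - Y)/(X**2*Y), (X - Y)/(X*Y**2))

# the tensor L of formula (L), with n = 2
L = sp.simplify((gbar.det()/g.det())**sp.Rational(1, 3) * gbar.inv() * g)
print(L)                                   # Matrix([[X, 0], [0, Y]])
\end{verbatim}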
\begin{Ex}[Levi-Civita metrics (\ref{LC1},\ref{LC2}) follow from splitting-gluing constructions.]
We take 4 pairs of geodesically equivalent metrics on the interval $I$.
\begin{equation} \label{met}
\begin{array}{ll}
g_1= dx_1^2 \sim \bar g_1 = \tfrac{1}{X_1(x_1)^2}dx_1^2\ ; &
g_2= -dx_2^2 \sim \bar g_2 = -\tfrac{1}{X_2(x_2)^2}dx_2^2\ ; \\[2mm]
g_3= dx_3^2 \sim \bar g_3 = \tfrac{1}{X_3(x_3)^2}dx_3^2\ ; &
g_4= -dx_4^2 \sim \bar g_4 = -\tfrac{1}{X_4(x_4)^2}dx_4^2.
\end{array}
\end{equation} We assume that for $i\ne j$
$X_i(x_i)\ne X_j(x_j)$ for all $x_i,x_j\in I$.
Gluing $(I, g_1, \bar g_1)$ and $(I, g_2, \bar g_2)$,
($(I, g_3, \bar g_3)$ and $(I, g_4, \bar g_4)$, respectively) we obtain two pairs of geodesically equivalent metrics
(we denote them by $h_1\sim \bar h_1$ \ ($h_2 \sim \bar h_2$, respectively)) on the two-dimensional disk
$U^2=I\times I$. These metrics and the corresponding tensors \eqref{L} were essentially constructed in Example \ref{diniformulas} and are given by matrices
$$
h_1 =\begin{pmatrix} X_1(x_1)- X_2(x_2) & \\ & X_1(x_1)- X_2(x_2)\end{pmatrix} \ \ \sim \ \ \bar h_1 =
\begin{pmatrix} \tfrac{X_1(x_1)- X_2(x_2)}{X_1(x_1)^2X_2(x_2)} & \\ & \tfrac{X_1(x_1)- X_2(x_2)}{X_1(x_1)X_2(x_2)^2}\end{pmatrix} \ , $$
$$
h_2= \begin{pmatrix} X_3(x_3)- X_4(x_4) & \\ & X_3(x_3)- X_4(x_4)\end{pmatrix} \ \sim \ \bar h_2=
\begin{pmatrix} \tfrac{X_3(x_3)- X_4(x_4)}{X_3(x_3)^2X_4(x_4)} & \\ & \tfrac{X_3(x_3)- X_4(x_4)}{X_3(x_3)X_4(x_4)^2}\end{pmatrix}\ , $$ $$L_1=L(h_1, \bar h_1)= \begin{pmatrix} X_1(x_1) & \\ & X_2(x_2)\end{pmatrix} \ , \ \
L_2=L(h_2, \bar h_2)= \begin{pmatrix} X_3(x_3) & \\ & X_4(x_4)\end{pmatrix}.
$$
We see that the metrics $h_1$, $h_2$ satisfy the assumptions in Theorem \ref{thm3}.
Gluing these metrics, we obtain the metrics (\ref{LC1},\ref{LC2}).
\end{Ex}
\begin{Rem} By changing the sign of the metrics \eqref{met} we can make geodesically equivalent
metrics $g\sim \bar g$ of arbitrary signature. \end{Rem}
\begin{Ex}[General Levi-Civita metrics]
We take $m$ building blocks: the first $r$ building blocks are 1-dimensional,
and the last $m-r$ building blocks $h_{r+1}\sim \bar h_{r+1},...,h_{m}\sim \bar h_{m}$
have dimensions $k_i\ge 2$, $i=r+1, ..., m$. For notational convenience we assume that the first $r$ building blocks are
\begin{equation} \label{pm}
(U_i^1, h_i= \pm {dx_i}^2 , \bar h_i= \pm \frac{1}{X_i(x_i)^2} {dx_i}^2) \ , \ \ {i=1,...,r},\end{equation}
where the sign $\pm$ in $h_i$ and $\bar h_i$ is the same for each $i$, but may be different for different $i$'s.
The last $m-r$ building blocks are
$$
\left(U_i^{k_i}, h_i= \sum_{\alpha_i, \beta_i=1}^{k_i}(h_i(x_i))_{\alpha_i \beta_i}dx_i^{\alpha_i}dx_i^{\beta_i}, \bar h_i= \frac{1}{X_i^{k_i+1}}\sum_{\alpha_i, \beta_i=1}^{k_i}(h_i(x_i))_{\alpha_i \beta_i} dx_i^{\alpha_i}dx_i^{\beta_i}\right) \ , \ \ {i=r+1,...,m}.$$
Here the functions $X_i$ are constant for $i>r$ and depend only on the corresponding variable $x_i$ for $i\le r$. As above, we assume that $\textrm{Image}(X_i)\cap \textrm{Image}(X_j)=\varnothing$ for $i\ne j$.
The metrics $ h_i$, $i=r+1,...,m$ can be arbitrary, but their entries $(h_i)_{\alpha_i\beta_i}$ must depend on the coordinates $x_i= (x_i^1,...,x_i^{k_i})$ only.
Inductively applying the gluing procedure, we obtain for $g$ and $\bar g$ the following form:
\begin{equation} \label{LCM}
\begin{array}{cccc} g & = & \sum_{i=1}^rP_i{dx_i}^2 & + \sum_{i=r+1}^m \left[P_i \sum_{\alpha_i, \beta_i=1}^{k_i}(h_i(x_i))_{\alpha_i \beta_i}dx_i^{\alpha_i}dx_i^{\beta_i}\right], \\
{\bar g} & = & \sum_{i=1}^r P_i \rho_i {dx_i}^2 & + \sum_{i=r+1}^m \left[ P_i \rho_i\sum_{\alpha_i, \beta_i=1}^{k_i}(h_i(x_i))_{\alpha_i \beta_i} dx_i^{\alpha_i}dx_i^{\beta_i}\right],
\end{array}\end{equation}
where
\begin{equation} \label{P_i}
P_i:= \pm \prod_{j\ne i} (X_i- X_j), \ \ \ \rho_i:= \frac{1}{X_i \, \prod_{\alpha} X_\alpha}.
\end{equation}
(the signs $\pm$ in \eqref{P_i} depend on the choice of the signs $\pm$ in \eqref{pm} and can be arbitrary).
This is precisely Levi-Civita's normal form for geodesically equivalent (Riemannian)
metrics from \cite{Levi-Civita}.
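As a consistency check, taking $n=2$, $r=m=2$ and choosing the signs in \eqref{pm} to be $+$ for $i=1$ and $-$ for $i=2$, the formulas \eqref{P_i} give
$$
P_1=X_1-X_2,\quad P_2=-(X_2-X_1)=X_1-X_2,\quad \rho_1=\frac{1}{X_1^2X_2},\quad \rho_2=\frac{1}{X_1X_2^2},
$$
so that \eqref{LCM} becomes $g=(X_1-X_2)(dx_1^2+dx_2^2)$ and $\bar g = \tfrac{X_1-X_2}{X_1^2X_2}\,dx_1^2+\tfrac{X_1-X_2}{X_1X_2^2}\,dx_2^2$, i.e., precisely the Dini pair of Example \ref{diniformulas}.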
Now, since every pair of geodesically equivalent
metrics (in a neighborhood of almost every point)
can be obtained by the gluing construction, and since in the Riemannian signature only the blocks used above are possible, every pair of Riemannian geodesically equivalent metrics has the form \eqref{LCM} in a certain coordinate system. This is the famous Levi-Civita
Theorem from \cite{Levi-Civita}.
\end{Ex}
Note that the Lorentz signature of $g$ and $\bar g$ does not allow the tensor $L$ to have complex eigenvalues
of algebraic multiplicity greater than one. Similarly, it does not allow the tensor $L$ to have a Jordan block of dimension 4, or two Jordan blocks. Thus, in order to obtain the description of nonproportional
4-dimensional geodesically
equivalent metrics of Lorentz signature, one needs the building blocks of dimensions $1,2,3$ only. In dimension 1, only one building block, namely the one from Example \ref{bb1}, is possible.
Geodesically equivalent metrics such that the tensor $L$ has one of the 2-dimensional
Jordan-block structures $$\begin{pmatrix}
\lambda &1 \\
& \lambda \end{pmatrix}\ , \ \
\begin{pmatrix}
\lambda & \\
& \lambda \end{pmatrix}\ , \ \
\begin{pmatrix}
\alpha &\beta \\
-\beta & \alpha
\end{pmatrix}$$
were described in Example \ref{bb2}.
For the Jordan-block structure \begin{equation}\label{petrovcase}\begin{pmatrix}
\lambda &1& \\
& \lambda &1 \\ &&
\lambda\end{pmatrix},\end{equation}
the description of the
metrics follows from Petrov \cite{Petrov49}: the metrics are given by
\begin{equation} \label{bolsinov}
\begin{array}{ccl} g & =& \left(4\, x_2\,\lambda'(x_3) +2\right) dx_1\, dx_3 + dx_2^2
+2\, x_1 \lambda'(x_3)\, dx_2\, dx_3+ x_1^2 \left(\lambda'(x_3)\right)^2 dx_3^2 ,\\[2mm]
\bar g & =&
\dfrac{1}{ \lambda(x_3)^{6}}
\Big[\left(4\, x_2\, \lambda(x_3)^2\lambda'(x_3) +2\, \lambda(x_3)^2\right) dx_1\, dx_3
+ \lambda(x_3)^2\, dx_2^2 \\
&-& \left(4\, x_2\,\lambda(x_3)\lambda'(x_3) +2\,\lambda(x_3) -2\,x_1\, \lambda(x_3)^2\lambda'(x_3) \right) dx_2\, dx_3
\\
&+& \left( 4\, x_2^2 \left(\lambda'(x_3)\right)^2+4\,x_2\, \lambda'(x_3) - 4\, x_1x_2\,
\lambda(x_3)\left(\lambda'(x_3)\right)^2\right) dx_3^2
\\ &+& \left(1+
x_1^2\, \lambda(x_3)^2\left(\lambda'(x_3)\right)^2 -2\,x_1\,\lambda(x_3)\lambda'(x_3)\right) dx_3^2\Big],
\end{array}
\end{equation}
where $\lambda'(x_3)$ denotes $\tfrac{d}{dx_3}\lambda(x_3)$.
The corresponding $L$ is given by the matrix $$\left[ \begin {array}{ccc} \lambda \left( x_{{3}} \right) &1& \left(
{\frac {d}{dx_{{3}}}}\lambda \left( x_{{3}} \right) \right) x_{{1}}
\\ \noalign{\medskip}0&\lambda \left( x_{{3}} \right) &2\, \left( {
\frac {d}{dx_{{3}}}}\lambda \left( x_{{3}} \right) \right) x_{{2}}+1
\\ \noalign{\medskip}0&0&\lambda \left( x_{{3}} \right) \end {array}
\right]. $$
\begin{Rem}
Actually, the formulas \eqref{bolsinov} are slightly more general than those of \cite{Petrov49}. They are equivalent to the formulas from \cite{Petrov49} (modulo a coordinate transformation)
at the points such that $d\lambda\ne 0$. The formulas \eqref{bolsinov} were obtained together with A. Bolsinov; they can be generalized to every dimension. We will publish this result elsewhere.
\end{Rem}
As it follows from \cite[Lemma 6]{einstein}, if $L$ has the Jordan-form $\begin{pmatrix}
\lambda &1& \\
& \lambda & \\ &&
\lambda\end{pmatrix}$, the eigenvalue $\lambda$ is constant, and the metrics are affinely
equivalent (i.e., Levi-Civita connections of $g$ and $\bar g$ coincide). Affinely equivalent metrics whose tensor $L$ has this form were essentially described by Eisenhart in \cite{eisenhart38}, see also \cite[Theorem 1]{solodovnikov1959}. From their description it follows that,
in a certain coordinate system, geodesically equivalent metrics $g\sim \bar g$ are given by
\begin{equation} \label{metricseisenhart} \begin{array}{cl} g&= 2\,{ dx}_{{3}}{ dx}_{{1}}+
{ h}\left( x_{{2}},x_{{3}} \right)_{11}{{ dx}_{{2}}}^{2}+2\, { h(x_2,x_3)_{12}} { dx}_{{2}}{ dx}_{{3}}
+{ h(x_2,x_3)_{22}}{{ dx}_{{3}}}^{2}, \\
\bar g &= 2\,\alpha\,{ dx}_{{3}}{ dx}_{{1}}+\alpha\,{
h\left( x_{{2}},x_{{3}} \right)_{11}}{{ dx}_{{2}}}^{2}
+2\,\alpha\,{ h\left( x_{{2}},x_{{3}} \right)_{12}} { dx}_{{2}}{ dx}_{{3
}} +\beta {{ dx}_{{3}}}^{
2}+\alpha\,{ h\left( x_{{2}},x_{{3}}
\right)_{22}}{{ dx}_{{3}}}^{2}, \end{array}
\end{equation} where $\alpha$ and $\beta$ are constants.
Now, the metrics $g,\bar g$ such that $L= \begin{pmatrix}
\lambda & &\\
& \lambda & \\ & & \lambda\end{pmatrix} $ are conformally equivalent. By
the classical result of Weyl \cite{Weyl}, they are proportional (i.e., $ \bar g =\const\cdot g$).
Thus, we have described all building blocks that can be used in constructing
metrics of Lorentz signature; Theorem \ref{thm3} gives us the construction. Let us count the number of cases in dimension 4: we can represent $4$ as a sum of natural numbers in 4 different ways:
\begin{tabular}{c|c|c} \hline
Dim of blocks & Description of blocks & \# of cases\\ \hline
1+1+1+1 & \begin{minipage}{.6\textwidth}All building blocks are as in Example \ref{bb1}, and $g\sim \bar g$ are essentially (\ref{LC1},\ref{LC2}) with the changed sign of $dx_1^2$\end{minipage}& 1\\ \hline
1+1+2 & \begin{minipage}{.6\textwidth}The first two building blocks are as in Example \ref{bb1}, the third is as in Example \ref{bb2}\end{minipage}&3\\ \hline
2+2 & \begin{minipage}{.6\textwidth}Both building blocks are as in Example \ref{bb2}; at least one of them is trivial \end{minipage}& 3 \\
\hline
1+3 & \begin{minipage}{.6\textwidth} The first building block is as in Example \ref{bb1}, the second is as in \eqref{bolsinov}, as in \eqref{metricseisenhart}, or trivial \end{minipage}& 3 \\ \hline
\end{tabular}
\begin{Rem} The general scheme also works in higher dimensions, but in this case there is the following essential difficulty (and this is the only difficulty): to our knowledge, for dimensions $n-1\ge 5$,
there is no description of all pairs $(g, L)$ such that $g$ has Lorentz signature and $L$ is a covariantly constant $g$-selfadjoint $(1,1)$-tensor whose Jordan normal form is
\begin{equation}\label{llll}
\begin{pmatrix}
\lambda &1& &&&\\
& \lambda &1 &&& \\
&& \lambda&0&&\\
&&&\ddots&\ddots&\\
&&&&\lambda & 0\\
&&&&&\lambda\end{pmatrix} \end{equation}
In dimension $n = 4$, since $n-1=3$, the Jordan normal form \eqref{llll} coincides with \eqref{petrovcase}, and the
local description follows from \cite{Petrov49}. In dimension $n=5$ we have $n-1=4$ and one can obtain the local description (we will not do it in the present paper) combining the results of \cite{eisenhart38,solodovnikov1959} with the algebraic description of possible holonomy groups of 4-dimensional metrics of Lorentz signature (see e.g. \cite{ new2, new3}).
\end{Rem}
\paragraph{\bf Acknowledgement.}
This work benefited from discussions with A. Bolsinov, G. Gibbons, D. Giulini, V. Kiosak, P. Nurowski, and A. Wipf. I thank the anonymous referee and G. Hall for valuable suggestions and finding misprints. During the work on this paper, the author was partially supported by
Deutsche Forschungsgemeinschaft (SPP 1154 and GK 1523) and FSU Jena.
We consider the local solvability problem for a class of degenerate linear partial differential operators of the form
\begin{equation}
\label{P}
P=\sum_{j=1}^NX_j^*f_jX_j+iX_0+X_{N+1}+a_0,
\end{equation}
where the $X_j=X_j(x,D)$, $0\leq j\leq N+1$, $D=(D_1,...,D_n)=-i(\partial_{x_1},...,\partial_{x_n})$, are homogeneous first order linear partial differential operators with smooth coefficients defined on an open set $\Omega\subseteq\mathbb{R}^n$, $f_j=f_j(x)\in C^\infty(\Omega;\mathbb{R})$ are real smooth functions possibly vanishing at some point of the set $\Omega$ and $a_0\in C^\infty(\Omega;\mathbb{C})$. In addition we consider the operators $X_j$, $0\leq j\leq N+1$, having real coefficients (i.e. the $iX_j$, $0\leq j\leq N+1$, are real smooth vector fields).
Our goal here is to give sufficient conditions for the local solvability of $P$ to hold at each point of $\Omega$. Of course, the most interesting cases are covered by situations in which we have local solvability at points where the operator $P$ is degenerate. Moreover, we will not suppose that the vector fields (i.e. the operators) $iX_j$, $1\leq j\leq N+1$, are nondegenerate; therefore very complicated situations can arise in the class \eqref{P} above.
The sufficient conditions we are going to introduce will be given in terms of the operators $X_j$ and of the functions $f_j$, and, as we shall see, they will be essentially requirements on the subprincipal part of the operator, which are therefore invariantly defined.
The study of operators such as \eqref{P} goes back to the work of Kannai \cite{Ka} where an example of a hypoelliptic non-solvable operator is given. To be precise, the so-called Kannai operator is not locally solvable around points where the principal symbol changes sign, showing that this property is meaningful for the local solvability.
Some generalizations of this operator have been studied by Colombini, Cordaro and Pernazza in \cite{CCP} and by Colombini, Pernazza and Treves in \cite{CCP1}, and further extensions have been given by the author and A. Parmeggiani in \cite{FP} and \cite{FP1} (see also \cite{Par}).
The form of the operator $P$ in \eqref{P} is mostly linked to that in \cite{FP} and \cite{FP1} and our aim is to cover other open cases. With respect to the class in \cite{FP1}, here we allow the presence of several functions in the second order part of the operator even when $X_{N+1}(x,D)\equiv 0$. Moreover, while in \cite{FP} and \cite{FP1} the condition $iX_0f>0$ near $f^{-1}(0)$, with $f_j= f$ for all $j=1,...,N$, is required, here we allow $iX_0f_j\geq 0$ over $\Omega$, for all $j=1,...,N$. Let us also remark that for \eqref{P} we require $X_0(x,D)\not\equiv 0$ and $X_{N+1}$ possibly such that $X_{N+1}\equiv 0$ or degenerate. The cases $X_0\not\equiv 0$ in presence of a single function $f=f_j$ for all $j$, and the case $X_0\equiv 0, X_{N+1}\not\equiv 0$ with several functions $f_j$ are studied in \cite{FP1}.
There are other interesting results about operators with double characteristics which are somehow connected to operators in the class \eqref{P}. We recall, for instance, the results of Helffer in \cite{Hel} where he shows the construction of parametrices for some operators with double characteristics and gives, as an application, examples of operators which are also contained in the class \eqref{P}.
There are also results due to Treves in \cite{T}, where operators of the form $XY+Z$ are considered. Very interesting results for pseudodifferential operators appear in works by Mendoza \cite{M} and Mendoza and Uhlmann \cite{MU, MU2} in which necessary and sufficient conditions for the local solvability of operators having a principal symbol with specific form are given.
In \cite{KP} Pravda-Starov exhibits some examples of weakly hyperbolic operators which are not locally solvable consistently with the results of Mendoza and Uhlmann.
In this regard we want to highlight that, due to the required conditions, all the partial differential operators contained in the class treated in \cite{MU,MU2} do not satisfy condition $\mathsf{Sub}(\mathscr{P})$ of \cite{MU,MU2} except for dimension 2. In this case, that is when $n=2$, we can find a deep connection between the sufficient conditions given here for \eqref{P} and those given in \cite{MU2}.
It is also worth recalling recent results due to Dencker in \cite{D} about necessary conditions for the local solvability of pseudodifferential operators of subprincipal type (which essentially have involutive characteristic set).
Finally, other results connected to the argument can be found in \cite{F}, \cite{Mu}, \cite{MR1}, \cite{PP} and \cite{PR}.
The present paper is organized as follows.
In Section 2 we will present the hypotheses that determine the class, recall the general definition of local solvability and prove the solvability result by means of an a priori estimate. The key point in the proof will be the use of the Fefferman-Phong inequality on a new operator $P'$ appearing in the estimate.
In Section 3 we will give a few examples of operators in the class whose local solvability is guaranteed by the theorem of Section 2.
\section{STATEMENT AND PROOF OF THE RESULT}
Recall that we are dealing with the local solvability problem for the class of operators of the form \eqref{P} given in the Introduction.
In what follows we shall assume that conditions (H1), (H2) and (H3) below are satisfied:
\begin{itemize}
\item [(H1)] $X_0(x,D)\neq 0$ $\forall x\in\Omega$ and $iX_0f_j\ge0$ on $\Omega$ for all $1\leq j\leq N$;
\item [(H2)] $[X_0,X_j](x,D)=0$ in $\Omega$ for all $1\leq j\leq N$;
\item [(H3)] for all $x_0\in\Omega$ there exists $U\subset\Omega$ open and bounded containing $x_0$, and a positive constant $C$ such that
$$|\{X_0,X_{N+1}\}(x,\xi)|^2\leq C\left(\sum_{j=1}^N \bigl(iX_0f_j(x)\bigl)X_j(x,\xi)^2+ X_0(x,\xi)^2\right),\quad \forall (x,\xi)\in U\times \mathbb{R}^n;$$
\end{itemize}
where we denote by $X_j(x,\xi)$ the (total) symbol of the operator $X_j$ and by $\{\cdot,\cdot\}$ the Poisson bracket.
\begin{remark}
Observe that the subprincipal symbol of $P$ in \eqref{P} is given by $Sub(P)(x,\xi)=iX_0(x,\xi)+X_{N+1}(x,\xi)$, thus it is invariant on $T^*\Omega$ and not only on the double characteristics set since it is the principal symbol of a first order operator. Moreover one gets that (H1), (H2) and (H3) are essentially requirements on the real and the imaginary part of $Sub(P)$. In particular the imaginary part has a key role here. As we shall see below, conditions (H1) and (H2) allow a control on the commutator (or the Poisson bracket at the level of symbols) between the principal part and the imaginary subprincipal part, while condition (H3) imposes a relation between the real and the imaginary part of $Sub(P)$.
Since the operator $P$ is degenerate and has a principal symbol which may change sign, the only chance is to control it by means of the first order part. Here the control is guaranteed by the hypothesis (H1) on $\mathsf{Im}Sub(P)=X_0$. In fact the latter condition gives the validity of a Poincar\'e type inequality for $X_0$ that will be used to absorb the $L^2$-errors coming from the principal part and the term $X_{N+1}$.
\end{remark}
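To fix ideas, here is a simple operator in the class (this particular example is ours; more significant examples will be given in Section 3). Take $X_0=D_1$, $X_j=D_j$ and $f_j(x)=x_1$ for $1\leq j\leq N$, and $X_{N+1}=0$, $a_0=0$, that is,
$$P=\sum_{j=1}^N D_j\, x_1 D_j + iD_1,$$
an operator of Kannai type which is degenerate on $\{x_1=0\}$, where its principal symbol $x_1\sum_{j=1}^N\xi_j^2$ changes sign. Then $iX_0f_j=\partial_{x_1}x_1=1>0$ and $[X_0,X_j]=0$, so (H1) and (H2) hold, while (H3) is trivially satisfied since $X_{N+1}=0$; hence Theorem \ref{Thm} below applies.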
The local solvability result we are going to show is proved by means of a priori estimates.
Before giving the statement of the result we recall below a general definition of $H^s$ to $H^{s'}$ local solvability for a partial differential operator, where $H^s$ stands for the Sobolev space of order $s$.
\begin{definition}
Let P be a partial differential operator defined on an open set $\Omega \subseteq\mathbb{R}^n$. We say that P is $H^s$ to $H^{s'}$ locally solvable at $x_0\in\Omega$ if there exists a compact $K\subset \Omega$, with $x_0\in \mathring{K}=U$ (where $ \mathring{K}$ is the interior of $K$), such that for all $f\in H^s_{\mathrm{loc}}(\Omega)$ there is $u\in H^{s'}_{\mathrm{loc}}(\Omega)$ which solves $Pu=f$ in $U$.
\end{definition}
\begin{theorem}\label{Thm}
Let $P$ be the operator in \eqref{P} satisfying conditions (H1), (H2) and (H3). Then, for all $x_0\in\Omega$, $P$ is $L^2$ to $L^2$ locally solvable at $x_0$.
\end{theorem}
In order to get the result in Theorem \ref{Thm} it suffices to prove the following a priori estimate that we shall call \textit{solvability estimate}: \textit{For all $x_0\in \Omega$ there exists a compact set $K\subset\Omega$ containing $x_0$ in its interior $\mathring{K}=U$ and a positive constant $C$ such that, for all $\varphi\in C_0^\infty(U)$},
\begin{equation}\label{solvest}
\| P^* \varphi \| \geq C\| \varphi\| ,
\end{equation}
\textit{with $\| \cdot \| $ denoting the $L^2$-norm and $P^*$ the adjoint of $P$.}
In view of the well-known equivalence between $H^s$ to $H^{s'}$ local solvability and validity of suitable a priori estimates (see, for instance, \cite{L}), the proof of the theorem is mainly concerned in obtaining the inequality above for the operator $P^*$, giving as a consequence the local solvability of $P$ in the sense $L^2$ to $L^2$.
\begin{proof}[Proof of Theorem \ref{Thm}]
First note that, since $X_0$ is nondegenerate in $\Omega$, we can always find a change of coordinates such that $X_0(x,D)=D_1$. Note also that conditions (H1), (H2) and (H3) are still satisfied, since they are invariant under changes of coordinates. Therefore let us assume $X_0(x,D)=D_1$, and observe that $X_0^*=X_0$ and $X_{j}^*=X_{j}+d_{j}$, where $d_{j}=-i\mathsf{div}(iX_j)$.
We now pick an arbitrary point $x_0\in \Omega$ and start the proof of the solvability inequality by estimating the term
\begin{equation}
\begin{gathered}
2\mathsf{Re}(P^*\varphi, -i\underbrace{X_0^*}_{=X_0}\varphi)= 2\sum_{j=1}^N\mathsf{Re}(X_j^*f_jX_j\varphi, -iX_0^*\varphi)+2\mathsf{Re}(-iX_0^*\varphi, -iX_0^*\varphi)\\
+2\mathsf{Re}(X_{N+1}\varphi, -iX_0^*\varphi)+2\mathsf{Re}(\overline{a_0}\,\varphi, -iX_0^*\varphi)\\
\geq\underbrace{\sum_{j=1}^N2\mathsf{Re}(X_j^*f_jX_j\varphi, -iX_0^*\varphi)}_{(\ref{est1}.1)}+2\| X_0\varphi\| ^2\\
\label{est1}
+\underbrace{2\mathsf{Re}(X_{N+1}\varphi, -iX_0\varphi)}_{(\ref{est1}.2)}-\frac {1}{\delta_0}\| \overline{a_0}\| ^2_{L^\infty(K)}\| \varphi\| ^2-\delta_0\| X_0\varphi\| ^2,
\end{gathered}
\end{equation}
for all $\varphi \in C_0^\infty(K)$, where $K$ is a compact set in $\Omega$ containing $x_0$ in its interior and $\delta_0$ is a positive constant that will be chosen later. We then consider the terms $(\ref{est1}.1)$ and $(\ref{est1}.2)$ separately.
For the term $(\ref{est1}.1)$ we have that, for all $\varphi\in C_0^\infty(K)$,
\begin{equation} \label{term1}
\begin{gathered}
\sum_{j=1}^N2\mathsf{Re}(X_j^*f_jX_j\varphi, -iX_0\varphi)=\sum_{j=1}^N\left[(X_j^*f_jX_j\varphi, -iX_0\varphi)+(-iX_0\varphi,X_j^*f_jX_j\varphi)\right]\\
=\sum_{j=1}^N\left[(iX_0X_j^*f_jX_j\varphi, \varphi)+(-iX_0\varphi,X_j^*f_jX_j\varphi)\right]\\
=\sum_{j=1}^N([iX_0,X_j^*f_jX_j]\varphi,\varphi).
\end{gathered}
\end{equation}
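Observe that, by (H2) and since $X_0=D_1$, the coefficients of the $X_j$, and hence the functions $d_j$, do not depend on $x_1$; therefore $[iX_0,X_j^*f_jX_j]=X_j^*\,(iX_0f_j)\,X_j$, and condition (H1) yields
$$\sum_{j=1}^N([iX_0,X_j^*f_jX_j]\varphi,\varphi)=\sum_{j=1}^N\bigl((iX_0f_j)X_j\varphi,X_j\varphi\bigr)\geq 0,\qquad \forall\varphi\in C_0^\infty(K).$$
This is the sign information that, combined with (H3), will allow the application of the Fefferman-Phong inequality below.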
For the term $(\ref{est1}.2)$ we have, for all $\varphi\in C_0^\infty(K)$,
\begin{equation*}
\begin{gathered}
2\mathsf{Re}(X_{N+1}\varphi, -iX_0\varphi)=(X_{N+1}\varphi, -iX_0\varphi)+(-iX_0\varphi,X_{N+1}\varphi)\\
=([iX_0,X_{N+1}]\varphi,\varphi)+(iX_0\varphi,X_{N+1}^*\varphi)-(iX_0\varphi,X_{N+1}\varphi)\\
=([iX_0,X_{N+1}]\varphi,\varphi)+(iX_0\varphi,d_{N+1}\varphi),
\end{gathered}
\end{equation*}
whence,
\begin{equation}\label{term2}
\begin{gathered}
2\mathsf{Re}(X_{N+1}\varphi, -iX_0\varphi)
=\mathsf{Re}\bigl(([iX_0,X_{N+1}]\varphi,\varphi)\bigl)+\mathsf{Re}\bigl((iX_0\varphi,d_{N+1}\varphi)\bigl)\\
\geq -\delta_1\| [X_0,X_{N+1}]\varphi\| ^2-\frac{1}{\delta_1}\| \varphi\| ^2 -\delta_2\| X_0\varphi\| ^2-\frac{1}{\delta_2}\| d_{N+1}\| ^2_{L^\infty(K)}\| \varphi\| ^2,
\end{gathered}
\end{equation}
where $\delta_1$ and $\delta_2$ are two positive constants that will be chosen later.
Therefore, by \eqref{term1} and \eqref{term2}, we get
\begin{equation}
\label{est2}
\begin{gathered}
2\mathsf{Re}(P^*\varphi, -iX_0^*\varphi)\geq \sum_{j=1}^N([iX_0,X_j^*f_jX_j]\varphi,\varphi)+\| X_0\varphi\| ^2\\
-\delta_1\| [X_0,X_{N+1}]\varphi\| ^2+(1-\delta_0-\delta_2)\| X_0\varphi\| ^2-\left[\frac{1}{\delta_2}\| d_{N+1}\| ^2_{L^\infty(K)}+\frac {1}{\delta_0}\| \overline{a_0}\| ^2_{L^\infty(K)}+\frac{1}{\delta_1}\right] \| \varphi\| ^2.
\end{gathered}
\end{equation}
We now write $\| X_0\varphi\| ^2=(X_0 \varphi,X_0\varphi)=(X_0^2\varphi,\varphi)$ and, similarly, $\| [X_0,X_{N+1}]\varphi\| ^2=([X_0,X_{N+1}]^*[X_0,X_{N+1}]\varphi,\varphi)$, so that the first three terms on the righthand side of \eqref{est2} can be written as
\begin{equation}
\label{P'}
\Big((\sum_{j=1}^N[iX_0,X_j^*f_jX_j]+X_0^2-\delta_1[X_0,X_{N+1}]^*[X_0,X_{N+1}])\varphi,\varphi\Big):=(P'\varphi,\varphi),
\end{equation}
and
\begin{equation}
\label{ReP}
\begin{gathered}
2\mathsf{Re}(P^*\varphi, -i\underbrace{X_0}_{=X_0^*}\varphi)\geq(P'\varphi,\varphi)+(1-\delta_0-\delta_2)\| X_0\varphi\| ^2\\
-\left[\frac{1}{\delta_2}\| d_{N+1}\| ^2_{L^\infty(K)}+\frac {1}{\delta_0}\| \overline{a_0}\| ^2_{L^\infty(K)}+\frac{1}{\delta_1}\right] \| \varphi\| ^2.
\end{gathered}
\end{equation}
Our next goal is to prove that the operator $P'$ given in \eqref{P'}, that is $P'=\sum_{j=1}^N[iX_0,X_j^*f_jX_j]+X_0^2-\delta_1[X_0,X_{N+1}]^*[X_0,X_{N+1}]$, satisfies the Fefferman-Phong inequality in a compact set of $\Omega$ containing $x_0$ in its interior. This will give a control on the term \eqref{P'}, a control being necessary in order to get the solvability estimate \eqref{solvest}.
To prove the inequality for $P'$ we shall proceed as follows: we will define an operator $A$ which extends $P'$ globally and prove the Fefferman-Phong inequality for the latter. As a consequence we will get the same result for $P'$ in a suitable compact set containing $x_0$ in its interior.
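Let us recall the statement of the Fefferman-Phong inequality in the form we use it: if $a\in S^2(\mathbb{R}^n\times\mathbb{R}^n)$ is real-valued and bounded from below, say $a(x,\xi)\geq -c$ for some constant $c\geq 0$, then there exists a positive constant $C$ such that
$$(a^w(x,D)u,u)\geq -C\| u\| ^2,\qquad \forall u\in C_0^\infty(\mathbb{R}^n),$$
where $a^w(x,D)$ denotes the Weyl quantization of the symbol $a$.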
Let us consider a sequence of compact sets $K_0\subseteq K_0' \Subset K_1\subseteq K_1'\subset \Omega$ such that $K_0$ contains $x_0$ in its interior and condition (H3) is satisfied in $K_1'$. Let $\chi_0$ and $\chi_1$ be two functions such that $\chi_\ell\in C_0^\infty(K_\ell')$, $\chi_\ell\equiv 1$ in $K_\ell$ and $0\leq \chi_\ell\leq 1$ in $K_\ell'$ for $\ell=0,1$, and take the operators $\tilde{Y}$, $\tilde{X}_0$, $\tilde{X}_j$, $1\leq j\leq N$, of the form $\tilde{Y}(x,D)=\chi_1(x)[X_0,X_{N+1}](x,D)$, $\tilde{X_0}(x,D)=\chi_1(x)X_0(x,D)$ and $\tilde{X}_j(x,D)=\chi_1(x)X_j(x,D)$ for $1\leq j\leq N$ respectively. We now define the operator $A$ as the Weyl quantization of the symbol $a(x,\xi)\in S^2(\mathbb{R}^n\times\mathbb{R}^n)$ given by
$$ a(x,\xi)=\chi_0\Big(\sum_{j=1}^N\frac1 i\{ip_0,\overline{p}_j\#f_j\#p_j\}+p_0\#p_0-\delta_1\overline{q}\#q\Big),$$
with $q(x,\xi)$ and $p_j(x,\xi)$, $0\leq j\leq N$, denoting the Weyl symbols of the operators $\tilde{Y}$ and $\tilde{X}_j$ respectively, and compute the symbol $a$ (which is such that $a|_{\pi^{-1}(K_0)}=p'|_{\pi^{-1}(K_0)}$ with $p'$ Weyl symbol of $P'$) by means of the Weyl calculus of pseudo-differential operators.
Since $p_j(x,\xi)=p^1_j(x,\xi)+ip_j^0(x,\xi)$, with $p_j^1(x,\xi)=\chi_1(x)X_j(x,\xi)\in S^1(\mathbb{R}^n\times\mathbb{R}^n)$ and $p_j^0(x,\xi)=p_j^0(x)\in S^0(\mathbb{R}^n\times\mathbb{R}^n)$ (with $p_0^0(x)=0$), we have
\begin{gather*}
\overline{p}_j\#f_j\#p_j=\overline{p}_j\#(f_jp_j+\frac {1}{2 i} \{f_j,p_j\})\\
=f_j\overline{p}_jp_j+\frac {1}{2 i} \{\overline{p}_j,f_jp_j \}+\frac {1}{2 i} \overline{p}_j\{f_j,p_j\}+\frac{1}{2 i}\big\{ \overline{p}_j,\frac {1}{2 i} \{f_j,p_j\} \big\}\\
=f_j|p_j|^2+\frac {1}{2 i} \{\overline{p}_j,f_j\}p_j +\frac {1}{2 i} f_j\{\overline{p}_j,p_j \}+\frac {1}{2 i} \overline{p}_j\{f_j,p_j\}+\frac {1}{2 i}\big\{ \overline{p}_j,\frac {1}{2 i} \{f_j,p_j\} \big\},\\
=f_j|p_j|^2+\frac {1}{2 i} \{\overline{p}_j,f_j\}p_j +\frac {1}{2 i} \overline{p}_j\{f_j,p_j\}+r_0\\
=f_j|p_j|^2+\frac {1}{2 i} \{p^1_j-ip_j^0,f_j\}(p^1_j+ip_j^0) +\frac {1}{2 i} (p^1_j-ip_j^0)\{f_j,p^1_j+ip_j^0\}+r_0\\
\underset{(\{f_j,p_j^0\}=0)}{=}f_j((p^1_j)^2+(p^0_j)^2)+r_0\\
=f_j({p^1_j})^2+r_0,
\end{gather*}
where we denoted by $r_0=r_0(x)$ a (new) smooth compactly supported function with support in $\Omega$.
We then have
\begin{gather*}
\chi_0\sum_{j=1}^N\frac1 i\{ip_0,\overline{p}_j\#f_j\#p_j\}=\chi_0\sum_{j=1}^N\{p_0,f_j(p^1_j)^2\}+\chi_0r_0\\
=\chi_0\sum_{j=1}^N \{p_0,f_j\}(p^1_j)^2+\underbrace{\chi_0\sum_{j=1}^Nf_j\{p_0,(p^1_j)^2\}}_{=0}+\chi_0r_0,
\end{gather*}
where $\chi_0\sum_{j=1}^Nf_j\{p_0,(p^1_j)^2\}=0$ since (by condition (H2), that is $\{X_0,X_j\}(x,\xi)=0$) we have
$$\chi_0\sum_{j=1}^Nf_j\{p_0,(p^1_j)^2\}(x,\xi)= 2\chi_0\sum_{j=1}^N f_j \Big( \chi_1 X_j\{X_0,\chi_1\}+\chi_1 X_0\{\chi_1,X_j\}\Big)\chi_1X_j,$$
(with $X_j=X_j(x,\xi)$ symbols of $X_j(x,D)$), and therefore, since $\mathrm{supp}\,\chi_0\bigcap \mathrm{supp} \{X_j,\chi_1\}=\emptyset$, $j=0,1,$ we get that the quantity above is identically zero.
Then, as $p_0\#p_0=p_0^2$ and $\overline{q}\#q=q^2+r_0$, we have that
\begin{equation}
\label{FP}
\begin{gathered}
a(x,\xi)=\chi_0\big(\sum_{j=1}^N \{p_0,f_j\}(p_j^1)^2+p_0^2-\delta_1\,q^2+r_0\big)\\
=\chi_0(x)\left(\sum_{j=1}^N \{X_0,f_j\}(x)X_j(x,\xi)^2+X_0(x,\xi)^2-\delta_1\{X_0,X_{N+1}\}^2(x,\xi)+r_0\right),
\end{gathered}
\end{equation}
whence, by choosing $\delta_1$ sufficiently small and using hypotheses (H1) and (H3) (the latter being satisfied on $K_0'\subset K_1'$), we have from \eqref{FP} that there exists a positive constant $c$ such that $a(x,\xi)\geq -c$, and hence $A$ satisfies the Fefferman-Phong inequality.
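Recall indeed that the Fefferman-Phong inequality asserts that if $b\in S^2(\mathbb{R}^n\times\mathbb{R}^n)$ is real-valued and $b\geq 0$, then $(b^w(x,D)u,u)\geq -C\|u\|^2$ for all $u\in C_0^\infty(\mathbb{R}^n)$; applying this to $b=a+c\geq 0$ yields
\begin{equation*}
(Au,u)=((a+c)^w(x,D)u,u)-c\|u\|^2\geq -(C+c)\|u\|^2.
\end{equation*}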
Finally, since $(A\varphi, \varphi)=(P'\varphi, \varphi)$ for all $\varphi \in C_0^{\infty}(K_0)$, we conclude that $P'$ satisfies the Fefferman-Phong inequality on $K_0$, that is, explicitly, there exists a positive constant $C$ such that, for all $\varphi \in C_0^\infty (K_0)$, $(P'\varphi,\varphi)\geq -C\| \varphi\| ^2$.
Now, denoting by $K$ the compact set containing $x_0$ in its interior on which the Fefferman-Phong inequality for $P'$ holds, we have that \eqref{est2} is satisfied for all $\varphi\in C_0^\infty(K)$ (note that \eqref{est2} holds on each compact subset of $\Omega$) and we have from \eqref{ReP}
\begin{equation*}
2\mathsf{Re}(P^*\varphi, -i\underbrace{X_0}_{=X_0^*}\varphi)\geq (1-\delta_0-\delta_2)\| X_0 \varphi\| ^2 -\left[\frac{1}{\delta_2}\| d_{N+1}\| ^2_{L^\infty(K)}+\frac {1}{\delta_0}\| \overline{a_0}\| ^2_{L^\infty(K)}+\frac{1}{\delta_1}+C\right] \| \varphi\| ^2,
\end{equation*}
where $\delta_1$ is fixed here in order to have the Fefferman-Phong inequality for $P'$ in $K$ whereas $C$ is the related constant.\\
Since $2\mathsf{Re}(P^*\varphi, -iX_0^*\varphi)\leq \delta_3\| X_0\varphi\| ^2+\frac{1}{\delta_3}\| P^*\varphi\| ^2$, we find
$$ \frac{1}{\delta_3}\| P^*\varphi\| ^2 \geq (1-\delta_0-\delta_2-\delta_3)\| X_0 \varphi\| ^2 - C(K,\delta_0,\delta_2)\| \varphi\| ^2.$$
We next choose $\delta_j$, $j=0,2,3$, sufficiently small so that $(1-\delta_0-\delta_2-\delta_3)\geq 1/2,$ and get
$$\| P^*\varphi\| ^2\geq C_1 \| X_0\varphi\| ^2-C_2 \| \varphi\| ^2,$$
where the constants $C_1$ and $C_2$ are fixed now.
Finally, by applying a Poincar\'e inequality on $X_0$ (which is nondegenerate), and possibly by shrinking the compact $K$ around $x_0$ to a compact that we keep denoting by $K$, we have that there exists a positive constant $C$ such that, for all $\varphi\in C_0^\infty(K)$, one has
$$\| P^*\varphi\| ^2\geq C \| \varphi\| ^2,$$
which is the solvability estimate we were looking for. This concludes the proof.
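(For concreteness, the Poincar\'e inequality used in the last step reads, in the model case $X_0(x,D)=D_1$ and for $K\subset\{|x_1|\leq\varepsilon\}$,
\begin{equation*}
\|\varphi\|^2\leq 4\varepsilon^2\|D_1\varphi\|^2,\qquad \varphi\in C_0^\infty(K),
\end{equation*}
so that, for $\varepsilon$ small, the term $C_1\|X_0\varphi\|^2$ absorbs $C_2\|\varphi\|^2$; a general nondegenerate $X_0$ reduces to this model case by a local change of coordinates rectifying the field.)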
\end{proof}
\begin{remark}\label{FinalRmk}
In the proof of Theorem \ref{Thm} we exploited conditions (H1), (H2) and (H3) in order to ensure that the symbol $p'(x,\xi)$ (to be precise, its global extension $a(x,\xi)$) satisfies the hypotheses needed to apply the Fefferman-Phong inequality to $P'$. The latter inequality applied in \eqref{ReP}, together with a Poincar\'e inequality on $X_0$ whose validity is granted by condition (H1) (i.e. $X_0(x,D)$ is nondegenerate in $\Omega$), gives the control of the second order quantity $(P'\varphi,\varphi)$ and the absorption of the $L^2$-errors. This means that conditions (H1), (H2) and (H3) are sufficient to get the solvability result for $P$ in $\Omega$. However, these hypotheses are not necessary for the $L^2$ to $L^2$ local solvability to hold. In fact, if no conditions are imposed on the vector fields involved in the expression of $P$, then $a(x,\xi)$ is of the form
\begin{equation}\label{FP1}
\begin{gathered}
a(x,\xi)=\chi_0\Big(\sum_{j=1}^N\left( \{p_0,f_j\}(p_j^1)^2+2f_j\{p_0,p_j^1\}p_j^1\right)+p_0^2-\delta_1q^2+r_0\Big)\\
=\chi_0\left(\sum_{j=1}^N\left( \{X_0,f_j\}X_j^2+2f_j\{X_0,X_j\}X_j\right)+X_0^2-\delta_1\{X_0,X_{N+1}\}^2+r_0\right).
\end{gathered}
\end{equation}
Hence, if one can have the Fefferman-Phong inequality for $A$ (global extension of $P'$) being the Weyl quantization of the quantity above, then, still requiring the nondegeneracy of $X_0$, one can find the solvability result of Theorem \ref{Thm} by the same previous technique.
Note finally that, when stronger conditions are satisfied, that is, when $P'$ is such that it satisfies the G\aa rding, the Melin or the Rothschild-Stein sharp subelliptic inequality, then one can get (starting from \eqref{ReP}) a better local solvability result, meaning that one can have $H^s$ to $H^{s'}$ local solvability with $(s,s')=(-1,0)$, $(s,s')=(-1/2,0)$ and $(s,s')=(-1/r,0)$, with $r>1$ integer, respectively. In fact we have that there exists a compact set $K\subset\Omega$ containing $x_0$ in its interior and a positive constant $C$ such that $(P'\varphi, \varphi)\geq C \| \varphi \|^2 _{-s}$, with $-s=1,1/2, 1/r$, if $P'$ satisfies the G\aa rding, the Melin or the Rothschild-Stein inequality respectively. We then have from \eqref{ReP}
\begin{gather*}
\delta_3\| X_0\varphi\| ^2+\frac{1}{\delta_3}\| P^*\varphi\| ^2\geq 2\mathsf{Re}(P^*\varphi, -iX_0\varphi)\geq C\| \varphi\| ^2_{-s}+(1-\delta_0-\delta_2)\| X_0\varphi\| ^2\\
-\left[\frac{1}{\delta_2}\| d_{N+1}\| ^2_{L^\infty(K)}+\frac {1}{\delta_0}\| \overline{a_0}\| ^2_{L^\infty(K)}+\frac{1}{\delta_1}\right] \| \varphi\| ^2,
\end{gather*}
hence, as before, by suitably choosing the constants $\delta_j$, $j=0,1,2,3$, and absorbing the $L^2$-errors by means of the $\|\cdot\|_{-s}$ term, one obtains $H^{s}$ to $H^0$ local solvability.
\end{remark}
\section{EXAMPLES OF OPERATORS IN THE CLASS}
In this section we shall give some examples of operators in the class \eqref{P}.
\begin{example}
Consider the operator defined in $\mathbb{R}^n$, $n\geq 2$, of the form
\begin{gather*}P=x_1(D_1^2-D_2^2)+i(-D_1+D_2)+X(x,D)\\
=D_1x_1D_1-D_2x_1D_2+iD_2+X(x,D),
\end{gather*}
where $X(x,D)$ is a first order homogeneous partial differential operator with real smooth coefficients of the form
$$X(x,D)=g_1(x_1)D_1+g_2(x_1,x_2)D_2$$
when $n=2$, with $g_1\in C^\infty (\mathbb{R})$, $g_2\in C^\infty (\mathbb{R}^2)$, and of the form
$$X(x,D)=g_1(x_1,x_3,\dots,x_n)D_1+g_2(x)D_2+\sum_{j=3}^ng_j(x_1,x_3,\dots,x_n)D_j,$$
when $n\geq3$, with $g_2\in C^\infty (\mathbb{R}^n;\mathbb{R})$ and $g_j\in C^\infty(\mathbb{R}^{n-1};\mathbb{R})$ for $j=1$ and $3\leq j\leq n$.\\
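The equality of the two expressions for $P$ above follows from the commutator identity $[D_1,x_1]=\frac{1}{i}$, which gives
\begin{equation*}
D_1x_1D_1=x_1D_1^2-iD_1,\qquad D_2x_1D_2=x_1D_2^2,
\end{equation*}
so that $D_1x_1D_1-D_2x_1D_2+iD_2=x_1(D_1^2-D_2^2)+i(-D_1+D_2)$.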
It is easy to check that conditions (H1), (H2) and (H3) are satisfied by $P$, hence, by the theorem above, we have $L^2$ to $L^2$ local solvability for $P$ at each point of $\mathbb{R}^n$, $n\geq 2$.
\end{example}
\begin{example}
Consider the operator defined in $\mathbb{R}^{n+1}$ of the form
$$P=\sum_{j=1}^k D_j x_j^p D_j\pm\sum_{j'=k+1}^nD_{j'} x_{j'}^{p'} D_{j'}+if(t)D_t+\sum_{\ell=1}^ng_\ell(x)D_\ell,$$
where $D_j=D_{x_j}$, $k\leq n$ is a positive integer, $p,p'\in \mathbb{N}$, and $g_\ell$ and $f$ are real smooth functions with $f$ nonvanishing. Again, conditions (H1), (H2) and (H3) are satisfied; therefore $P$ is $L^2$ to $L^2$ locally solvable in $\mathbb{R}^{n+1}$.
\end{example}
\begin{example}
Let $n+1\geq 3$ and
$$X_1(x,t,D_x,D_t)=g(x_1)D_1+D_2;\quad X_2(x,t,D_x,D_t)=D_t+h(x)D_2; $$
$$X_3(x,t,D_x, D_t)=\sum_{j=1}^n k_j(x)D_{j}+k_{n+1}(x,t)D_t;\quad X_0(x,t,D_x, D_t)=D_t,$$
where $D_j=D_{x_j}$, $g\in C^\infty(\mathbb{R};\mathbb{R})$, $h\in C^\infty(\mathbb{R}^n;\mathbb{R}), k_i \in C^\infty(\mathbb{R}^n;\mathbb{R}),$ for all $i=1,...,n,$ and $k_{n+1}\in C^\infty(\mathbb{R}^{n+1};\mathbb{R})$. Let $P$ be
$$P= X_1^* t^{2p+1}X_1+X_2^* t^{2p'+1}X_2+iX_0+X_3,$$
with $p$ and $p'$ positive integers. Then, again, the hypotheses of Theorem \ref{Thm} apply and we get $L^2$ to $L^2$ local solvability for $P$ at each point of $\mathbb{R}^{n+1}$. Note that in this example the vector fields $X_1$ and $X_2$ do not form an involutive distribution. This shows that the class \eqref{P} generalizes the class of mixed type operators in \cite{FP1}, where the presence of a single function $f_j=f$ for all $1\leq j\leq N$ is required and a strict sign condition of the form $iX_0(x,D)f>0$ on $f^{-1}(0)$ is needed. Moreover, compared with the class of Schr\"{o}dinger type operators in \cite{FP1}, where the presence of several functions $f_j$ is allowed and $X_0, X_{N+1}$ are assumed to be such that $X_0\equiv 0$ and $X_{N+1}\not\equiv 0$, here we do not require any involutive structure of the vector fields $X_j$, $1\leq j\leq N$, whereas an involutivity property is assumed in \cite{FP1}.
\end{example}
\begin{example}
This last example shows that there are cases in which one can obtain a better kind of local solvability for operators in the class \eqref{P}.
Consider in $\mathbb{R}^2$ the operator
$$P=D_1x_1D_1-D_2x_2D_2+i(D_1-D_2)+x_2D_1$$
$$=x_1D_1^2-x_2D_2^2+x_2D_1,$$
where
$$X_1(x,D)=D_1,\,\, X_2(x,D)=D_2,\,\, X_0(x,D)=D_1-D_2,\,\, X_3(x,D)=x_2D_1,$$
$$ f_1(x)=x_1,\,\,f_2(x)=-x_2,$$
and note that conditions (H1), (H2) and (H3) are satisfied. Moreover $P'$ is such that its Weyl symbol is given by (see \eqref{P'} and \eqref{FP1})
$$p'(x,\xi)=\xi_1^2+\xi_2^2+(\xi_1-\xi_2)^2-\delta_1\xi_1^2,$$
whence (by choosing $\delta_1$ sufficiently small) $P'$ satisfies the G\aa rding inequality and by Remark \ref{FinalRmk} we have that $P$ is $H^{-1}$ to $L^2$ locally solvable at each point of $\mathbb{R}^2$.
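Indeed, for $0<\delta_1<1$ one has the elementary bound
\begin{equation*}
p'(x,\xi)=(1-\delta_1)\xi_1^2+\xi_2^2+(\xi_1-\xi_2)^2\geq(1-\delta_1)|\xi|^2,
\end{equation*}
so that $p'$ is elliptic of order $2$ and the G\aa rding inequality $(P'\varphi,\varphi)\geq c\|\varphi\|_{1}^2-C\|\varphi\|^2$ holds, with the $L^2$-error term absorbed as in Remark \ref{FinalRmk}.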
To conclude we just want to say that the same result as before holds for the operator
$$P=D_1(2x_1)D_1-D_2(2x_2)D_2+i(D_1-D_2)+x_2D_1$$
$$=2x_1D_1^2-2x_2D_2^2-i(D_1-D_2)+x_2D_1,$$
whose symbol $p'$ is
$$p'(x,\xi)=2\xi_1^2+2\xi_2^2+(\xi_1-\xi_2)^2-\delta_1\xi_1^2.$$
\end{example}
\section{Introduction}
Reinforcement learning (RL) has achieved impressive performance on many continuous control tasks~\cite{schulman2015high,lillicrap2015continuous}, and
policy optimization is one of the main workhorses for such applications \cite{duan2016benchmarking,sutton2000policy,schulman2015trust,schulman2017proximal}.
Recently, there have been extensive research efforts studying the global convergence properties of policy optimization methods on benchmark control problems including linear quadratic regulator (LQR)~\cite{pmlr-v80-fazel18a,bu2019lqr,malik2019derivative,yang2019provably,mohammadi2021convergence,furieri2020learning,hambly2021policy}, stabilization \cite{perdomo2021stabilizing,ozaslan2022computing}, linear robust/risk-sensitive control~\cite{zhang2021policy,zhang2020stability,gravell2020learning,zhang2021derivative,zhao2021primal,cui2022mixed}, Markov jump linear quadratic control~\cite{jansch2020convergence,jansch2020policyMDP,jansch2020policy,rathod2021global}, Lur'e system control~\cite{qu2021exploiting}, output feedback control~\cite{fatkhullin2020optimizing,zheng2021analysis,li2021distributed,duan2021optimization,duan2022optimization,mohammadi2021lack,zheng2022escaping}, and dynamic filtering \cite{umenberger2022globally}.
For all these benchmark problems, the objective function in the policy optimization formulation is always differentiable over the entire feasible set, and the existing convergence theory heavily relies on this fact.
Consequently, an important open question remains whether direct policy search can enjoy similar global convergence properties when applied to the famous $\mathcal{H}_\infty$ control problem whose objective function can be non-differentiable over certain points in the policy space \cite{apkarian2006controller,apkarian2006nonsmooth,arzelier2011h2,gumussoy2009multiobjective,burke2020gradient,curtis2017bfgs,noll2005spectral}.
Different from LQR which considers stochastic disturbance sequences, $\mathcal{H}_\infty$ control directly addresses the worst-case disturbance, and provides arguably the most fundamental robust control paradigm \cite{zhou96,Dullerud99,skogestad2007multivariable,basar95,doyle1988state,Gahinet1994}.
Regarding the connection with RL, it has also been shown that $\mathcal{H}_\infty$ control can be applied to stabilize the training of adversarial RL schemes in the linear quadratic setup \cite[Section 5]{zhang2020stability}.
Given the fundamental importance of $\mathcal{H}_\infty$ control, we view it as an important benchmark for understanding the theoretical properties of direct policy search in the context of robust control and adversarial RL. In this work, we study and prove the global convergence properties of direct policy search on the $\mathcal{H}_\infty$ state-feedback synthesis problem.
The objective of the $\mathcal{H}_\infty$ state-feedback synthesis is to design a linear state-feedback policy that stabilizes the closed-loop system and minimizes the $\mathcal{H}_\infty$ norm from the disturbance to a performance signal at the same time.
The design goal is also equivalent to synthesizing a state-feedback policy that minimizes a quadratic cost subject to the worst-case disturbance. We will present the problem formulation for the $\mathcal{H}_\infty$ state-feedback synthesis and discuss such connections in Section~\ref{sec:PF}. Essentially, $\mathcal{H}_\infty$ state-feedback synthesis can be formulated as a constrained policy optimization
problem $\min_{K\in\mathcal{K}} J(K)$, where the decision variable $K$ is a matrix parameterizing the linear state-feedback policy, the objective function $J(K)$ is the closed-loop $\mathcal{H}_\infty$-norm for given $K$, and the feasible set $\mathcal{K}$ consists of all the linear state-feedback policies stabilizing the closed-loop dynamics. Notice that the feasible set for the $\mathcal{H}_\infty$ state-feedback control problem is the same as the nonconvex feasible set for the LQR policy search problem~\cite{pmlr-v80-fazel18a,bu2019lqr}. However, the objective function $J(K)$ for the $\mathcal{H}_\infty$ control problem
can be non-differentiable at certain feasible points, introducing a new difficulty for direct policy search. There has been a large family of nonsmooth $\mathcal{H}_\infty$ policy search algorithms developed based on the concept of Clarke subdifferential \cite{apkarian2006controller,apkarian2006nonsmooth,arzelier2011h2,gumussoy2009multiobjective,burke2020gradient,curtis2017bfgs}.
However, a satisfying global convergence theory is still missing from the literature. Our paper bridges this gap by making the following two contributions.
\begin{enumerate}
\item We show that all Clarke stationary points for the $\mathcal{H}_\infty$ state-feedback policy search problem are also global minimum.
\item We identify the coerciveness of the $\mathcal{H}_\infty$ cost function and use this property to show that Goldstein's subgradient method \cite{goldstein1977optimization} and its implementable variants \cite{pmlr-v119-zhang20p, davis2021gradient,burke2020gradient,burke2005robust,kiwiel2007convergence,kiwiel2010nonderivative} can be guaranteed to stay in the nonconvex feasible set of stabilizing policies during the optimization process and eventually find the global optimal solution of the $\mathcal{H}_\infty$ state-feedback control problem. Finite-time complexity bounds for finding $(\delta,\epsilon)$-stationary points are also provided.
\end{enumerate}
Our work sheds new light on the theoretical properties of policy optimization methods on $\mathcal{H}_\infty$ control problems, and serves as a meaningful initial step towards a general global convergence theory of direct policy search on nonsmooth robust control synthesis.
Finally, it is worth clarifying the differences between $\mathcal{H}_\infty$ control and mixed $\mathcal{H}_2/\mathcal{H}_\infty$ design. For mixed $\mathcal{H}_2/\mathcal{H}_\infty$ control, the objective is to design a stabilizing policy that minimizes an $\mathcal{H}_2$ performance bound and satisfies an $\mathcal{H}_\infty$ constraint at the same time \cite{glover1988state,khargonekar1991mixed,kaminer1993mixed,mustafa1991lqg}. In other words, mixed $\mathcal{H}_2/\mathcal{H}_\infty$ control aims at improving the average $\mathcal{H}_2$ performance while ``maintaining" a certain level of robustness by keeping the closed-loop $\mathcal{H}_\infty$ norm to be smaller than a pre-specified number. In contrast, $\mathcal{H}_\infty$ control aims at ``improving" the system robustness and the worst-case performance via achieving the smallest closed-loop $\mathcal{H}_\infty$ norm. In \cite{zhang2021policy}, it has been shown that the natural policy gradient method initialized from a policy satisfying the $\mathcal{H}_\infty$ constraint can be guaranteed to maintain the $\mathcal{H}_\infty$ requirement during the optimization process and eventually converge to the optimal solution of the mixed design problem. However, notice that the objective function for the mixed $\mathcal{H}_2/\mathcal{H}_\infty$ control problem is still differentiable over all the feasible points, and hence the analysis technique in \cite{zhang2021policy} cannot be applied to our $\mathcal{H}_\infty$ control setting.
More discussions on the connections and differences between these two problems will be given in the supplementary material.
\section{Problem Formulation and Preliminaries}
\label{sec:PF}
\subsection{Notation}
The set of $p$-dimensional real vectors is denoted as $\field{R}^p$.
For a matrix $A$, we use the notation \(A^\mathsf{T}\), \( \|A\| \), \(\tr{A}\), \(\sigma_{\min}(A)\), \(\norm{A}_2\), and \(\rho(A) \) to denote its transpose, largest singular value, trace, smallest singular value, Frobenius norm, and spectral radius, respectively.
When a
matrix $P$ is negative semidefinite (definite), we will use the notation $P \preceq (\prec) 0$. When $P$ is positive
semidefinite (definite), we use the notation $P \succeq (\succ) 0$.
Consider a (real) sequence $\mathbf{u}:=\{u_0,u_1,\cdots\}$ where
$u_t \in \field{R}^{n_u}$ for all $t$. This sequence is said to be in $\ell_2^{n_u}$
if $ \sum_{t=0}^\infty \| u_t\|^2<\infty$ where $\|u_t\|$ denotes
the standard (vector) 2-norm of $u_t$. In addition, the $2$-norm for
$\mathbf{u} \in \ell_2^{n_u}$ is defined as
$\|\mathbf{u}\|^2:=\sum_{t=0}^\infty \| u_t\|^2$.
\subsection{Problem statement: $\mathcal{H}_\infty$ state-feedback synthesis and a policy optimization formulation}
We consider the following linear time-invariant (LTI) system
\begin{align}\label{eq:lti1}
x_{t+1}=Ax_t+Bu_t+w_t, \,\,x_0=0
\end{align}
where $x_t\in\field{R}^{n_x}$ is the state, $u_t\in\field{R}^{n_u}$ is the control action, and $w_t\in\field{R}^{n_w}$ is the disturbance. We have $A\in \field{R}^{n_x\times n_x}$, $B\in \field{R}^{n_x\times n_u}$, and $n_w=n_x$.
We denote $\mathbf{x}:=\{x_0,x_1,\cdots\}$, $\mathbf{u}:=\{u_0,u_1,\cdots\}$, and $\mathbf{w}:=\{w_0, w_1, \cdots\}$.
The initial condition is fixed as $x_0=0$.
The objective of $\mathcal{H}_\infty$ control is to choose $\{u_t\}$ to minimize the quadratic cost $\sum_{t=0}^\infty (x_t^\mathsf{T} Q x_t+u_t^\mathsf{T} R u_t)$ in the presence of the worst-case $\ell_2$ disturbance satisfying $\norm{\mathbf{w}}\le 1$. In this paper, the following assumption is adopted.
\begin{assumption}\label{assump1}
The matrices $Q$ and $R$ are positive definite. The matrix pair $(A,B)$ is stabilizable.
\end{assumption}
In $\mathcal{H}_\infty$ control, $\{w_t\}$ is considered to be the worst-case disturbance satisfying the $\ell_2$ norm bound $\norm{\mathbf{w}}\le 1$, and can be chosen in an adversarial manner.
This is different from LQR which makes stochastic assumptions on $\{w_t\}$.
Without loss of generality, we have chosen the $\ell_2$ upper bound on $\mathbf{w}$ to be $1$. In principle, we can formulate the $\mathcal{H}_\infty$ control problem with any arbitrary $\ell_2$ upper bound on $\mathbf{w}$, and there is no technical difference. We will provide more explanations on this fact in the supplementary material.
Therefore, $\mathcal{H}_\infty$ control can be formulated as the following minimax problem
\begin{align}\label{eq:minmax}
\min_{\mathbf{u}}\max_{\mathbf{w}:\norm{\mathbf{w}}\le 1} \sum_{t=0}^\infty (x_t^\mathsf{T} Q x_t+ u_t^\mathsf{T} R u_t)
\end{align}
Under Assumption \ref{assump1}, it is well known that
the optimal solution for \eqref{eq:minmax} can be achieved using a linear state-feedback policy $u_t=-Kx_t$ (see \cite{basar95}).
Given any $K$,
the LTI system \eqref{eq:lti1} can be rewritten as
\begin{align}\label{eq:lti2}
x_{t+1}=(A-BK)x_t+w_t, \,x_0=0.
\end{align}
Now we define $z_t=(Q+K^\mathsf{T} R K)^{\frac{1}{2}}x_t$. We have $\norm{z_t}^2=x_t^\mathsf{T} (Q+K^\mathsf{T} R K) x_t=x_t^\mathsf{T} Q x_t+u_t^\mathsf{T} R u_t$.
We denote $\mathbf{z}:=\{z_0,z_1,\cdots\}$. If $\mathbf{x}\in \ell_2^{n_x}$, then we have $\norm{\mathbf{z}}^2=\sum_{t=0}^\infty (x_t^\mathsf{T} Q x_t+u_t^\mathsf{T} R u_t)<+\infty$.
Therefore, the closed-loop LTI system \eqref{eq:lti2} can be viewed as a linear operator mapping any disturbance sequence $\{w_t\}$ to another sequence $\{z_t\}$. We denote this operator as $G_K$, where the subscript highlights the dependence of this operator on $K$.
If $K$ is stabilizing, i.e. $\rho(A-BK)<1$, then $G_K$ is bounded in the sense that it maps any $\ell_2$ sequence $\mathbf{w}$ to
another sequence $\mathbf{z}$ in $\ell_2^{n_x}$. For any stabilizing $K$, the $\ell_2\rightarrow \ell_2$ induced norm of $G_K$ can be defined as:
\begin{align}
\norm{G_K}_{2\rightarrow 2}:=\sup_{0\neq \norm{\mathbf{w}}\le 1}\frac{\norm{\mathbf{z}}}{\norm{\mathbf{w}}}
\end{align}
Since $G_K$ is a linear operator, it is straightforward to show
\begin{align*}
\norm{G_K}_{2\rightarrow 2}^2=\max_{\mathbf{w}:\norm{\mathbf{w}}\le 1} \sum_{t=0}^\infty x_t^\mathsf{T} (Q+K^\mathsf{T} R K) x_t=\max_{\mathbf{w}:\norm{\mathbf{w}}\le 1} \sum_{t=0}^\infty (x_t^\mathsf{T} Q x_t+u_t^\mathsf{T} R u_t).
\end{align*}
Therefore, the minimax optimization problem \eqref{eq:minmax} can be rewritten as the policy optimization problem:
$\min_{K\in\mathcal{K}}\norm{G_K}_{2\rightarrow 2}^2$, where $\mathcal{K}$ is the set of all linear state-feedback stabilizing policies, i.e. $\mathcal{K}=\{K\in\field{R}^{n_x\times n_u}: \,\rho(A-BK)<1\}$. In the robust control literature \cite{apkarian2006controller,apkarian2006nonsmooth,arzelier2011h2,gumussoy2009multiobjective,burke2020gradient,curtis2017bfgs}, it is standard to drop the square in the cost function and just reformulate \eqref{eq:minmax} as $\min_{K\in\mathcal{K}} \norm{G_K}_{2\rightarrow 2}$. This is exactly the policy optimization formulation for $\mathcal{H}_\infty$ state-feedback control.
The main reason why
this problem is termed as $\mathcal{H}_\infty$ state-feedback control is that in the frequency domain, $G_K$ can be viewed as a transfer function which lives in the Hardy $\mathcal{H}_\infty$ space and has an $\mathcal{H}_\infty$ norm being exactly equal to $\norm{G_K}_{2\rightarrow 2}$.
Applying the frequency-domain formula for the $\mathcal{H}_\infty$ norm, we can calculate $\norm{G_K}_{2\rightarrow 2}$ as
\begin{align}\label{eq:hinfcost}
\norm{G_K}_{2\rightarrow 2}=\sup_{\omega\in[0, 2\pi]}\lambda_{\max}^{1/2}\big((e^{-j\omega}I-A+BK)^{-\mathsf{T}}(Q+K^{\mathsf{T}}RK)(e^{j\omega}I-A+BK)^{-1}\big),
\end{align}
where $I$ is the identity matrix, and $\lambda_{\max}$ denotes the largest eigenvalue of a given symmetric matrix.
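For concreteness, \eqref{eq:hinfcost} can be evaluated numerically by gridding the frequency variable. The Python sketch below does exactly this; the uniform grid is an illustrative simplification (standard software computes the $\mathcal{H}_\infty$ norm via bisection-type algorithms), and the function name \texttt{hinf\_cost} is ours. By convention the sketch returns $+\infty$ for non-stabilizing $K$, anticipating the blow-up of $J$ at the boundary of $\mathcal{K}$ established in Section \ref{sec:land}.
\begin{verbatim}
import numpy as np

def hinf_cost(A, B, K, Q, R, n_grid=2000):
    # Approximate J(K) by gridding omega over [0, 2*pi]; returns +inf
    # when K is not stabilizing, consistent with J blowing up at the
    # boundary of the feasible set.
    Acl = A - B @ K
    if np.max(np.abs(np.linalg.eigvals(Acl))) >= 1.0:
        return np.inf
    W = Q + K.T @ R @ K
    n, J = A.shape[0], 0.0
    for w in np.linspace(0.0, 2.0 * np.pi, n_grid):
        M = np.linalg.inv(np.exp(1j * w) * np.eye(n) - Acl)
        # largest eigenvalue of M^H (Q + K^T R K) M, a Hermitian matrix
        lam = np.max(np.linalg.eigvalsh(M.conj().T @ W @ M))
        J = max(J, np.sqrt(lam))
    return J
\end{verbatim}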
Therefore, eventually the $\mathcal{H}_\infty$ state-feedback control problem can be formulated as
\begin{align}\label{eq:hinfopt}
\min_{K\in\mathcal{K}} J(K),
\end{align}
where $J(K)$ is equal to the $\mathcal{H}_\infty$ norm specified by \eqref{eq:hinfcost}. Classical $\mathcal{H}_\infty$ control theory typically solves \eqref{eq:hinfopt} via introducing extra Lyapunov variables and
reparameterizing the problem into a higher-dimensional convex domain
over which convex optimization algorithms can be applied~\cite{zhou96,Dullerud99,befb94}.
In this paper, we revisit \eqref{eq:hinfopt} as a benchmark for direct policy search, and discuss how to search the optimal solution of \eqref{eq:hinfopt} in the policy space directly. Applying direct policy search to address \eqref{eq:hinfopt} leads to a nonconvex nonsmooth optimization problem.
A main technical challenge is that the objective function \eqref{eq:hinfcost} can be non-differentiable over some important feasible points \cite{apkarian2006controller,apkarian2006nonsmooth,arzelier2011h2,gumussoy2009multiobjective,burke2020gradient,curtis2017bfgs}.
\subsection{Direct policy search: A nonsmooth optimization perspective}
Now we briefly review several key facts known for the $\mathcal{H}_\infty$ policy optimization problem \eqref{eq:hinfopt}.
\begin{prop} \label{prop:1}
The set $\mathcal{K}=\{K: \rho(A-BK)<1\}$ is open. In general, it can be unbounded and nonconvex. The cost function \eqref{eq:hinfcost} is continuous and nonconvex in $K$.
\end{prop}
See \cite{pmlr-v80-fazel18a,bu2019topological} for some related proofs. We have also included more explanations in the supplementary material. An immediate consequence is that \eqref{eq:hinfopt} becomes a nonconvex optimization problem.
Another important fact is that the objective function \eqref{eq:hinfcost} is also nonsmooth. Indeed, \eqref{eq:hinfcost} is subject to two sources of nonsmoothness: for a fixed frequency $\omega$, the largest eigenvalue is a nonsmooth function of its matrix argument, and the supremum over $\omega\in [0, 2\pi]$ is a further nonsmooth operation. In particular, the $\mathcal{H}_\infty$ objective function \eqref{eq:hinfcost} can be non-differentiable at important feasible points, e.g. optimal points.
Fortunately, it is well known\footnote{We could not find a formal statement of Proposition \ref{prop:regular} in the literature. However, based on our discussions with other researchers who have worked on nonsmooth $\mathcal{H}_\infty$ synthesis for a long time, this fact is well known, and hence we do not claim any credit for this result. As a matter of fact, although not explicitly stated, the proof of Proposition \ref{prop:regular} is hinted at in the last paragraph of \cite[Section III]{apkarian2006nonsmooth}, given the facts that the $\mathcal{H}_\infty$ norm is a convex function over the Hardy $\mathcal{H}_\infty$ space (which is a Banach space) and that the mapping from $K\in\mathcal{K}$ to the (infinite-dimensional) Hardy $\mathcal{H}_\infty$ space is strictly differentiable.
For completeness, a simple proof of Proposition~\ref{prop:regular} based on Clarke's chain rule \cite{clarke1990optimization} is included in the supplementary material.} that the $\mathcal{H}_\infty$ objective function \eqref{eq:hinfcost} has the following desirable property, and hence is Clarke subdifferentiable.
\begin{prop}\label{prop:regular}
The $\mathcal{H}_\infty$ objective function \eqref{eq:hinfcost} is locally Lipschitz and subdifferentially regular over the stabilizing feasible set $\mathcal{K}$.
\end{prop}
Recall that $J:\mathcal{K}\rightarrow \field{R}$ is locally Lipschitz if for
any bounded $S\subset \mathcal{K}$, there exists a constant $L > 0$ such that $|J(K)-J(K')|\le L \norm{K-K'}_2$ for all $K,K'\in S$.
Based on Rademacher's theorem, a locally Lipschitz function is differentiable almost everywhere, and the Clarke subdifferential is well defined for all feasible points. Formally, the Clarke subdifferential is defined as
\begin{align}
\partial_C J(K):=\conv\{\lim_{i\rightarrow \infty}\nabla J(K_i):K_i\rightarrow K,\,K_i\in\dom(\nabla J)\subset \mathcal{K}\}
\end{align}
where $\conv$ denotes the convex hull. Then we know that the Clarke subdifferential for the $\mathcal{H}_\infty$ objective function \eqref{eq:hinfcost} is well defined for all $K\in \mathcal{K}$. We say that $K$ is a Clarke stationary point if $0\in \partial_C J(K)$. The following fact is also well known.
\begin{prop} \label{pro3}
If $K$ is a local min of $J$, then $0\in\partial_C J(K)$ and $K$ is a Clarke stationary point.
\end{prop}
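For intuition, consider the scalar example $J(K)=|K|$: one has $\partial_C J(0)=\conv\{-1,+1\}=[-1,1]\ni 0$, so the origin is a Clarke stationary point (and here also the global minimum) even though $J$ is non-differentiable there. This mirrors our setting, where the $\mathcal{H}_\infty$ cost \eqref{eq:hinfcost} may well be non-differentiable at its minimizers.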
Under Assumption \ref{assump1}, it is well known that there exists $K^*\in \mathcal{K}$ achieving the minimum of~\eqref{eq:hinfopt}. Since $\mathcal{K}$ is an open set, $K^*$ has to be an interior point of $\mathcal{K}$ and hence $K^*$ has to be a Clarke stationary point. In Section \ref{sec:land}, we will prove that any Clarke stationary points for \eqref{eq:hinfopt} are actually global minimum.
Now we briefly elaborate on the subdifferentially regular property stated in Proposition \ref{prop:regular}.
For any given direction $d$ (which has the same dimension as $K$), the generalized Clarke directional derivative of $J$ is defined as
\begin{align}
J^{\circ}(K,d):=\limsup_{K'\rightarrow K,\ t\searrow 0} \frac{J(K'+td)-J(K')}{t}.
\end{align}
In contrast, the (ordinary) directional derivative is defined as follows (when existing)
\begin{align}
J'(K,d):=\lim_{t\searrow 0} \frac{J(K+td)-J(K)}{t}.
\end{align}
In general, the Clarke directional derivative can be different from the (ordinary) directional derivative. Sometimes
the ordinary directional derivative may not even exist.
The objective function $J(K)$ is subdifferentially regular if for every $K\in \mathcal{K}$, the ordinary directional
derivative always exists and coincides with the generalized one for every direction, i.e. $J'(K,d)=J^{\circ}(K,d)$.
The most important consequence of the subdifferentially regular property is given as follows.
\begin{cor} \label{cor1}
Suppose $K ^\dag\in\mathcal{K}$ is a Clarke stationary point for $J$. If $J$ is subdifferentially regular, then the directional derivatives $J'(K^\dag, d)$ are non-negative for all $d$.
\end{cor}
See \cite[Theorem 10.1]{rockafellar2009variational} for related proofs and more discussions. Notice that having non-negative directional derivatives does not mean that the point $K^\dag$ is a local minimum. Nevertheless, the above fact will be used in our main theoretical developments.
Now we briefly summarize two key difficulties in establishing a global convergence theory for direct policy search on the $\mathcal{H}_\infty$ state-feedback control problem \eqref{eq:hinfopt}. First, it is unclear whether the direct policy search method will get stuck at some local minimum. Second, it is challenging to guarantee the direct policy search method to stay in the nonconvex feasible set $\mathcal{K}$ during the optimization process. Since $\mathcal{K}$ is nonconvex, we cannot use a projection step to maintain feasibility. Our main results will address these two issues.
\subsection{Goldstein subdifferential}
Generating a good descent direction for nonsmooth optimization is not trivial. Many nonsmooth optimization algorithms are based on the concept of Goldstein subdifferential \cite{goldstein1977optimization}. Before proceeding to our main result, we briefly review this concept here.
\begin{defn}[Goldstein subdifferential]
Suppose $J$ is locally Lipschitz. Given a point $K\in\mathcal{K}$ and a parameter $\delta>0$, the Goldstein subdifferential of $J$ at $K$ is defined to be the following set
\begin{align} \label{Gold_sub}
\partial_\delta J(K):=\conv \left\{\cup_{K'\in\mathbb{B}_\delta(K)} \partial_C J(K')\right\},
\end{align}
where $\mathbb{B}_\delta(K)$ denotes the $\delta$-ball around $K$. The above definition implicitly requires $\mathbb{B}_\delta(K)\subset\mathcal{K}$.
\end{defn}
Based on the above definition, one can further define the notion of $(\delta,\epsilon)$-stationarity.
A point $K$ is said to be $(\delta,\epsilon)$-stationary if $\dist(0, \partial_\delta J(K))\le \epsilon$.
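To see how this notion differs from the $\epsilon$-stationarity condition $\dist(0,\partial_C J(K))\le\epsilon$, consider again the scalar example $J(K)=|K|$: for any $0<|K|<\delta$ the ball $\mathbb{B}_\delta(K)$ contains points of both signs, so $\partial_\delta J(K)=[-1,1]\ni 0$ and $K$ is $(\delta,\epsilon)$-stationary for every $\epsilon\geq 0$, while $\partial_C J(K)=\{\mathrm{sign}(K)\}$ stays bounded away from $0$. This gap is behind the caveats on $(\delta,\epsilon)$-stationarity discussed after Theorem \ref{thm3}.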
It is well-known that the minimal norm
element of the Goldstein subdifferential generates a good descent direction. This fact is stated as follows.
\begin{prop}[\cite{goldstein1977optimization}]
Let $F$ be the minimal norm element in $\partial_\delta J(K)$. Suppose $K-\alpha F/\norm{F}_2\in \mathcal{K}$ for any $0\le \alpha \le \delta$.
Then we have
\begin{align}\label{eq:descent}
J(K-\delta F/\norm{F}_2)\le J(K)-\delta \norm{F}_2.
\end{align}
\end{prop}
The idea of Goldstein subdifferential has been used in designing algorithms for nonsmooth $\mathcal{H}_\infty$ control \cite{arzelier2011h2,gumussoy2009multiobjective,burke2020gradient,curtis2017bfgs}. We will show that such policy search algorithms can be guaranteed to find the global minimum of \eqref{eq:hinfopt}. It is worth mentioning that there are other notions of enlarged subdifferential~\cite{apkarian2006nonsmooth} which can lead to good descent directions for nonsmooth $\mathcal{H}_\infty$ synthesis. In this paper, we focus on the notion of Goldstein subdifferential and related policy search algorithms.
\section{Optimization Landscape for $\mathcal{H}_\infty$ State-Feedback Control}
\label{sec:land}
In this section, we investigate the optimization landscape of the $\mathcal{H}_\infty$ state-feedback policy search problem, and show that any Clarke stationary points of \eqref{eq:hinfopt} are also global minimum. We start by showing the coerciveness of the $\mathcal{H}_\infty$ objective function \eqref{eq:hinfcost}.
\begin{lem}\label{lem1}
The $\mathcal{H}_\infty$ objective function $J(K)$ defined by \eqref{eq:hinfcost} is coercive over the set $\mathcal{K}$ in the sense that for any sequence $\{K^l\}_{l=1}^\infty\subset \mathcal{K}$ we have
$J(K^l) \rightarrow +\infty$,
if either $\|K^l\|_2 \rightarrow +\infty$, or $K^l$ converges to an element in the boundary $\partial \mathcal{K}$.
\end{lem}
\begin{proof}
We will only provide a proof sketch here. A detailed proof is presented in the supplementary material. Suppose we have a sequence $\{K^l\}$ satisfying $\norm{K^l}_2\rightarrow +\infty$. We can choose $\mathbf{w}=\{w_0,0,0,\cdots\}$ with $\norm{w_0}=1$ and show
$J(K^l)^2\ge w_0^\mathsf{T} (Q+(K^l)^\mathsf{T} R K^l) w_0 \ge \lambda_{\min}(R) \norm{K^l w_0}^2$.
Clearly, we have used the positive definiteness of $R$ in the above derivation. Then by carefully choosing $w_0$, we can ensure $J(K^l)\rightarrow +\infty$ as $\norm{K^l}_2\rightarrow +\infty$. Next, we assume $K^l\rightarrow K\in \partial \mathcal{K}$.
We have $\rho(A-BK)=1$, and hence there exists some $\omega_0$ such that $(e^{j\omega_0}I-A+BK)$ becomes singular.
Then we can use the positive definiteness of $Q$ to show
$J(K^l)\ge \lambda^{1/2}_{\min}(Q) (\| (e^{j\omega_0}I-A+BK^l)^{-1} \|\cdot \| (e^{-j\omega_0}I-A+BK^l)^{-1} \|)^{\frac{1}{2}}$.
Notice $\sigma_{\min} (e^{\pm j\omega_0}I-A+BK^l) \to 0$ as $l \to \infty$, which implies $ \| (e^{\pm j\omega_0}I-A+BK^l)^{-1} \| \to +\infty$ as $l \to \infty$. Therefore, we have $J(K^l) \to +\infty$ as $K^l\to K\in \partial \mathcal{K}$.
More details for the proof can be found in the supplementary material.
\end{proof}
We want to emphasize that the positive definiteness of $(Q,R)$ is crucial for proving the coerciveness of the cost function \eqref{eq:hinfcost}. Built upon Lemma \ref{lem1}, we can obtain the following nice properties of the sublevel sets of \eqref{eq:hinfopt}.
\begin{lem}\label{lem2}
Consider the $\mathcal{H}_\infty$ state-feedback policy search problem \eqref{eq:hinfopt} with the objective function $J(K)$ defined in \eqref{eq:hinfcost}. Under Assumption \ref{assump1}, the sublevel set defined as $\mathcal{K}_\gamma:=\{K\in \mathcal{K}: J(K)\le \gamma\}$ is compact and path-connected for every $\gamma\ge J(K^*)$ where $K^*$ is the global minimum of \eqref{eq:hinfopt}.
\end{lem}
\begin{proof}
The compactness of $\mathcal{K}_\gamma$ directly follows from the continuity and coerciveness of $J(K)$, and is actually a consequence of \cite[Proposition 11.12]{bauschke2011convex}. The path-connectedness of the strict sublevel sets for the continuous-time $\mathcal{H}_\infty$ control problem has been proved in \cite{hu2022connectivity}. We can slightly modify the proof in \cite{hu2022connectivity} to show that the strict sublevel set $\{K\in \mathcal{K}: J(K)<\gamma\}$ is path-connected. Since all non-strict sublevel sets are compact, we can now apply \cite[Theorem 5.2]{martin1982connected} to show $\mathcal{K}_\gamma$ is also path-connected.
An independent proof based on the non-strict version of the bounded real lemma is also provided in the supplementary material.
\end{proof}
The path-connectedness of $\mathcal{K}_\gamma$ for every $\gamma$ actually implies the uniqueness of the minimizing set in a certain strong sense \cite[Sections 2\&3]{martin1982connected}. Due to the space limit, we will defer the discussion on the uniqueness of the minimizing set to the supplementary material. Here, we present a stronger result which is one of the main contributions of our paper.
\begin{thm}\label{thm1}
Consider the $\mathcal{H}_\infty$ state-feedback policy search problem \eqref{eq:hinfopt}. Under Assumption \ref{assump1}, any Clarke stationary point of $J(K)$ is a global minimum.
\end{thm}
A detailed proof is presented in the supplementary material. Here we provide a proof sketch. Since $Q$ and $R$ are positive definite, the non-strict version of the bounded real lemma\footnote{The difference between the strict and non-strict versions of the bounded real lemma is quite subtle \cite[Section 2.7.3]{befb94}. For completeness, we will provide more explanations for the non-strict version of the bounded real lemma in the supplementary material.} states that $J(K)\le \gamma$ if and only if there exists a positive definite matrix $P$ such that the following matrix inequality holds
\begin{align}\label{eq:lmi1}
\bmat{(A-BK)^\mathsf{T} P (A-BK) - P & (A-BK)^\mathsf{T} P \\ P(A-BK) & P}+\bmat{Q+K^\mathsf{T} R K & 0 \\ 0 & -\gamma^2 I}\preceq 0.
\end{align}
The above matrix inequality is linear in $P$ but not linear in $K$. A standard trick from the control theory can be combined with the Schur complement lemma to convert the above matrix inequality condition to another condition which is linear in all the decision variables \cite{befb94}. Specifically, there exists a matrix function $\lmi(Y,L,\gamma)$ which is linear in $(Y,L,\gamma)$ such that $\lmi(Y,L,\gamma)\preceq 0$ and $Y\succ 0$ if and only if \eqref{eq:lmi1} is feasible with $K=L Y^{-1}$ and $P=\gamma Y^{-1}\succ 0$. The matrix function $\lmi(Y,L,\gamma)$ involves a larger matrix. Hence we present the analytical formula of $\lmi(Y,L,\gamma)$ in the supplementary material and skip it here. Since $\lmi(Y,L,\gamma)$ is linear in $(Y,L,\gamma)$, we know $\lmi(Y,L,\gamma)\preceq 0$ is just a convex semidefinite programming condition. Based on this convex necessary and sufficient condition for $J(K)\le \gamma$, we can prove the following important lemma.
\begin{lem}\label{lem3}
For any $K\in\mathcal{K}$ satisfying $J(K)>J^*$, there exists a matrix direction $d\neq 0$ such that $J'(K,d)\le J^*-J(K)<0$, where $J^*=J(K^*)$ and $K^*$ is the global minimum of \eqref{eq:hinfopt}.
\end{lem}
\begin{proof}
Suppose we have $K=LY^{-1}$ where $(Y,L, J(K))$ is a feasible point for the convex regime $\lmi(Y,L,J(K))\preceq 0$.
In addition, we have $K^*=L^* (Y^*)^{-1}$ where $(Y^*,L^*, J(K^*))$ is a point satisfying $\lmi(Y^*,L^*,J(K^*))\preceq 0$.
Since the LMI condition is convex, the line segment between $(Y,L,J(K))$ and $(Y^*,L^*,J(K^*))$ is also in this convex set. For any $t>0$, we know $(Y+t \Delta Y,L+t \Delta L, J(K)+t(J(K^*)-J(K)))$ also satisfies
$\lmi(Y+t\Delta Y, L+t\Delta L, J(K)+t(J(K^*)-J(K)))\preceq 0$,
where $\Delta L=L^*-L$, and $\Delta Y=Y^*-Y$.
Therefore, based on the bounded real lemma, we know $J((L+t\Delta L) (Y+t \Delta Y)^{-1})\le J(K)+t(J(K^*)-J(K))$.
Let's choose $d=\Delta L Y^{-1} - LY^{-1}\Delta Y Y^{-1}$. Then we have
\begin{align*}
J'(K,d)\le \lim_{t\searrow 0} \left( \frac{J((L+t\Delta L) (Y+t \Delta Y)^{-1})-J(K)}{t}+o(1)\right)\le J^*-J(K)<0.
\end{align*}
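Here the direction $d$ is precisely the first-order term of the curve $t\mapsto (L+t\Delta L)(Y+t\Delta Y)^{-1}$: since $(Y+t\Delta Y)^{-1}=Y^{-1}-tY^{-1}\Delta Y Y^{-1}+o(t)$, we have
\begin{equation*}
(L+t\Delta L)(Y+t\Delta Y)^{-1}=K+t\left(\Delta L Y^{-1}-LY^{-1}\Delta Y Y^{-1}\right)+o(t)=K+td+o(t).
\end{equation*}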
A detailed verification of the above inequality is provided in the supplementary material. Notice $d\neq 0$. If $\Delta L Y^{-1} -LY^{-1}\Delta Y Y^{-1}=0$, the above argument still works and we reach to the conclusion $J'(K,0)<0$. But this is impossible since we always have $J'(K,0)=0$. Hence we have $d\neq 0$. This completes the proof for this lemma.
\end{proof}
Now we are ready to provide the proof for Theorem \ref{thm1}. Based on Lemma \ref{lem3} and the fact that $J(\cdot)$ is subdifferentially regular, the proof can be done by contradiction. Suppose $K^*$ is the global minimum and $K^\dag$ is a Clarke stationary point which is not a global minimum. Then by Lemma \ref{lem3}, there exists $d \neq 0$ such that $J'(K^\dag,d) < 0$, which contradicts the fact that $J'(K^\dag,d) \ge 0$ for all $d$ by Corollary \ref{cor1}. Therefore, $K^\dag$ has to be a global minimum of \eqref{eq:hinfopt}.
The above proof relies on Lemma \ref{lem3} and the fact that $J$ is subdifferentially regular.
Without using the subdifferentially regular property, Lemma \ref{lem3} itself is not sufficient for proving Theorem \ref{thm1}.
It is also worth mentioning that Lemma \ref{lem3} can be viewed as a modification of the convex parameterization/lifting results in \cite{sun2021learning,umenberger2022globally} for non-differentiable points.
\section{Global Convergence of Direct Policy Search on $\mathcal{H}_\infty$ State-Feedback Control}
\label{sec:main1}
In this section, we first show that Goldstein's subgradient method \cite{goldstein1977optimization} can be guaranteed to stay in the nonconvex feasible regime $\mathcal{K}$ during the optimization process and eventually converge to the global minimum of \eqref{eq:hinfopt}.
The complexity of finding $(\delta,\epsilon)$-stationary points of \eqref{eq:hinfopt} is also presented.
Then we further discuss the convergence guarantees for various implementable forms of Goldstein's subgradient method.
\subsection{Global convergence and complexity of Goldstein's subgradient Method}
We will investigate the global convergence of Goldstein's subgradient method for direct policy search of the optimal $\mathcal{H}_\infty$ state-feedback policy. Goldstein's subgradient method iterates as follows
\begin{align}\label{eq:gold1}
K^{n+1}=K^n-\delta^n F^n/\norm{F^n}_2,
\end{align}
where $F^n$ is the minimum norm element of the Goldstein subdifferential $\partial_{\delta^n} J(K^n)$.
We assume that an initial stabilizing policy is available, i.e. $K^0\in\mathcal{K}$. The same initial policy assumption has also been made in the global convergence theory for direct policy search on LQR \cite{pmlr-v80-fazel18a}. More recently, some provable guarantees have been obtained for finding such stabilizing policies via direct policy search~\cite{perdomo2021stabilizing,ozaslan2022computing}. Hence such an assumption on the initial policy $K^0$ is reasonable.
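In pseudocode, and under the assumption of a hypothetical oracle \texttt{min\_norm\_element} returning the minimal-norm element of $\partial_{\delta}J(K)$ (exact evaluation is generally not implementable; see Section \ref{sec:imple} for approximations), the iteration \eqref{eq:gold1} with the step sizes used in Theorem \ref{thm2} below reads:
\begin{verbatim}
import numpy as np

def goldstein_policy_search(K0, Delta0, c, min_norm_element, n_iter=200):
    # Idealized update K^{n+1} = K^n - delta^n * F^n / ||F^n||_2, with
    # delta^n = c * Delta0 / (n + 1), so that (by the separation between
    # sublevel sets and the unstable region) every iterate stays feasible.
    K = K0.copy()
    for n in range(n_iter):
        delta = c * Delta0 / (n + 1)
        F = min_norm_element(K, delta)   # assumed oracle, not implementable exactly
        K = K - delta * F / np.linalg.norm(F)   # ||.||_2 is the Frobenius norm
    return K
\end{verbatim}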
Our global convergence result relies on the fact that there is a strict separation between any sublevel set of \eqref{eq:hinfopt} and the boundary of $\mathcal{K}$. This fact is formalized as follows.
\begin{lem}\label{lem4}
Consider the $\mathcal{H}_\infty$ state-feedback policy search problem \eqref{eq:hinfopt} with the cost function $J(K)$ defined in \eqref{eq:hinfcost}. Denote the complement of the feasible set $\mathcal{K}$ as $\mathcal{K}^c$.
Suppose Assumption \ref{assump1} holds and $\gamma\ge J^*$. Then there is a strict separation between the sublevel set $\mathcal{K}_\gamma$ and $\mathcal{K}^c$. In other words, we have $\dist(\mathcal{K}_\gamma, \mathcal{K}^c)>0$.
\end{lem}
\begin{proof}
Obviously, the set $\mathcal{K}_\gamma \cap \mathcal{K}^c$ is empty (since we know $\mathcal{K}_\gamma\subset \mathcal{K}$). Based on Lemma \ref{lem2}, we know $\mathcal{K}_\gamma$ is compact. Since $\mathcal{K}$ is open, we know $\mathcal{K}^c$ is closed. Therefore, there is a strict separation between $\mathcal{K}_\gamma$ and $\mathcal{K}^c$, and we have $\dist(\mathcal{K}_\gamma, \mathcal{K}^c)>0$.
\end{proof}
Now we are ready to present our main convergence result.
\begin{thm} \label{thm2}
Consider the $\mathcal{H}_\infty$ state-feedback policy search problem \eqref{eq:hinfopt} with the cost function $J(K)$ defined in \eqref{eq:hinfcost}. Suppose Assumption \ref{assump1} holds, and an initial stabilizing policy is given, i.e. $K^0\in\mathcal{K}$. Denote $\Delta_0:=\dist(\mathcal{K}_{J(K^0)}, \mathcal{K}^c)>0$. Choose $\delta^n=\frac{c\Delta_0}{n+1}$ for all $n$ with $c$ being a fixed number in $(0,1)$.
Then Goldstein's subgradient method \eqref{eq:gold1} is guaranteed to stay in $\mathcal{K}$ for all $n$. In addition, we have $J(K^n)\rightarrow J^*$ as $n\rightarrow \infty$.
\end{thm}
\begin{proof}
We have $\delta^n\le c\Delta_0< \Delta_0$ for all $n$. Now we use an induction proof to show $K^n\in \mathcal{K}_{J(K^0)}$ for all $n$. For $n= 0$, we know $K^0-c\Delta_0 F^0/\norm{F^0}_2$ has to be within the $\Delta_0$ ball around $K^0$ since we know the norm of $F^0/\norm{F^0}_2$ is exactly equal to $1$. Since $\Delta_0:=\dist(\mathcal{K}_{J(K^0)}, \mathcal{K}^c)>0$, we know $K^0-\delta^0 F^0/\norm{F^0}_2\in \mathcal{K}$. As a matter of fact, we know $\mathbb{B}_{\delta^0}(K^0)$ has to be a subset of $\mathcal{K}$. Hence we can apply \eqref{eq:descent} to show that $K^1$ exists and is also in $\mathcal{K}_{J(K^0)}$. Similarly, we can repeat this argument to show $K^n\in \mathcal{K}_{J(K^0)}$ for all $n$.
Next, we can apply \eqref{eq:descent} to every step and then sum the inequalities over all $n$. Then the following inequality holds for all $N$:
\begin{align}\label{eq:iterative}
\sum_{n=0}^N \delta^n \norm{F^n}_2 \le J(K^0)-J^*
\end{align}
Since we have $\sum_{n=0}^\infty \delta^n=+\infty$, we know $\liminf_{n\rightarrow\infty} \norm{F^n}_2= 0$. There exists one subsequence $\{i_n\}$ such that $\norm{F^{i_n}}_2\rightarrow 0$. For this subsequence, the resultant policy sequence $\{K^{i_n}\}$ is also bounded (notice that the policy parameter sequence stays in the compact set $\mathcal{K}_{J(K^0)}$ for all $n$) and has a convergent subsequence. We can show that the limit of this subsequence is a Clarke stationary point.
Hence the function value associated with this subsequence converges to $J^*$. Notice that $J(K^n)$ is monotonically decreasing for the entire sequence $\{n\}$. Hence we have $J(K^n)\rightarrow J^*$.
\end{proof}
We have tried to be brief in giving the above proof. We will present a more detailed proof in the supplementary material. We believe that
this is the first result showing that direct policy search can be guaranteed to converge to the global optimal solution of the $\mathcal{H}_\infty$ state-feedback control problem. The above result only provides an asymptotic convergence guarantee to ensure $J(K^n)\rightarrow J^*$. One can use a similar argument to establish a finite-time complexity bound for finding the $(\delta,\epsilon)$-stationary points of \eqref{eq:hinfopt}. Such a result is given as follows.
\begin{thm} \label{thm3}
Consider the $\mathcal{H}_\infty$ problem \eqref{eq:hinfopt} with the cost function \eqref{eq:hinfcost}. Suppose Assumption \ref{assump1} holds, and $K^0\in\mathcal{K}$. Denote $\Delta_0:=\dist(\mathcal{K}_{J(K^0)}, \mathcal{K}^c)>0$. For any $\delta<\Delta_0$, we can
choose $\delta^n=\delta$ for all $n$ to ensure that
Goldstein's subgradient method \eqref{eq:gold1} stays in $\mathcal{K}$ and satisfies the following finite-time complexity bound:
\begin{align}
\min_{n:0\le n\le N} \norm{F^n}_2\le \frac{J(K^0)-J^*}{(N+1)\delta}
\end{align}
In other words, we have $\min_{0\le n\le N} \norm{F^n}_2\le \epsilon$ after $N=\mathcal{O}\left(\frac{\Delta}{\delta\epsilon}\right)$ iterations, where $\Delta:=J(K^0)-J^*$. For any $\delta<\Delta_0$ and $\epsilon>0$, the complexity of finding a $(\delta,\epsilon)$-stationary point is $\mathcal{O}\left(\frac{\Delta}{\delta\epsilon}\right)$.
\end{thm}
\begin{proof}
The above result can be proved using a similar argument from Theorem \ref{thm2}.
We can use the same induction argument to show $K^n\in\mathcal{K}_{J(K^0)}$ for all $n$, and \eqref{eq:iterative} holds with $\delta^n=\delta$. Then the desired conclusion directly follows.
\end{proof}
The complexity for nonsmooth optimization of Lipschitz functions is quite subtle. While the above result gives a reasonable characterization of the finite-time performance of Goldstein's subgradient method on the $\mathcal{H}_\infty$ state-feedback control problem, it does not quantify how fast $J(K^n)$ converges to $J^*$. Recall that $(\delta,\epsilon)$-stationarity means $\dist(0, \partial_\delta J(K))\le \epsilon$, while $\epsilon$-stationarity means $\dist(0, \partial_C J(K))\le \epsilon$.
As commented in \cite{shamir2020can,pmlr-v119-zhang20p,davis2021gradient}, $(\delta,\epsilon)$-stationarity does not imply being $\delta$-close to an
$\epsilon$-stationary point of $J$. Importantly, the function value of a $(\delta,\epsilon)$-stationary point can be far from $J^*$ even for small $\delta$ and $\epsilon$. Theorem 5 in \cite{pmlr-v119-zhang20p} shows that
there is no finite time algorithm that can find $\epsilon$-stationary points provably for all Lipschitz functions. It is still possible that one can develop some finite time bounds for $(J(K^n)-J^*)$ via exploiting other advanced properties of the $\mathcal{H}_\infty$ cost function \eqref{eq:hinfcost}. This is an important future task.
\subsection{Implementable variants and related convergence results}
\label{sec:imple}
In practice, it can be difficult to evaluate the minimum norm element of the Goldstein subdifferential.
Now we discuss implementable variants of Goldstein's subgradient method and related guarantees.
\textbf{Gradient sampling \cite{burke2020gradient,burke2005robust,kiwiel2007convergence}.}
The gradient sampling (GS) method
is the main optimization algorithm used in the robust control package HIFOO \cite{arzelier2011h2,gumussoy2009multiobjective}.
Suppose we can access a first-order oracle which can evaluate $\nabla J$ for any differentiable points in the feasible set\footnote{When $(A,B)$ is known, one can calculate the $\mathcal{H}_\infty$ gradient at differential points using the chain rule in \cite{apkarian2006nonsmooth}. More explanations can be found in the supplementary material.}. Based on Rademacher's theorem, a locally Lipschitz function is differentiable almost everywhere. Therefore, for any $K^n\in\mathcal{K}$, we can randomly sample policy parameters over $\mathbb{B}_{\delta^n}(K^n)$ and obtain differentiable points with probability one. For all these sampled differentiable points, the Clarke subdifferential at each point is just the gradient. Then the convex hull of these sampled gradients can be used as an approximation for the Goldstein subdifferential $\partial_{\delta^n} K^n$. The minimum norm element from the convex hull of the sampled gradients can be solved via a simple convex quadratic program, and is sufficient for generating a reasonably good descent direction for updating $K^{n+1}$ as long as we sample at least $(n_x n_u+1)$ differentiable points for each $n$ \cite{burke2020gradient}. In the unconstrained setup, the cluster points of the GS algorithm can be guaranteed to be Clarke stationary \cite{kiwiel2007convergence,burke2020gradient}. Such a result can be combined with Theorem \ref{thm1} and Lemma \ref{lem4} to show the global convergence of the GS method on the $\mathcal{H}_\infty$ state-feedback synthesis problem. The following theorem will be treated formally in the supplementary material.
\begin{thm}[Informal statement]\label{thm4}
Consider the policy optimization problem \eqref{eq:hinfopt} with the $\mathcal{H}_\infty$ cost function defined in \eqref{eq:hinfcost}. Suppose Assumption \ref{assump1} holds, and $K^0\in \mathcal{K}$. The iterations generated from the trust-region version of the GS method (described in \cite[Section 4.2]{kiwiel2007convergence} and restated in the supplementary material) can be guaranteed to stay in $\mathcal{K}$ for all iterations and achieve $J(K^n)\rightarrow J^*$ with probability~one.
\end{thm}
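For illustration, a minimal sketch of the GS descent-direction computation described above is given below; here \texttt{grad} is the assumed first-order oracle (valid at the sampled points, which are differentiable with probability one), and the minimal-norm element of the sampled convex hull is computed by projected gradient over the simplex. Per the theory, \texttt{n\_samples} should be at least $n_xn_u+1$.
\begin{verbatim}
import numpy as np

def project_simplex(v):
    # Euclidean projection onto the probability simplex.
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - css / idx > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def gs_direction(K, delta, grad, n_samples, n_pg=500):
    # Approximate the minimal-norm element of the Goldstein subdifferential
    # by the minimal-norm point of the convex hull of gradients sampled
    # uniformly in the delta-ball around K.
    G = [grad(K).ravel()]                    # gradient at the center point
    for _ in range(n_samples):
        U = np.random.randn(*K.shape)
        r = delta * np.random.rand() ** (1.0 / K.size)
        G.append(grad(K + r * U / np.linalg.norm(U)).ravel())
    G = np.array(G)
    H = G @ G.T
    lam = np.full(len(G), 1.0 / len(G))      # uniform initial weights
    lr = 1.0 / (np.linalg.norm(H) + 1e-12)
    for _ in range(n_pg):                    # projected gradient on the simplex
        lam = project_simplex(lam - lr * (H @ lam))
    return (lam @ G).reshape(K.shape)        # approximate min-norm element
\end{verbatim}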
\textbf{Non-derivative sampling (NS) \cite{kiwiel2010nonderivative}.} The NS method can be viewed as the derivative-free version of the GS algorithm. Suppose we only have the zeroth-order oracle which can evaluate the function value $J(K)$ for $K\in \mathcal{K}$. The main difference between NS and GS is that the NS algorithm relies on estimating the gradient from function values via Gupal’s estimation method. In the unconstrained setting, the cluster points of the NS method can be guaranteed to be Clarke stationary with probability one \cite[Theorem 3.8]{kiwiel2010nonderivative}. We can combine \cite[Theorem 3.8]{kiwiel2010nonderivative} with our results (Theorem \ref{thm1} and Lemma~\ref{lem4}) to prove the global convergence of NS in our setting. A detailed discussion is given in the supplementary material.
\textbf{Model-free implementation of NS.} When the system model is unknown, there are various methods available for estimating the $\mathcal{H}_\infty$-norm from data \cite{muller2019gain,muller2017stochastic,rojas2012analyzing,rallo2017data,wahlberg2010non,oomen2014iterative,tu2019minimax,tu2018approximation}.
Based on our own experiences/tests, the multi-input multi-output (MIMO) power iteration method~\cite{oomen2013iteratively} works quite well as a stochastic zeroth-order oracle for the purpose of implementing NS in the model-free setting. While the sample complexity for model-free NS is unknown, we will provide some numerical justifications to show that such a model-free implementation closely tracks the convergence behaviors of its model-based counterpart.
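As a rough illustration of such a zeroth-order oracle, the sketch below produces a lower estimate of $J(K)$ from finite-horizon rollouts with random unit-norm disturbances; the \texttt{rollout} callable stands in for experiments on the true closed-loop system (so $(A,B)$ need not be known), and the MIMO power iteration of \cite{oomen2013iteratively} is far more sample-efficient since it refines the disturbance sequence rather than sampling it blindly.
\begin{verbatim}
import numpy as np

def hinf_estimate(rollout, n_x, T=200, n_inputs=50, rng=None):
    # Crude zeroth-order lower estimate of J(K): excite the closed loop
    # with random unit-norm disturbances w and record the worst empirical
    # gain. rollout(w) returns the accumulated quadratic cost
    # sum_t x_t^T (Q + K^T R K) x_t of one experiment.
    rng = np.random.default_rng() if rng is None else rng
    best = 0.0
    for _ in range(n_inputs):
        w = rng.standard_normal((T, n_x))
        w /= np.sqrt((w ** 2).sum())       # normalize so that ||w|| = 1
        best = max(best, np.sqrt(rollout(w)))
    return best
\end{verbatim}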
\textbf{Interpolated normalized gradient descent (INGD) with finite-time complexity.} No finite-time guarantees for finding $(\delta,\epsilon)$-stationary points have been reported for the GS/NS methods. In \cite{pmlr-v119-zhang20p,davis2021gradient}, the INGD method has been developed as another implementable variant of Goldstein's subgradient method, and is proved to satisfy high-probability finite-time complexity bounds for finding $(\delta,\epsilon)$-stationary points of Lipschitz functions. INGD uses an iterative sampling strategy to generate a descent direction which serves a role similar to the minimal norm element of the Goldstein subdifferential. A first-order oracle for differentiable points is needed for implementing the version of INGD in \cite{davis2021gradient}.
It has been shown \cite{pmlr-v119-zhang20p,davis2021gradient} that for unconstrained nonsmooth optimization of $L$-Lipschitz functions\footnote{We slightly abuse our notation by denoting the Lipschitz constant as $L$. Previously, we have used $L$ to denote a particular matrix used in the LMI formulation for $\mathcal{H}_\infty$ state-feedback synthesis.}, the INGD algorithm can be guaranteed to find the $(\delta,\epsilon)$-stationary point with the high-probability iteration complexity $\mathcal{O}\left(\frac{\Delta L^2}{\epsilon^3 \delta}\log(\frac{\Delta}{p\delta\epsilon})\right)$, where $\Delta:=J(K^0)-J^*$ is the initial function value gap, and $p$ is the failure probability (i.e. the optimization succeeds with the probability $(1-p)$).
We can combine the proofs for
\cite[Theorem 2.6]{davis2021gradient} and Theorem \ref{thm3} to obtain the following complexity result for our $\mathcal{H}_\infty$ setting. A formal treatment is given in the supplementary material.
\begin{thm}[Informal statement]\label{thm5}
Consider the policy optimization problem \eqref{eq:hinfopt} with the $\mathcal{H}_\infty$ cost function defined in \eqref{eq:hinfcost}. Suppose Assumption \ref{assump1} holds, and the initial policy is stabilizing, i.e. $K^0\in \mathcal{K}$. Denote $\Delta_0:=\dist(\mathcal{K}_{J(K^0)}, \mathcal{K}^c)>0$, and let $L_0$ be the Lipschitz constant of $J(K)$ over the set $\mathcal{K}_{J(K^0)}$. For any $\delta<\Delta_0$, we can
choose $\delta^n=\delta$ for all $n$ to ensure that the iterations of
the INGD algorithm stay in $\mathcal{K}$ almost surely, and find a $(\delta,\epsilon)$-stationary point with the high-probability iteration complexity $\mathcal{O}\left(\frac{\Delta L_0^2}{\epsilon^3 \delta}\log(\frac{\Delta}{p\delta\epsilon})\right)$, where $p$ is the failure probability.
\end{thm}
\section{Numerical Simulations}
To support our theory, we provide some numerical simulations in this section. The left plot in Figure~\ref{Simulationrelts} shows that GS, NS, INGD, and model-free NS work well for the following example:
\begin{equation} \label{set_matric}
A = \bmat{1 &0 &-5 \\ -1 &1 &0\\ 0 &0 &1},\,\, B= \bmat{1 \\ 0 \\ -1},\,\, Q = \bmat{2 &-1 &0 \\ -1 &2 &-1 \\ 0 &-1 &2}, \,\, R = 1.
\end{equation}
For this example, we have $J^* = 7.3475$. We initialize from $K^0 = \bmat{0.4931 &-0.1368 &-2.2654}$, which satisfies $\rho(A-BK^0) = 0.5756 < 1$. The hyperparameter choices are detailed in the supplementary material. We can see that model-free NS closely tracks the trajectory of NS and works well.
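For reference, this setup can be reproduced with the \texttt{hinf\_cost} sketch from Section \ref{sec:PF}:
\begin{verbatim}
import numpy as np

A = np.array([[1., 0., -5.], [-1., 1., 0.], [0., 0., 1.]])
B = np.array([[1.], [0.], [-1.]])
Q = np.array([[2., -1., 0.], [-1., 2., -1.], [0., -1., 2.]])
R = np.array([[1.]])
K0 = np.array([[0.4931, -0.1368, -2.2654]])

rho0 = np.max(np.abs(np.linalg.eigvals(A - B @ K0)))
print(rho0)                        # approx 0.5756 < 1: K0 is stabilizing
print(hinf_cost(A, B, K0, Q, R))   # J(K0), uses the hinf_cost sketch above;
                                   # to be driven toward J* = 7.3475
\end{verbatim}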
In the middle plot of Figure \ref{Simulationrelts}, we test the NS method on randomly generated cases. We set $A\in \mathbb{R}^{3\times 3}$ to be $I + \xi$, where each element of $\xi \in \mathbb{R}^{3 \times 3}$ is sampled uniformly from $[0,1]$. For $B\in \mathbb{R}^{3\times 1}$, each element is uniformly sampled from $[0,1]$. We have $Q = I + \zeta I \in \mathbb{R}^{3\times 3}$ with $\zeta$ uniformly sampled from $[0,0.1]$, and $R \in \mathbb{R}$ uniformly sampled from $[1,1.5]$. For each experiment, the initial condition $K^0 \in \mathbb{R}^{1\times 3}$ is also randomly sampled such that $\rho(A-BK^0) < 1$. The NS method converges globally for all the cases. In the right plot, we focus on the model-free setting for \eqref{set_matric}. We decrease the number of samples used in the $\mathcal{H}_\infty$ estimation and show how this increases the noise in the zeroth-order $\mathcal{H}_\infty$ oracle and worsens the convergence behaviors of the model-free NS method.
Nevertheless, the model-free NS method tracks its model-based counterpart with enough samples. More numerical results can be found in the supplementary material.
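For reference, a minimal sketch of the random-instance generator described above is given below; the sampling range used in the rejection step for $K^0$ is our own choice, since the text does not specify it.
\begin{verbatim}
import numpy as np

def random_instance(rng):
    # A = I + xi, xi ~ U[0,1] elementwise; B ~ U[0,1];
    # Q = (1 + zeta) I, zeta ~ U[0,0.1]; R ~ U[1,1.5].
    A = np.eye(3) + rng.uniform(0.0, 1.0, (3, 3))
    B = rng.uniform(0.0, 1.0, (3, 1))
    Q = (1.0 + rng.uniform(0.0, 0.1)) * np.eye(3)
    R = rng.uniform(1.0, 1.5)
    while True:  # rejection sampling for a stabilizing K0
        K0 = rng.uniform(-3.0, 3.0, (1, 3))
        if max(abs(np.linalg.eigvals(A - B @ K0))) < 1.0:
            return A, B, Q, R, K0

rng = np.random.default_rng(0)
cases = [random_instance(rng) for _ in range(8)]  # 8 random cases
\end{verbatim}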
\begin{figure}
\minipage{0.33\textwidth}
\includegraphics[width=\linewidth]{plot/all_sime.pdf}
\label{fig:awesome_image2}
\endminipage\hfill
\minipage{0.33\textwidth}%
\includegraphics[width=\linewidth]{plot/random_exp2.pdf}
\label{fig:awesome_image3}
\endminipage\hfill
\minipage{0.33\textwidth}%
\includegraphics[width=\linewidth]{plot/vary_N.pdf}
\label{fig:awesome_image4}
\endminipage
\caption{Simulation results. Left: relative error trajectories of the GS, NS, INGD, and model-free NS methods on \eqref{set_matric}. Middle: relative optimality gap trajectories of 8 randomly generated cases for the NS method. Right: trajectories of the model-free NS method with noisier oracles on \eqref{set_matric}.} \label{Simulationrelts}
\end{figure}
\section{Conclusions and Future Work}
In this paper, we developed a global convergence theory for direct policy search on the $\mathcal{H}_\infty$ state-feedback synthesis problem. Although the resultant policy optimization formulation is nonconvex and nonsmooth, we showed that any Clarke stationary point for this problem is actually a global minimum, and that the concept of the Goldstein subdifferential can be used to build direct policy search algorithms which are guaranteed to converge to the global optimal solutions. The finite-time guarantees in this paper are developed only for finding $(\delta,\epsilon)$-stationary points.
An important future task is to investigate finite-time bounds for the optimality gap (i.e. $J(K^n)-J^*$) as well as
the sample complexity of direct policy search on model-free $\mathcal{H}_\infty$ control. It is also of great interest to investigate the convergence properties of direct policy search in nonlinear/output-feedback settings\footnote{Some discussions on possible extensions along this direction have been given in the supplementary material.}.
\begin{ack}
This work is generously supported by the NSF award
CAREER-2048168 and the 2020 Amazon research award. The authors would like to thank Michael L. Overton, Maryam Fazel, Yang Zheng, Peter Seiler, Geir
Dullerud, Aaron Havens, Darioush Kevian, Kaiqing Zhang, Na Li, Mehran Mesbahi, Tamer Ba\c{s}ar, Mihailo Jovanovic, and Javad Lavaei for the valuable discussions, as well as the helpful suggestions from the anonymous reviewers of NeurIPS.
\end{ack}
\bibliographystyle{abbrv}
\section{Introduction}
There is variation in the flavor and aroma of different plantation commodities. For example, in Indonesia, clove buds from Java have a prominent wooden aroma and sour flavor, while those from Bali have a sweet-spicy flavor \cite{Broto}. Arabica coffee from Gayo has a lower acidity and a strong bitterness. In contrast, coffee from Toraja has a medium browning, tobacco, or caramel flavor, and is not too acidic or bitter. Furthermore, Kintamani coffee from Bali has a fruity, acidic flavor mixed with a fresh flavor. Coffee from Flores, by contrast, has a variety of flavors ranging from chocolate, spicy, tobacco, strong, citrus, flowers and wood. Coffee from Java has a spicy aroma, while that from Wamena has a fragrant aroma and no pulp \cite{coffeland}.
The specific flavors and aromas are attributed to the composition of a commodity's metabolites. Generally, a specific metabolite is responsible for a particular flavor and aroma. For this reason, it is vital to recognize the characteristics of each plantation commodity based on its metabolite composition. This study investigates the origin of clove buds, which helps to maintain the flavor of products that use clove buds as a mixture. The characteristics of a food product can also be predicted from the origin of the clove buds used, because flavor and taste differ between regions \cite{kresnowati}.
Metabolic profiling is a widely used approach for obtaining information on the metabolites contained in a biological sample. It is a quantitative measurement of metabolites from biological samples \cite{kopka,putri}. To give meaning to metabolites data sets, chemometrics techniques were developed. Chemometrics is a chemical sub-discipline that uses mathematics, statistics and formal logic to gain knowledge about chemical systems. It extracts the maximum relevant information from metabolites data sets of biological samples \cite{massart}, and is used for pattern recognition of metabolites data sets in complex chemical systems \cite{kresnowati}. Pattern recognition in biological samples identifies the specific metabolites or biomarkers that form a particular flavor and aroma.
Artificial neural networks have been widely used in pattern recognition \cite{cornelius} and in applications across various fields \cite{samir,25,26,27,28,29}. However, they have not been fully applied to clove buds. The small size of the available data sets limits the implementation of artificial neural networks for clove buds: metabolite composition data for clove buds are scarce, and extracting them is expensive. Furthermore, some clove buds have zero recorded metabolite concentration, because laboratory instruments cannot detect metabolites whose concentrations fall below a threshold. This study therefore implements artificial neural networks for pattern recognition in clove buds data sets, where each origin has specific metabolites acting as biomarkers.
\section{Materials and Methods}
\subsection{Materials}
This study uses clove buds data sets obtained from Kresnowati et al. \cite{kresnowati}, who examined clove buds from four origins in Indonesia: Java, Bali, Manado and Toli-Toli. Each origin has three regions, giving twelve regions in total. In the laboratory, eight experiments were carried out for each region, except for Java with only six experiments. In each experiment, 47 types of metabolites were recorded. In matrix form, the data sets are $94 \times 47$, where rows and columns represent experiments and metabolites, respectively.
\subsection{Data Preprocessing}
In total, the clove buds data sets span a wide range of values, between $10^{-4}$ and 10. Therefore, logarithmic transformations are used to obtain reliable numerical data. Since some metabolites have zero concentration, the logarithmic transformation cannot be applied directly. The metabolite data with zero concentration are not removed, because they may act as biological markers; their concentrations simply fall below the detection threshold. They are therefore replaced with a value one order of magnitude smaller than the smallest available concentration, in this case $10^{-5}$. Before implementing artificial neural networks, a preprocessing stage from \cite{rustam} is added to normalize the metabolites data. Normalization ensures that each feature has the same influence or contribution in determining the origin. The following normalization formula is used \cite{beltramo}
\begin{equation}
z_{kl}=\frac{{x}_{kl}-\overline{x}}{s}.
\end{equation}
Here $z_{kl}$ is the result of normalization of $x_{kl}$, $\overline{x}$ is the mean of the $k$-th experiment and $s$ is
\begin{equation}
s=\sqrt{\sum_{k=1}^{n}\frac{\left({x}_{kl}-\overline{x}\right)^{2}}{n-1}}.
\end{equation}
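For concreteness, the preprocessing described above (flooring zeros at $10^{-5}$, logarithmic transformation, then normalization) can be sketched in a few lines of Python. The text is ambiguous about the axis of the statistics; taking them per metabolite, over the $k = 1,\dots,n$ experiments, matches the sum over $k$ in the formula for $s$ and is assumed here.
\begin{verbatim}
import numpy as np

def preprocess(X, floor=1e-5):
    # X: 94 x 47 matrix of metabolite concentrations
    X = np.log10(np.where(X <= 0.0, floor, X))   # floor zeros, log scale
    mean = X.mean(axis=0)                        # per-metabolite mean
    s = np.sqrt(((X - mean) ** 2).sum(axis=0) / (X.shape[0] - 1))
    return (X - mean) / s                        # z-score normalization
\end{verbatim}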
\subsection{Artificial Neural Network}
Artificial neural networks are simplified representations of the human brain that simulate its learning process \cite{fausett}. Backpropagation and resilient propagation are learning algorithms widely used in artificial neural networks \cite{aizenberg,johansson,pleune,kaiser,el2015,chayjan,anastasiadis,santra,patnaik,fisch,shiblee}. In this study, two network architectures are used with each learning algorithm: the first consists of two hidden layers and the second of one hidden layer.
\subsubsection{Backpropagation Learning Algorithm}
The backpropagation learning algorithm is based on repeated use of the chain rule to calculate the effect of each weight in the network on the error function $E$ \cite{riedmiller}.
\begin{equation}
\frac{\partial E}{\partial w_{ij}}=\frac{\partial E}{\partial o_{i}}\frac{\partial o_{i}}{\partial net_{i}}\frac{\partial net_{i}}{\partial w_{ij}}
\end{equation}
where $w_{ij}$ is the weight from $j-th$ neuron to $i-th$ neuron, $o_{i}$ is the output, and $net_{i}$ is the weighted number of neurons input $i$. Once the partial derivatives for each weight are known, the goal of minimizing the error function is achieved with gradient descent \cite{riedmiller}:
\begin{equation}
w_{ij}^{(t+1)}=w_{ij}^{(t)}-\epsilon \frac{\partial E}{\partial w_{ij}}^{(t)}
\label{eq:4}
\end{equation}
where $t$ is the iteration index and $0<\epsilon<1$ the learning rate. From Equation (\ref{eq:4}), choosing a large learning rate (close to 1) allows oscillations, so the error may remain above the specified tolerance and the identification accuracy decreases. Conversely, if the learning rate ($\epsilon$) is too small (close to 0), many steps are needed for the error function $E$ to converge. To avoid both problems, the backpropagation learning algorithm is extended by adding a momentum parameter $(0 <\alpha<1)$, as shown in Equation (\ref{eq:5}). The momentum parameter also accelerates the convergence of the error function \cite{riedmiller}.
\begin{equation}
\Delta w_{ij}^{(t+1)}=-\epsilon \frac{\partial E}{\partial w_{ij}}^{(t)} + \alpha \Delta w_{ij}^{(t-1)}
\label{eq:5}
\end{equation}
where the momentum term measures the effect of the previous step on the current one.
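As a minimal sketch, the momentum update of Equation (\ref{eq:5}) reads in Python (with the learning rate and momentum values used later in this study; the function name is ours):
\begin{verbatim}
def backprop_step(w, dw_prev, grad, lr=0.9, momentum=0.1):
    # delta_w = -lr * dE/dw + momentum * previous delta_w
    dw = -lr * grad + momentum * dw_prev
    return w + dw, dw
\end{verbatim}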
To activate neurons in the hidden and output layers, the sigmoid activation function is used. It has three properties essential for backpropagation and resilient propagation: it is bounded, monotonic and continuously differentiable. It converts the weighted sum of inputs into an output signal for each neuron $i$, as shown by Equation (\ref{eq:6}) \cite{bhagat}.
\begin{equation}
O_{i}=f(I_{i})=\frac{1}{1+e^{-\sigma I_{i}}}.
\label{eq:6}
\end{equation}
where $I_{i}$ is the weighted sum of inputs to the $i$-th neuron, $\sigma$ the slope parameter of the sigmoid activation function and $O_{i}$ the output of the $i$-th neuron. The threshold applied to the sigmoid output in the output layer is
\begin{equation}
O_{i} =\left\{\begin{matrix}
1 ~~ if ~~ O_{i} \geq 0.5\\
0 ~~ if ~~ O_{i} < 0.5
\end{matrix}\right.
\end{equation}
The weighted amount input is given in the following equation \cite{bhagat}.
\begin{equation}
\sum_{i=1}^{n} w_{ij} O_{i}+w_{Bj} O_{B}.
\label{eq:8}
\end{equation}
The sum over $i$ represents the input received from all neurons in the input layer, while $B$ is the bias neuron. Weight $w_{ij}$ is associated with the connection from the $i$-th neuron to the $j$-th neuron, while $w_{Bj}$ relates to the connection from the bias to the $j$-th neuron. The weighted sums obtained in the hidden and output layers are activated by substituting the weighted sum from Equation (\ref{eq:8}) as the exponent in Equation (\ref{eq:6}).
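Putting Equations (\ref{eq:6}) and (\ref{eq:8}) together, a minimal sketch of the forward pass for the 47-15-4 architecture used later is given below, assuming NumPy weight matrices \texttt{W1}, \texttt{W2} and bias vectors \texttt{b1}, \texttt{b2} (the names are ours):
\begin{verbatim}
import numpy as np

def sigmoid(x, sigma=1.0):
    return 1.0 / (1.0 + np.exp(-sigma * x))

def forward(X, W1, b1, W2, b2):
    # X: (n, 47) inputs; hidden and output layers use the sigmoid
    H = sigmoid(X @ W1 + b1)          # weighted sums, then activation
    O = sigmoid(H @ W2 + b2)          # (n, 4) raw outputs
    return (O >= 0.5).astype(int), O  # thresholded class codes, outputs
\end{verbatim}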
\subsubsection{Resilient propagation Learning Algorithm}
Riedmiller et al. \cite{riedmiller} proposed the resilient propagation learning algorithm as a development of the backpropagation algorithm. The algorithm directly adapts the size of each weight update based on local gradient information. Riedmiller et al. \cite{riedmiller} introduced an update value $\Delta_{ij}$ for each weight, determining the size of the weight update. This adaptive update value evolves during the learning process based on its local view of the error function $E$, according to the following learning rule \cite{riedmiller}:
\begin{equation}
\Delta_{ij}^{t}=\left\{\begin{matrix}
\eta^{+} * \Delta_{ij}^{(t-1)}, ~~ if ~~ \frac{\partial E}{\partial w_{ij}}^{(t-1)}*\frac{\partial E}{\partial w_{ij}}^{(t)}>0\\
\eta^{-} * \Delta_{ij}^{(t-1)}, ~~ if ~~ \frac{\partial E}{\partial w_{ij}}^{(t-1)}*\frac{\partial E}{\partial w_{ij}}^{(t)}<0\\
\Delta_{ij}^{(t-1)}~~~~ ,~~~~~~~~~~~~~~~~~~ else \\ \end{matrix}
\right.
\end{equation}
where ($0<\eta^{-}<1<\eta^{+}$); $\eta^{-}$ and $\eta^{+}$ represent the decrease and increase factors, respectively. According to this adaptation rule, every time the partial derivative of the corresponding weight $w_{ij}$ changes its sign, indicating that the last update was too big and the algorithm has jumped over a local minimum, the update value $\Delta_{ij}$ is decreased by the factor $\eta^{-}$. If the derivative retains its sign, the update value is slightly increased to accelerate convergence in shallow regions \cite{riedmiller}.
Once the update value for each weight is adapted, the weight update itself follows a simple rule: if the derivative is positive (increasing error), the weight is decreased by its update value; if the derivative is negative, the update value is added:
\begin{equation}
\Delta w_{ij}^{t}=\left\{\begin{matrix}
-\Delta_{ij}^{t-1}, ~~ if~~ \frac{\partial E}{\partial w_{ij}}^{(t)}>0\\
+\Delta_{ij}^{t-1}, ~~if~~ \frac{\partial E}{\partial w_{ij}}^{(t)}<0\\
0 ~~~~~, ~~~~~~~ else \\
\end{matrix}\right.
\end{equation}
\begin{equation}
w_{ij}^{t+1} = w_{ij}^{t}+\Delta w_{ij}^{t}
\end{equation}
However, in case the partial derivative sign changes, which means the previous step was too large and the minimum missed, the previous weight update is reverted:
\begin{equation}
\Delta w_{ij}^{(t)}=-\Delta w_{ij}^{(t-1)},~~ if ~~ \frac{\partial E}{\partial w_{ij}}^{(t-1)}*\frac{\partial E}{\partial w_{ij}}^{(t)}<0
\end{equation}
Due to the 'backtracking' weight step, the derivative is expected to change its sign once again in the following step. To avoid punishing the update value twice, there should be no adaptation of the update value in the succeeding step. In practice, this is done by setting $\frac{\partial E}{\partial w_{ij}}^{(t-1)} = 0$ in the $\Delta_{ij}$ adaptation rule. The update values and weights are changed every time the whole set of patterns has been presented once to the network (learning by epoch).
The following pseudocode shows the adaptation and learning process of resilient propagation. The $\mathbf{minimum}$ ($\mathbf{maximum}$) operator returns the minimum (maximum) of two numbers. The $\mathbf{sign}$ operator returns +1 if the argument is positive, $-1$ if it is negative, and 0 otherwise.
\begin{equation}
\begin{split}
&For~each~weight~and~bias~\{\\
&\quad\mathbf{if}\left(\tfrac{\partial E}{\partial w_{ij}}^{(t-1)}*\tfrac{\partial E}{\partial w_{ij}}^{(t)}>0\right)\mathbf{\, then\{}\\
&\quad\quad\Delta_{ij}^{(t)}=\mathbf{minimum}(\Delta_{ij}^{(t-1)}*\eta^{+},\Delta_{\mathrm{max}})\\
&\quad\quad\Delta w_{ij}^{(t)}=-\mathbf{sign}\left(\tfrac{\partial E}{\partial w_{ij}}^{(t)}\right)*\Delta_{ij}^{(t)}\\
&\quad\quad w_{ij}^{(t+1)}=w_{ij}^{(t)}+\Delta w_{ij}^{(t)}\}\\
&\quad\mathbf{else\: if}\left(\tfrac{\partial E}{\partial w_{ij}}^{(t-1)}*\tfrac{\partial E}{\partial w_{ij}}^{(t)}<0\right)\mathbf{\, then\{}\\
&\quad\quad\Delta_{ij}^{(t)}=\mathbf{maximum}(\Delta_{ij}^{(t-1)}*\eta^{-},\Delta_{\mathrm{min}})\\
&\quad\quad w_{ij}^{(t+1)}=w_{ij}^{(t)}-\Delta w_{ij}^{(t-1)}\\
&\quad\quad\tfrac{\partial E}{\partial w_{ij}}^{(t)}=0\}\\
&\quad\mathbf{else\: if}\left(\tfrac{\partial E}{\partial w_{ij}}^{(t-1)}*\tfrac{\partial E}{\partial w_{ij}}^{(t)}=0\right)\mathbf{\, then\{}\\
&\quad\quad\Delta w_{ij}^{(t)}=-\mathbf{sign}\left(\tfrac{\partial E}{\partial w_{ij}}^{(t)}\right)*\Delta_{ij}^{(t)}\\
&\quad\quad w_{ij}^{(t+1)}=w_{ij}^{(t)}+\Delta w_{ij}^{(t)}\}\\
&\}
\end{split}
\end{equation}
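For reference, this pseudocode translates into a compact vectorized update; the following is a minimal NumPy sketch (one update per epoch, with gradients computed externally by backpropagation):
\begin{verbatim}
import numpy as np

def rprop_update(w, dw_prev, grad, grad_prev, step,
                 eta_minus=0.5, eta_plus=1.2,
                 step_min=1e-6, step_max=50.0):
    s = grad * grad_prev                       # sign-change indicator
    step = np.where(s > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(s < 0, np.maximum(step * eta_minus, step_min), step)
    # backtrack where the derivative changed sign, else take a signed step
    dw = np.where(s < 0, -dw_prev, -np.sign(grad) * step)
    grad = np.where(s < 0, 0.0, grad)          # suppress next adaptation
    return w + dw, dw, step, grad
\end{verbatim}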
\section{Results and Discussions}
In this study, the training and testing data sets comprise 80\% and 20\% of the data, respectively. The metabolites data sets form a $94\times47$ matrix. Out of 94 rows, 75 were chosen randomly as training data, while the remaining rows were used for testing. This random selection of training data is repeated 30 times, so for each network architecture there are 30 values of the identification accuracy percentage, the coefficient of determination and the mean squared error ($MSE$); their averages are reported as representative values. Each network architecture uses learning rate ($\epsilon$) 0.9, momentum parameter ($\alpha$) 0.1 and a maximum of 5000 epochs with error target $10^{-3}$. Each origin is represented by a binary code: Java is 1000, Bali 0100, Manado 0010 and Toli-Toli 0001. The identification accuracy and $MSE$ are calculated as shown in Equations (\ref{eq:13}) and (\ref{eq:14}).
\begin{equation}
\% \: accuracy = \frac{a}{k}100\%
\label{eq:13}
\end{equation}
where $a$ is the number of origins identified correctly and $k$ is the total number of origins. The $MSE$ is calculated by the following equation \cite{bhagat}
\begin{equation}
MSE = \frac{1}{m\cdot n} \sum_{p=1}^{m} \sum_{k=1}^{n}(T_{kp}-O_{kp})^{2}.
\label{eq:14}
\end{equation}
where $T_{kp}$ is the desired target, $O_{kp}$ the network output, and $p$ the index running over the origins.
The suitability between the expected target and network output was evaluated based on the coefficient of determination $R^{2}$. It was calculated using the following equation \cite{el2015}
\begin{equation}
R^{2}=1-\frac{\frac{1}{n} \sum_{k=1}^{n} (T_{kp}-O_{kp})^{2}}{\frac{1}{n-1} \sum_{k=1}^{n} (T_{kp}-\overline{T_{kp}})^{2}}.
\end{equation}
where $\overline{T_{kp}}$ is the average desired target.
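A sketch of the three evaluation measures, assuming targets and outputs are given as NumPy arrays (here $R^2$ is computed per origin index $p$ and then averaged over the four outputs):
\begin{verbatim}
import numpy as np

def evaluate(T, O):
    # T: (n, 4) binary target codes, O: (n, 4) network outputs
    pred = (O >= 0.5).astype(int)                 # output threshold
    accuracy = 100.0 * np.mean(np.all(pred == T, axis=1))
    mse = np.mean((T - O) ** 2)
    n = T.shape[0]
    num = ((T - O) ** 2).sum(axis=0) / n
    den = ((T - T.mean(axis=0)) ** 2).sum(axis=0) / (n - 1)
    r2 = np.mean(1.0 - num / den)
    return accuracy, mse, r2
\end{verbatim}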
In this study, backpropagation and resilient propagation were used, each with architectures of one and two hidden layers. For one hidden layer, the number of neurons was determined using the formula proposed by Shibata and Ikeda in 2009 \cite{shibata}, namely $N_{h} = \sqrt{N_{i} \cdot N_{o}}$, where $N_{h}$, $N_{i}$, and $N_{o}$ represent the numbers of hidden, input and output neurons, respectively. Based on this formula, the number of neurons in one hidden layer is $N_{h} = \sqrt{N_{i} \cdot N_{o}} = \sqrt{47 \cdot 4 } = 13.71$, which in this study was rounded up to 15 neurons. For both backpropagation and resilient propagation, the total number of neurons in the two-hidden-layer architectures does not exceed that of the one-hidden-layer architecture. Several experiments were conducted to evaluate whether one hidden layer with 15 neurons leads to better identification accuracy than two hidden layers with fewer neurons. For two hidden layers, experiments were conducted with the following numbers of neurons: 3-5 (8), 4-6 (10), 5-7 (12) and 6-8 (14), so the total number of hidden neurons never exceeds 15.
\subsection{Backpropagation (B-Prop) with Two Hidden Layers}
\label{section:section31}
In this section, backpropagation learning algorithm with two hidden layers was used. The number of neurons in the hidden layer varied with not more than 15 neurons. There were four variations of network architecture, including 47-3-5-4, 47-4-6-4, 47-5-7-4 and 47-6-8-4. The input layer consists of 47 neurons based on the number of metabolites. The output layer consists of 4 neurons according to the number of clove buds origins.
\begin{table*}
\caption{Backpropagation with two hidden layers}
\centering
\begin{tabular}{ccccccc}
\hline
\textbf{Network} & \multicolumn{2}{c}{\textit{\textbf{MSE}}} & \multicolumn{2}{c}{\textbf{Accuracy} (\%)} & \multicolumn{2}{c}{$R^2$} \\
\cline{2-7}
\textbf{Architecture} & \textbf{Training} & \textbf{Testing} & \textbf{Training} & \textbf{Testing} & \textbf{Training} & \textbf{Testing} \\
\hline
\textbf{47-3-5-4} & \textbf{0.10346} & \textbf{0.11357} & \textbf{76.98} & \textbf{73.68} & \textbf{0.81}
& \textbf{0.76} \\
\hline
47-4-6-4 & 0.13084 & 0.13547 & 62 & 57.54 & 0.64
& 0.61 \\
\hline
47-5-7-4 & 0.14889 & 0.15884 & 49.73 & 41.4 & 0.51 & 0.42 \\
\hline
47-6-8-4 & 0.15388 & 0.15874 & 50.04 & 42.46 & 0.48 & 0.43 \\
\hline
\end{tabular}
\label{tab:tabel1}
\end{table*}
Table \ref{tab:tabel1} shows that the network architecture \textbf{47-3-5-4} gives the highest identification accuracy and coefficient of determination on both training and testing data sets. Likewise, this architecture gives the smallest $MSE$ on both data sets. Table \ref{tab:tabel1} also shows that increasing the number of neurons in backpropagation with two hidden layers decreases the network performance. This is in line with Shafi et al. \cite{shafi}, who stated that increasing the number of neurons in the hidden layer only increases the complexity of the network without improving the accuracy of pattern recognition.
\subsection{Backpropagation (B-Prop) with One Hidden Layer}
\label{section:section32}
The backpropagation learning algorithm with one hidden layer was implemented for comparison with the two-hidden-layer results. The results obtained are shown in Table \ref{tab:tabel2}.
\begin{table*}
\caption{Backpropagation with one hidden layer}
\centering
\begin{tabular}{ccccccc}
\hline
\textbf{Network} & \multicolumn{2}{c}{\textit{\textbf{MSE}}} & \multicolumn{2}{c}{\textbf{Accuracy} (\%)} & \multicolumn{2}{c}{$R^2$} \\
\cline{2-7}
\textbf{Architecture} & \textbf{Training} & \textbf{Testing} & \textbf{Training} & \textbf{Testing} & \textbf{Training} & \textbf{Testing} \\
\hline
47-15-4 & 0.0721 & 0.0773 & 99.91 & 99.47 & 0.99 & 0.98 \\
\hline
\end{tabular}
\label{tab:tabel2}
\end{table*}
Table \ref{tab:tabel2} shows that the network architecture 47-15-4 identifies the clove buds origin effectively. The identification accuracy is 99.91\% and 99.47\% on training and testing data sets, respectively. The $MSE$ values are also smaller than those obtained with two hidden layers.
For the backpropagation algorithm, the results show that one hidden layer is better than two. This is in line with de Villiers and Barnard \cite{de1993}, who stated that a network architecture with one hidden layer is on average better than one with two hidden layers. They concluded that networks with two hidden layers are more difficult to train, and attributed this behavior to the local minimum problem: networks with two hidden layers are more prone to local minima during training.
\subsection{Resilient Propagation (R-Prop) with Two Hidden Layers}
The resilient propagation learning algorithm contains several parameters: upper and lower limits on the update values, and the decrease and increase factors. In this study, the update values are limited by the upper limit ($\Delta_{max}$) = 50 and the lower limit ($\Delta_{min}$) = $10^{-6}$, with decrease and increase factors ($\eta^{-}$) = 0.5 and ($\eta^{+}$) = 1.2, respectively. The reasons for choosing these values are given in \cite{riedmiller}.
Similar to Section \ref{section:section31}, the resilient propagation learning algorithm is applied to network architectures with two hidden layers. The number of neurons varies but does not exceed 15. There are four variations of network architecture: 47-3-5-4, 47-4-6-4, 47-5-7-4 and 47-6-8-4.
\begin{table*}
\caption{Resilient propagation with two hidden layers}
\centering
\begin{tabular}{ccccccc}
\hline
\textbf{Network} & \multicolumn{2}{c}{\textit{\textbf{MSE}}} & \multicolumn{2}{c}{\textbf{Accuracy} (\%)} & \multicolumn{2}{c}{$R^2$} \\
\cline{2-7}
\textbf{Architecture} & \textbf{Training} & \textbf{Testing} & \textbf{Training} & \textbf{Testing} & \textbf{Training} & \textbf{Testing} \\
\hline
47-3-5-4 & 0.07209 & 0.08111 & 99.73 & 97.37 & 0.98 & 0.95 \\
\hline
47-4-6-4 & 0.07162 & 0.08316 & 99.69 & 96.49 & 0.99 & 0.94 \\
\hline
\textbf{47-5-7-4} & \textbf{0.07160} & \textbf{0.07961} & \textbf{99.96} & \textbf{97.89} & \textbf{0.99} & \textbf{0.96} \\
\hline
47-6-8-4 & 0.07160 & 0.07978 & 99.78 & 97.72 & 0.99 & 0.96 \\
\hline
\end{tabular}
\label{tab:tabel3}
\end{table*}
The results in Table \ref{tab:tabel3} show that the network architecture \textbf{47-5-7-4} gives the highest identification accuracy for the clove buds origin: 99.96\% and 97.89\% on training and testing data sets, respectively.
\subsection{Resilient Propagation (R-Prop) with One Hidden Layer}
In this section, the resilient propagation learning algorithm is implemented with one hidden layer. As in Section \ref{section:section32}, the hidden layer has 15 neurons, giving the network architecture 47-15-4.
\begin{table*}
\caption{Resilient propagation with one hidden layer}
\centering
\begin{tabular}{ccccccc}
\hline
\textbf{Network} & \multicolumn{2}{c}{\textit{\textbf{MSE}}} & \multicolumn{2}{c}{\textbf{Accuracy} (\%)} & \multicolumn{2}{c}{$R^2$} \\
\cline{2-7}
\textbf{Architecture} & \textbf{Training} & \textbf{Testing} & \textbf{Training} & \textbf{Testing} & \textbf{Training} & \textbf{Testing} \\
\hline
47-15-4 & 0.07158 & 0.07932 & 99.86 & 94.74 & 0.99 & 0.92 \\
\hline
\end{tabular}
\label{tab:tabel4}
\end{table*}
Table \ref{tab:tabel4} shows that the network architecture 47-15-4 identifies the origin of clove buds with identification accuracies of 99.86\% and 94.74\% on training and testing data sets, respectively.
For the resilient propagation algorithm, both the two-hidden-layer and the one-hidden-layer architectures provide identification results with very high accuracy. However, the architecture with one hidden layer is slightly less accurate.
Tables \ref{tab:tabel3} and \ref{tab:tabel4} show that the two-layered resilient propagation with fewer neurons performs better than the single layer with more neurons. This is in line with Santra et al. \cite{santra}, who established that the performance of two hidden layers with 8-10 (18) neurons is better than that of one hidden layer with 62 neurons.
The best identification accuracies and determination coefficients are summarized in Figures \ref{fig:training}, \ref{fig:testing}, \ref{fig:r_training} and \ref{fig:r_testing}, respectively. For each network architecture, the smallest \textit{MSE} on training and testing data sets is shown in Figures \ref{fig:mse_training} and \ref{fig:mse_testing}, respectively.
\begin{figure}[h!]
\centering
\includegraphics[width=7cm,height=6cm]{Akurasi_Data_Training.png}
\caption{Identification accuracy percentage of training data sets.}
\label{fig:training}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=7cm,height=6cm]{Akurasi_Data_Testing.png}
\caption{Identification accuracy percentage of testing data sets.}
\label{fig:testing}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=7cm,height=6cm]{R_Training_New.png}
\caption{Determination coefficient of training data sets.}
\label{fig:r_training}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=7cm,height=6cm]{R_Testing_New.png}
\caption{Determination coefficient of testing data sets.}
\label{fig:r_testing}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=7cm,height=6cm]{MSE_Training.png}
\caption{$MSE$ of training data sets.}
\label{fig:mse_training}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=7cm,height=6cm]{MSE_Testing.png}
\caption{$MSE$ of testing data sets.}
\label{fig:mse_testing}
\end{figure}
The results of identifying the origins of clove buds have been obtained. For this small data set, backpropagation with one hidden layer provides accurate identification on both training and testing data sets. Accurate identification of the clove buds origin is also obtained using the resilient propagation algorithm with two hidden layers.
The neural network models obtained in this paper can serve as a scientific reference. For instance, they can be used in future studies to identify the origin of various plantation commodities from small metabolites data sets. At present, the most common way of determining the origin of a plantation commodity is qualitative, relying on the services of a flavorist to evaluate flavor and taste, since each commodity has a specific flavor and taste depending on its region of origin. Furthermore, results for clove buds data sets of different origins have not been reported in the literature, so no direct comparison can be presented in this paper.
\section{Conclusions}
This paper demonstrated the potential and ability of a neural network approach with backpropagation and resilient propagation learning algorithms to identify the clove buds origin based on metabolite composition. The work was divided into two parts. The first was identification of the clove buds origin using the backpropagation learning algorithm, with two network architectures containing one and two hidden layers. The results showed that one hidden layer identifies the clove buds origin accurately, namely 99.91\% and 99.47\% on training and testing data sets. The second part was identification using the resilient propagation learning algorithm, again with architectures of one and two hidden layers. The results showed that two hidden layers give accurate identification, namely 99.96\% and 97.89\% on training and testing data sets. From these results, it is concluded that for identifying the origin from a small metabolites data set of a plantation commodity, the backpropagation algorithm with one hidden layer or the resilient propagation algorithm with two hidden layers should be used. This paper also confirms the contribution of artificial neural networks to pattern recognition of metabolites data sets obtained by metabolic profiling.
\bibliographystyle{unsrt}
\section{Introduction}
The advent of high precision cosmic microwave background (CMB) experiments, such as \emph{Planck} \cite{Planck}, has recently motivated several authors to revisit the theory of cosmological
recombination pioneered by Peebles \cite{Peebles} and Zeldovich et al. \cite{Zeldovich_et_al} in the 1960s. The free electron fraction as a function of redshift $x_e(z)$ is one of the major
theoretical uncertainties in the prediction of the CMB temperature and polarization anisotropy power spectra \cite{Hu1995, Lewis06, Chluba10_uncertainties}. To obtain a recombination history accurate to
the percent level, it is necessary to account for a high number of excited states of hydrogen, up to a principal quantum number $n_{\max} = \mathcal{O}(100)$ \cite{Recfast_long,
Recfast_short}. The desired sub-percent accuracy can only be reached when explicitly resolving the out-of-equilibrium angular momentum substates, which requires
the multi-level atom (MLA) codes to follow
$N_{\rm level} = n_{\max}(n_{\max}+1)/2$ individual states. Moreover, the ordinary differential equations (ODEs) describing the level populations are stiff, requiring the
solution of large $N_{\rm level}\times N_{\rm level}$ systems of equations at each integration time step. This problem has been solved by several authors \cite{CRMS07, Grin_Hirata, Chluba_Vasil}, but each of these codes takes hours to days to run.
Eventually, it is necessary to be
able to produce not only accurate but also {\em fast} recombination histories, to be included in Markov Chain Monte Carlo (MCMC) codes for cosmological parameter estimation. The MCMC requires CMB power
spectra (and hence recombination histories) to be generated at each proposed point in cosmological parameter space, with a typical chain sampling ${\cal O}(10^5)$ points \cite{WMAP7p}. Furthermore,
dozens of MCMCs are often run with different combinations of observational constraints and different parameter spaces.
This makes it impractical to
include recombination codes that run for more than a few seconds in the MCMC.
One solution is to precompute
recombination histories $x_e(z| H_0, T_{\rm CMB}, \Omega_m h^2, \Omega_b h^2, Y_{\rm He}, N_{\nu})$ on a grid of cosmological parameters, and then use elaborate interpolation algorithms to evaluate the
recombination history for any cosmology \cite{RICO}, or to construct fitting functions \cite{Recfast_short, recfastimproved}. However, such procedures need to be re-trained every time additional
parameters are added, and are rather unsatisfying regarding their physical significance.
In this work we present a new method of solution for the recombination problem, perfectly equivalent to the standard MLA method, but much more efficient computationally. The basic idea is
that the vast majority of the excited hydrogen levels are populated and depopulated only by \emph{optically thin} radiative transitions (bound-bound and bound-free) in a bath of thermal photons; we show that their effect can be ``integrated out'' leaving
only a few functions of the matter and radiation temperatures $T_{\rm m}$ and $T_{\rm r}$ (this list would include the free electron density $n_e$ if
we incorporated collisions), which can be pretabulated. In an
actual call to the recombination code from an MCMC, it is then only necessary to solve an \emph{effective} MLA (hereafter, EMLA) with a smaller number of levels (perhaps only 3: $2s$, $2p$ and $3p$), which eliminates the
computationally difficult $N_{\rm level}\times N_{\rm level}$
system solution in the traditional MLA. [The idea is similar in spirit to the line-of-sight integral method for the
computation of the CMB power spectrum \cite{SZ.LOSI}, which eliminated a large number of independent variables from the cosmological perturbation theory system of ODEs (the high-order moments of the
radiation field, $\Theta_\ell$ for $\ell\gg1$) in favor of
pretabulated spherical Bessel functions.] Our method achieves a speed-up of the recombination calculation by 5 to 6 orders of magnitude.
We note that our method only eliminates the computational complexity associated with the high-$n$ excited states and does not include continuum processes and radiative transfer effects
that have been studied by previous authors \cite{DG05, CS06, Kholupenko06, WS07, CS08, Hirata_2photon, GD08, Hirata_Forbes, CS09a, CS09b, CS09c}. However, we note that there has been much progress in
analytic
treatments of
these effects \cite{Hirata_2photon, GD08, Hirata_Forbes}; ultimately, we expect to improve these analytic treatments and graft them (and an analytic treatment of helium recombination
\cite{He1,He2,He3,HeKIV,HeRCS}) on to
the
ultrafast EMLA code described herein to yield a recombination code
that is accurate enough for {\slshape Planck} data analysis.
This paper is organized as follows. In Section \ref{section:rates} we review the general picture of hydrogen recombination, and the bound-bound and bound-free transition rates involved in the
calculation. In Section \ref{section:std MLA} we describe the standard MLA method. We present our new EMLA method in Section \ref{section:new method} and demonstrate its equivalence with the standard MLA formulation. We describe our numerical implementation and results in Section \ref{section:results}, and conclude in Section \ref{section:conclusion}. Appendix \ref{app:invertibility} is dedicated to demonstrating the invertibility of the system defining the EMLA equations. Appendix \ref{app:complementarity} proves a complementarity relation between effective transition probabilities. We prove detailed balance relations between effective transition rates in Appendix \ref{app:db}. Appendix \ref{appendix:post-saha} exposes the post-saha approximation we use at early times when computing recombination histories.
\section{Bound-bound and bound-free transition rates} \label{section:rates}
The evolution of the free electron fraction is governed by the network of transitions between bound states of hydrogen as well as recombination and photoionization rates. Before giving detailed
expressions for these rates, let us first outline the general picture of the process of recombination.
It has long been known that direct recombinations to the ground state are ineffective for recombination \cite{Peebles, Zeldovich_et_al}, since the resulting emitted photons are immediately reabsorbed by
hydrogen in
the ground state, as soon as the neutral fraction is higher than $\sim 10^{-9}$. Electrons and protons can efficiently combine only to form hydrogen in excited states. The minute amount of excited
hydrogen at all relevant times during cosmological recombination is not sufficient to distort the blackbody radiation field near the ionization thresholds of the excited states. Recombination to the
excited states is therefore a thermal process: it depends on the matter temperature $T_{\rm m}$, which characterizes the velocity distribution of the free electrons and protons, and also on the radiation temperature
$T_{\rm r}$, since the abundant low-energy thermal photons can cause stimulated recombinations. Photoionization rates from excited states only depend on the radiation temperature since they do not involve free
electrons in the initial state.
Transitions between bound excited states may be radiative or collisional. Radiative transition rates are well known and depend only on the radiation temperature characterizing the blackbody radiation
field, undistorted in the vicinity of the optically thin lines from the Balmer series and beyond. Collisional transition rates are much less precisely known, but only depend on the matter temperature and
the abundance of charged particles causing the transitions (i.e. free electrons and free protons, which, once helium has recombined, have the same abundance $n_e = n_p$ due to charge neutrality).
Finally, some of the excited states can radiatively decay to the ground state. The most obvious route to the ground state is through Lyman transitions from the $p$ states. However, due to the very high
optical depth of these transitions, emitted Lyman photons are immediately reabsorbed by hydrogen atoms in their ground state. This ``bottleneck'' can only be bypassed by the systematic redshifting of
photons, which can escape re-absorption once their frequency is far enough below the resonant frequency of the line. The relevant transition rate in this case is the \emph{net} decay rate to the ground
state, which is a statistical average accounting for the very small escape probability of Lyman photons. Two-photon transitions are usually much slower than single-photon transitions. However, the rate of two-photon decays from the metastable $2s$ state is comparable to the net decay rate in the highly self-absorbed Lyman transitions, and this process should therefore be included in a recombination calculation \cite{Peebles,
Zeldovich_et_al}.
We now give explicit expressions for the bound-bound and bound-free rates discussed above. Subscripts $nl$ refer to the bound state of principal quantum number $n$ and angular momentum quantum number $l$.
We denote $\alpha_{\rm fs}$ the fine structure constant, $\mu_e \equiv m_em_p/(m_e + m_p)$ the reduced mass of the electron-proton system, $E_I$ the ionization energy of hydrogen, and $E_n \equiv
-E_In^{-2}$
the energy of the $n^{\rm th}$ shell. Finally, we denote by $f_{\rm BB}(E, T_{\rm r}) \equiv ( \textrm{e}^{E/T_{\rm r}} - 1 )^{-1}$ the photon occupation number at
energy $E$ in the blackbody radiation field at temperature $T_{\rm r}$.
\subsection{Recombination to and photoionization from the excited states}
The recombination coefficient to the excited state $nl$, including stimulated recombinations, is denoted $\alpha_{nl}(T_{\rm m}, T_{\rm r})$ (it has units of cm$^{3}$ s$^{-1}$). The photoionization rate per atom in
the state $nl$ is denoted $\beta_{nl}(T_{\rm r})$. Both can be expressed in terms of the bound-free radial matrix elements $g(n,l,\kappa,l')$ \cite{Burgess}. Defining
\begin{eqnarray}
\gamma_{nl}(\kappa) &\equiv& \frac2{3n^2} \alpha_{\rm fs}^3 \frac{E_I}{h} (1 + n^2 \kappa^2)^3 \nonumber \\
&&\times \sum_{l' = l\pm1} \max(l, l') g(n,l,\kappa,l')^2,
\end{eqnarray}
where $\kappa$ denotes the momentum of the outgoing electron in units of $\hbar/a_0$ (where $a_0$ is the reduced-mass Bohr radius),
the recombination coefficient is given by \cite{Burgess}:
\begin{eqnarray}
\alpha_{nl}(T_{\rm m}, T_{\rm r}) &=&\frac{ h^3}{(2 \pi \mu_e T_{\rm m})^{3/2}}
\nonumber \\ &&
\times \int_0^{+\infty}\textrm{e}^{-E_I \kappa^2 / T_{\rm m}}\gamma_{nl}(\kappa)
\nonumber \\ && \times
\left[1 + f_{\rm BB}\left(E_{\kappa n},T_{\rm r}\right)\right]\textrm{d} (\kappa^2),
\label{eq:alpha}
\end{eqnarray}
where $E_{\kappa n} \equiv E_I(\kappa^2 + n^{-2})$. The photoionization rate only depends on the radiation temperature and can be obtained by detailed balance considerations from the recombination
coefficient:
\begin{equation}
\beta_{nl}(T_{\rm r}) = \frac{(2 \pi \mu_e T_{\rm r})^{3/2}}{(2 l +1)h^3} \textrm{e}^{E_n /T_{\rm r} } \alpha_{nl}(T_{\rm m} = T_{\rm r}, T_{\rm r}).
\label{eq:beta-db}
\end{equation}
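As an illustration, the detailed balance relation above can be implemented directly once the recombination coefficient is tabulated; the following sketch works in cgs units with temperatures in kelvins (the text writes temperatures in energy units), and the constants are standard values rather than anything taken from this paper's code:
\begin{verbatim}
import numpy as np

h   = 6.62607e-27    # Planck constant [erg s]
kB  = 1.380649e-16   # Boltzmann constant [erg/K]
mue = 9.10442e-28    # reduced electron-proton mass [g]
E_I = 2.17870e-11    # hydrogen ionization energy [erg]

def beta_nl(alpha_nl_at_Tr, n, l, T_r):
    # alpha_nl_at_Tr: recombination coefficient evaluated at
    # T_m = T_r [cm^3/s]; T_r in K
    E_n = -E_I / n**2
    return ((2.0*np.pi*mue*kB*T_r)**1.5 / ((2*l + 1) * h**3)
            * np.exp(E_n / (kB*T_r)) * alpha_nl_at_Tr)
\end{verbatim}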
\subsection{Transitions between excited states}
We denote $R_{nl\rightarrow n'l'}$ the transition rate from the excited state $nl$ to the excited state $n'l'$. It has units of sec$^{-1}$ per atom in the initial state. Transitions among excited states
can be either radiative or collisional:
\begin{equation}
R_{nl\rightarrow n'l'} = R_{nl\rightarrow n'l'}^{\rm rad}(T_{\rm r}) + R_{nl\rightarrow n'l'}^{\rm coll}(T_{\rm m}, n_e),
\end{equation}
where $n_e = n_p$ is the abundance of free electrons or free protons. In this paper, we follow exclusively the radiative rates. These are given by
\begin{eqnarray}
R_{nl\rightarrow n'l'}^{\rm rad} = \Bigg{\{} \begin{array} {ccc} A_{nl,n'l'}\left[1 + f_{\rm BB}(E_{nn'}, T_{\rm r})\right]& & E_n > E_{n'}\\[10pt]
\frac{g_{l'}}{g_l} \textrm{e}^{-E_{n'n}/T_{\rm r}} R_{n' l'\rightarrow nl}^{\rm rad}& & E_n < E_{n'},
\end{array}\label{eq:Rab rad}
\end{eqnarray}
where $E_{nn'} \equiv E_n - E_{n'}$ is the energy difference between the excited levels, $g_l \equiv 2 l +1$ is the degeneracy of the state $nl$, and $A_{nl,n'l'}$ is the Einstein $A$-coefficient for the
$nl \rightarrow n'l'$ transition, which may be obtained from the radial matrix element $R_{n'l'}^{nl}$\cite{Bethe}:
\begin{equation}
A_{nl,n'l'} = \frac{2\pi}3 \alpha_{\rm fs}^3 \frac{E_I}{h}\left(\frac1{n'^2}-\frac1{n^2}\right)^3 \frac{\max(l, l')}{2l +1}|R_{n'l'}^{nl}|^2.
\end{equation}
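Given tabulated Einstein $A$-coefficients, the pair of radiative rates between two excited states follows directly from Eq.~(\ref{eq:Rab rad}); a minimal sketch, with energies and temperatures in the same energy units (the function names are ours):
\begin{verbatim}
import numpy as np

def f_BB(E, T_r):
    # blackbody photon occupation number
    return 1.0 / np.expm1(E / T_r)

def R_rad_pair(A, E_hi, E_lo, g_hi, g_lo, T_r):
    # downward rate: spontaneous plus stimulated decays
    down = A * (1.0 + f_BB(E_hi - E_lo, T_r))
    # upward rate from detailed balance with the downward one
    up = (g_hi / g_lo) * np.exp(-(E_hi - E_lo) / T_r) * down
    return down, up
\end{verbatim}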
\subsection{Transitions to the ground state}
Finally, the ground state population $x_{1s} \approx 1 - x_e$ evolves due to transitions from and into the $np$ and $2s$ states (two-photon transitions from higher energy states are dominated by ``1 +
1'' photon decays, already accounted for). Photons emitted in the Lyman lines are very likely to be immediately reabsorbed, and the only meaningful quantity for these transitions is the \emph{net} decay
rate in the line, which is a statistical average over a large number of atoms, and accounts for the very low escape probability of a photon emitted in the line. In the Sobolev approximation
\cite{Sobolev} with optical depth $\tau_{np,1s}\gg1$, the net
decay rate in the $np\rightarrow 1s$ transition is:
\begin{eqnarray}
\dot{x}_{1s}\big{|}_{np} = - \dot{x}_{np}\big{|}_{1s} &=& \frac{A_{np,1s}}{\tau_{np,1s}}\left(x_{np} - 3 x_{1s} f_{np}^+\right) \nonumber \\
&=& \frac{8 \pi H}{3 \lambda_n^3 n_{\rm H} x_{1s}}\left(x_{np} - 3 x_{1s} f_{np}^+\right),
\label{eq:dot.x_np}
\end{eqnarray}
where $\lambda_n \equiv h c /E_{n1}$ is the transition wavelength, and $f_{np}^+$ is the photon occupation
number at
the blue side of the corresponding Ly-$n$ line. In this paper, we will take $f_{np}^+ = f_{\rm BB}(E_{n1}, T_{\rm r})$, i.e. assume the incoming radiation on the blue side of the line has a blackbody spectrum. This
assumption is actually violated due to feedback from higher-frequency Lyman lines (e.g. radiation escaping from Ly$\beta$ can redshift into Ly$\alpha$) \cite{CS07, He1, Kholupenko_Deuterium}; while
our formalism is general
enough to incorporate different $f_{np}^+$, we have not yet implemented this in our code.
The $2s$ state cannot decay to the ground state through a radiatively allowed transition. This decay is however possible with a two-photon emission, which, although slow, is comparable in efficiency to
the highly self-absorbed Lyman transitions. The simplest expression for the net $2s\rightarrow 1s$ two-photon decay rate is:
\begin{eqnarray}
\dot{x}_{1s}\big{|}_{2s} = - \dot{x}_{2s}\big{|}_{1s} = \Lambda_{2s1s}\left(x_{2s} - x_{1s} \textrm{e}^{-E_{21}/T_{\rm r}}\right),
\label{eq:dot.x_2s}
\end{eqnarray}
where $\Lambda_{2s1s} \approx 8.22$ s$^{-1}$ is the total $2s\rightarrow 1s$ two-photon decay rate \cite{2photonrate}.
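A sketch of the two net decay rates to the ground state, Eqs.~(\ref{eq:dot.x_np}) and (\ref{eq:dot.x_2s}), in cgs units with temperatures in kelvins and a blackbody occupation on the blue side of the Ly-$n$ lines, as assumed above (constants are standard values; the function name is ours):
\begin{verbatim}
import numpy as np

h, kB, c = 6.62607e-27, 1.380649e-16, 2.99792458e10  # cgs
E_I = 2.17870e-11        # hydrogen ionization energy [erg]
Lambda_2s1s = 8.22       # two-photon 2s -> 1s rate [1/s]

def net_decays(x_np, x_2s, x_1s, n, H, n_H, T_r):
    E_n1 = E_I * (1.0 - 1.0 / n**2)      # Ly-n transition energy
    lam_n = h * c / E_n1                 # transition wavelength [cm]
    f_plus = 1.0 / np.expm1(E_n1 / (kB * T_r))  # blue-side occupation
    xdot_np = (8.0 * np.pi * H / (3.0 * lam_n**3 * n_H * x_1s)
               * (x_np - 3.0 * x_1s * f_plus))   # Sobolev net Ly-n rate
    E_21 = 0.75 * E_I                    # E_2 - E_1
    xdot_2s = Lambda_2s1s * (x_2s - x_1s * np.exp(-E_21 / (kB * T_r)))
    return xdot_np, xdot_2s
\end{verbatim}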
In each case, we denote the {\em net} downward rate in the $i\rightarrow 1s$ transition, where $i \in \{2s, 2p, 3p, ...\}$:
\begin{equation}
\dot{x}_{1s}\big{|}_{i} = - \dot{x}_i\big{|}_{1s} = x_i \tilde{R}_{i\rightarrow 1s} - x_{1s}\tilde{R}_{1s\rightarrow i},
\label{eq:xddown}
\end{equation}
where the rates $\tilde R$ depend on atomic physics, $T_{\rm r}$, and the optical depths in the Lyman lines.
Both the Sobolev approximation for the $np \rightarrow 1s$ transitions Eq.~(\ref{eq:dot.x_np}), and the simple expression Eq.~(\ref{eq:dot.x_2s}) for
the net $2s \rightarrow 1s$ two-photon decay do not account for subtle yet important radiative transfer effects. An accurate recombination calculation
should account for time-dependent effects in Ly$\alpha$ \cite{Hirata_2photon, CS09a}, a suite of two-photon continuum processes
\cite{CS06, Kholupenko06, Hirata_2photon, CS09b}, and resonant scattering in Ly$\alpha$ \cite{Hirata_Forbes, CS09c}. These are not included in the present code and we plan to add them in the
future using analytic treatments.
\section{The standard MLA method} \label{section:std MLA}
Although the standard MLA formulation does not make this distinction, we cast the excited states of hydrogen into two categories. On the one hand, most excited states are not directly radiatively
connected to the ground state. We call these states ``interior'' states and denote $X_K$ the fractional abundance of hydrogen in the interior state $K \in \{3s, 3d, 4s, 4d, 4f, 5s, ...\}$. On the other hand, the $2s$ and $np$ states ($n\geq 2$)
are directly radiatively connected with the ground state. We call these states ``interface'' states and denote $x_i$ the fractional abundance of hydrogen in the interface state $i \in \{2s, 2p, 3p ,...\}$.
In the standard MLA formulation, the free electron fraction $x_e(z)$ is evolved by solving the hierarchy of coupled differential equations: for the
interior states,
\begin{eqnarray}
\dot{X}_K &=& x_e^2 n_{\rm H} \alpha_K + \sum_{L} X_L R_{L \rightarrow K} + \sum_j x_j R_{j \rightarrow K} \nonumber\\
&-& X_K \Bigl(\beta_K + \sum_L R_{K \rightarrow L} + \sum_j R_{K \rightarrow j} \Bigr);
\label{eq:dot.X_K}
\end{eqnarray}
for the interface states,
\begin{eqnarray}
\dot{x}_i &=& x_e^2 n_{\rm H} \alpha_i + \sum_{L} X_L R_{L \rightarrow i} + \sum_j x_j R_{j \rightarrow i} + x_{1s} \tilde{R}_{1s\rightarrow i}\nonumber\\
&&- x_i \Bigl( \beta_i + \sum_L R_{i \rightarrow L} + \sum_j R_{i \rightarrow j} + \tilde{R}_{i \rightarrow 1s}\Bigr);
\label{eq:dot.x_i}
\end{eqnarray}
and for the free electrons and ground state,
\begin{equation}
\dot{x}_e = -\dot{x}_{1s} = x_{1s}\sum_i \tilde{R}_{1s\rightarrow i} - \sum_i x_i \tilde{R}_{i \rightarrow 1s}.\label{eq:dot.x_e}
\end{equation}
The radiative rates between excited states are many orders of magnitude larger than the rate at which recombination proceeds, which is of the order of
the Hubble rate. Even the relatively small net rates out of the interface states ($\Lambda_{2s,1s}$ and $A_{2p,1s}/\tau_{2p,1s}$) are still more than 12 orders of magnitude larger than the Hubble rate. The populations of the excited states can therefore be obtained to high accuracy in the steady-state approximation
(this approximation is ubiquitous in many problems and has long been used in the context of cosmological recombination \cite{Peebles, Hirata_2photon, Grin_Hirata}, where its accuracy has been tested explicitly \cite{Chluba_Vasil}).
Setting $\dot{X}_K$ and $\dot{x}_i$ to zero in Eqs.~(\ref{eq:dot.X_K}) and (\ref{eq:dot.x_i}), we see that the problem amounts to first solving a system of
linear algebraic equations for the $X_K, x_i$, with an inhomogeneous term depending on $x_e$, and then using the populations $x_i$ in
Eq.~(\ref{eq:dot.x_e}) to evolve the free electron fraction. The
solution of the system of equations (\ref{eq:dot.X_K}), (\ref{eq:dot.x_i}) needs to
be done at \emph{every time step}, since the inhomogeneous term of the equation depends on the ionization history, which explicitly depends on time as well as on the cosmological parameters. Recent work
\cite{Grin_Hirata, Chluba_Vasil} has shown that to compute sufficiently accurate recombination histories, one needs to account for excited states up to a principal quantum number $n_{\max}
\sim 100$, resolving the angular momentum substates. This requires
solving an $\mathcal{O}\left(10^4 \times 10^4\right)$ system of equations at each time step, which, even with modern computers,
is extremely time consuming.
\section{New method of solution: the effective multi-level atom}
\label{section:new method}
We now give a computationally efficient method of solution for the primordial recombination problem. We factor the effect of the numerous transitions involving interior states in terms of effective transitions into and out of the much smaller number of interface states. Once the rates of these effective transitions are tabulated, the cosmological evolution of the free electron fraction can be obtained from a simple effective few-level atom calculation. We describe the method in Section~\ref{ss:general} and give the proof of its exact
equivalence to the standard MLA method in Section~\ref{sec:equivalence}. In the subsequent Section~\ref{ss:nstar}, we consider which states should be treated as interface states.
\subsection{Motivations and general formulation}
\label{ss:general}
We first note that the only quantity of importance for CMB power spectrum calculations is the free electron fraction as a function of redshift, $x_e(z)$. The populations of the excited states are
calculated only as an intermediate step -- if they are desired (e.g. to calculate H$\alpha$ scattering features \cite{HRS}), the populations of the excited states can be obtained by solving
Eqs.~(\ref{eq:dot.X_K},
\ref{eq:dot.x_i}) once the free electron fraction is known. Furthermore, only the ``interface'' states $2s$ and $np$ are directly connected to the ground state and directly appear in the evolution equation for the free electron fraction Eq.~(\ref{eq:dot.x_e}). All other (``interior'') excited states are only connected with other excited states or with the continuum, through optically thin radiative transitions (and to a lesser extent through collisions \cite{CRMS07}). Interior states are only
transitional states: an electron in the ``interior'' rapidly transitions through spontaneous and stimulated decays or absorptions caused by the blackbody radiation field (or collisions with free
electrons and protons), until it is either photoionized, or reaches an interface state. There can be a very large number of transitions before any of these outcomes occurs, but the passage through the
``interior'' is always very short compared to the overall recombination timescale, and can be considered as instantaneous (for the same reason that the steady-state approximation is valid in the standard MLA formulation).
Instead of computing the fraction of hydrogen in each interior state $K$, one can rather evaluate the probabilities that an atom initially in the interior state $K$ ultimately reaches one of the
interface states or gets photoionized. Of course, after reaching an interface state, the atom may perfectly transition back to an interior state, or get photoionized. However, we consider the
probability of \emph{first} reaching a given interface state before any other one, which is uniquely defined. For an atom in the interior state $K$, we denote by $P_K^i$ the probability of ultimately
reaching the interface state $i$, and $P_{K}^e$ the probability of ultimately being photoionized. The probabilities $P_K^i$ must self-consistently account for both direct transitions $K \rightarrow i$ and all possible indirect transitions $K \rightarrow L \rightarrow i$ (with an arbitrary number of intermediate states). Mathematically, this translates to the system of linear equations:
\begin{equation}
P_K^i = \sum_L \frac{R_{K \rightarrow L}}{\Gamma_K} P_L^i + \frac{R_{K \rightarrow i}}{ \Gamma_K },
\label{eq:PKi}
\end{equation}
where $\Gamma_K$ is the total width (or inverse lifetime) of the state $K$:
\begin{equation}
\Gamma_K \equiv \sum_L R_{K \rightarrow L} + \sum_j R_{K \rightarrow j} + \beta_K. \label{eq:GammaK}
\end{equation}
Similarly, the $P_K^e$ must satisfy the self-consistency relations:
\begin{equation}
P_K^e = \sum_L \frac{R_{K \rightarrow L}}{\Gamma_K} P_L^e + \frac{\beta_K}{ \Gamma_K },
\label{eq:PKe}
\end{equation}
We show in Appendix \ref{app:invertibility} that these linear systems are invertible and therefore uniquely determine $P^i_K$ and $P^e_K$.
In Appendix~\ref{app:complementarity} we prove the complementarity
relation,
\begin{equation}
\sum_i P_K^i + P_K^e = 1,
\label{eq:complementarity}
\end{equation}
which has the simple physical interpretation that an atom in the $K$th interior state eventually reaches an interface state or is photoionized with unit probability.
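In matrix form (anticipating the notation of Section \ref{sec:equivalence}), the probabilities follow from two linear solves; a minimal sketch with the rates supplied as NumPy arrays, including the complementarity check of Eq.~(\ref{eq:complementarity}) (array names and shapes are ours):
\begin{verbatim}
import numpy as np

def interior_probabilities(R_KL, R_Ki, beta_K):
    # R_KL: (N, N) interior-interior rates with zero diagonal;
    # R_Ki: (N, n_*) interior-to-interface rates; beta_K: (N,)
    Gamma = R_KL.sum(axis=1) + R_Ki.sum(axis=1) + beta_K  # total widths
    M = np.diag(Gamma) - R_KL                             # rate matrix
    P_i = np.linalg.solve(M, R_Ki)   # column i holds P_K^i for all K
    P_e = np.linalg.solve(M, beta_K)
    assert np.allclose(P_i.sum(axis=1) + P_e, 1.0)  # complementarity
    return P_i, P_e
\end{verbatim}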
Once these probabilities are known, it is possible to describe the large number of transitions between all the states in a simplified manner, in terms of effective rates into and out of the interface states. To clarify the explanation, we illustrate in Figure \ref{fig:scheme} the processes described below.
\begin{figure}
\includegraphics[width = 86mm]{scheme.eps}
\caption{Schematic representation of the formulation of the recombination problem adopted in this work. Dotted arrows represent possibly numerous fast transitions within the ``interior''.}
\label{fig:scheme}
\end{figure}
An electron and a proton can effectively recombine to the interface state $i$ either through a direct recombination (with coefficient $\alpha_i$), or following a recombination to an interior state $K$ (with coefficient $\alpha_K$), from which a sequence of interior transitions may ultimately lead to the interface state $i$ with probability $P_K^i$. The effective recombination coefficient to the interface state $i$ is therefore:
\begin{equation}
\mathcal{A}_i \equiv \alpha_i + \sum_K \alpha_K P_K^i\label{eq:Ai}.
\end{equation}
Conversely, an atom in the interface state $i$ may effectively be ionized either through a direct photoionization (with rate $\beta_i$), or after being first excited to an interior state $K$ (with rate $R_{i \rightarrow K}$), from which the atom may ultimately be photoionized after a series of interior transitions with probability $P_K^e$. The effective photoionization rate from the interface state $i$ is therefore:
\begin{equation}
\mathcal{B}_i \equiv \beta_i + \sum_K R_{i\rightarrow K} P_K^e\label{eq:Bi}.
\end{equation}
Finally, atoms can effectively transition from an interface state $i$ to another interface state $j$, either through a direct transition if it is allowed, or after first transitioning through the interior. The effective transfer rate between the $i$th and $j$th interface states is therefore:
\begin{equation}
\mathcal{R}_{i\rightarrow j} \equiv R_{i \rightarrow j} + \sum_K R_{i \rightarrow K} P_K^j \ \ \ (j \neq i). \label{eq:Rij}
\end{equation}
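Once the probabilities are known, assembling the effective rates amounts to a few matrix products; a minimal sketch (array names and shapes are ours):
\begin{verbatim}
import numpy as np

def effective_rates(alpha_i, alpha_K, beta_i, R_iK, R_ij, P_i, P_e):
    # alpha_i, beta_i: (n_*,); alpha_K: (N,); R_iK: (n_*, N);
    # R_ij: (n_*, n_*) direct interface-interface rates, zero diagonal
    A_i = alpha_i + alpha_K @ P_i     # effective recombination
    B_i = beta_i + R_iK @ P_e         # effective photoionization
    Rcal = R_ij + R_iK @ P_i          # effective i -> j transfer
    np.fill_diagonal(Rcal, 0.0)       # only j != i enters the EMLA
    return A_i, B_i, Rcal
\end{verbatim}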
The rate of change of the population of
the interface state $i$ is therefore:
\begin{eqnarray}
\dot x_i &=& x_e^2 n_{\rm H} \mathcal{A}_i + \sum_{j\neq i} x_j\mathcal{R}_{j \rightarrow i} + x_{1s} \tilde{R}_{1s \rightarrow i} \label{eq:dot.x_i.new}\\
&-& x_i \Bigl( \mathcal{B}_i + \sum_{j\neq i} \mathcal{R}_{i\rightarrow j} + \tilde{R}_{i\rightarrow 1s}\Bigr),\nonumber
\end{eqnarray}
where we have included the effective transitions described above, as well as transitions from and to the ground state.
The system of equations (\ref{eq:PKi}--\ref{eq:dot.x_i.new}) is exactly equivalent to the standard MLA formulation, as we shall show in
Section ~\ref{sec:equivalence} below.
Let us now consider the dependences of the effective rates. In the purely radiative case, the probabilities $P_K^i$ and $P_K^e$ only depend on the
radiation temperature $T_{\rm r}$, since transitions between excited states and photoionizations only depend on the locally thermal radiation field. As a
consequence, the effective recombination rates $\mathcal{A}_i\left(T_{\rm m}, T_{\rm r} \right)$ are only functions of matter and radiation temperatures and the
effective photoionization and bound-bound rates $\mathcal{B}_i\left(T_{\rm r} \right)$ and $\mathcal{R}_{i\rightarrow j}\left(T_{\rm r} \right)$ are functions of the
radiation temperature only. When including collisional transitions, all effective rates become functions of the three variables $T_{\rm r}, T_{\rm m}$ and $n_e$.
In all cases, effective rates can be easily tabulated and interpolated when needed for a recombination calculation.
Intuitively, we would expect that $\mathcal{A}_i$, $\mathcal{B}_i$, and $\mathcal{R}_{i\rightarrow j}$ satisfy the detailed balance relations,
\begin{equation}
g_i \textrm{e}^{-E_i/T_{\rm r}} \mathcal{R}_{i\rightarrow j}(T_{\rm r}) = g_j \textrm{e}^{-E_j/T_{\rm r}} \mathcal{R}_{j\rightarrow i}(T_{\rm r})
\label{eq:DBR}
\end{equation}
and
\begin{equation}
g_i \textrm{e}^{-E_i/T_{\rm r}}\mathcal{B}_i(T_{\rm r}) = \frac{(2\pi\mu_eT_{\rm r})^{3/2}}{h^3} \mathcal{A}_i(T_{\rm m}=T_{\rm r},T_{\rm r}).
\label{eq:DBAB}
\end{equation}
We show in Appendix~\ref{app:db} that these equations are indeed valid. This means that we only need to tabulate half of the
$\mathcal{R}_{i\rightarrow j}$ [the other half can be obtained from Eq.~(\ref{eq:DBR})] and all the $\mathcal{A}_i$ [the $\mathcal{B}_i$ can be obtained from Eq.~(\ref{eq:DBAB}); in particular, we do not need to solve for the $P_K^e$].
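In practice these relations are two one-line conversions; a minimal sketch (assuming energies, temperatures and constants in mutually consistent units, with the sign conventions of the equations above) is:
\begin{verbatim}
import numpy as np

def upward_rate(R_ji, g_i, g_j, E_i, E_j, Tr):
    # R_{i->j}(Tr) from the tabulated downward rate R_{j->i}(Tr),
    # via the bound-bound detailed balance relation.
    return R_ji * (g_j / g_i) * np.exp(-(E_j - E_i) / Tr)

def effective_photoionization(A_i, g_i, E_i, Tr, mu_e, h):
    # B_i(Tr) from A_i(Tm = Tr, Tr), via the bound-free detailed
    # balance relation.
    return (A_i * (2.0 * np.pi * mu_e * Tr) ** 1.5 / h ** 3
            * np.exp(E_i / Tr) / g_i)
\end{verbatim}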
We note that the probabilities $P_K^i, P_K^e$ are a generalization of the cascade matrix technique introduced by Seaton
\cite{Seaton59}. Seaton's calculation assumed a vanishing ambient radiation field, so that electrons can only ``cascade down'' to lower energy
states. In the context of the recombination of the primeval plasma, one cannot ignore the strong thermal radiation field, and electrons rather ``cascade
up and down,'' following spontaneous and stimulated decays or photon absorption events. The spirit of our method is however identical to Seaton's
cascade-capture equations \cite{Seaton59}, where the ``cascading'' process is decoupled from the particular process populating the excited states, or
from the depopulation of the interface states.
\subsection{Equivalence with the standard MLA method} \label{sec:equivalence}
This section is dedicated to proving the equivalence of the EMLA equations, Eqs.~(\ref{eq:PKi}--\ref{eq:dot.x_i.new}), with the standard MLA
equations, Eqs.~(\ref{eq:dot.X_K}, \ref{eq:dot.x_i}), in
the steady-state limit for the interior states (i.e. where we set $\dot X_K\approx 0$). The steady-state approximation does not need to be made for the interface states to demonstrate the equivalence of the two formulations (but we do use it for practical computations since it is valid to very high accuracy).
We denote by $N$ the number of interior states and $n_*$ the number of interface states (we will address in Section \ref{ss:nstar} the issue of which states need to be considered as interface states).
We begin by defining the $N\times N$ rate matrix ${\bf M}$ whose elements are
\begin{equation}
M_{KL} \equiv \delta_{KL}\Gamma_K - \left(1 - \delta_{KL}\right) R_{K \rightarrow L}.
\label{eq:MKL}
\end{equation}
We also define the $n_* + 1$ length-$N$ vectors ${\bf P}^i, {\bf P}^e$ whose elements are the probabilities $P_K^i$ and $P_K^e$ respectively, and the $n_*+1$ length-$N$ vectors ${\bf R}^i, {\bf R}^e$ of components
\begin{eqnarray}
R^i_K &\equiv& R_{K\rightarrow i},\\
R_K^e &\equiv& \beta_K.
\end{eqnarray}
The defining equations for the probabilities, Eqs.~(\ref{eq:PKi}, \ref{eq:PKe}), can be written in matrix form ${\bf MP}^i={\bf R}^i$ and $\bf{MP}^e = {\bf R}^e$ respectively (after multiplication by $\Gamma_K$). We show in Appendix \ref{app:invertibility} that the matrix ${\bf M}(T_{\rm r})$ is invertible, for any temperature $T_{\rm r} \geq 0$. The formal solutions for the probabilities are therefore
\begin{eqnarray}
{\bf P}^i &=& {\bf M}^{-1}{\bf R}^i \label{eq:P-solve}\\
{\bf P}^e &=& {\bf M}^{-1}{\bf R}^e.\label{eq:Pe-solve}
\end{eqnarray}
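Numerically, the $n_*+1$ probability vectors solve the same linear system with different right-hand sides, so a single (sparse) factorization of ${\bf M}$ suffices; schematically, in Python:
\begin{verbatim}
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

def exit_probabilities(M, R_i, R_e):
    # M   : (N, N) rate matrix, dense or sparse
    # R_i : (N, ns) columns are the vectors R^i;  R_e : (N,)
    lu = splu(csc_matrix(M))                     # factor M once
    P_i = np.column_stack([lu.solve(R_i[:, k])   # one solve per state i
                           for k in range(R_i.shape[1])])
    P_e = lu.solve(R_e)
    return P_i, P_e                    # columns of P_i are the P^i
\end{verbatim}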
We also define the length-$N$ vector ${\bf X}$ which contains the populations of the interior states $X_K$, and the length-$N$ vector ${\bf S}$ of components
\begin{equation}
S_K \equiv x_e^2 n_{\rm H} \alpha_K + \sum_j x_j R_{j \rightarrow K}. \label{eq:S_K}
\end{equation}
A careful look at Eq.~(\ref{eq:dot.X_K}) in the steady-state
approximation ($\dot X_K=0$) shows that it is the matrix equation ${\bf M}^{\rm T}{\bf X} = {\bf S}$, which has the solution:
\begin{equation}
{\bf X}=\left({\bf M}^{\rm T}\right)^{-1}{\bf S} = \left({\bf M}^{-1}\right)^{\rm T}{\bf S}.
\label{eq:X-solve}
\end{equation}
Both Eqs.~(\ref{eq:dot.x_i}) and (\ref{eq:dot.x_i.new}) can be cast in the form
\begin{eqnarray}
\dot{x}_i &=& x_e^2 n_{\rm H} \alpha_i + \sum_{j\neq i} x_j R_{j\rightarrow i} + x_{1s} \tilde{R}_{1s\rightarrow i} \nonumber\\
&-& x_i \Big{(}\beta_i + \sum_{j\neq i} R_{i\rightarrow j} + \tilde{R}_{i \rightarrow 1s}\Big{)} + \dot{x}_i|_{\rm interior}.
\end{eqnarray}
The only a priori different term is the net transition rate from the interior to the state $i$, $\dot{x}_i|_{\rm interior}$. In the standard MLA formulation, Eq.~(\ref{eq:dot.x_i}), this term is
\begin{eqnarray}
\dot{x}_i|_{\rm interior}^{(\rm MLA)} &=& \sum_K \left(X_K R_K^i - x_i R_{i\rightarrow K}\right)\\
&=& {\bf X}^{\rm T} {\bf R}^i - x_i \sum_K R_{i \rightarrow K}. \label{eq:xdot.interior.mla}
\end{eqnarray}
With our new formulation, Eq.~(\ref{eq:dot.x_i.new}), using the definitions of the effective rates Eqs.~(\ref{eq:Ai}--\ref{eq:Rij}), the net transition rate from the interior to the state $i$ is:
\begin{eqnarray}
\dot{x}_i|_{\rm interior}^{(\rm EMLA)} &=& \sum_K \Bigg{[}x_e^2 n_{\rm H}\alpha_K P_K^i + \sum_{j\neq i}x_j R_{j\rightarrow K}P_K^i\nonumber\\
&-& x_i R_{i\rightarrow K} (P_K^e + \sum_{j \neq i} P_K^j)\Bigg{]}.\label{eq:dot.x_i.interior.new}
\end{eqnarray}
Using the complementarity relation Eq.~(\ref{eq:complementarity}), we rewrite $P_K^e + \sum_{j \neq i} P_K^j = 1 - P_K^i$. We then recognize that the common factor of $P_K^i$ is just the $K$-th component of the vector ${\bf S}$, Eq.~(\ref{eq:S_K}), so we can rewrite Eq.~(\ref{eq:dot.x_i.interior.new}) as
\begin{eqnarray}
\dot{x}_i|_{\rm interior}^{(\rm EMLA)} = {\bf S}^{\rm T} {\bf P}^i - x_i\sum_K R_{i\rightarrow K}. \label{eq:xdot.interior.fla}
\end{eqnarray}
From the formal solution for the populations of the interior states Eq.~(\ref{eq:X-solve}), we have
\begin{equation}
{\bf X}^{\rm T} {\bf R}^i = {\bf S}^{\rm T}{\bf M}^{-1} {\bf R}^i = {\bf S}^{\rm T} {\bf P}^i,
\end{equation}
where the second equality is obtained from the formal solution for the probabilities $P_K^i$, Eq.~(\ref{eq:P-solve}). We therefore see from Eqs.~(\ref{eq:xdot.interior.mla}) and (\ref{eq:xdot.interior.fla}) that
\begin{equation}
\dot{x}_i|_{\rm interior}^{(\rm MLA)} = \dot{x}_i|_{\rm interior}^{(\rm EMLA)},
\end{equation}
and hence the two formulations are \emph{exactly} equivalent. They only differ by the order in which the bilinear product $ {\bf S}^{\rm T}{\bf M}^{-1} {\bf R}^i $ is evaluated.
\subsection{Choice of interface states}
\label{ss:nstar}
If one naively includes all $np$ states up to $n = n_{\max} =\mathcal{O}(100)$ in the list of interface states, the interpolation of effective rates can become somewhat cumbersome as it involves
$\mathcal{O}\left(10^4\right)$ functions of one to three variables. However, only the lowest few of these states actually have significant transition rates to the ground state; indeed, most of the
decays to the ground state proceed through either $2s$ (two-photon decay) or $2p$ (Ly$\alpha$ escape), as anticipated in the earliest studies \cite{Peebles, Zeldovich_et_al}.
The rate of Lyman line escape is dominated by the lowest few lines. For example, if the relative populations of the excited states were given by the Boltzmann ratios (which is a good approximation until late times) then the net decay rate in the $np \rightarrow 1s$ transition (not accounting for feedback from the next line) would be proportional to
\begin{equation}
\dot x_{np\rightarrow 1s} \propto (1-n^{-2})^3 e^{-E_n/T_{\rm r}}.
\label{eq:Boltzmann}
\end{equation}
This relation would imply that the Ly$\beta$ escape rate is $<1$\% of the Ly$\alpha$ escape rate, and the higher-order Lyman lines contribute even less. Our previous computations of the escape rates (e.g.
Ref.~\cite{Hirata_2photon}) agree with this expectation. These considerations imply that for $n \geq 3,~ \dot{x}_{1s}|_{np} \ll \dot{x}_{1s}|_{2p}$ in Eq.~(\ref{eq:dot.x_e}). Moreover, an atom in the $np$ state with $n \geq 3$ is much more likely to spontaneously decay to $n's$ or $n'd$, with $2 \leq n' < n$, than to emit a Lyman-$n$ photon that successfully escapes the line. This implies that $\Big{|}\dot{x}_{np}|_{1s}\Big{|} \ll \Big{|}\dot{x}_{np}\Big{|}$ in Eq.~(\ref{eq:dot.x_i}).
In addition to a very low net decay rate out of the $np$ states for $n \geq 3$, feedback between neighboring lines further suppresses their efficiency as interface states. The few photons that escape the
Ly$(n+1)$ line will be reabsorbed almost certainly in the next lower line, after a redshift interval
\begin{equation}
\Delta z = z_{\rm em} - z_{\rm ab} = (1 + z_{\rm ab}) \left(\frac{E_{n+1,1}}{E_{n1}} - 1\right).
\end{equation}
Feedback between the lowest-lying lines is not instantaneous: $\Delta z/ (1 + z_{\rm ab}) = 0.185$ for Ly$\beta \rightarrow$Ly$\alpha$ feedback, $0.055$ for Ly$\gamma \rightarrow$Ly$\beta$, and
0.024 for Ly$\delta \rightarrow$Ly$\gamma$. However, for higher-order lines, feedback rapidly becomes nearly instantaneous as $\Delta z /(1 + z_{\rm ab}) \sim 2/n^3$.
Thus the effect of the higher Lyman lines is even weaker than Eq.~(\ref{eq:Boltzmann}) would suggest.
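These redshift intervals follow directly from the hydrogen level energies $E_{n1} \propto 1-n^{-2}$; a short numerical check (illustrative only) is:
\begin{verbatim}
# Delta z / (1 + z_ab) for Ly(n+1) -> Ly(n) feedback, using
# E_{n1} proportional to (1 - 1/n^2):
for n in (2, 3, 4):
    E_up = 1.0 - 1.0 / (n + 1) ** 2   # E_{n+1,1}, in units of E_inf
    E_lo = 1.0 - 1.0 / n ** 2         # E_{n,1}
    print(n, E_up / E_lo - 1.0)       # prints 0.185, 0.055, 0.024
\end{verbatim}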
Recent work \cite{Kholupenko_Deuterium} has shown that including lines above Ly$\beta$ results in a fractional error $|\Delta x_e|/x_e$ of at most $\approx 3\times 10^{-4}$.
We therefore conclude that very accurate recombination histories can be obtained by
only including $2s, 2p ,..., n_*p$ as interface states and neglecting higher-order Lyman transitions altogether.
We will use $n_*=3$ in this paper, and investigate the optimal value of this cutoff more quantitatively in future work.
Our formulation in terms of effective transition rates and interface states is therefore much better adapted for a fast recombination calculation than
the standard MLA formulation. To compute accurate recombination histories, explicitly accounting for high-$n$ shells of hydrogen, one first needs to
tabulate the $\{\mathcal{A}_i\}$ and $\{\mathcal{R}_{i\rightarrow j}\}$ on temperature grids.
The computation of the effective rates is the time-consuming
part of the calculation; however, since they are independent of the cosmological parameters, this can be done once, and not repeated for each cosmology. The free electron fraction can
then be computed very quickly for any given cosmology by solving the $n_* + 1$ equations (\ref{eq:dot.x_i.new}) and (\ref{eq:dot.x_e}), interpolating
the effective rates from the precomputed tables. Note that Eq.~(\ref{eq:dot.x_i.new}) is a simple $n_* \times n_*$ system of linear algebraic equations
in the steady-state approximation.
\section{Implementation and results}
\label{section:results}
Here we give some details on the implementation of our EMLA code. Section \ref{sec:eff.rates.comp} describes the computation of the effective rates (the computationally expensive part of the calculation, which needs to be done only once). Section \ref{sec:efla.code} describes the implementation of the ultrafast effective few-level atom calculation. We show our recombination histories and compare our results with the existing standard MLA code \textsc{RecSparse} \cite{Grin_Hirata} in Section \ref{sec:compare}.
\subsection{Computation of the effective rates}\label{sec:eff.rates.comp}
We have implemented the calculation of the effective rates in the purely radiative case. Bound-free rates were computed by numerically integrating
Eq.~(\ref{eq:alpha}) using an 11-point Newton-Cotes method, where the radial matrix elements $g(n, l, \kappa, l')$ were obtained using the recursion
relation given by Burgess \cite{Burgess}. Einstein $A$-coefficients were computed by using the recursion relations obtained by Hey \cite{Hey06} for the
radial matrix elements $R_{n'l'}^{nl}$. Finally, we obtained the probabilities $P_K^i$ using a sparse matrix technique similar to that of
Ref.~\cite{Grin_Hirata} when solving Eq.~(\ref{eq:PKi}). We accounted explicitly for all excited states up to a principal quantum number $n_{\max}$, resolving angular momentum substates. We tabulated the effective rates ${\cal A}_i(T_{\rm m},T_{\rm r})$ on a grid of 200 log-spaced points in $T_{\rm r}$ from 0.04 to 0.5 eV and 20 linearly spaced points
in $T_{\rm m}/T_{\rm r}$ from 0.8 to 1.0, and ${\cal R}_{i\rightarrow j}(T_{\rm r})$ on the grid of points in $T_{\rm r}$. The maximum
relative change in the effective rates ($\Delta\ln\mathcal{A}_i$ or $\Delta\ln\mathcal{R}_{i\rightarrow j}$) over the whole range of temperatures considered is
0.051 when comparing $n_{\max}=64$ vs. 128, 0.015 when comparing $n_{\rm max}=128$ vs. 250, and 0.005 when comparing $n_{\rm max}=250$ vs. 500.
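For reference, the grids just described are (a sketch of the layout, not the original tabulation code):
\begin{verbatim}
import numpy as np

Tr_grid    = np.geomspace(0.04, 0.5, 200)   # radiation temperature [eV]
ratio_grid = np.linspace(0.8, 1.0, 20)      # Tm / Tr
# A_i(Tm, Tr) is tabulated on the (ratio_grid x Tr_grid) product;
# R_{i->j}(Tr) is tabulated on Tr_grid alone.
\end{verbatim}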
In the left panel of Figure \ref{fig:fudge}, we show the total effective recombination coefficient $\mathcal{A}_{\rm B}(T_{\rm m}, T_{\rm r}) \equiv \mathcal{A}_{2s}(T_{\rm m}, T_{\rm r}) + \mathcal{A}_{2p}(T_{\rm m}, T_{\rm r})$ computed for $n_* = 2$ (i.e. with interface states $2s$ and $2p$ only, neglecting all Lyman transitions above Lyman-$\alpha$), normalized to the case-B recombination coefficient $\alpha_{\rm B}(T_{\rm m})$. Note that $\alpha_{\rm B}(T_{\rm m})$ is just $\mathcal{A}_{\rm B}(T_{\rm m}, T_{\rm r} = 0| n_{\max} = \infty)$ with our notation; indeed, for $T_{\rm r} = 0$, $\beta_K = 0$ and therefore $P_K^e = 0$ for all $K$ so $\sum_i P_K^i = 1$ and hence $\sum_i\mathcal{A}_i = \sum_i \alpha_i + \sum_K \alpha_K = \sum_{nl} \alpha_{nl}$, where the last sum is over all excited states. We can see that as the radiation temperature increases (i.e. as the redshift increases in Figure \ref{fig:fudge}), the convergence with $n_{\max}$ becomes faster. This is to be expected, since for higher $T_{\rm r}$, highly excited hydrogen is more easily photoionized, i.e. $P_{nl}^e$ becomes closer to unity. In that case adding more shells to the calculation does not matter so much because recombinations to the highest shells are very inefficient, due to the high probability of a subsequent photoionization.
In the right panel of Figure \ref{fig:fudge}, we show the ratio $\mathcal{A}_{2s}(T_{\rm m}, T_{\rm r})/\mathcal{A}_{\rm B}(T_{\rm m}, T_{\rm r})$, which is the fraction of recombinations to the $n = 2$ shell that are to the $2s$ level. This fraction is in general different from the intuitive value of 1/4, and its exact value depends on temperature.
\begin{figure*}
\includegraphics[width = 86mm]{fudge_nmax.eps}
\includegraphics[width = 86mm]{fracA2s.eps}
\caption{\emph{Left panel}: ``Exact fudge factor'' as a function of redshift $\mathcal{A}_{\rm B}(T_{\rm m}, T_{\rm r}) / \alpha_{\rm B}(T_{\rm m})$, for several values of $n_{\max}$, using $T_{\rm m}(z)$ computed by \textsc{RecSparse} for cosmological parameters as
in Ref.~\cite{Grin_Hirata}. We use the fit of Ref.~\cite{alphaB} for the case-B recombination coefficient $\alpha_{\rm B}(T_{\rm m})$. For
comparison, the code \textsc{Recfast} uses a constant fudge factor $F = 1.14$ to mimic the effect of high-$n$ shells. \emph{Right panel}: Fraction of the effective recombinations to the $n=2$ shell that lead to atomic hydrogen in the $2s$ state. In both cases the effective rates were computed for $n_* = 2$, i.e. with interface states $2s$ and $2p$ only, neglecting escape from the Lyman $\beta,\gamma,...$ lines.}
\label{fig:fudge}
\end{figure*}
\subsection{Ultrafast EMLA code}\label{sec:efla.code}
In order to actually compute the recombination history, we require an evolution equation for the free electron fraction,
\begin{equation}
\dot{x}_e(x_e, n_{\rm H}, H, T_{\rm m}, T_{\rm r}),
\end{equation}
and in some cases a similar equation for $\dot{T}_{\rm m}$. For concreteness, we implement the case of 3 interface states $i\in\{2s,2p,3p\}$ ($n_*=3$).
To compute $\dot x_e$, we first obtain the downward ${\cal R}_{i\rightarrow j}(T_{\rm r})$ from our table via cubic
polynomial (4-point) interpolation and ${\cal A}_{i}(T_{\rm m},T_{\rm r})$ via bicubic interpolation (2-dimensional in $\ln T_{\rm r}$ and $T_{\rm m}/T_{\rm r}$ using $4\times 4$
points). The upward ${\cal R}_{j\rightarrow i}(T_{\rm r})$ are obtained using Eq.~(\ref{eq:DBR}) and the effective photoionization rates ${\cal B}_i(T_{\rm r})$ are obtained using Eq.~(\ref{eq:DBAB}). We then solve for the $\{x_i\}$ using
Eq.~(\ref{eq:dot.x_i.new}), and finally obtain $\dot x_e$ using Eq.~(\ref{eq:dot.x_e}).
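The 4-point rule is ordinary cubic Lagrange interpolation; a one-dimensional sketch is given below (the bicubic case applies the same stencil along each axis in turn; the stencil-placement details here are an illustrative choice, not necessarily those of our code):
\begin{verbatim}
import numpy as np

def interp4(xgrid, ygrid, x):
    # Cubic Lagrange interpolation through the 4 tabulated points
    # surrounding x; xgrid must be sorted with length >= 4.
    k = np.clip(np.searchsorted(xgrid, x) - 2, 0, len(xgrid) - 4)
    xs, ys = xgrid[k:k + 4], ygrid[k:k + 4]
    out = 0.0
    for j in range(4):
        w = 1.0
        for m in range(4):
            if m != j:
                w *= (x - xs[m]) / (xs[j] - xs[m])
        out += w * ys[j]
    return out
\end{verbatim}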
The matter temperature is determined by the Compton evolution
equation,
\begin{equation}
\dot{T}_{\rm m} = -2HT_{\rm m} + \frac{8\sigma_{\rm T}a_{\rm r}T_{\rm r}^4x_e(T_{\rm r}-T_{\rm m})}{3(1+f_{\rm He}+x_e)m_ec},
\label{eq:Tm-evol}
\end{equation}
where $\sigma_{\rm T}$ is the Thomson cross section, $a_{\rm r}$ is the radiation constant, $f_{\rm He}$ is the He:H ratio by number of nuclei, $m_e$
is the electron mass, and $c$ is the speed of light. At high redshift, one may use the steady-state solution (see Appendix A of
Ref.~\cite{Hirata_2photon}),
\begin{equation}
T_{\rm m} \approx T_{\rm m,ss} = T_{\rm r} \left[ 1 + \frac{3(1+f_{\rm He}+x_e)m_ec H}{8\sigma_{\rm T}a_{\rm r}T_{\rm r}^4x_e} \right]^{-1}.
\label{eq:Tm-ss}
\end{equation}
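In code, the steady-state solution is a single expression (sketch; all constants must be supplied in mutually consistent units):
\begin{verbatim}
def Tm_steady_state(Tr, xe, H, fHe, sigma_T, a_r, m_e, c):
    # Steady-state matter temperature in the tight Compton-coupling
    # limit, Eq. (Tm-ss).
    return Tr / (1.0 + 3.0 * (1.0 + fHe + xe) * m_e * c * H
                     / (8.0 * sigma_T * a_r * Tr ** 4 * xe))
\end{verbatim}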
At the highest redshifts, the ODE describing hydrogen recombination is stiff; therefore for $z > 1570$ we follow the recombination history using perturbation theory around the Saha approximation, as described in Appendix \ref{appendix:post-saha}. At $500<z<1570$ we use Eq.~(\ref{eq:Tm-ss}) to set the matter temperature, and a
fourth-order Runge-Kutta integration algorithm (RK4) to follow the single ODE for $x_e(z)$; and at $z<500$ we use RK4 to follow the two ODEs for
$x_e(z)$ and $T_{\rm m}(z)$ simultaneously. The integration step size is $\Delta z = -1.0$ (negative since we go from high to low redshifts).
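Schematically, the low-redshift stage is a standard fixed-step RK4 loop over the state vector $(x_e, T_{\rm m})$, with a derivative routine assumed to wrap Eqs.~(\ref{eq:dot.x_e}) and (\ref{eq:Tm-evol}) converted to redshift derivatives (sketch only):
\begin{verbatim}
import numpy as np

def rk4_step(deriv, z, y, dz=-1.0):
    # One fixed-step RK4 update; y = np.array([x_e, T_m]).
    k1 = deriv(z, y)
    k2 = deriv(z + dz / 2, y + dz / 2 * k1)
    k3 = deriv(z + dz / 2, y + dz / 2 * k2)
    k4 = deriv(z + dz, y + dz * k3)
    return y + dz / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
\end{verbatim}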
\subsection{Results and code comparison} \label{sec:compare}
We have tabulated the effective rates for $n_{\max} = 16$, 32, 64, 128, 250 and 500. It is in principle possible to compute the effective rates for an arbitrarily high $n_{\max}$, but it is not meaningful to do so as long as collisional transitions are not properly accounted for. The recurring computation time of our ultrafast EMLA code is 0.08 seconds on a MacBook laptop computer with a 2.1 GHz processor, independently of $n_{\max}$. Our recombination histories are shown in Figure~\ref{fig:xehist}. We compared our results with the existing standard MLA code \textsc{RecSparse} for $n_{\max} = 16$, 32, 64, 128 and 250. As can be seen in Figure~\ref{fig:ufcompare}, the two codes agree to better than $8\times 10^{-5}$
across the range $200<z<1600$, despite having different methods for accounting for the excited states, and independent implementations for matrix elements and ODE integration.
\begin{figure*}
\includegraphics[width=86mm]{xe_change.eps}
\includegraphics[width=86mm]{logxe.eps}
\caption{\label{fig:xehist}\emph{Left panel}: Relative differences between recombination histories computed with successively more accurate effective rates. \emph{Right panel}: Recombination history for effective rates computed with $n_{\max} = 500$, i.e. accounting explicitly for 125,250 states of the hydrogen atom.}
\end{figure*}
\begin{figure}
\includegraphics[width=86mm]{ufcompare.eps}
\caption{\label{fig:ufcompare}A comparison of our ultrafast code to \textsc{RecSparse} \cite{Grin_Hirata}, for different values of $n_{\max}$. The vertical axis is the fractional
difference in free electron abundance rescaled by $10^5$ (positive indicating that \textsc{RecSparse} gives a larger $x_e$). We see that the maximum fractional
deviation is $<8\times 10^{-5}$. The feature around $z = 1540$ is due to a timestep change in \textsc{RecSparse}.}
\end{figure}
\section{Conclusions and future directions}
\label{section:conclusion}
We have shown that the computation of primordial hydrogen recombination can be factored into two independent calculations. On the one hand, most
excited states are not directly radiatively connected to the ground state, and undergo transitions caused by the thermal bath of blackbody photons at
the relevant frequencies, as well as the thermal electrons and protons. One can account for these numerous transitions with effective transition rates into and out of the ``interface''
states which are connected to the ground state. The computationally intensive aspect
of a recombination calculation in fact resides in the evaluation of these effective rates, which are functions of matter and radiation temperature only. Since this calculation is independent of cosmological
parameters, it can be done once and for all, prior to any recombination calculation. A simple effective few-level atom can then be evolved for any set of
cosmological parameters, without any need for ``fudge factors'' or approximations.
This work does \emph{not} present a final recombination code satisfying the accuracy requirements for future CMB experiments. Firstly, collisional
transitions were not included. They may be particularly important for the high-$n$ states. The effective rates computed here are therefore only
approximating the correct rates in the limit of zero density. Our formalism is general and collisions can be included as soon as accurate rates are
available (the main change would be that the interpolation tables would require $\ln n_e$ as an additional independent variable). Secondly, we have not included important radiative transfer effects,
such as feedback between low-lying Lyman lines \cite{CS07,
Kholupenko_Deuterium}, two-photon decays from $n\ge 3$ \cite{DG05, WS07, CS08, Hirata_2photon, CS09b}, resonant scattering in
Ly$\alpha$ \cite{GD08, Hirata_Forbes, CS09c}, or overlap of the high-lying Lyman lines (work in preparation). To preserve the computational efficiency of
our method, fast analytic approximations have to be developed to include these effects, which will be the subject of future work.
\section*{Acknowledgements}
We thank Dan Grin for numerous useful and stimulating conversations, and for providing data from \textsc{RecSparse} computations for code comparison. We
also acknowledge fruitful conversations with the participants of the July 2009 Paris Workshop on Cosmological Recombination. We thank Dan Grin, Marc Kamionkowski and Jens Chluba for a careful reading of the draft of this paper. Y. A.-H. and C.H. are
supported by the U.S. Department of Energy (DE-FG03-92-ER40701) and the National Science Foundation (AST-0807337). C. H. is supported by the Alfred P.
Sloan Foundation.
\section{Introduction}
The delayed feedback system of the form
\begin{align}\label{eq:nl}
\dot x = F\Bigl(x, \int_0^{\infty} [d\eta(\tau)] \cdot g\bigl(x(t-\tau),\tau\bigr) \Bigr),
\end{align}
is a model paradigm in biology and physics \citep{monk2003, atay2003, adimy2006, rateitschak2007, eurich2005, meyer2008}. The first argument is the instantaneous part and the second one, the delayed or retarded part, which forms a feedback loop. The function $\eta$ is a cumulative distribution of delays and $F$ and $g$ are nonlinear functions satisfying $F(0,0)=0$ and $g(0,\tau)=0$. When $F:\text{R}^d \times \text{R}^{d \times d} \to \text{R}^d$ and $g:\text{R}^d \times \text{R} \to \text{R}^{d \times d}$ are smooth functions, the stability of $x=0$ is given by the linearized form,
\begin{align}
\dot x = -A x - \int_0^{\infty} [B(\tau) \cdot d \eta(\tau)] x(t-\tau).
\end{align}
The coefficients $A$ and $B(\tau) \in \text{R}^{d \times d}$ are the Jacobian matrices of the instantaneous and the delayed parts, $\eta: [0,\infty) \to \text{R}^{d \times d}$ is the distribution of delays and $(\cdot)$ is the pointwise matrix multiplication. In biological applications, discrete delays in the feedback loop are often used to account for the finite time required to perform essential steps before $x(t)$ is affected. This includes maturation and growth times needed to reach reproductive age in a population \citep{hutchinson1948,mackey1978}, signal propagation along neuronal axons \citep{campbell2007}, and post-translational protein modifications \citep{monk2003,bernard2006b}. Introduction of a discrete delay can generate complex dynamics, from limit cycles to chaos \citep{sriram2008}. Linear stability properties of scalar delayed equations are fairly well characterized. However, lumping intermediate steps into a delayed term can produce broad and atypical delay distributions, and it is not clear how that affects the stability compared to a discrete delay \citep{campbell2009}.
Here, we study the stability of the zero solution of a scalar ($d=1$) differential equation with distributed delays,
\begin{align}\label{eq:x}
\dot x & = - a x - b \int_0^{\infty} x(t-\tau) d\eta(\tau).
\end{align}
The solution $x(t) \in \text{R}$ is the deviation from the zero steady state of equation (\ref{eq:nl}). Coefficients $a=-\text{D}_1F(0,0) \in \text{R}$ and $b = -\text{D}_2F(0,0) \neq 0$, and the integral is taken in the Riemann-Stieltjes sense. We assume that $\eta$ is a cumulative probability distribution function, i.e. $\eta: \text{R} \to [0, 1]$ is nondecreasing, piecewise continuous to the left, $\eta(\tau)=0$ for $\tau<0$ and $\eta(+\infty)=1$. Additionally, we assume that there exists $\nu>0$ such that
\begin{align}\label{eq:nu}
\int_0^{\infty} e^{\nu \tau} d\eta(\tau) < \infty.
\end{align}
This last condition implies that the mean delay value is finite,
\begin{align*}
E & = \int_0^{\infty} \tau d\eta(\tau) < \infty.
\end{align*}
The corresponding probability density function is $f(\tau)$ given by $d \eta(\tau) = f(\tau) d\tau$, where the derivative is taken in the generalized sense. The distribution can be continuous, discrete, or a mixture of continuous and discrete elements. When it is a single discrete delay (a Dirac mass), the asymptotic stability of the zero solution of equation (\ref{eq:x}) is fully determined by the following theorem, due to Hayes \citep{hayes1950},
\begin{theorem}\label{th:hayes}
Let $f(\tau) = \delta(\tau-E)$ be a Dirac mass at $E$. The trivial solution of equation (\ref{eq:x}) is asymptotically stable if and only if $a > |b|$, or if $b>|a|$ and
\begin{equation}\label{eq:Emax}
E < \frac{\arccos(-a/b)}{\sqrt{b^2-a^2}}.
\end{equation}
\end{theorem}
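For reference, the criterion of Theorem \ref{th:hayes} is immediate to evaluate numerically (an illustrative sketch; boundary cases are reported as not asymptotically stable):
\begin{verbatim}
import numpy as np

def hayes_stable(a, b, E):
    # Theorem 1: asymptotic stability of x' = -a x - b x(t - E).
    if a > abs(b):
        return True
    if b > abs(a):
        return E < np.arccos(-a / b) / np.sqrt(b**2 - a**2)
    return False
\end{verbatim}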
There is a Hopf point if the characteristic equation of equation (\ref{eq:x}) has a pair of imaginary roots and all other roots have negative real parts. For a discrete delay, the Hopf point occurs when equality in (\ref{eq:Emax}) is satisfied. Moreover, for any distribution $\eta$, there is a zero root along the line $-a=b$. At $-a=b=1/E$, there is a double zero root. When $a>-1/E$, all other roots have negative real parts, but when $a<-1/E$, there is one positive real root. Thus, the stability depends on $\eta$ if and only if $b>|a|$. Moreover, only a Hopf point can occur when $b>|a|$. Therefore, a distribution of delays can only destabilize equation (\ref{eq:x}) through a Hopf point, and only when $b>|a|$. This is a common situation when the feedback acts negatively on the system ($\text{D}_2F(0,0)<0$) to cause oscillations.
Assuming $b>0$ and making the change of timescale $t \to bt$, we have $a \to a/b$, $b \to 1$ and $\eta(\tau) \to \eta(b\tau)$. Equation (\ref{eq:x}) can be rewritten as
\begin{align}\label{eq:xx}
\dot x & = - a x - \int_0^{\infty} x(t-\tau) d\eta(\tau).
\end{align}
The aim of this paper is to study the effect of delay distributions on the stability of the trivial solution of equation (\ref{eq:xx}), therefore, we focus on the region $|a|<1$. To emphasize the relation between the stability and the delay distribution, we will say that $\eta$ (or $f$) is stable if the trivial solution of equation (\ref{eq:xx}) is stable, and that $\eta$ (or $f$) is unstable if the trivial solution is unstable.
It has been conjectured that among distributions with a given mean $E$, the discrete delay is the least stable one \citep{bernard01,atay2008}. If this were true, according to Theorem \ref{th:hayes}, all distributions would be stable provided that
\begin{equation}\label{eq:sc}
E < \frac{\arccos(-a)}{\sqrt{1-a^2}}.
\end{equation}
This conjecture has been proved for $a=0$ using Lyapunov-Razumikhin functions \citep{krisztin1990}, and for distributions that are symmetric about their means [$f(E-\tau) = f(E+\tau)$] \citep{miyazaki1997,bernard01,atay2008,kiss2009}. It has been observed that in general, a greater relative variance provides a greater stability, a property linked to geometrical features of the delay distribution \citep{anderson1991}. There are, however, counter-examples to this principle, and there is no proof that for $a \neq 0$ the least stable distribution is the single discrete delay. It is possible to lump the non-delayed term into the delay distribution using the condition found in \citep{krisztin1990}, but the resulting stability condition, $E/(1+a)<\pi/2$, is not optimal. Here, we show that if inequality (\ref{eq:sc}) holds, all distributions are asymptotically stable. That is, distributed delays stabilize negative feedback loops.
In section \ref{s:pre}, we set the stage for the main stability results. In section \ref{s:d}, we show the stability for distributions of discrete delays. In section \ref{s:g}, we present the generalization to any distributions and in section \ref{s:b}, we provide illustrative examples.
\section{Preliminary results}\label{s:pre}
Let $\eta$ be a distribution with mean $1$. We consider the family of distributions
\begin{align}\label{eq:scale}
\eta_E(\tau) = \begin{cases}
\eta(\tau/E), & E>0, \\
H(\tau), & E=0.
\end{cases}
\end{align}
where $H(\tau)$ is the step or Heaviside function at 0. The distribution $\eta_E$ has a mean $E \geq 0$. The characteristic equation of equation (\ref{eq:xx}), obtained by making the ansatz $x(t) = \exp(\lambda t)$, is
\begin{align}\label{eq:ce}
\lambda + a + \int_0^\infty{e^{-\lambda \tau} d\eta_E(\tau)} = 0.
\end{align}
When condition (\ref{eq:nu}) is satisfied, the distribution $\eta_E$ is asymptotically stable if and only if all roots of the characteristic equation have a negative real part $\text{Re}(\lambda)<0$ \citep{stepan1989}. Condition (\ref{eq:nu}) guarantees that there is no sequence of roots with real parts converging to a non-negative value. The leading roots of the characteristic equations are therefore well defined. When $E=0$, i.e. when there is no delay, there is only one root, $\lambda < 0$. When $E>0$, the characteristic equation has pure imaginary roots $\lambda = \pm i \omega$ only if $0<\omega<\omega_c = \sqrt{1-a^2}$. Thus, the search for the boundary of stability can be restricted to imaginary parts $\omega \in (0, \omega_c]$ \citep{bernard01}.
We define
\begin{align}
C(\omega) = & \int_{0}^{\infty} \cos(\omega \tau) d\eta_E(\tau), \\
S(\omega) = & \int_{0}^{\infty} \sin(\omega \tau) d\eta_E(\tau).
\end{align}
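These transforms are straightforward to evaluate for a given density; an illustrative sketch for a continuous density $f$ of mean $1$, rescaled to mean $E$ as in equation (\ref{eq:scale}), is:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def C_S(f, omega, E, upper=np.inf):
    # C(omega), S(omega) for the scaled density f_E(t) = f(t/E)/E;
    # the quadrature assumes f decays fast enough (cf. condition on nu).
    fE = lambda t: f(t / E) / E
    C, _ = quad(lambda t: np.cos(omega * t) * fE(t), 0.0, upper)
    S, _ = quad(lambda t: np.sin(omega * t) * fE(t), 0.0, upper)
    return C, S
\end{verbatim}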
We use a geometric argument to bound the roots of the characteristic equation of equation (\ref{eq:xx}) by the roots of the characteristic equation with a discrete delay. More precisely, if the leading roots associated to the discrete delay are a pair of imaginary roots, then all the roots associated to the distribution of delays have negative real parts. We first state a criterion for stability: if $S(\omega)<\omega$ whenever $C(\omega)+a=0$, then $f$ is stable. The larger the value of $S(\omega)$, the more ``unstable'' the distribution is. We then show that a distribution of $n$ discrete delays $f_n$ is more stable than a certain distribution with two delays $f^*$, i.e. $S_n(\omega)\leq S^*(\omega)$. We construct $f^*$ and determine that one of the delays of this ``most unstable'' distribution $f^*$ is $\tau^*_1=0$, making it easy to determine its stability using Theorem \ref{th:hayes}. We then generalize for any distribution of delays.
The next proposition provides a necessary condition for instability. It is a direct consequence of theorem 2.19 in \citep{stepan1989}. We give a short proof for completeness.
\begin{prop}\label{pr:os} If the distribution $\eta_E$ is asymptotically unstable, then there exists $\omega_s \in [0,\omega_c]$ such that $C(\omega_s) + a = 0$ and $S(\omega_s) \geq \omega_s$.
\end{prop}
\begin{proof} Suppose that the distribution $\eta_E$ is asymptotically unstable. The roots of the characteristic equation depend continuously on the parameter $E$ and cannot enter the right half of the complex plane without crossing the imaginary axis. Thus there is a critical value $0<\rho<1$ at which $\eta_{\rho E}$ loses its stability, and this happens when the characteristic equation (\ref{eq:ce}) has a pair of imaginary roots $\lambda = \pm i\omega$ with $0 \leq \omega < \omega_c = \sqrt{1-a^2}$, i.e. through a Hopf point. Splitting the characteristic equation in real and imaginary parts, we have
\begin{align*}
\int_{0}^{\infty} \cos(\omega \tau) d\eta_{\rho E}(\tau) & + a = 0, \\
\int_{0}^{\infty} \sin(\omega \tau) d\eta_{\rho E}(\tau) & = \omega.
\end{align*}
Rewriting in terms of $\eta_E$, we obtain
\begin{align*}
\int_{0}^{\infty} \cos(\omega \rho \tau) d\eta_E(\tau) & + a = 0, \\
\int_{0}^{\infty} \sin(\omega \rho \tau) d\eta_E(\tau) & = \omega.
\end{align*}
Finally, denoting $\omega_s = \rho \omega \leq \omega < \omega_c$, we have
\begin{align*}
\int_{0}^{\infty} \cos(\omega_s \tau) d\eta_E(\tau) & + a = 0, \\
\int_{0}^{\infty} \sin(\omega_s \tau) d\eta_E(\tau) & = \omega = \omega_s/\rho \geq \omega_s.
\end{align*}
This completes the proof.
\end{proof}
Proposition \ref{pr:os} provides a sufficient condition for stability:
\begin{cor}\label{th:cor}
The distribution $\eta_E$ is asymptotically stable if (i) $C(\omega)>-a$ for all $\omega \in [0,\omega_c]$ or if (ii) $C(\omega)=-a$, $\omega \in [0,\omega_c]$, implies that $S(\omega)<\omega$.
\end{cor}
Proposition \ref{pr:os} suggests that the scaling $\eta_E=\eta(\tau/E)$ is appropriate for looking at the stability with respect to the mean delay. The mean delay scales linearly, and unstable distributions therefore lose their stability at smaller values of the mean delay, under this scaling. The condition $S(\omega_s)<\omega_s$ is however not necessary for stability, as one can find cases where $S(\omega_s)>\omega_s$ even though the distribution is stable. This happens when an unstable distribution switches back to stability as $E$ is further increased (see for instance \citep{boese1989} or \citep{beretta2002} and example \ref{s:s}).
\section{Stability of a distribution of discrete delays}\label{s:d}
We define a density of $n$ discrete delays $\tau_i \geq 0$, and $p_i > 0$, $i=1,...,n$, $n \geq 1$, as
\begin{align}\label{eq:fn}
f_n(\tau) & = \sum_{i=1}^{n} p_i \delta(\tau-\tau_i) \\
\intertext{where $\delta(\tau-\tau_i)$ is a Dirac mass at $\tau_i$, and}
\sum_{i=1}^{n} p_i \tau_i & = E, \text{ and } \sum_{i=1}^{n} p_i = 1. \nonumber
\end{align}
In this section, we show that $f_n$ is more stable than a single discrete delay. We do that by observing that among all $n$-delay distributions, $n\geq 2$, that satisfy $C_n(\omega_s)+a=0$ for a fixed value $\omega_s < \omega_c$, the distribution $f^*$ that maximizes $S_n(\omega_s)$,
\begin{equation}
\max_{f_n} \bigl\{ S_n(\omega_s) | C_n(\omega_s)+a=0 \bigr\} = S^*(\omega_s),
\end{equation}
has 2 delays. We show that $S^*(\omega_s)<\omega_s$, implying that all distributions are stable. The following lemma shows how to maximize $S(\omega_s)$ for distributions of two delays.
\begin{lem}\label{th:S} Let $f_2$ be a delay density with mean $E$. Assume in addition that there exists $\omega_s \in (0,\omega_c)$ such that $C(\omega_s) = -a < \cos(\omega_c E)$. Then there exists $\tau_2^*$, $p_1^*$ and $p_2^*$ such that
\begin{align}
\tau_1^* & = 0, \\
p_1^* + p_2^* & = 1, \\
p_1^* + p_2^* \cos(\omega_s \tau_2^*) & = p_1 \cos(\omega_s \tau_1) + p_2 \cos(\omega_s \tau_2), \\
p_2^* \tau_2^* & = E.
\end{align}
Moreover, there are at most two solutions for $\tau_2^*$ with $\tau_2^*<\pi/\omega_s$. If $\tau_2^*$ is the smallest solution, we have that $\tau_2^* \leq \tau_2$ and
\begin{align*}
S^*(\omega_s) \equiv \sum_{i=1}^2 p_i^* \sin(\omega_s \tau_i^*) & \geq S(\omega_s).
\end{align*}
\end{lem}
\begin{proof}
To see that there is always a solution, let $c>0$ be the smallest value such that the inequality $\cos(\theta) \geq 1 - c \theta$ is verified for all $\theta$. [$c = 0.725...$ by solving $c=\sin(\theta)$, with $1-\theta \sin(\theta) = \cos(\theta)$.] We have that $1-c \omega E \leq C(\omega) \leq \cos(\omega E)$. Thus, the line $1-d \omega E$ that goes through $C(\omega_s)$ satisfies $d = (1-C(\omega_s))/(\omega_s E) \leq c$, and therefore crosses the curve $\cos(\omega_s E)$ at some points. The smallest solution $\tau_2^*$ is the one such that $1 - d \omega_s \tau_2^* = \cos(\omega_s \tau_2^*)$. This way,
\begin{align*}
p_1^* + p_2^* \cos(\omega_s \tau_2^*) & = 1 - (1-C(\omega_s)) p_2^* \tau_2^*/E, \\
& = C(\omega_s).
\end{align*}
These new delay values maximize $S(\omega_s)$ under the constraints that $C(\omega_s)+a=0$ and that the mean remains $E$. That is, we will prove that
\begin{align*}
S^*(\omega_s) \equiv \sum_{i=1}^2 p_i^* \sin(\omega_s \tau_i^*) & \geq \sum_{i=1}^2 p_i \sin(\omega_s \tau_i),
\end{align*}
for all admissible $p_i$, $\tau_i$. To show this, we recast the problem in a slightly different way. Writing $u=\omega_s \tau_1$, $v=\omega_s \tau_2$ and $T = \omega_s E$, we can express parameters $p_i$ in terms of $(u,v)$:
\begin{equation*}
p_1 = \frac{v-T}{v-u} \quad\text{and}\quad p_2 = \frac{T-u}{v-u},
\end{equation*}
where $u<T<v$. We consider $C$ and $S$ as functions of $(u,v)$.
\begin{align}
C(u,v) & = \frac{v-T}{v-u} \cos(u) + \frac{T-u}{v-u} \cos(v), \label{eq:cuv} \\
S(u,v) & = \frac{v-T}{v-u} \sin(u) + \frac{T-u}{v-u} \sin(v). \label{eq:suv}
\end{align}
Equation (\ref{eq:suv}) is to be maximized for $(u,v)$ along the curve $h=\{u,v\}$ implicitly defined by the level curves $C(u,v)=-a$. There are either two solutions in $v$, including multiplicity, of the equation $C(0,v)=-a$ or none, so the curve can be parametrized in a way that $(u(\xi),v(\xi))$ satisfies $(u(0)=0,v(0)=v_{max})$ and $(u(1)=0,v(1)=v_{min})$, with $v_{min} \leq v_{max}$. We claim that $S$ is maximized for $\xi=1$, i.e. $u=0$ and $v=v_{min}$. This is true only if $S(u(\xi),v(\xi))$ is increasing with $\xi$. That is, the curve $h$ must cross the level curves of $S$ upward. It is clear that $S$ is a decreasing function of $v$, for $u$ fixed and an increasing function of $u$, for $v$ fixed. Thus the level curves $S(u,v)=k$ can be expressed as an increasing function $v_{S,k}(u)$ such that
\begin{equation*}
S(u,v_{S,k}(u))=k,
\end{equation*}
when $k$ is in the image of $S$. Likewise, equation (\ref{eq:cuv}) can be solved locally to yield $v_{C,a}(u)$ such that
\begin{equation*}
C(u,v_{C,a}(u))=-a,
\end{equation*}
whenever $-a$ is in the image of $C$. The function $v_{C,a}(u)$ could take two values on the domain of definition. Because $S$ is decreasing in $v$, we choose the lower solution branch for $v_{C,a}(u)$. If, along that lower branch, the slope of $v_{C,a}(u)$ is larger than that of $v_{S,k}(u)$, then as $v$ decreases along the level curve of $C$, $S$ increases. Therefore, to show that $(0,v_{C,a}(0)=v_{min})$ maximizes $S$, we need to show that
\begin{equation}\label{eq:vCvS}
\frac{d v_{C,a}(u)}{du} > \frac{d v_{S,k}(u)}{du} > 0.
\end{equation}
It is clear that $d v_{S,k}(u)/du>0$. The pointwise derivatives of the level curves at $(u,v)$ are
\begin{align*}
\frac{d v_{C}(u)}{du} & = \frac{v-T}{T-u} \frac{-\cos(u)+\cos(v)+(v-u)\sin(u)}{\cos(u)-\cos(v)-(v-u)\sin(v)}, \\
\frac{d v_{S}(u)}{du} & = \frac{v-T}{T-u} \frac{-\sin(u)+\sin(v)-(v-u)\cos(u)}{\sin(u)-\sin(v)+(v-u)\cos(v)}.
\end{align*}
Because only the lower branch of $v_C$ is considered, we restrict attention to $(u,v)$ where $dv_{C}(u) / du <+\infty$. This is done without loss of generality since $S$ is strictly larger on the lower branch than on the upper branch. Along the lower branch, $v_C(u)<\pi$. Inequality (\ref{eq:vCvS}) then holds if
\begin{multline*}
(v-u) \bigl[ 2 - 2 \bigl(\cos(u)\cos(v)+\sin(u)\sin(v)\bigr) + (v-u) \bigl(\sin(u)\cos(v)-\cos(u)\sin(v) \bigr) \bigr] > 0.
\end{multline*}
Notice that this inequality does not depend on $T$, which cancels out, nor on $a$, since comparison is made pointwise, for any level curves. The inequality can be simplified and rewritten in terms of $z=v-u>0$,
\begin{equation*}
z \bigl[ 2 - 2 \cos(z) - z \sin(z)\bigl] > 0.
\end{equation*}
It can be verified that this inequality is satisfied for $z \in (0,\pi]$. Therefore, $S$ is maximized when $u=0$ and $v=v_{C,a}(0)=v_{min}$.
\end{proof}
\begin{figure}
\includegraphics[width=0.5\linewidth]{twodelaysproof.ps}
\caption{How delays are replaced to get a maximal value of $S^*$. In this example, $a=-0.1$. Parameters are $u=0.2$, $v=2$, $p_1=0.37$, $p_2=0.63$ and $T=1.33$. The parameters maximizing $S$ are $u^*=0$, $v^*=1.76$, $p_1^*=0.24$ and $p_2^*=0.76$.}\label{f:2d}
\end{figure}
\begin{theorem}\label{th:n} Let $f_n$ be a density with $n \geq 1$ discrete delays and mean $E$ satisfying inequality (\ref{eq:sc}). The density $f_n$ is asymptotically stable.
\end{theorem}
\begin{proof} Single delay distributions ($n=1$) are asymptotically stable by Theorem \ref{th:hayes}. We first show the case $n=2$.
Consider a density $f_2$, with $\tau_1 < \tau_{2}$. Suppose $C(\omega_s)+a=0$ for a value of $\omega_s<\omega_c$ (if not, Corollary \ref{th:cor} states that $f_2$ is stable). Remark that $-a=C(\omega_s)<\cos(\omega_s E)$. Indeed, from inequality (\ref{eq:sc}) and $\omega_s \leq \omega_c = \sqrt{1-a^2}$, we have $\cos(\omega_s E) \geq \cos(\omega_c E) > -a$. Replace the two delays by two new delays with new weights: $\tau_1^* = 0$ and $\tau_2^* \geq 0$ the smallest delay such that the following equations are satisfied:
\begin{align}
p_2^* \tau_2^* & = p_1 \tau_1 + p_2 \tau_2, \\
p_1^* + p_2^* \cos(\omega_s \tau_2^*) & = p_1 \cos(\omega_s \tau_1) + p_2 \cos(\omega_s \tau_2), \\
p_1^* + p_2^* & = p_1 + p_2 \quad (=1).
\end{align}
Lemma \ref{th:S} ensures that there always exists a solution when $C(\omega_s) \leq \cos(\omega_s E)$. Additionally, $\tau_2^* \leq \tau_2$ and
\begin{align*}
S^*(\omega_s) \equiv \sum_{i=1}^2 p_i^* \sin(\omega_s \tau_i^*) & \geq \sum_{i=1}^2 p_i \sin(\omega_s \tau_i).
\end{align*}
That is, the new distribution $f^*$ maximizes the value of $S$. Therefore, if we are able to show that distributions with a zero and a nonzero delay satisfy $S(\omega_s)<\omega_s$, then by Corollary \ref{th:cor}, all distributions with two delays are stable. Consider $f(\tau) = (1-p) \delta(\tau) + p \delta(\tau-r)$. Suppose that there is $\omega_s \leq \omega_c$ such that
\[
C(\omega_s) = 1-p + p \cos(\omega_s r) = -a.
\]
We must show that $S(\omega_s) = p \sin(\omega_s r) < \omega_s$. Summing up the squares of the cosine and the sine, we obtain
\[
p^2 = (-a+p-1)^2 + S^2(\omega_s),
\]
so
\[
S(\omega_s) = \sqrt{p^2 - (-a+p-1)^2}.
\]
By assumption, the mean delay satisfies inequality (\ref{eq:sc}),
\[
pr < \frac{\arccos(-a)}{\sqrt{1-a^2}}.
\]
Thus,
\[
\omega_s = \frac{\arccos \bigl( - (a+1-p) p^{-1} \bigr)}{r} > p \sqrt{1-a^2}\frac{\arccos \bigl( - (a+1-p) p^{-1} \bigr)}{\arccos(-a)}.
\]
Because $(a+1-p)/p \geq a$ for $p \in (0,1]$ and $a \in (-1,1)$, we have the following inequality
\[
\frac{\arccos(-a)}{\sqrt{1-a^2}} \leq \frac{\arccos \bigl( - (a+1-p) p^{-1} \bigr)}{\sqrt{1-\bigl( (a+1-p) p^{-1} \bigr)^2}}.
\]
Thus,
\[
S(\omega_s) = \sqrt{p^2 - (-a+p-1)^2} \leq p \sqrt{1-a^2} \frac{\arccos \bigl( - (a+1-p) p^{-1} \bigr)}{\arccos(-a)} < \omega_s.
\]
This completes the proof for the case $n=2$.
For distributions $f_n$ with $n>2$ delays, the strategy is also to find a stable distribution that keeps $C(\omega_s)$ constant and increases $S(\omega_s)$, assuming that $C(\omega_s)+a=0$. This requires two steps. In the first one, all pairs of delays $\tau_i<\tau_j$ for which the inequality
\begin{align}\label{eq:belowcos}
\sum_{k \in \{i,j\}} p_k \cos(\omega_s \tau_k)\leq \cos\biggl(\omega_s \sum_{k \in \{i,j\}} p_k \tau_k\biggr),
\end{align}
holds are iteratively replaced by new delays $\tau_i^*=0$ and $\tau_j^*<\tau_j$, as done in Lemma \ref{th:S}. This transformation preserves $E$, $C(\omega_s)$ and increases $S(\omega_s)$. This is repeated until there remains $m<n$ delays with $\tau_i>0$, $i=2,...,m$ such that
\[
\sum_{k \in \{i,j\}} p_k \cos(\omega_s \tau_k) > \cos\biggl(\omega_s \sum_{k \in \{i,j\}} p_k \tau_k\biggr),
\]
for $i \neq j \in \{2,...,m\}$, and $\tau_1 = 0$. (The $\tau_i$ are not the same as in the original distribution, the $*$ have been dropped for ease of reading.) The positive delays $\tau_i>0$ satisfy
\begin{align*}
\sum_{i=2}^m p_i \cos(\omega_s \tau_i) > \cos\Bigl( \omega_s \sum_{i=2}^m p_i \tau_i \Bigr).
\intertext{while, by assumption,}
\sum_{i=1}^m p_i \cos(\omega_s \tau_i) = -a < \cos(\omega_s E).
\end{align*}
The second step is to replace all delays $\tau_i$, $i=2,...,m$ with the single delay $\bar\tau_2 = \sum_{i=2}^m p_i \tau_i$. We now have a 2-delay distribution with $\bar \tau_1=0$ and $\bar \tau_2 > 0$, $\bar p_1 \bar \tau_1 + \bar p_2 \bar \tau_2=E$, $\bar C(\omega_s) \leq C(\omega_s)$ and $\bar S(\omega_s)\geq S(\omega_s)$. Replace $\bar \tau_2$ by the delay $\tau_2^*<\bar \tau_2$ so that $C^*(\omega_s) = -a$, while keeping $E$ constant. Existence of $\tau_2^*$ is shown using the notation from the proof of Lemma \ref{th:S}, and noting that $C(0,v)$ and $S(0,v)$ are both decreasing in $v$. This change of delay has the effect of increasing $S$: $ S^*(\omega_s) \geq \bar S(\omega_s)$. Therefore, we have found a pair of discrete delays $(0, \tau_2^*)$ such that $C^*(\omega_s) = C(\omega_s)$ and $\omega_s > S^*(\omega_s) \geq S(\omega_s)$. By Corollary \ref{th:cor}, $f_n$ is asymptotically stable.
\end{proof}
\begin{figure}
\includegraphics[width=0.8\linewidth]{stabilitychart.ps}
\caption{Stability chart of distributions of delay in the $(a/b,bE)$ plane. The distribution-independent stability region is to the right of the blue curve. The distribution-dependent stability region is the shaded area. All stability curves leave from the point $(a=-b,E=1/b)$. The signs of the real roots of the characteristic equation $\lambda_0, \lambda_1$ along $a=-b$ are distribution-independent.}\label{f:chart}
\end{figure}
\section{Stability of a general distribution of delays}\label{s:g}
From the stability of distributions of discrete delays to the stability of general distributions of delays, there is a small step. First we need to bound the roots of the characteristic equation for general distributed delays.
\begin{lem}\label{th:mu}
Let $\eta_E$ be a delay distribution with mean $E$ satisfying inequality (\ref{eq:sc}). There exists a sequence $\{\eta_{n,E}\}_{n \geq 1}$ with distribution $\eta_{n,E}$ having $n$ delays, such that $\eta_{n,E}$ converges weakly to $\eta_E$. Then $\lambda$ is a root of the characteristic equation if and only if there exists a sequence of roots $\lambda_n$ for $\eta_{n,E}$ such that $\lim_{n \to \infty} \lambda_n = \lambda$. Let $\{\mu_n\}_{n \geq 1}$ be a sequence of real parts of roots of the characteristic equations. Additionally,
\begin{align*}
\limsup_{n \to \infty} \mu_n = \mu < 0.
\end{align*}
\end{lem}
\begin{proof}
Consider $\lambda_n = \mu_n + i \omega_n$, a root of the characteristic equation for $\eta_{n,E}$. Since $E$ satisfies inequality (\ref{eq:sc}), Theorem \ref{th:n} gives $\mu_n < 0$. Then
\begin{align*}
& \Bigl\lvert \lambda_n + a + \int_0^{\infty} e^{-\lambda_n \tau} d \eta_{E}(\tau) \Bigr\rvert \\
& = \Bigl\lvert \lambda_n + a + \int_0^{\infty} e^{-\lambda_n \tau} d [\eta_{E}(\tau)-\eta_{n,E}(\tau)] + \int_0^{\infty} e^{-\lambda_n \tau} d \eta_{n,E}(\tau) \Bigr\rvert \\
& = \Bigl\lvert \int_0^{\infty} e^{-\lambda_n \tau} d [\eta_{E}(\tau)-\eta_{n,E}(\tau)] \Bigr\rvert \to 0,
\end{align*}
as $n \to \infty$ by weak convergence. Thus any converging sub-sequence of roots converges to a root for $\eta_E$. The same way, if $\lambda$ is a root for $\eta_E$,
\begin{align*}
& \Bigl\lvert \lambda + a + \int_0^{\infty} e^{-\lambda \tau} d \eta_{n,E}(\tau) \Bigr\rvert \\
& = \Bigl\lvert \lambda + a + \int_0^{\infty} e^{-\lambda \tau} d [\eta_{n,E}(\tau)-\eta_{E}(\tau)] + \int_0^{\infty} e^{-\lambda \tau} d \eta_{E}(\tau) \Bigr\rvert \\
& = \Bigl\lvert \int_0^{\infty} e^{-\lambda \tau} d [\eta_{n,E}(\tau)-\eta_{E}(\tau)] \Bigr\rvert \to 0,
\end{align*}
as $n \to \infty$. Convergence is guaranteed by inequality (\ref{eq:nu}). Therefore, each root $\lambda$ lies close to a corresponding root $\lambda_n$.
Denote $\mu = \limsup_{n \to \infty} \mu_n$. Then $\mu$ is the real part of a root of the characteristic equation associated to $\eta_E$. $\mu_n<0$ for all $n$, so $\mu \leq 0$. Suppose $\mu=0$. Without loss of generality, we can assume that all other roots have negative real parts. Then $\eta_E$ is at a Hopf point, i.e. the leading roots of the characteristic equation are pure imaginary. Consider the distribution $\eta_{\bar a,\rho}(\tau) = \eta(\tau/\rho)$ and the associated real parts $\mu_{\bar a, \rho}$, where the subscript $\bar a$ is there to emphasize the dependence of the stability on $a$. Then, by continuity, there exists $(\bar a,\rho)$ in an $\varepsilon$-neighborhood of $(a,E)$, $\varepsilon>0$, for which $\eta_{\bar a, \rho}$ is unstable, i.e. $\mu_{\bar a, \rho}>0$. For sufficiently small $\varepsilon>0$, inequality (\ref{eq:sc}) is still satisfied:
\begin{equation*}
\rho < \frac{\arccos(- \bar a)}{\sqrt{1-\bar a^2}}.
\end{equation*}
Additionally, $\eta_{n,\rho}$ converges weakly to $\eta_{\bar a, \rho}$. However, because $\eta_{\bar a, \rho}$ is unstable, there exists $N>1$ such that $\eta_{n,\bar a, \rho}$ is unstable for all $n>N$, a contradiction to Theorem \ref{th:n}. Therefore $\mu<0$.
\end{proof}
\begin{theorem}\label{th:main} Let $\eta_E$ be a delay distribution with mean $E$ satisfying inequality (\ref{eq:sc}). The distribution $\eta_E$ is asymptotically stable.
\end{theorem}
\begin{proof}
Consider the sequence of distributions with $n$ delays $\{\eta_{n,E}\}_{n \geq 1}$ where $\eta_{n,E}$ converges weakly to $\eta_E$. By Lemma \ref{th:mu}, the leading roots of the characteristic equation of $\eta_E$ have negative real parts. Therefore $\eta_E$ is asymptotically stable.
\end{proof}
Is there a result similar to Theorem \ref{th:main} for the most stable distribution? That is, is there a mean delay value such that all distributions having a larger mean are unstable? When $a \geq 0$, the answer is no. For instance, the exponential distribution is asymptotically stable for all mean delays, a property called unconditional stability. Other distributions are also unconditionally stable for $a\geq 0$. Anderson has shown that all distributions with smooth enough convex density functions are unconditionally stable \citep{anderson1991}, but densities do not need to be convex to be unconditionally stable. For example, the non-convex density $f(\tau)=0.5 [\delta(\tau)+\delta(\tau-2E)]$ has mean $E$ but is unconditionally stable. However, no distribution is unconditionally stable for all values of $a \in [-1,0)$, although some are for $a\geq a^*$ with $a^*>-1$ (see example below).
From the results obtained here, we have the most complete picture of the stability of equation (\ref{eq:x}) when the only information about the distribution of delays is the mean (figure \ref{f:chart}).
\begin{cor}\label{th:stab}
The zero solution of equation (\ref{eq:x}) is asymptotically stable if $a>-b$ and $a \geq |b|$ or if $b>|a|$ and
\begin{equation*}
E < \frac{\arccos(-a/b)}{\sqrt{b^2-a^2}}.
\end{equation*}
The zero solution of equation (\ref{eq:x}) may be asymptotically stable (depending on the particular distribution) if $b>|a|$ and
\begin{equation*}
E \geq \frac{\arccos(-a/b)}{\sqrt{b^2-a^2}}.
\end{equation*}
The zero solution of equation (\ref{eq:x}) is unstable if $a \leq -b$.
\end{cor}
\section{Boundary of stability}\label{s:b}
The exact boundary of the stability region in the $(a,E)$ plane can be calculated by parametrizing $\bigl(a(u),E(u)\bigr)$. Consider the distribution $\eta$. Then, at the boundary of stability,
\begin{align*}
0 & = i \omega + a + \int_0^\infty e^{- i \omega \tau} d \eta(\tau/E), \\
& = i \omega + a + \int_0^\infty e^{- i \omega E \tau} d \eta(\tau),
\intertext{setting $u = \omega E$,}
& = i \frac{u}{E} + a + \int_0^\infty e^{- i u \tau} d \eta(\tau).
\end{align*}
Separating the imaginary and the real part, we obtain
\begin{equation}\label{eq:par}
a(u) = - C(u) \quad \text {and} \quad E(u) = \frac{u}{S(u)},
\end{equation}
for $u \geq 0$. The fact that $u$ depends on $E$ is not a problem: $u \to \infty$ if and only if $E \to \infty$, and $u \to 0$ if and only if $E \to 0$. Equations (\ref{eq:par}) allow systematic exploration of the boundary of stability in the $(a,E)$ plane.
\subsection{Exponential distribution}
The exponential distribution $f(\tau) = e^{- \tau}$ has normalized mean 1, and
\begin{align*}
C(u) = \frac{1}{1+u^2} \quad \text{and} \quad S(u) = \frac{u}{1+u^2}.
\end{align*}
The stability boundary is given by $E = -1/a$, for $-1 \leq a<0$. Therefore the exponential distribution is not unconditionally stable for $a<0$.
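The parametrization (\ref{eq:par}) is easy to trace numerically; the sketch below (illustrative Python) works for any distribution specified through its transforms $C(u)$ and $S(u)$, and reproduces the boundary $E=-1/a$ in the exponential case:
\begin{verbatim}
import numpy as np

def boundary(C, S, u_grid):
    # One boundary point (a(u), E(u)) per grid value.
    a = np.array([-C(u) for u in u_grid])
    E = np.array([u / S(u) for u in u_grid])
    return a, E

# Exponential kernel: C(u) = 1/(1+u^2), S(u) = u/(1+u^2);
# the returned points satisfy E = -1/a.
u = np.linspace(0.01, 10.0, 500)
a_vals, E_vals = boundary(lambda u: 1.0 / (1 + u**2),
                          lambda u: u / (1 + u**2), u)
\end{verbatim}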
\subsection{Discrete delays}
The exponential distribution is also not the most stable distribution. The density with a zero and a positive delay is $f(\tau)=(1-p) \delta(\tau) + p \delta(\tau-r)$, $p \in (0,1]$. After lumping the zero delay into the undelayed part, the exact stability boundary becomes
\begin{equation*}
E = p r = \frac{\arccos\bigl(-(a+1-p)p^{-1}\bigr)}{\sqrt{1-\bigl((a+1-p)p^{-1}\bigr)^2}}.
\end{equation*}
This has an asymptote at $a=2p-1$, which can be located anywhere in $(-1,1]$.
In general, for a distribution with $n$ delays,
\begin{align*}
a(u) = -\sum_{i=1}^n p_i \cos(u \tau_i) \quad \text{and} \quad E(u) = \frac{u}{\sum_{i=1}^n p_i \sin(u \tau_i)}.
\end{align*}
The boundary of the stability region can be formed of many branches, as with a distribution with three delays in figure \ref{f:softkernel}.
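Distributions of discrete delays plug directly into the same parametrization; an illustrative sketch for the three-delay example of the left panel of figure \ref{f:softkernel} (with $\tau_1$ chosen so that the mean is $1$), usable with the boundary tracer of the previous subsection, is:
\begin{verbatim}
import numpy as np

def C_S_discrete(p, tau):
    # Transforms of f_n = sum_i p_i delta(. - tau_i).
    p, tau = np.asarray(p), np.asarray(tau)
    return (lambda u: float(np.sum(p * np.cos(u * tau))),
            lambda u: float(np.sum(p * np.sin(u * tau))))

p = [0.51, 0.39, 0.10]
tau1 = 1.0 / (0.51 + 0.39 * 16 + 0.10 * 96)   # normalize the mean to 1
tau = [tau1, 16 * tau1, 96 * tau1]
C, S = C_S_discrete(p, tau)
\end{verbatim}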
\subsection{Gamma distribution}\label{s:s}
As the mean $E$ is increased, distributions can revert to stability. This is the case with the second order gamma distribution (also called strong kernel) with normalized mean 1,
\begin{equation}\label{eq:sk}
f(\tau) = 2^2 \tau e^{-2 \tau}.
\end{equation}
We have
\begin{align*}
C(u) = \frac{1-u^2}{\bigl(1+u^2\bigr)^2}, \quad \text{and} \quad
S(u) = \frac{2 u}{\bigl(1+u^2\bigr)^2}.
\end{align*}
The boundary of stability is given by
\begin{align*}
\bigl(a(u),E(u)\bigr) & = \biggl(\frac{u-1}{(1+u)^2},(1+u)^2\biggr).
\end{align*}
There is a largest value $\hat a = 1/8 = 0.125$, attained at $u=3$, where $E=16$. For large values of $E$, $a \to 0^+$. Therefore the boundary of the stability region is not monotone; for $a \in (0,\hat a)$, $f$ first becomes unstable and then reverts to stability as the mean is increased (figure \ref{f:softkernel}).
\begin{figure}
\includegraphics[width=0.45\linewidth]{threedelays.eps}
\includegraphics[width=0.45\linewidth]{softkernel.eps}
\caption{(\emph{Left}) Stability chart of the three-delay distribution with $\tau_2 = 16 \tau_1$, $\tau_3 = 96 \tau_1$, $p_1 = 0.51$, $p_2 = 0.39$, $p_3 = 0.1$. (\emph{Right}) Stability chart of the second order gamma distribution, equation (\ref{eq:sk}).}\label{f:softkernel}
\end{figure}
\subsection*{Acknowledgements}
The author thanks Fabien Crauste for helpful discussion.
\bibliographystyle{plain}
\section{Introduction\label{sec_intro}}
We study an equilibrium design problem faced by the decision maker (DM)
(e.g., a government or a policy maker) who can choose the set of feasible
reactions before senders and receivers move in a generalized competitive
signaling model with two-sided matching. We use the term, \textquotedblleft
equilibrium design\textquotedblright\ because the DM chooses the set of
feasible reactions and it affects the endogenous formation of the belief on
the sender's type. In our model, there is a continuum of heterogeneous
senders and receivers (e.g., sellers and buyers, workers and firms, and
entrepreneurs and investors) in terms of their types. The DM is interested
in maximizing the aggregate net surplus. She moves first by publicly
announcing the set of feasible reactions that receivers can take. After
that, senders take actions, followed by receivers' reaction choices as they
are matched with senders. For example, the policy maker may announce the set
of feasible transfers that firms can make to their employees. After that but
prior to entering the job matching market, workers (senders) make
investments in observable characteristics such as education. Once firms and
workers enter the market, they form one-to-one matches as a firm offers its
employee a wage (reaction). In a competitive signaling equilibrium, the
market wage function that specifies a worker's wage conditional on her
observable characteristics clears the matching market.
Signaling creates a trade-off in matching markets. It increases matching
efficiency because separating induces assortative matching. On the other
hand, it is costly in that senders need to choose inefficiently high levels
of equilibrium actions in order to separate themselves. Because of the
trade-off, there may be efficiency gains if the DM restricts the set of
feasible reactions to prevent a separating equilibrium from happening in the
first place. How can the DM find the optimal set of feasible reactions and
what the optimal signaling equilibrium would look like as a result? We first
provide a general methodology that the DM can use in designing the optimal
signaling equilibrium.
Given the multiplicity of signaling equilibria, the DM focuses on a
stronger monotone signaling equilibrium where equilibrium actions,
reactions, beliefs, and matching are monotone in the stronger set order
(Shannon (1995)). We show that when utility functions satisfy monotonicity
and single crossing properties, any competitive signaling equilibrium is
stronger monotone if and only if it passes Criterion D1 (Cho and Kreps
(1987), Cho and Sobel (1990), Banks and Sobel (1987)). The stronger
monotonicity of beliefs makes it easy to derive any type of a stronger
monotone equilibrium given any set of feasible reactions chosen by the DM,
even when a separating equilibrium does not exist.\footnote{%
The stronger monotonicity of beliefs is the full implication of Criterion D1
on beliefs. When no restrictions are imposed on feasible reactions, only a
partial implication (e.g., Cho and Sobel monotonicity (1990)) is needed to
show that a separating equilibrium is a unique D1 equilibrium. See Section %
\ref{section_montone_equilibrium} for more discussion.}
When utility functions are quasilinear, the DM can focus only on intervals
as the set of feasible reactions that she chooses without loss of generality.%
\footnote{%
We allow the lower and upper bounds of the interval the DM chooses to be the
same. In this case, the interval shrinks to a singleton.} We show that given
any interval of feasible reactions that the DM may choose, a stronger
monotone equilibrium is \emph{unique} and \emph{well-behaved}. A
\textquotedblleft well-behaved\textquotedblright\ equilibrium is
characterized by the two threshold sender types. The lower threshold sender
type specifies the lowest sender type who enters the market, whereas any
sender above the upper threshold sender type pools their actions. Any sender
between the two threshold types separates themselves. If the two threshold
types are the same, it becomes a pooling equilibrium. If the upper threshold
type is the supremum of the sender types and greater than the lower
threshold type, it becomes a separating equilibrium. If the upper threshold
type is less than the supremum of the sender types but greater than the
lower threshold type, separating and pooling coexist in the well-behaved
equilibrium. In the separating part of the equilibrium, matching is \emph{%
assortative} in terms of sender action and receiver type (and hence in terms
of sender type and receiver type), whereas in the pooling part, it is \emph{%
random}.
The aggregate net surplus is a function of the two threshold sender types in
a unique stronger monotone equilibrium. We further show that choosing the
two threshold sender types is equivalent to choosing the lower and upper
bounds of the corresponding interval of feasible reactions in the sense that
we can uniquely retrieve the two bounds of the reaction interval from any
given two threshold sender types. Therefore, DM's design problem for an
optimal stronger monotone equilibrium comes down to the choice of the two
threshold sender types.
For the optimal equilibrium design, we propose an approach that approximates
the distribution of receiver types by two parameters, which are called
\textquotedblleft shift\textquotedblright\ ($k$) and \textquotedblleft
relative spacing\textquotedblright\ ($q$), given an arbitrary distribution
of sender types. The relative heterogeneity of receiver
types to sender types is parametrized by $q.$ We first prove that regardless
of the sender's type distribution, there exists a range of feasible
reactions which induces a non-separating stronger monotone equilibrium that
is more efficient than the stronger monotone separating equilibrium when the
relative heterogeneity of receiver types to sender types and the
productivity of sender action are small. It implies that the unique
(separating) equilibrium without any restrictions on feasible reactions is
not optimal in the classical Spencian model of pure signaling with no
heterogeneity of receivers (Spence (1973)).
For numerical analysis, we choose various beta distributions for sender type
such that, given any sender type distribution, the mean and variance of the
receiver type distribution increases as $q$ increases. This is particularly
relevant to the recent empirical findings in Poschke (2018) who documents
that the mean and variance of the firm size distribution are larger in rich
countries and increased over time for US firms. For concreteness, one
may think of senders as workers and receivers as firms. A firm's type is
then its size, which can be measured by the amount of labour employed by it
or its market value if it is publicly traded. A sender's type can be her
unobservable skill as a worker,\footnote{%
In an entry-level job market, a worker's unobservable skill could be her
ability to understand a task given to her and to figure out how to complete
it. In a managerial job market, a worker's unobservable skill could be her
ability to come up with new business idea or innovation.} whereas her action
is her observable skill. We parametrize the (direct) productivity effect of
sender action on generating the gross match surplus. We allow this
productivity parameter to vary.
In a wide range of parameter values, the optimal stronger monotone
equilibrium is strictly well-behaved in that it has a separating part below
the upper threshold type and a pooling part above it. This implies that in
the pooling part of the equilibrium, the cost savings associated with the
pooled action choice by senders above the upper threshold type outweighs the
decrease in matching efficiency due to random matching. The lower threshold
type in the optimal stronger monotone equilibrium is always equal to the
minimum of the support of the sender type distribution. As the mean/variance
of the receiver type distribution (e.g., firm size distribution) or the
productivity effect of sender action (e.g., observable skill) increases, the
pooling part at the top of the optimal stronger monotone equilibrium shrinks (i.e., the upper threshold type increases).
Furthermore, as the mean and variance of the receiver type distribution increase (i.e., $q$ increases), the efficiency of the baseline separating
equilibrium increases\footnote{%
This may suggest that the efficiency in rich countries is higher than in
poor countries and that it also increases over time in the U.S., given the
empirical findings in Poschke (2018).} and the relative net gain in the
optimal stronger monotone equilibrium decreases, regardless of the
productivity effect of sender action and the sender type distribution (e.g.,
unobservable skill distribution). In addition, as the productivity effect of
sender action increases, the rate of increase in the efficiency of the baseline separating equilibrium with respect to $q$ becomes smaller and the relative surplus gain in the optimal stronger
monotone equilibrium is smaller at every value of $q$.
Our result suggests that the DM's equilibrium design is most effective (i) when
the receiver type distribution has the smallest mean and variance; and (ii)
when sender action has no productivity effect. As the mean and variance of
the receiver type distribution increase or the productivity parameter of
sender action increases, the DM's equilibrium design quickly loses its
effectiveness. This highlights (i) how the trade-off between matching efficiency and the economic cost of signaling changes as the firm size distribution becomes more spread out and the direct productivity of observable skill increases; and (ii) how this change affects optimal equilibrium design.
\paragraph{Related literature}
Our paper opens a new research direction to the stronger monotone
equilibrium design and contributes to the literature on several fronts.
While the literature has studied monotone equilibrium, exploring complementarities between actions and types, existing papers mostly focus on games with simultaneous moves and no signaling (Athey (2001), McAdams (2003), Reny and
Zamir (2004), Reny (2011), Van Zandt and Vives (2007)). Our Stronger
Monotone Signaling Equilibrium Theorem is the first fully-fledged monotone
equilibrium theorem in a model with sequential moves and signaling.
Recently, Liu and Pei (2020) derive the monotonicity of a sender's
equilibrium strategy in a two-period signaling game between one sender and
one receiver with an assumption similar to our assumption on the sender's
utility function. However, our paper differs from theirs because ours shows
(i) the equivalence between Criterion D1 and stronger monotone beliefs and
its implication and (ii) the monotonicity of equilibrium matching given a
monotonicity and a single crossing assumption on the receiver's utility.
Mensch (2020) shows the existence of an equilibrium where players'
strategies and beliefs are both monotone in a multi-period signaling game
with multiple players and totally ordered signal spaces. However, he does
not show the relation between monotone beliefs and equilibrium refinement
and its implication on deriving all stronger monotone equilibria. Not only
do we establish the existence of a stronger monotone signaling equilibrium
but we let the DM choose a set of feasible reactions before senders and
receivers move, while Liu and Pei (2020) and Mensch (2020) do not. We
propose a general methodology for the DM's optimal design of a unique
stronger monotone signaling equilibrium with her choice of a set of
reactions.
Pre-match investment competition studies whether pre-match competition to
match with a better partner can solve the hold-up problem of
non-contractible pre-match investment that prevails when a match is
considered in isolation (e.g.\ Grossman and Hart (1986) and Williamson
(1986)). Cole, Mailath, and Postlewaite (1995), Rege (2008), and Hoppe,
Moldovanu, and Sela (2009) consider pre-match investment with incomplete
information and non-transferable utility without monetary transfers (i.e.,
no reaction choice by a receiver). Therefore, the sender-receiver framework
does not apply. Pre-match investment with incomplete information in Hopkins
(2012) includes the transferable-utility case but with no restrictions on
transfers. A separating equilibrium is their focus.
\section{Preliminaries\label{sec_model}}
There is a continuum of senders and receivers. They can be interpreted as
sellers and buyers, workers and firms, or entrepreneurs and investors.
Receivers and senders are all heterogeneous in terms of types. The sender's
type set is $Z$ and the receiver's type set is $X.$ The unrestricted set of
feasible reactions is $R$. The DM can choose any subset of $R$ as the set of
feasible reactions, denoted by $T,$ that a receiver can choose. Thus, $P(R),$
the power set of $R$, is her choice set. In the example of workers and
firms, $T$ is the set of feasible transfers that a firm (receiver) can make
to his worker (sender).
The DM moves first by choosing $T\in P(R).$ Given $T,$ each sender can take
an (observable) action $s$ from a set $S$ prior to matching. As a sender and
a receiver form a match, the receiver takes reaction $t$ from a set $T$
given his partner's action. For example, workers (senders) choose education $%
s$ before entering the market. A worker and a firm are matched as the worker
accepts the firm's wage offer $t$. When a sender of type $z$ chooses action $%
s$ and matches with a receiver of type $x$ who takes reaction $t,$ the
sender's utility is $u(t,s,z)$ and the receiver's utility is $g(t,s,z,x)$.
In the example with workers and firms, the utilities for a sender (worker)
of type $z$ and a receiver (firm) of type $x$ are $u(t,s,z)=t-c(s,z)$ and $%
g(t,s,z,x)=v(x,s,z)-t$, respectively. Note that $t$ is the monetary transfer
from a firm to his worker, $c(s,z)$ is the cost of choosing education $s\in
S $ for a worker of type $z$, and $v(x,s,z)$ is the monetary value of the
output produced by the worker in a match.
Assume that the measures of senders and receivers are each equal to one. Let $%
G(z)$ and $H(x)$ denote cumulative distribution functions (CDFs) for sender
types and receiver types respectively. The reservation utility for every
agent corresponds to staying out of the market and it is equal to zero. We
assume that a sender takes the null action $\eta \in S$ to stay out of the
market such that $\eta <s$\ for all $s\neq \eta $ (e.g., $\eta =0$ if $S=%
\mathbb{R}
_{+}$). Each of $S,T,X,$ and $Z$ is a chain.
The rest of the paper is organized as follows. We define the notion of a
competitive signaling equilibrium given $T$ chosen by the DM and analyze a
monotone signaling equilibrium in Sections \ref{section2.2} and \ref%
{section_montone_equilibrium}. In Section \ref{Sec_Eq_w_lower_bound}, we
characterize the unique stronger monotone equilibrium given each $T$ that
the DM may choose. In Section \ref{section_optimal_design}, we conduct
numerical analyses for the optimal design of the unique stronger monotone
equilibrium. Section \ref{sec_discussion} concludes. Omitted proofs can be
found in the Appendix.
\section{Competitive signaling equilibrium given $T$\label{section2.2}}
While senders and receivers may randomize their actions and reactions given
the set of feasible reactions $T$, we are interested in a competitive
equilibrium where they make deterministic choices. However, when it comes to
the DM's choice considered in Section \ref{Sec_Eq_w_lower_bound}, allowing
receivers to randomize reactions makes it possible for the DM to focus on
only intervals as the set of feasible reactions $T$ that she chooses, without loss of generality, given the quasilinearity of utility functions.
After the DM publicly announces the set of feasible reactions $T$, senders
and receivers make decisions over two periods. In the first period, senders
choose actions given a reaction function $\tau .$ In the application of
workers and firms, the reaction function $\tau $ is a market wage function
that specifies a wage for a worker (sender) conditional on her choice of an
action (e.g., education) that is observed in equilibrium.
Let $\sigma (z)$ be the optimal action chosen by a sender of type $z$. Given
$\sigma :Z\rightarrow S$, let $S^{\ast }$\ denote the set of actions chosen
by senders who enter the market for matching. The solution concept of a
\emph{competitive signaling equilibrium} given $T$ is based on the reaction
function $\tau :S^{\ast }\rightarrow T$, which specifies a receiver's
reaction conditional on a sender's equilibrium action $s$. In the second
period, senders and receivers who enter the market form one-to-one matches
given the senders' action choices, and receivers take reactions upon forming
a match with a sender.
We assume that all senders and receivers share a common belief, denoted by $%
\mu (s)\in \Delta (Z)$, on a sender's type conditional on her action $s\in S$%
. Fix a reaction function $\tau :S^{\ast }\rightarrow T$. If $G(\{z|\sigma
(z)=s\})>0$ for some $s\neq \eta $, then there must be the same measure of
receivers who are matched with senders with $s$ in equilibrium. Since
senders with $s$ are observationally identical, matching between them \emph{%
randomly} occurs in equilibrium and the receiver's (expected) utility is $%
\mathbb{E}_{\mu (s)}\left[ g(\tau (s),s,z,x)\right] ,$ where $\mathbb{E}%
_{\mu (s)}\left[ \cdot \right] $\ is the expectation operator over $Z$\
given the probability distribution $\mu (s)$. If $\{z|\sigma (z)=s\}$ is a
singleton, then $\mu \left( s\right) $ becomes a degenerate probability
distribution. In a separating equilibrium, $\mu \left( s\right) $ is a
degenerate probability distribution for all $s\in S^{\ast }.$%
Consider a sender's action choice problem. Let $\sigma (z)\in S^{\ast }$ be
the optimal action for a sender of type $z$ if (i) it solves the following
problem,
\begin{equation}
\max_{s\in S^{\ast }}\;u(\tau (s),s,z)\text{ s.t. }u(\tau (s),s,z)\geq 0,
\label{WP3}
\end{equation}%
and (ii) there is no profitable sender deviation to an off-path action $%
s^{\prime }\notin $ range $\sigma $. We define the notion of a profitable
sender deviation in Definition \ref{def_profitable_sender_deviation} below.
Note that $\sigma (z)=\eta $ becomes the optimal action for a sender of type
$z$ if there is no solution for (\ref{WP3}) and there is no profitable
sender deviation to an off-path action $s^{\prime }\notin $ range $\sigma $.
We now formulate the profitable sender deviation. Let $X^{\ast }\subset X$
be the set of receivers who enter the market. For all $s\in S^{\ast },$ let $%
m(s)\in P(X^{\ast })$ be the set of receiver types who are matched with a
sender with $s$, where $P(X^{\ast })$ is the power set of $X^{\ast }$.
Therefore, $m:S^{\ast }\rightarrow P(X^{\ast })$ is a set-valued matching
function. For all $x\in X^{\ast }$, $m^{-1}(x)\in S^{\ast }$ denotes the
action chosen by a sender with whom a receiver of type $x$ is matched, i.e.,
$x\in m\left( m^{-1}(x)\right) $.
\begin{definition}
\label{def_profitable_sender_deviation}Given $\{\sigma ,\mu ,\tau ,m\}$,
there is a profitable sender deviation to an off-path action if there exists
$z$ for which there are an action $s^{\prime }\notin $ range $\sigma $ and a
reaction $t^{\prime }\in T$ such that, for some $x^{\prime }$,
\begin{align}
\text{(a) } &\mathbb{E}_{\mu (s^{\prime })}\left[ g(t^{\prime },s^{\prime
},z^{\prime },x^{\prime })\right] >\mathbb{E}_{\mu (m^{-1}(x^{\prime }))}%
\left[ g\left( \tau \left( m^{-1}(x^{\prime })\right) ,m^{-1}(x^{\prime
}),z^{\prime },x^{\prime }\right) \right] \text{ and }
\label{receiver_matching_utility} \\
\text{(b) }
\begin{split}
& u(t^{\prime },s^{\prime},z)>u(\tau (\sigma (z)),\sigma (z),z) \hskip15pt %
\mbox{ if }\sigma (z)\in S^{\ast }\text{, } \\
& u(t^{\prime },s^{\prime },z)>0, \hskip95pt \mbox{ otherwise}.
\label{sender_matching_utility}
\end{split}%
\end{align}
\end{definition}
Note that $z^{\prime }$ on each side of (\ref{receiver_matching_utility}) is
the random variable governed by $\mu (s^{\prime })$ and $\mu
(m^{-1}(x^{\prime }))$ respectively.
A receiver's matching problem can be formulated as follows:
\begin{equation}
\max_{s\in S^{\ast }}\;\mathbb{E}_{\mu (s)}\left[ g(\tau (s),s,z,x)\right]
\text{ s.t. }\mathbb{E}_{\mu (s)}\left[ g(\tau (s),s,z,x)\right] \geq 0.
\label{FP}
\end{equation}%
We use the notation $\xi (x)$ as the action of the sender whom the receiver
of type $x$ optimally chooses as his match partner. If (\ref{FP}) has a
solution for $x\in X,$ $\xi (x)$\ is the solution. Otherwise, $\xi (x)=\eta
. $ Note that $X^{\ast }$ is the set of receiver types such that $\xi (x)$ is a solution for (\ref{FP}).
\begin{definition}
\label{def_stable_matching}Given $(\sigma ,\mu ),$ $\{\tau ,m\}$ is a \emph{%
stable matching outcome} if (i) $\tau $\emph{\ }clears the markets, i.e.,
for all $A\in P(S^{\ast })$, $H\left( \left\{ x|x\in m(\xi (x))%
\text{, }\xi (x)\in A\right\} \right) =G\left( \left\{ z|\sigma \left(
z\right) \in A\right\} \right) $, (ii) $m$ is stable, i.e., there is no pair
of a sender with action $s$ and a receiver of type $x\notin m\left( s\right)
$ such that, for some $t^{\prime }\in T$, some $z$ with $\sigma (z)=s\in
S^{\ast }$,
\begin{align}
\mbox{(a) }& \mathbb{E}_{\mu (s)}\left[ g(t^{\prime },s,z^{\prime },x)\right]
>\mathbb{E}_{\mu (m^{-1}(x))}\left[ g\left( \tau \left( m^{-1}(x)\right)
,m^{-1}(x),z^{\prime },x\right) \right] , \label{stable_matching1} \\
\mbox{(b) }& u(t^{\prime },s,z)>u(\tau (s),s,z). \label{stable_matching2}
\end{align}
\end{definition}
Note that $z^{\prime }$ on each side of (\ref{stable_matching1}) is the
random variable governed by $\mu (s)$ and $\mu (m^{-1}(x))$ respectively.
Condition (i) implies that the market-clearing reaction function $\tau $
induces a measure preserving matching function $m$. Condition (ii) implies
that the induced $m$ is stable where no two agents would like to block the
outcome after every sender has chosen her action.
\begin{definition}
\label{definition1}$\{\sigma ,\mu ,\tau ,m\}$ constitutes a \emph{%
competitive signaling equilibrium} (henceforth simply an equilibrium) with
incomplete information if
\begin{enumerate}
\item for all $z\in Z$, $\sigma (z)$ is optimal;
\item $\mu $ is \emph{consistent}:
\begin{enumerate}
\item if $s\in $ range $\sigma $ satisfies $G(\{z|\sigma (z)=s\})>0,$ then $%
\mu (s)$ is determined from $G$ and $\sigma ,$ using Bayes' rule.
\item if $s\in $ range $\sigma $ but $G(\{z|\sigma (z)=s\})=0$, then $\mu
(s) $ is any probability distribution with supp $\mu (s)=$ cl $\left\{
z|\sigma (z)=s\right\} $
\item if $s\notin $ range $\sigma $, then $\mu (s)$ is unrestricted.
\end{enumerate}
\item given $(\sigma ,\mu ),$ $\{\tau ,m\}$ is a \emph{stable matching
outcome.}
\end{enumerate}
\end{definition}
\section{Criterion D1 and stronger monotone equilibrium\label%
{section_montone_equilibrium}}
Given the indeterminacy of the off-equilibrium-path beliefs, an equilibrium
refinement called \emph{Criterion D1} was developed by Cho and Kreps (1987)
and Banks and Sobel (1987). It restricts the off-equilibrium-path beliefs.
Following Ramey (1996), we define Criterion D1 as follows. Given an
equilibrium $\{\sigma ,\mu ,\tau ,m\}$, we define type $z$'s equilibrium
utility $U(z).$ If $\sigma \left( z\right) \in S^{\ast },$ then $%
U(z):=u(\tau \left( \sigma \left( z\right) \right) ,\sigma \left( z\right)
,z);$ otherwise, $U(z)=0$.
\begin{definition}[Criterion D1]
\label{definition_Criterion_D1}Fix any $s\notin $ range $\sigma $ and any $%
t\in T$. Suppose that there is a non-empty set $Z^{\prime }\subset Z$ such
that the following is true: for each $z\notin Z^{\prime }$, there exists $%
z^{\prime }$ such that
\begin{equation}
u(t,s,z)\geq U(z)\Longrightarrow u(t,s,z^{\prime })>U(z^{\prime })\text{.}
\label{Criterion_D1}
\end{equation}%
Then, the equilibrium is said to violate Criterion D1 unless it is the case
that supp $\mu (s)\subset Z^{\prime }$.
\end{definition}
Intuitively, following the observation for an off-equilibrium-path action $%
s, $ zero posterior weight is placed on a type $z$ whenever there is another
type $z^{\prime }$ that has a stronger incentive to deviate from the
equilibrium in the sense that type $z^{\prime }$ would strictly prefer to
deviate for any given $t$ that would give type $z$ a weak incentive to
deviate.
We can equivalently define Criterion D1 by the contrapositive of (\ref%
{Criterion_D1}), that is
\begin{equation}
u(t,s,z^{\prime })\leq U(z^{\prime })\Longrightarrow u(t,s,z)<U(z).
\label{Criterion_D12}
\end{equation}%
Upon observing an off-equilibrium action $s$, zero posterior weight is
placed on a type $z$ whenever type $z$ is strictly worse off by deviating for any $t$ that would make type $z^{\prime }$ weakly worse off with the same deviation.
For the monotonicity of the belief, we employ the notion of the stronger set
order (Shannon (1995)), which is stronger than the strong set order
(Veinott (1989)).
\begin{definition}[Stronger set order]
Consider two sets $A$ and $B$ in the power set $P(Y)$ for $Y$ a lattice with
the given relation $\geq $. We say that $A\leq _{c}B$, read
\textquotedblleft $A$ is completely lower than $B$\textquotedblright\ if for
every $a\in A$ and $b\in B,$ $a\leq b.$ Given a partially ordered set $K$
with the given relation $\geq $, a set-valued function $M:K\rightarrow P(Y)$
is monotone non-decreasing in the stronger set order if $k^{\prime }\leq k$
implies that $M(k^{\prime })\leq _{c}M(k)$.
\end{definition}
If a set-valued function is non-decreasing with respect to the stronger set
order, then it is also non-decreasing with respect to the strong set order.
For a single-valued function, the two set orders are identical. Importantly,
if $M$ is monotone non-decreasing in the stronger set order, $M(k)$ and $%
M(k^{\prime })$ have at most one element in common: for any ordered pair of $%
(k,k^{\prime }),$ $M(k)\cap M(k^{\prime })$ is $\emptyset $ or a singleton.
Consider a belief function $\mu :S\rightarrow \Delta (Z)$. The monotonicity
of a belief function is defined by the stronger set order on the supports of
the probability distributions. A belief function is non-decreasing in the
stronger set order if $s^{\prime }\leq s$ implies supp $\mu (s^{\prime
})\leq _{c}$ supp $\mu (s)$. We also use the stronger set order for the
monotonicity of a matching function $m:S^{\ast }\rightarrow P(X^{\ast }).$ A
matching function is non-decreasing in the stronger set order if $s^{\prime
}\leq s$ implies $m(s^{\prime })\leq _{c}m(s)$. Now we define the stronger
monotone equilibrium as follows.
\begin{definition}[Stronger Monotone Equilibrium]
\label{definition_monotone_eq}An equilibrium $\left\{ \sigma ,\mu ,\tau
,m\right\} $ is stronger monotone if $\sigma ,$ $\mu ,\tau ,$ and $m$ are
non-decreasing in the stronger set order.
\end{definition}
We impose the following assumptions for $u$.
\begin{description}
\item[Assumption A] $u(t,s,z)$ is (i) decreasing in $s$, increasing in $t$
and $z$, and satisfies (ii) the strict single crossing property in $%
((t,s);z). $\footnote{%
Let $A$ be a lattice, $\Theta $ be a partially ordered set and $f:A\times
\Theta \rightarrow
\mathbb{R}
.$ Then, $f$ satisfies the \emph{single crossing property} in $(a;\theta )$
if for $a^{\prime }>a^{\prime \prime }$ and $\theta ^{\prime }>\theta
^{\prime \prime },$ (i) $f(a^{\prime },\theta ^{\prime \prime })\geq
f(a^{\prime \prime },\theta ^{\prime \prime })$ implies $f(a^{\prime
},\theta ^{\prime })\geq f(a^{\prime \prime },\theta ^{\prime })$ and (ii) $%
f(a^{\prime },\theta ^{\prime \prime })>f(a^{\prime \prime },\theta ^{\prime
\prime })$ implies $f(a^{\prime },\theta ^{\prime })>f(a^{\prime \prime
},\theta ^{\prime })$. If $f(a^{\prime },\theta ^{\prime \prime })\geq
f(a^{\prime \prime },\theta ^{\prime \prime })$ implies $f(a^{\prime
},\theta ^{\prime })>f(a^{\prime \prime },\theta ^{\prime })$ for every $%
\theta ^{\prime }>\theta ^{\prime \prime },$ then $f$ satisfies the \emph{%
strict single crossing property} in $(a;\theta )$.}
\end{description}
Given Assumption A, the stronger monotonicity of $\sigma $ and $\tau $ comes
from Lemma \ref{lemma_monotone_eq_A'} in Appendix. The stronger monotonicity
of $\mu $ is equivalent to Criterion D1.
\begin{corollary}
\label{corollary_monotone_belief}Let $\sigma $ and $\mu $ be a sender action
function and a belief function in equilibrium, respectively. If Assumption A
is satisfied, $\mu $ passes Criterion D1 if and only if it is non-decreasing
in the stronger set order.
\end{corollary}
The stronger monotonicity of $\mu $ implies that for any $s$ in the interval
of off-path sender actions induced by the discontinuity of $\sigma $ at an
interior sender type $z$, the support of $\mu (s)$ is a singleton and it is $%
\{z\}.$ This implication is satisfied if and only if $\mu $ satisfies
Criterion D1 given Assumption A. Cho and Sobel monotonicity of $\mu $ does
not lead to this implication although any belief function $\mu $ that passes
Criterion D1 satisfies Cho and Sobel monotonicity.\footnote{%
Suppose that an action $s$ is chosen by some sender type $z$ on the
equilibrium path. Cho and Sobel monotonicity means that a receiver should
believe that $s^{\prime }>s$ is not chosen by a lower sender type than $z$.}
\begin{corollary}
\label{corollary_monotone_belief1} According to Lemma \ref%
{theorem_monotone_belief} in Appendix, the support of the belief $\mu (s)$
conditional on $s\notin $ range $\sigma $ is a singleton if it passes
Criterion D1. This implies that if the unique type in the support of the
belief $\mu (s)$ is weakly worse off by deviating to $s\notin $ range $%
\sigma ,$ any other type is strictly worse off with the same deviation.
\end{corollary}
The proof of Corollary \ref{corollary_monotone_belief1} is straightforward,
so it is omitted. Corollaries \ref{corollary_monotone_belief} and \ref%
{corollary_monotone_belief1} play a crucial role in deriving a
non-separating stronger monotone equilibrium in the DM's optimal stronger
monotone equilibrium design.
To establish the stronger monotonicity of an equilibrium, $\left\{ \sigma
,\mu ,\tau ,m\right\} $, we still need to identify sufficient conditions
under which $m$ is non-decreasing in the stronger set order. We impose
Assumption B for $g$ and apply the Milgrom-Shannon Monotone Selection
Theorem (Milgrom and Shannon (1994)).
\begin{description}
\item[Assumption B] (i) $g(t,s,z,x)$ is supermodular\footnote{%
Given a lattice $A,$ $f:A\rightarrow
\mathbb{R}
$ is \emph{supermodular} if $f(a\wedge b)+f(a\vee b)\geq $ $f(a)+f(b)$ for
all $a$ and $b$ in $A$. $f:A\rightarrow
\mathbb{R}
$ is \emph{strictly} \emph{supermodular} if $f(a\wedge b)+f(a\vee b)>$ $%
f(a)+f(b)$ for all unordered $a$ and $b$ in $A$.} in $(t,s,z)$ and satisfies
the single crossing property in $(\left( t,s,z\right) ;x)$ and the strict
single crossing property in $(z;x)$ at each $(s,t)$. (ii) $g(t,s,z,x)$ is
increasing in $x.$
\end{description}
\begin{theorem}[Milgrom-Shannon Monotone Selection Theorem]
Let $f:A\times \Theta \rightarrow
\mathbb{R}
,$ where $A$ is a lattice and $\Theta $ is a partially ordered set. If $f$
is quasisupermodular\footnote{%
Given a lattice $A,$ a function $f:A\rightarrow
\mathbb{R}
$ is \emph{quasisupermodular} if (i) $f(a)\geq f(a\wedge b)$ implies $%
f(a\vee b)\geq f(b)$ and (ii) $f(a)>f(a\wedge b)$ implies $f(a\vee b)>f(b).$
If $f$ is supermodular, then it is quasisupermodular.} in $a$ and satisfies
the strict single crossing property in $(a;\theta ),$ then every selection $%
a^{\ast }(\theta )$ from $\arg \max_{a\in A}f(a,\theta )$ is non-decreasing.
\end{theorem}
\begin{theorem}[Stronger Monotone Signaling Equilibrium Theorem]
\label{theorem_stronger_monotone_eq}
Suppose that Assumptions A and B are satisfied. Then, an equilibrium $%
\left\{ \sigma ,\mu ,\tau ,m\right\} $ is stronger monotone if and only if
it passes Criterion D1.
\end{theorem}
\begin{proof}
Given Assumption A, the stronger monotonicity of $\sigma ,\mu ,$ and $\tau $
comes from Lemma \ref{lemma_monotone_eq_A'} and Corollary \ref%
{corollary_monotone_belief}, in the Online Appendix. Given the stronger
monotonicity of $\sigma ,\mu ,$ and $\tau $, consider a receiver's matching
problem that is $\max_{s\in S^{\ast }}V(s,x),$ where $V(s,x):=\mathbb{E}%
_{\mu (s)}\left[ g(\tau (s),s,z,x)\right] .$ For any $s,s^{\prime }\in
S^{\ast }$ such that $s>s^{\prime },$ we have that $\tau (s)>\tau (s^{\prime
})$ and $z\geq z^{\prime }$ for any $z\in $ supp $\mu (s)$ and $z^{\prime }\in $ supp $\mu (s^{\prime }).$ Therefore, the first three arguments in $g$ are
linearly ordered with respect to $s.$ Given Assumption B(i), this implies
that $V(s,x)$ satisfies the strict single crossing property in $(s;x)$. Choose an
arbitrary selection $\xi _{\circ }(x)\in \arg \max_{s\in S^{\ast }}V(s,x).$
Then, by Milgrom and Shannon's Monotone Selection Theorem, $\xi _{\circ }(x)$
is non-decreasing in $x$. Note that $\max_{s\in S^{\ast }}V(s,x)$ is a
maximization problem with no individual rationality constraint. For all $x\in X,$ let
\begin{equation*}
\xi (x)=\left\{
\begin{array}{cc}
\xi _{\circ }(x) & \text{if }V(\xi _{\circ }(x),x)\geq 0, \\
\eta & \text{otherwise.}%
\end{array}%
\right.
\end{equation*}%
$V(s,x)$ is increasing in $x$ because of Assumption B(ii) and hence we have
that $x<x^{\prime }$ for any $x$ with $\xi (x)=\eta $ and any $x^{\prime }$
with $\xi (x^{\prime })\neq \eta $. This property and the non-decreasing
property of $\xi _{\circ }(x)$ make $\xi (x)$ non-decreasing in $x.$
For any $s\in S^{\ast },$ the set of receiver types who are matched with
senders with $s$ can be expressed as $m(s)=\xi ^{-1}(s):=\{x|\xi (x)=s\}$.
Because $\xi (x)$ is non-decreasing in $x,$ $m$ is non-decreasing with
respect to the stronger set order.
\end{proof}
Without loss of generality, we can focus on stronger monotone equilibria to
derive all D1 equilibria given Assumptions A and B.
A couple of remarks are in order. First of all, Cho and Sobel monotonicity
of beliefs\footnote{\label{footnote1}Suppose that an action $s$ is chosen by
some sender type $z$ on the equilibrium path. Cho and Sobel monotonicity
means that a receiver should believe that $s^{\prime }>s$ is not chosen by a
lower sender type than $z$.} (Cho and Sobel (1990)), a partial implication
of Criterion D1, is instrumental for the selection of a separating
equilibrium as a unique D1 equilibrium: Among those who chose the same
action, the highest sender type always has a profitable upward deviation
given Cho and Sobel monotonicity, so a pooled action cannot be sustained in
a D1 equilibrium. However, this argument does not apply if a receiver cannot
reward such an upward deviation with a higher reaction when the upper bound
of feasible reactions is too low. Given any set of feasible reactions that
the DM may choose, we can derive a unique D1 equilibrium using the stronger
monotonicity of beliefs, which is the full implication of Criterion D1.
Secondly, one could find statements in the literature for games with totally
ordered signal spaces of something like \textquotedblleft single crossing
implies monotonicity on the path if one imposes monotonicity off path
beliefs.\textquotedblright\ Corollary \ref{corollary_monotone_belief}
provides a stronger result in that it shows the equivalence between
Criterion D1 and the stronger monotonicity of beliefs in a general model.
\section{Unique Stronger Monotone Equilibrium given $T$\label%
{Sec_Eq_w_lower_bound}}
The sender's type set is $Z=[\underline{z},\overline{z}]\subset
\mathbb{R}
$ and the receiver's type set is $X=[\underline{x},\overline{x}]\subset
\mathbb{R}
$. $X$ and $Z$ do not have to be bounded. The equilibrium analysis with
non-negative unbounded type sets can be analogously done. We assume that $S=%
\mathbb{R}
_{+}.$ Let $0\in S$ be the null action. Utilities are transferable through a
receiver's reaction $t$. A receiver's utility is $g(t,s,z,x)=v(x,s,z)-t$ and
a sender's utility is $u(t,s,z)=t-c(s,z)$. $v$ can be interpreted as gross
match surplus and $c$ is the cost of taking an action for senders.
Given that receivers can randomize their reactions and utility functions of
both receivers and senders are quasilinear in reactions, choosing $T$ is
equivalent to choosing its convex hull from the DM's perspective. This
reduces the DM's choice set without loss of generality. For simplicity of
notation, we expand the whole set of feasible reactions to $%
\mathbb{R}
_{+}\cup \{\infty \}$. The DM only needs to consider an interval, $[t_{\ell
},t_{h}]$ as the set of feasible reactions $T$, where $0\leq t_{\ell }\leq
t_{h}\leq \infty $ given that receivers can randomize their reactions. If $%
t_{\ell }=0$ and $t_{h}=\infty $, then there are no restrictions on feasible
reactions. If $t_{\ell }=t_{h},$ then $[t_{\ell },t_{h}]$ is a singleton
that allows only one feasible reaction. Let us start with assumptions.
\begin{description}
\item[Assumption 1.] \label{Ass1}(i) $c(s,z)$ is increasing in $s$ but
decreasing in $z$ and (ii) $-c(s,z)$ is strictly supermodular in $(s,z)$.
\end{description}
It is easy to see that Assumption 1 implies Assumption A given the form of
the utility function, $u(t,s,z)=t-c(s,z)$.\footnote{%
If the domain $A$ of a real-valued function $f$ is a subset of $%
\mathbb{R}
^{N},$ then the (strict) supermodularity of $f$ is equivalent to
non-decreasing (increasing) differences (Theorem 2.6.1 and Corollary 2.6.1
in Topkis (1998)), which in turn guarantees the (strict) single crossing
property.}
\begin{description}
\item[Assumption 2.] \label{Ass2}(i)\textbf{\ }$v(x,s,z)$ is supermodular in
$(x,s,z)$ and strictly supermodular in $(z,x),$ (ii) $v$ is increasing in $%
x. $
\end{description}
\begin{lemma}
\label{lemma_ass_2_b}If Assumption 2 holds, then $g$\ satisfies Assumption B.
\end{lemma}
Because Assumptions A and B are implied by Assumptions 1 and 2, Theorem \ref%
{theorem_stronger_monotone_eq} goes through in this section.
Focusing on stronger monotone equilibria, the DM maximizes the aggregate net
surplus.
We impose Assumptions 3, 4, 5, and 6 below for the differentiability of the
separating part of a stronger monotone equilibrium and the existence of a
stronger monotone equilibrium.
\begin{description}
\item[Assumption 3] \label{ass3}(i) $v$ is non-negative, increasing in $z$,
and non-decreasing in action $s$. (ii) $v$ is differentiable and $v_{s}$ and
$v_{z}$ are continuous.
\item[Assumption 4] \label{ass4}$c$ is differentiable with $c(0,z)=0$, $%
\lim_{s\rightarrow \infty }c(s,z)=\infty $ for all $z\in \left[ \underline{z}%
,\overline{z}\right] $, and $c_{s}$ is continuous.
\item[Assumption 5] \label{ass5}If $v(x,s,z)$ is increasing in $s$, it is
concave in $s$ with $\lim_{s\rightarrow 0}v_{s}(x,s,z)=\infty $ and $%
\lim_{s\rightarrow \infty }v_{s}(x,s,z)=0$ and $c(s,z)$ is strictly convex
in $s$ with $\lim_{s\rightarrow 0}c_{s}(s,z)=0$ and $\lim_{s\rightarrow
\infty }$ $c_{s}(s,z)=\infty $.
\item[Assumption 6] \label{ass6}$0<G^{\prime }(z)<\infty $ for all $z\in %
\left[ \underline{z},\overline{z}\right] $ and $0<H^{\prime }(x)<\infty $
for all $x\in \lbrack \underline{x},\overline{x}]$.
\end{description}
We define the function $n$\ as $n\equiv H^{-1}\circ G$\ so that $%
H(n(z))=G(z) $ for all $z\in \lbrack \underline{z},\overline{z}]$. A
bilaterally efficient action $\zeta (x,z)$ for type $z$ given $x$ maximizes $v(x,s,z)-c(s,z)$, and we assume that
\begin{equation}
v(\underline{x},\zeta (\underline{x},\underline{z}),\underline{z})-c(\zeta (\underline{x},\underline{z}),\underline{z})\geq 0. \label{constrained_eff_b}
\end{equation}%
We normalize $\zeta (\underline{x},\underline{z})$ to $0.$ The reservation
utility for each agent is zero. We assume that every agent enters the market if she obtains at least her reservation utility in equilibrium.
We introduce a \emph{well-behaved stronger monotone equilibrium}, a type of
stronger monotone equilibrium, that encompasses a separating equilibrium and
a pooling equilibrium as well. A stronger monotone equilibrium is called
well-behaved if it is characterized by two threshold sender types, $z_{\ell
} $ and $z_{h}$ such that every sender of type below $z_{\ell }$ stays out
of the market, each sender in $[z_{\ell },z_{h})$ separates herself with a unique action choice, and every sender in $[z_{h},\overline{z}]$
pools themselves with the same action. If $z_{\ell }<z_{h}=\bar{z},$ then a
well-behaved equilibrium is separating. If $z_{\ell }=z_{h},$ then a
well-behaved equilibrium is pooling. If $z_{\ell }<z_{h}<\bar{z},$ then it
is (strictly) well-behaved with both separating and pooling parts in the
equilibrium. We shall show that any stronger monotone equilibrium (i.e., any D1 equilibrium) is both unique and well-behaved.
We first start with a stronger monotone separating equilibrium. Once we
characterize it, the characterization of any stronger monotone well-behaved
equilibrium comes naturally. For now, let us assume that the lower bound $%
t_{\ell }$ of the interval $T$ that the DM chooses for feasible reactions is
less than the maximal value of $v-c$ that can be created by the highest
types $\overline{z}$ and $\overline{x}$. Otherwise all agents would stay out
of the market.
Let $z_{\ell }$ be the lowest sender type who is matched in equilibrium and $%
s_{\ell }$ her action. The following two inequalities must be satisfied at $%
(s_{\ell },z_{\ell })$:
\begin{align}
v\left( n\left( z\right) ,s,z\right) -t_{\ell }& \geq 0, \label{lem1} \\
t_{\ell }-c\left( s,z\right) & \geq 0. \label{lem2}
\end{align}%
Two cases must be distinguished. If $z_{\ell }=\underline{z},$ then all
types are matched in equilibrium. This is the \emph{first case}. In this
case, if we have a separating part in equilibrium, there is no information
rent in the lowest match between type $\underline{z}$ and type $\underline{x}
$. Therefore, the equilibrium action $s_{\ell }$ in the lowest match is
bilaterally efficient, i.e., $s_{\ell }=\zeta (\underline{x},\underline{z}%
)=0.$ In this case, we assume that the DM sets $t_{\ell }$ to $c\left( \zeta
(\underline{x},\underline{z}),\underline{z}\right) =0$.
If $t_{\ell }$ is so high that type $\underline{x}$ cannot achieve a
non-negative value of $v-t_{\ell }$ in a match with type $\underline{z}$ who
takes an action that costs her $t_{\ell }$, then the lowest match must be
between types $z_{\ell }$ and $x_{\ell }:=n(z_{\ell })$ in the interior of
both type distributions. (\ref{lem1}) and (\ref{lem2}) must also be
satisfied with equality at $(s_{\ell },z_{\ell })$. This is the \emph{second
case}. If either one of them, e.g., (\ref{lem1}), is positive, then a
receiver whose type is below but arbitrarily close to $x_{\ell }$ finds it
profitable to be matched with type $z_{\ell }$ instead of staying out of the
market.
\begin{lemma}
\label{lemmaA}If there is a solution $(s_{\ell },z_{\ell })$ that solves (%
\ref{lem1}) and (\ref{lem2}), it is unique.
\end{lemma}
Now consider the upper bound $t_{h}$ of the interval $T$ that the DM chooses
for feasible reactions. What happens if it is equal to $\infty $? In any
stronger monotone equilibrium with $t_{h}=\infty $, the first-order
necessary condition for the sender's equilibrium action choice that solves
her problem in \eqref{WP3} would satisfy that for all $z\in (z_{\ell },\bar{z%
})$%
\begin{equation}
\tau ^{\prime }(\sigma \left( z\right) )-c_{s}(\sigma \left( z\right) ,z)=0.
\label{FOCS}
\end{equation}%
On the other hand, the equilibrium reaction choice $\tau (s)$ by the
receiver who is matched with a sender with action $s$ solves his problem in %
\eqref{FP} and its first-order necessary condition must satisfy that for all
$s\in $ Int$S^{\ast }$
\begin{equation}
\tau ^{\prime }(s)=v_{s}(m(s),s,\mu \left( s\right) )+v_{z}(m(s),s,\mu
\left( s\right) )\mu ^{\prime }\left( s\right) , \label{FOCR}
\end{equation}%
where $m(s)=n(\mu \left( s\right) )$ is the type of the receiver who is
matched with a sender with action $s.$ Note that the equilibrium matching
function $m$ is stronger monotone due to Theorem \ref%
{theorem_stronger_monotone_eq}. Because all senders on the market
differentiate themselves with unique action choices in a stronger monotone
separating equilibrium, $m$ is strictly increasing over $S^{\ast }$ and the
matching is assortative in terms of the receiver's type and the sender's
action (and the receiver's type and the sender's type as well).
In our two-sided matching model, the market clearing conditions must be
embedded into senders' action choices and the belief on sender types. In
Theorem \ref{thm_differentiable_sep_eq}, the differentiability of $\tau $
comes from senders' optimal action choices and the differentiability of $\mu
$ comes from receivers' optimal choice of a sender, given the
market-clearing condition and both the continuity of $\sigma $ and the
differentiability of $\tau $ that we can derive from senders' optimal action
choices. Theorem \ref{thm_differentiable_sep_eq} is the consequence of
Assumptions 1--5. The proof of Theorem \ref{thm_differentiable_sep_eq} is
rather long but its differentiability results contribute to the literature.%
\footnote{%
To apply the differentiability results in a model with one sender and one
receiver (Mailath (1987)) to a two-sided matching model, Hopkins (2012)
imposes the restriction that there is no complementarity between receiver type
$x$ and sender action $s.$ This restriction gets rid of a matching effect on
the marginal productivity of a sender's action. This restriction is not
needed for establishing our differentiability result in Theorem \ref%
{thm_differentiable_sep_eq} above.}
\begin{theorem}
\label{thm_differentiable_sep_eq}In any well-behaved stronger monotone
equilibrium with $t_{h}=\infty $, (i) $S^{\ast }$ is a compact real
interval, $[\sigma (\underline{z}),\sigma \left( \bar{z}\right) ]$, (ii) $%
\tau :S^{\ast }\rightarrow T$ is increasing and continuous on $S^{\ast }$
and has continuous derivative $\tau ^{\prime }$ on Int $S^{\ast },$
and (iii) $\mu :S\rightarrow \Delta (Z)$ is increasing and continuous on $%
S^{\ast }$ and has continuous derivative $\mu ^{\prime }$ on Int $S^{\ast }.$
\end{theorem}
Because $\mu $ is the inverse of $\sigma ,$ the differentiability of $\sigma
$ is immediate from Theorem \ref{thm_differentiable_sep_eq}.(iii). Combining
(\ref{FOCS}) and (\ref{FOCR}) yields a function $\phi (s,z)$ defined below:
\begin{equation}
\phi (s,z):=\frac{-\left[ v_{s}\left( n(z),s,z\right) -c_{s}\left(
s,z\right) \right] }{v_{z}\left( n(z),s,z\right) }.
\label{differential_characteristic}
\end{equation}%
This defines the first-order ordinary differential equation $\mu ^{\prime }(s)=\phi (s,\mu \left( s\right) )$ with initial condition $\mu (s_{\ell })=z_{\ell }.$
\begin{lemma}
\label{thm_unique_separating_eq}If $v$\ and $c$\ are such that $\phi $\
defined in (\ref{differential_characteristic}) is uniformly Lipschitz
continuous, then the solution exists and it is unique.
\end{lemma}
Lemma \ref{thm_unique_separating_eq} is a simple application of the Picard--Lindel\"{o}f Theorem (see Teschl (2012)). Let $\tilde{\mu}$ be the unique
solution for the differential equation.
Given $z_{\ell }$ induced by $t_{\ell }$ and $t_{\ell }<t_{h}=\infty $, $\{%
\tilde{\sigma},\tilde{\mu},\tilde{\tau},\tilde{m}\}$ denotes a stronger
monotone separating equilibrium.\footnote{%
The characterization of the stronger monotone separating equilibrium can be
found in Theorem \ref{proposition1} in the Online Appendix.}
Once we derive $\tilde{\mu}$, we can construct the functions $\tilde{\sigma},$ $\tilde{\tau},$ and $\tilde{m}$ implied by $\tilde{\mu}$. $\tilde{\sigma}(z)$ is determined by $\tilde{\sigma}(z)=\tilde{\mu}^{-1}(z)$ for all $z\in \lbrack z_{\ell },\overline{z}],$ where $\tilde{\mu}^{-1}(z)$ is the action that satisfies $z=\tilde{\mu}\left( \tilde{\mu}^{-1}(z)\right) .$ For $s\in \lbrack s_{\ell },\tilde{\sigma}(\overline{z})],$
we can derive the matching function $\tilde{m}$ according to $\tilde{m}%
(s)=n\left( \tilde{\mu}(s)\right) $. Because $\tilde{\mu}$ is continuous
everywhere and differentiable at all $s\in $ Int $S^{\ast },$ integrating
the right-hand-side of (\ref{FOCR}) with the initial condition with $\tilde{%
\tau}(s_{\ell })=t_{\ell }$ induces%
\begin{equation}
\tilde{\tau}(s)=\int_{s_{\ell }}^{s}\left[ v_{s}(\tilde{m}(y),y,\tilde{\mu}%
\left( y\right) )+v_{z}(\tilde{m}(y),y,\tilde{\mu}\left( y\right) )\tilde{\mu%
}^{\prime }\left( y\right) \right] dy+t_{\ell }. \label{equilibrium_wage}
\end{equation}%
However, if $t_{h}<\tilde{\tau}(\tilde{\sigma}(\bar{z}))$, then we have no
separating equilibrium. In this case, let $Z(s)$ denote the set of the types
of senders who choose the same action $s$.
\begin{lemma}
\label{lemma_no_bottom_bunching}If $Z(s)$ has a positive measure in a
stronger monotone equilibrium, then it is an interval with $\max Z(s)=%
\overline{z}.$
\end{lemma}
Lemma \ref{lemma_binding_upper_bound} below shows that if there is pooling
on the top of the sender side, the reaction to those senders pooled at the
top must be the upper bound of feasible reactions, $t_{h}.$
\begin{lemma}
\label{lemma_binding_upper_bound}If $Z(s)$ has a positive measure in a
stronger monotone equilibrium, then $t_{h}$ is the reaction to the senders
of types in $Z(s)$.
\end{lemma}
We can establish Lemmas \ref{lemma_no_bottom_bunching} and \ref%
{lemma_binding_upper_bound} using only Cho and Sobel monotonicity of $\mu $
without relying on the stronger monotonicity of $\mu $. However, we cannot
derive a D1 equilibrium with Cho and Sobel monotonicity as explained after
Theorem \ref{theorem1}.
Using Lemmas \ref{lemma_no_bottom_bunching} and \ref%
{lemma_binding_upper_bound}, we can establish that the stronger monotone
separating equilibrium $\{\tilde{\sigma},\tilde{\mu},\tilde{\tau},\tilde{m}%
\} $ is a unique stronger monotone equilibrium if $\tilde{\tau}(\tilde{\sigma%
}(\bar{z}))\leq t_{h}.$
\begin{theorem}
\label{thm_uniqueSME}Suppose that the DM chooses $T=[t_{\ell },t_{h}]$ such that $0\leq t_{\ell }<\tilde{\tau}(\tilde{\sigma}(\bar{z}))\leq t_{h}$. Then, the unique stronger monotone equilibrium is the well-behaved stronger monotone equilibrium and it is separating.
\end{theorem}
\begin{proof}
Lemma \ref{thm_unique_separating_eq} leads to the existence of the unique
stronger monotone separating equilibrium. The remaining question is whether
there are other stronger monotone equilibria. Lemma \ref%
{lemma_no_bottom_bunching} is still valid. Therefore, if there is bunching
in sender action $s$, it must be among senders in a type interval $Z(s)$
with $\overline{z}$ as its maximum. However, such bunching is not sustained
because if we follow the logic in the proof of Lemma \ref%
{lemma_binding_upper_bound}, we can show that there is a profitable small
upward deviation from $s$ for the sender of type-$\overline{z}$. Therefore,
there is no additional stronger monotone equilibrium.
\end{proof}
Theorem \ref{thm_uniqueSME} extends the uniqueness result in Cho and Sobel
(1990) and Ramey (1996) to a two-sided matching model. If $t_{h}<\tilde{\tau}%
(\tilde{\sigma}(\bar{z}))$, there are only two types of non-separating
stronger monotone equilibria as shown in Lemma \ref%
{theorem_all_eq_w/o_separating}. The reason is that pooling can happen only
among senders in an interval with $\bar{z}$ being its maximum due to Lemma %
\ref{lemma_no_bottom_bunching}.
\begin{lemma}
\label{theorem_all_eq_w/o_separating}If there is an upper bound of reactions
$t_{h}<\tilde{\tau}(\tilde{\sigma}(\bar{z}))$, then, there are two possible
stronger monotone equilibria: (i) a strictly well-behaved stronger monotone
equilibrium and (ii) a stronger monotone pooling equilibrium.
\end{lemma}
Lemma \ref{theorem_all_eq_w/o_separating} follows from Theorem \ref%
{theorem_stronger_monotone_eq} (Stronger Monotone Signaling Equilibrium Theorem) and
Lemmas \ref{lemma_no_bottom_bunching} and \ref{lemma_binding_upper_bound}.
For the uniqueness of a stronger monotone equilibrium when $t_{h}<\tilde{\tau%
}(\tilde{\sigma}(\bar{z}))$, we impose an additional assumption as follows.
\begin{description}
\item[Assumption 7] \label{ass7}$\lim_{z\rightarrow \underline{z}%
}c(s,z)=\infty $ for all $s>0$ and either (i) or (ii) is satisfied:
(i) $v(x,s,z)=v(x,s^{\prime },z)$ for all $s,s^{\prime }\in
\mathbb{R}
_{+}$, $v(\underline{x},s,z)=0$ for all $s,z$, and $v(x,s,z)>0$ for all $x>%
\underline{x},$ all $z>\underline{z},$ and all $s\in
\mathbb{R}
_{+}$.
(ii) $v(x,0,z)=0$ and $v(x,s,z)$ is increasing in $s$ for all $x$ and $z$,
and $v(x,0,z)-c(0,z)\geq 0$ for all $x>\underline{x},$ all $z>\underline{z}$.
\end{description}
We first consider a stronger monotone pooling equilibrium. This is a type of
stronger monotone equilibrium when $t_{\ell }=t_{h}=t^{\ast }.$ Every sender
of type above $z_{\ell }=z_{h}=z^{\ast }$ enters the market with the pooled
action $s^{\ast }$.%
\begin{gather}
t^{\ast }-c(s^{\ast },z^{\ast })\geq 0, \label{pooling_sender1} \\
\mathbb{E}\left[ v\left( n\left( z^{\ast }\right) ,s^{\ast },z^{\prime
}\right) |z^{\prime }\geq z^{\ast }\right] -t^{\ast }\geq 0,
\label{pooling_receiver1}
\end{gather}%
where each condition holds with equality if $z^{\ast }>\underline{z}.$
\begin{theorem}
\label{lemma_unique_pooling}For only a single feasible reaction $t^{\ast
}>0, $ the only possible stronger monotone pooling equilibrium is a stronger
monotone pooling equilibrium with $z^{\ast }>\underline{z}$ and $s^{\ast }>0$
that satisfy (\ref{pooling_sender1}) and (\ref{pooling_receiver1}), each
with equality. For only a single feasible reaction, $t^{\ast }=0,$ the only
possible stronger monotone pooling equilibrium is a stronger monotone
pooling equilibrium with $z^{\ast }=\underline{z}$ and $s^{\ast }=0$.
\end{theorem}
\begin{proof}
Fix $t^{\ast }>0.$ We first show that $z^{\ast }>\underline{z}$. On the
contrary, suppose that $z^{\ast }=\underline{z}.$ Then, $s^{\ast }=0.$
Otherwise (i.e., $s^{\ast }>0$), (\ref{pooling_sender1}) is not satisfied
because $\lim_{z\rightarrow \underline{z}}c(s,z)=\infty $ for all $s>0$ in
Assumption 7. If Assumption 7.(i) is satisfied, $\mathbb{E}\left[ v\left(
n\left( z^{\ast }\right) ,s^{\ast },z^{\prime }\right) |z^{\prime }\geq
z^{\ast }\right] =0$ because $n\left( z^{\ast }\right) =\underline{x}.$
Therefore, (\ref{pooling_receiver1}) is not satisfied. If Assumption 7.(ii)
is satisfied, $s^{\ast }=0$ implies that $\mathbb{E}\left[ v\left( n\left(
z^{\ast }\right) ,s^{\ast },z^{\prime }\right) |z^{\prime }\geq z^{\ast }%
\right] =0.$ Therefore, (\ref{pooling_receiver1}) is not satisfied. It means
that if $t^{\ast }>0,$ then $z^{\ast }>\underline{z}$.
Given $t^{\ast }>0,$ let $z^{\ast }>\underline{z}$ be the threshold sender type in a pooling equilibrium, so that (\ref{pooling_sender1}) and (\ref{pooling_receiver1}) hold, each with equality. It means that $s^{\ast }>0.$ If a
sender reduces her action below $s^{\ast },$ the stronger monotone belief
implies that her type is believed to be $z^{\ast }$ and no receiver is
willing to match with her at $t^{\ast }.$ Furthermore, no sender wants to
choose her action above $s^{\ast }$ because the reaction is fixed to $%
t^{\ast }.$ Given binding (\ref{pooling_sender1}) and (\ref%
{pooling_receiver1}), no agent who stays out of the market wants to enter it and, conversely, no agent who enters the market wants to leave it.
If $t^{\ast }=0,$ then we must have $s^{\ast }=0.$ Otherwise, the sender with $s^{\ast }$ would obtain utility less than her reservation utility. Suppose
that Assumption 7.(i) is satisfied. Then, every receiver of type above $%
\underline{x}$ gets positive (expected) utility by matching with a sender.
Therefore, every receiver wants to enter the market. Then, every sender must
enter the market as well. Therefore, $z^{\ast }=\underline{z}.$ Suppose that
Assumption 7.(ii) is satisfied. Because $s^{\ast }=0,$ any receiver who is
matched with a sender with $s^{\ast }=0$ gets the same utility as his
reservation utility. All receivers and senders receive zero utility regardless of their decisions on market entry (so the aggregate net surplus is always zero). Because agents enter the market whenever they are indifferent between entering and staying out, all agents enter the market and $z^{\ast }=\underline{z}.$ It is clear that no agent wants to leave the market.
\end{proof}
Now consider a (strictly) well-behaved equilibrium with both separating and
pooling parts when $t_{\ell }<t_{h}<\tilde{\tau}(\tilde{\sigma}(\bar{z})).$
The system of equations represented in (\ref{jumping_sellers}) and (\ref%
{jumping_buyers}) is the \emph{key} to understanding jumping and pooling in the
upper tail of the match distribution with $t_{h}<\tilde{\tau}(\tilde{\sigma}(%
\bar{z}))$:
\begin{gather}
t_{h}-c\left( s,z\right) =\tilde{\tau}\left( \tilde{\sigma}\left( z\right)
\right) -c\left( \tilde{\sigma}\left( z\right) ,z\right) ,
\label{jumping_sellers} \\
\mathbb{E}[v(n\left( z\right) ,s,z^{\prime })|z^{\prime }\geq
z]-t_{h}=v\left( n\left( z\right) ,\tilde{\sigma}\left( z\right) ,z\right) -%
\tilde{\tau}\left( \tilde{\sigma}\left( z\right) \right) .
\label{jumping_buyers}
\end{gather}
Let $(s_{h},z_{h})$ denote a solution of (\ref{jumping_sellers}) and (\ref%
{jumping_buyers}). Note that (\ref{jumping_sellers}) makes the type $z_{h}$
sender indifferent between choosing $s_{h}$ for $t_{h}$ and $\tilde{\sigma}%
\left( z_{h}\right) $ for $\tilde{\tau}\left( \tilde{\sigma}\left(
z_{h}\right) \right) .$ The expression on the right-hand side of (\ref{jumping_sellers}) is the equilibrium utility of the type $z_{h}$ sender in the separating equilibrium. The expression on the left-hand side of (\ref{jumping_buyers}) is the
utility for the type $n\left( z_{h}\right) $ receiver who chooses a sender
with action $s_{h}$ as his partner by choosing $t_{h}$ for her. This is the
equilibrium utility for type $n(z_{h})$. The expression on the right-hand
side is his utility if he chooses a sender of type $z_{h}$ with action $%
\tilde{\sigma}\left( z_{h}\right) $ as his partner by choosing the reaction $%
\tilde{\tau}\left( \tilde{\sigma}\left( z_{h}\right) \right) $.
\begin{lemma}
\label{lemmaB}If there is a solution $(s_{h},z_{h})$ of (\ref%
{jumping_sellers}) and (\ref{jumping_buyers}), it is unique.
\end{lemma}
Lemma \ref{lemma2} below shows that there is jumping in reactions and
actions at the threshold sender type.
\begin{lemma}
\label{lemma2}If there exists a solution $(s_{h},z_{h})$ of (\ref%
{jumping_sellers}) and (\ref{jumping_buyers}) with $z_{\ell }<z_{h}<\bar{z}$%
, then $\tilde{\tau}\left( \tilde{\sigma}\left( z_{h}\right) \right) <t_{h}<%
\tilde{\tau}\left( \tilde{\sigma}\left( \bar{z}\right) \right) $ and $\tilde{%
\sigma}\left( z_{h}\right) <s_{h}<\tilde{\sigma}\left( \bar{z}\right) $.
\end{lemma}
We exploit Assumptions 1 and 2(i) for the proof of Lemma \ref{lemma2}.
Theorem \ref{theorem1} establishes the existence of a unique well-behaved
stronger monotone equilibrium given $T=[t_{\ell },t_{h}]$ with $0\leq
t_{\ell }<\tilde{\tau}(\tilde{\sigma}(\bar{z}))<t_{h}$. Let $x_{h}:=n(z_{h})$%
. Note that Theorem \ref{theorem1} allows for a separating or a pooling
equilibrium as well as a strictly well-behaved one.
\begin{theorem}
\label{theorem1}Suppose that the DM chooses a set of feasible reactions $%
T=[t_{\ell },t_{h}]$ with $0\leq t_{\ell }<\tilde{\tau}(\tilde{\sigma}(\bar{z%
}))<t_{h}$ under which $(z_{\ell },s_{\ell })$ is the lower threshold sender
type and her equilibrium action and $(z_{h},s_{h})$\ is the upper threshold
sender type and her equilibrium action. Then, there exists a unique
well-behaved stronger monotone equilibrium $\left\{ \hat{\sigma},\hat{\mu},%
\hat{\tau},\hat{m}\right\} $. It is characterized as follows.
\begin{enumerate}
\item $\hat{\sigma}$ follows (i) $\hat{\sigma}(z)=0$ if $z\in \left[
\underline{z},z_{\ell }\right) $; (ii) $\hat{\sigma}(z)=s_{\ell }$ if $%
z=z_{\ell }$; (iii) $\hat{\sigma}(z)$ satisfies that $\hat{\tau}^{\prime }(%
\hat{\sigma}\left( z\right) )-c_{s}(\hat{\sigma}\left( z\right) ,z)=0$ if $%
z\in (z_{\ell },z_{h})$; (iv) $\hat{\sigma}(z)=s_{h}$ if $z\in \left[ z_{h},%
\overline{z}\right] $. Further, $\lim_{z\nearrow z_{h}}\hat{\sigma}\left( z\right) <s_{h}$.
\item $\hat{\mu}$ follows (i) $\hat{\mu}(s)=G(z|\underline{z}\leq z<z_{\ell
})$ if $s=0$; (ii) $\hat{\mu}(s)=z_{\ell }$ if $s\in (0,\hat{\sigma}(z_{\ell
}))$; (iii) $\hat{\mu}(s)=\hat{\sigma}^{-1}(s)$ if $s\in \left[ \hat{\sigma}%
(z_{\ell }),\hat{\sigma}(z_{h})\right) $; (iv) $\hat{\mu}(s)=z_{h}$ if $s\in
\lbrack \lim_{z\nearrow z_{h}}\hat{\sigma}(z),s_{h})$; (v) $\hat{\mu}%
(s)=G(z|z_{h}\leq z\leq \overline{z})$ if $s=s_{h}$; (vi) $\hat{\mu}(s)=%
\overline{z}$ if $s>s_{h}$.
\item $\hat{\tau}(s)$ with $\hat{\tau}(s_{\ell })=t_{\ell }$ satisfies (i) $%
v_{s}\left( x,s,\hat{\mu}(s)\right) +v_{z}\left( x,s,\hat{\mu}(s)\right)
\hat{\mu}^{\prime }(s)-\hat{\tau}^{\prime }(s)=0$ at $s=\xi (x)$ for all $%
x\in $ $(x_{\ell },x_{h})$ and (ii) $\hat{\tau}(s)=t_{h}$ if $s\geq s_{h}$.
Further, $\hat{\tau}\left( \lim_{z\nearrow z_{h}}\hat{\sigma}\left( z\right) \right) <t_{h}$.
\item $\hat{m}$ follows that (i) $\hat{m}(s)=n(\hat{\mu}(s))$ if $s\in \left[
\hat{\sigma}(z_{\ell }),\hat{\sigma}(z_{h})\right) $, (ii) $\hat{m}(s)=\left[
x_{h},\overline{x}\right] $ if $s=s_{h}.$
\end{enumerate}
\end{theorem}
If a well-behaved equilibrium has both separating and pooling, it follows
the separating equilibrium with the same $z_{\ell }$ before $z$ hits $z_{h}$
according to Conditions
1(i)--(iii), 2(i)--(iii), 3(i), and 4(i) in Theorem \ref{theorem1} above. As
Condition 1 in Theorem \ref{theorem1} and Lemma \ref{lemma2} show, in a
(strictly) well-behaved stronger monotone equilibrium, we have jumping in
equilibrium sender actions at the threshold sender type $z_{h}$, followed by
pooling. In Figure \ref{fig:figure1}, the equilibrium sender actions consist
of the three different blue parts.\footnote{%
Note that $\lim_{z\nearrow z_{h}}\hat{\sigma}(z)=\tilde{\sigma}(z_{h})$ in
Figure \ref{fig:figure1}.} Note that equilibrium matching is assortative in
terms of sender action and receiver type (and therefore in terms of sender
type and receiver type) in the separating part of the equilibrium but it is
random in the pooling part of the equilibrium. Therefore, there is matching
inefficiency in the pooling part but there may be potential savings in the
signaling cost associated with the pooled action choice by senders above $%
z_{h}$.
\begin{figure}[tbp]
\centering\includegraphics[scale=0.4]{figures/fig_seller_signal.pdf}
\caption{Senders' equilibrium actions}
\label{fig:figure1}
\end{figure}
Because $z_{h}$ is in the interior of the sender's type interval in a
strictly well-behaved stronger monotone equilibrium, Cho and Sobel
monotonicity does not pin down the belief $\hat{\mu}(s)$ conditional on an
off-path action $s\in \lbrack \lim_{z\nearrow z_{h}}\hat{\sigma}(z),s_{h})$,
whereas the stronger monotonicity of the belief (see Corollary \ref%
{corollary_monotone_belief} in the Online Appendix) uniquely pins it down as
one that puts all the probability weights on $z_{h}$ as specified in
Condition 2(iv). Further, because the stronger monotonicity of the belief is
equivalent to Criterion D1, we only need to show that the sender type $z_{h}$
has no incentive to deviate to an off-path action in $[\lim_{z\nearrow z_{h}}%
\hat{\sigma}(z),s_{h})$ in order to show that no sender has an incentive to
deviate to such an off-path action.
Theorem \ref{thm_unique_well_behaved} below shows that $\left\{ \hat{\sigma},%
\hat{\mu},\hat{\tau},\hat{m}\right\} $ is a unique stronger monotone
equilibrium if $0\leq t_{\ell }<t_{h}<\tilde{\tau}(\tilde{\sigma}\left(
\overline{z}\right) )$.
\begin{theorem}
\label{thm_unique_well_behaved}Suppose that the DM fixes a set of feasible
reactions to $T=[t_{\ell },t_{h}]$ with $0\leq t_{\ell }<t_{h}<\tilde{\tau}(%
\tilde{\sigma}\left( \overline{z}\right) )$. A unique stronger monotone
equilibrium is $\left\{ \hat{\sigma},\hat{\mu},\hat{\tau},\hat{m}\right\} $.
\end{theorem}
\begin{proof}
There is no separating equilibrium given $T=[t_{\ell },t_{h}]$ with $%
t_{\ell }<t_{h}<\tilde{\tau}\left( \tilde{\sigma}\left( \overline{z}\right)
\right) .$ Because of Lemma \ref{theorem_all_eq_w/o_separating}, it is then
sufficient to show that there is no pooling equilibrium. To the contrary,
suppose that there exists a pooling equilibrium. Because of Lemma \ref%
{lemma_binding_upper_bound}, $t_{h}$ is the equilibrium reaction for senders
with pooled action $s^{\ast }.$ Therefore, $x^{\ast }=n(z^{\ast })$ and the
following system of equations is satisfied in a pooling equilibrium:
\begin{gather}
t_{h}-c(s^{\ast },z^{\ast })\geq 0, \label{bottom_seller1_0} \\
\mathbb{E}\left[ v\left( n\left( z^{\ast }\right) ,s^{\ast },z^{\prime
}\right) |z^{\prime }\geq z^{\ast }\right] -t_{h}\geq 0,\text{ }
\label{bottom_buyer1_0}
\end{gather}%
where both inequalities hold with equality if $z^{\ast }>\underline{z}$.
Suppose that $z^{\ast }>\underline{z}$. Then, (\ref{bottom_seller1_0}) and (%
\ref{bottom_buyer1_0}) hold with equality. Further, because both $t_{h}$ and
$z^{\ast }$ are positive, $s^{\ast }$ must be positive from (\ref%
{bottom_seller1_0}) with equality. On the other hand, there should be no
profitable downward deviation for senders. Therefore,
\begin{equation*}
v(n(z^{\ast }),s,z^{\ast })-c(s,z^{\ast })\leq \mathbb{E}\left[ v\left(
n\left( z^{\ast }\right) ,s^{\ast },z^{\prime }\right) |z^{\prime }\geq
z^{\ast }\right] -c(s^{\ast },z^{\ast })\text{ for all }s<s^{\ast }
\end{equation*}%
Because (\ref{bottom_seller1_0}) and (\ref{bottom_buyer1_0}) hold with
equality, this becomes
\begin{equation}
v(n(z^{\ast }),s,z^{\ast })-c(s,z^{\ast })\leq 0\text{ for all }s<s^{\ast }.
\label{no_pooling}
\end{equation}
If $v(x,s,z)$ satisfies Assumption 7(i), then $v(n(z^{\ast }),0,z^{\ast
})-c(0,z^{\ast })=v(n(z^{\ast }),0,z^{\ast })>0.$ Therefore, (\ref%
{no_pooling}) is violated. If $v$ and $c$ satisfy Assumption 7(ii),
then there exists $s<s^{\ast }$ such that $v(n(z^{\ast }),s,z^{\ast
})-c(s,z^{\ast })>0.$ Therefore, (\ref{no_pooling}) is violated.
Therefore, if there is a stronger monotone pooling equilibrium, it must be
the case where $z^{\ast }=\underline{z}.$ In this case, $s^{\ast }=0$.
Otherwise, a sender of type $z$ arbitrarily close to $\underline{z}$ will get negative
utility because $t_{h}<\infty $ and $\lim_{z\rightarrow \underline{z}%
}c(s,z)=\infty $ for all $s>0$ (Assumption 7).
Given $z^{\ast }=\underline{z}$ and $s^{\ast }=0,$ every sender will get
positive utility upon being matched. We distinguish the two cases. If
Assumption 7(ii) is satisfied, then $s^{\ast }=0$ makes the surplus equal to
zero upon being matched, so no receiver is willing to pay $t_{h}>0.$
Therefore there is no pooling equilibrium. If Assumption 7(i) is satisfied,
then we must have $n(z^{\ast })>\underline{x}$ in order to make (\ref%
{bottom_buyer1_0}) hold, because $v(\underline{x},0,z)=0$ for all $z.$ This
implies that there are more senders than receivers, and hence the market
clearing condition is not satisfied. Therefore, there is no pooling
equilibrium.
\end{proof}
Theorems \ref{thm_uniqueSME}, \ref{lemma_unique_pooling}, and \ref%
{thm_unique_well_behaved} establish the unique stronger monotone equilibrium
given each type of feasible reaction set.
\section{Optimal equilibrium design\label{section_optimal_design}}
Lemma \ref{lem_unified_well_behaved} shows that the DM only needs to
consider a well-behaved stronger monotone equilibrium because it also covers
a stronger monotone separating equilibrium and a stronger monotone pooling
equilibrium. Combining with Lemma \ref{lem_unified_well_behaved}, the next
two propositions reduce the DM's design problem of an optimal stronger
monotone equilibrium to the choice of $z_{\ell }$ and $z_{h}$ subject to $%
z_{\ell }\in Z$ and $z_{h}\geq z_{\ell }$ given that a unique well-behaved
stronger monotone equilibrium is the only stronger monotone equilibrium. In
other words, for the DM's optimal equilibrium design, choosing the lower and
upper bounds of the feasible reaction interval is equivalent to choosing the
two threshold sender types $z_{\ell }$ and $z_{h}$, one for market entry and
the other for pooling on the top.
\begin{proposition}
\label{prop_unbounded_design}(i) For any given $z_{\ell }\in \lbrack
\underline{z},\overline{z}),$ there exists a unique solution $(s_{\ell
},t_{\ell })$ of (\ref{lem1}) and (\ref{lem2}). (ii) Suppose that the DM
chooses $t_{\ell }$ in (i) above. Then, $(z_{\ell },s_{\ell })$ solves (\ref%
{lem1}) and (\ref{lem2}) given $t_{\ell }$ and it is unique.
\end{proposition}
Note that $s_{\ell }=\zeta (\underline{x},\underline{z})=0$ given $z_{\ell }=%
\underline{z}$. $s_{\ell }$ is also determined uniquely by $z_{\ell }$ when $%
z_{\ell }>\underline{z}$ because it solves
\begin{equation}
v(n(z_{\ell }),s,z_{\ell })-c(s,z_{\ell })=0, \label{s_l_determination}
\end{equation}%
which is the sum of (\ref{lem1}) and (\ref{lem2}) with equality. Therefore,
Proposition \ref{prop_unbounded_design} implies that we can retrieve $%
t_{\ell }$ that induces $(s_{\ell },z_{\ell })$ from (\ref{lem1}) with
equality when $z_{\ell }=\underline{z}$ or either (\ref{lem1}) or (\ref{lem2}%
), each with equality when $z_{\ell }>\underline{z}$. Therefore, from the
DM's point of view, choosing $z_{\ell }$ is equivalent to choosing $t_{\ell }$.
\begin{proposition}
\label{prop_bounded_design}(i) For any given $z_{h}\in (z_{\ell },\overline{z%
}),$ there exists a unique solution $(s_{h},t_{h})$ of (\ref{jumping_sellers}) and (%
\ref{jumping_buyers}). (ii) Suppose that the DM chooses $t_{h}$ in (i)
above. Then, $(z_{h},s_{h})$ solves (\ref{jumping_sellers}) and (\ref%
{jumping_buyers}) given $t_{h}$ and it is unique.
\end{proposition}
Note that $s_{h}$ is determined solely by $z_{h}$ because it solves
\begin{equation}
\mathbb{E}[v(n\left( z_{h}\right) ,s,z^{\prime })|z^{\prime }\geq
z_{h}]-c\left( s,z_{h}\right) =v\left( n\left( z_{h}\right) ,\tilde{\sigma}%
\left( z_{h}\right) ,z_{h}\right) -c\left( \tilde{\sigma}\left( z_{h}\right)
,z_{h}\right) , \label{s_h_determination}
\end{equation}%
which is the sum of (\ref{jumping_sellers}) and (\ref{jumping_buyers}) at $%
z_{h}$. Therefore, Proposition \ref{prop_bounded_design} implies that we can
retrieve $t_{h}$ that induces $(s_{h},z_{h})$ from either (\ref%
{jumping_sellers}) or (\ref{jumping_buyers}). The DM can first choose the
threshold sender type $z_{h}$ and retrieve the upper bound of feasible
reactions $t_{h}$ that induces $z_{h}$ in a well-behaved equilibrium.
\begin{lemma}
\label{lem_unified_well_behaved}As $z_{h}\rightarrow \bar{z},$ $\left\{ \hat{%
\sigma},\hat{\mu},\hat{\tau},\hat{m}\right\} $ converges to the stronger
monotone separating equilibrium with the same lower threshold sender type $%
z_{\ell }.$ As $z_{h}\rightarrow z_{\ell },$ $\left\{ \hat{\sigma},\hat{\mu},%
\hat{\tau},\hat{m}\right\} $ converges to the stronger monotone pooling
equilibrium in which $t_{\ell }$ is a single feasible reaction, $z_{\ell }$
is the threshold sender type for market entry, and $s_{\ell }$ is the pooled
action for senders in the market.\footnote{%
As $z_{h}\rightarrow z_{\ell }$, (\ref{jumping_sellers}) and (\ref%
{jumping_buyers}) become (\ref{pooling_sender1}) and (\ref{pooling_receiver1}%
), each with equality if $z^{\ast }>\underline{z}$. From (\ref%
{pooling_sender1}) and (\ref{pooling_receiver1}), we can also directly
derive $(t^{\ast },s^{\ast })$ given each $z^{\ast }$ or $(z^{\ast },s^{\ast
})$ given each $t^{\ast }$ for a pooling equilibrium.}
\end{lemma}
Given Propositions \ref{prop_unbounded_design}--\ref{prop_bounded_design}
and Lemma \ref{lem_unified_well_behaved}, we can say that, from the DM's point
of view, choosing an interval of feasible reactions $T=[t_{\ell },t_{h}]$ is
equivalent to choosing the corresponding $z_{\ell }$ and $z_{h}.$
Because a unique stronger monotone equilibrium is always well-behaved, this
implies that the solution for the DM's unconstrained design problem of the
optimal stronger monotone equilibrium is the same as the solution for the
DM's design problem of the optimal stronger monotone equilibrium where the
DM chooses the lower and upper threshold sender types, $z_{\ell }$ and $%
z_{h} $, for a well-behaved stronger monotone equilibrium.
Given a well-behaved stronger monotone equilibrium $\left\{ \hat{\sigma},%
\hat{\mu},\hat{\tau},\hat{m}\right\} $ with the lower and upper threshold
sender types, $z_{\ell }$ and $z_{h}$, the aggregate net surplus is
\begin{multline*}
\Pi (z_{\ell },z_{h}):=\int_{z_{\ell }}^{z_{h}}v(n(z),\hat{\sigma}%
(z),z)dG\left( z\right) -\int_{z_{\ell }}^{z_{h}}c(\hat{\sigma}%
(z),z)dG\left( z\right) \\
+\int_{z_{h}}^{\bar{z}}\mathbb{E}\left[ v(n(z),s_{h}(z_{h}),z^{\prime
})|z^{\prime }\geq z_{h}\right] dG\left( z\right) -\int_{z_{h}}^{\bar{z}%
}c(s_{h}(z_{h}),z)dG\left( z\right) ,
\end{multline*}%
where $s_{h}(z_{h})$ is the pooled action chosen by all sender types above $%
z_{h}$ and it is unique given any $z_{h}\in \lbrack z_{\ell },\bar{z}]$
because of Proposition \ref{prop_bounded_design}(i). Note that the first
line in $\Pi (z_{\ell },z_{h})$ is the aggregate net surplus in the
separating part of the equilibrium where matching is \emph{assortative} in
terms of types. The second line is the aggregate net surplus in the pooling
part of the equilibrium with \emph{random} matching and hence matching
efficiency is lower in this pooling part, but there are potential savings in
the cost due to the pooled action chosen by all senders above $z_{h}$.
\begin{theorem}
\label{thm_optimal_design}The solution for the DM's unconstrained design
problem of the optimal stronger monotone equilibrium is the same as the
solution for the DM's following design problem:%
\begin{equation*}
\max_{\overline{z}>z_{\ell }\geq \underline{z},z_{h}\geq z_{\ell }}\Pi
(z_{\ell },z_{h})
\end{equation*}
\end{theorem}
If $z_{\ell }<z_{h}<\bar{z},$ then the stronger monotone equilibrium is
strictly well-behaved. If $z_{\ell }<z_{h}=\bar{z},$ it is separating. If $%
z_{\ell }=z_{h}<\bar{z},$ it is pooling.
Generally, the aggregate equilibrium surplus depends on $v,$ $c,$ $G$, and $%
H.$ For the optimal equilibrium design, we propose an approach that
approximates the distribution of receiver types with the \textquotedblleft
shift\textquotedblright\ and \textquotedblleft relative
spacing\textquotedblright\ parameters given an arbitrary distribution of
sender types. Consider a gross match surplus function that follows the form
of $v(x,s,z)=As^{a}xz$ with $0\leq a<1.$ The cost of choosing an action $s$
is $c(s,z)=\beta \frac{s^{2}}{z}$ for the sender of type $z,$ where $\beta
>0 $. The lowest sender type is $\underline{z}=0.$ Note that $v$, $c$, and $%
\underline{z}=0$ satisfy Assumption 7.
A sender's type follows a probability distribution $G$, whereas a receiver's
type follows $H$. Recall that $n$ is defined as $H^{-1}\circ G$\ so that $%
H(n(z))=G(z)$ for all $z.$ We assume that $n$ takes the following form:
\begin{equation}
n(z)=kz^{q}, \label{matching_function}
\end{equation}%
where $k>0$ and $q\geq 0.$ Note that $k$ is the \textquotedblleft
shift\textquotedblright\ parameter and $q$ is the \textquotedblleft relative
spacing\textquotedblright\ parameter. The relative spacing parameter $q$
shows the relative heterogeneity of receiver types to sender types. Recall
that $n(z)$ denotes the type of a receiver who is matched with the sender of
type $z$ in the stronger monotone separating equilibrium. This approach is
general in the sense that it approximates the distribution of receiver types
with the \textquotedblleft shift\textquotedblright\ and \textquotedblleft
relative spacing\textquotedblright\ parameters for any arbitrary
distribution of sender types.
To derive a well-behaved stronger monotone equilibrium, we first need to
solve the first-order differential equation $\tilde{\mu}^{\prime
}(s)=\phi (s,\tilde{\mu}(s))$ in (\ref{differential_characteristic}) with
the initial condition $(z_{\ell },s_{\ell }).$ The value of $s_{\ell }$ only
depends on $z_{\ell }.$ If $z_{\ell }=0,$ then $s_{\ell }(z_{\ell })=\zeta
(0,0)=0.$ If $z_{\ell }>0,$ then $s_{\ell }(z_{\ell })$ is determined by (%
\ref{s_l_determination}) and it is $s_{\ell }(z_{\ell })=\left( \frac{Ak}{%
\beta }z_{\ell }^{q+2}\right) ^{\frac{1}{2-a}}$. Note that $s_{\ell
}(z_{\ell })$ is continuous everywhere including at $z_{\ell }=0.$
\begin{proposition}
\label{prop_diff_eq}Given any initial condition $(z_{\ell },s_{\ell }\left(
z_{\ell }\right) ),$ the solution of the first-order differential equation $\mu
^{\prime }(s)=\phi (s,\mu (s))$ is
\begin{equation*}
\tilde{\mu}(s)=\left[
\begin{array}{c}
\left( \dfrac{2\beta (2+q)}{Ak}\right) \dfrac{s^{2-a}}{2+a+aq} \\
+\left( \dfrac{s_{\ell }(z_{\ell })}{s}\right) ^{a(2+q)}\dfrac{\left[
Ak(2+a+aq)z_{\ell }^{2+q}-2\beta (2+q)s_{\ell }(z_{\ell })^{(2-a)}\right] }{%
Ak(2+a+aq)}%
\end{array}%
\right] ^{\dfrac{1}{2+q}}.
\end{equation*}
\end{proposition}
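As a sanity check on this closed form, note that $\tilde{\mu}(s_{\ell
}(z_{\ell }))=z_{\ell }$ must hold by the boundary condition. The following
minimal Python sketch verifies this numerically; the parameter values $A=k=1$
and $\beta =0.5$ are illustrative assumptions, not part of the proposition.
\begin{verbatim}
A, k, beta = 1.0, 1.0, 0.5   # illustrative primitives (assumed values)

def s_ell(zl, q, a):
    # Lower-threshold action s_l(z_l) from (s_l_determination).
    return (A * k / beta * zl ** (q + 2)) ** (1.0 / (2.0 - a)) if zl > 0 else 0.0

def mu_tilde(s, zl, q, a):
    # Closed-form belief function from the proposition above.
    sl = s_ell(zl, q, a)
    t1 = (2 * beta * (2 + q) / (A * k)) * s ** (2 - a) / (2 + a + a * q)
    t2 = ((sl / s) ** (a * (2 + q))
          * (A * k * (2 + a + a * q) * zl ** (2 + q)
             - 2 * beta * (2 + q) * sl ** (2 - a))
          / (A * k * (2 + a + a * q)))
    return (t1 + t2) ** (1.0 / (2 + q))

for zl, q, a in [(0.5, 1.5, 0.9), (1.0, 0.2, 0.2), (0.3, 2.0, 0.5)]:
    print(mu_tilde(s_ell(zl, q, a), zl, q, a), zl)   # the two values should agree
\end{verbatim}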
Note that $\tilde{\sigma}(z)$ is the inverse of $\tilde{\mu}(s)$, which is
derived numerically as $\tilde{\mu}(s)$ does not allow for a closed-form
solution for its inverse. Given $z_{h},$ $s_{h}(z_{h})$ is unique and it
solves (\ref{s_h_determination}), which is%
\begin{equation}
Aks_{h}^{a}z_{h}^{q}\mathbb{E}[z^{\prime }|z^{\prime }\geq z_{h}]-\beta
\frac{s_{h}^{2}}{z_{h}}=Ak\tilde{\sigma}\left( z_{h}\right)
^{a}z_{h}^{1+q}-\beta \frac{\tilde{\sigma}\left( z_{h}\right) ^{2}}{z_{h}}.
\label{s_h_choice}
\end{equation}%
We need to numerically derive $s_{h}(z_{h})$ as it does not allow for a
closed-form solution. Given a choice of $z_{\ell }$ and $z_{h}$, the
aggregate net surplus is
\begin{multline*}
\Pi _{w}(z_{\ell },z_{h},q,a,G)=\int_{z_{\ell }}^{z_{h}}\left[ v(n(z),\hat{%
\sigma}(z),z)-c(\hat{\sigma}(z),z)\right] g(z)dz \\
+\int_{z_{h}}^{\bar{z}}\left[ \mathbb{E}[v(n\left( z\right) ,s_{h}\left(
z_{h}\right) ,z^{\prime })|z^{\prime }\geq z_{h}]-c(s_{h}\left( z_{h}\right)
,z)\right] g(z)dz \\
=\int_{z_{\ell }}^{z_{h}}\left( Akz^{q+1}\hat{\sigma}(z)^{a}-\beta \frac{%
\hat{\sigma}(z)^{2}}{z}\right) g(z)dz \\
+Aks_{h}(z_{h})^{a}\mathbb{E}[z^{\prime }|z^{\prime }\geq
z_{h}]\int_{z_{h}}^{\bar{z}}z^{q}g(z)dz-\beta s_{h}(z_{h})^{2}\int_{z_{h}}^{%
\bar{z}}\frac{1}{z}g(z)dz,
\end{multline*}%
where $\hat{\sigma}(z)=\tilde{\sigma}\left( z\right) $ for $z\in \lbrack
z_{\ell },z_{h}].$
Given $(a,q,G),$ we can find the best well-behaved equilibrium through the
following maximization problem:%
\begin{eqnarray*}
&&\max_{(z_{\ell },z_{h})}\Pi _{w}(z_{\ell },z_{h},q,a,G) \\
&&\text{subject to }0\leq z_{\ell }\leq z_{h}\leq \bar{z}.
\end{eqnarray*}
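To give a concrete sense of how this problem can be solved in practice, the
following Python sketch implements the pipeline for the uniform case $z\sim
3\cdot \mathrm{Beta}(1,1)$ with $z_{\ell }=0$ fixed (the optimal value in
all of our specifications below). It uses the closed-form separating action
derived in the next subsection, solves (\ref{s_h_choice}) for $s_{h}(z_{h})$
by root-finding, evaluates $\Pi _{w}$ by quadrature, and grid-searches over
$z_{h}$. This is a minimal illustration, not the exact routine behind our
figures; the grid resolution and integration tolerances are arbitrary choices.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Primitives: v = A*k*s^a*x*z with x = n(z) = k*z^q, c = beta*s^2/z,
# and z uniform on [0, 3] (i.e., 3*Beta(1,1)). Values are illustrative.
A, k, beta, zbar = 1.0, 1.0, 0.5, 3.0
g = lambda z: 1.0 / zbar                 # density of z
Ez = lambda zh: 0.5 * (zh + zbar)        # E[z | z >= zh], uniform case

def sigma(z, q, a):
    # Closed-form separating action with z_l = 0 (next subsection).
    C = (A*k/(2*beta) * (a*q + a + 2)/(q + 2)) ** (1.0/(2 - a))
    return C * z ** ((q + 2)/(2 - a))

def s_h(zh, q, a):
    # Pooled action from the indifference condition (s_h_choice).
    sig, sigbar = sigma(zh, q, a), sigma(zbar, q, a)
    rhs = A*k * sig**a * zh**(1 + q) - beta * sig**2 / zh
    f = lambda s: A*k * s**a * zh**q * Ez(zh) - beta * s**2 / zh - rhs
    # Lemma 2 brackets the root: sigma(zh) < s_h < sigma(zbar).
    return brentq(f, sig*(1 + 1e-12), sigbar)

def Pi_w(zh, q, a):
    # Aggregate net surplus with z_l = 0 and upper threshold zh.
    sep = quad(lambda z: (A*k * z**(q+1) * sigma(z, q, a)**a
                          - beta * sigma(z, q, a)**2 / z) * g(z), 1e-9, zh)[0]
    if zh >= zbar - 1e-6:                # pure separating limit
        return sep
    sh = s_h(zh, q, a)
    pool = (A*k * sh**a * Ez(zh) * quad(lambda z: z**q * g(z), zh, zbar)[0]
            - beta * sh**2 * quad(lambda z: g(z) / z, zh, zbar)[0])
    return sep + pool

def best_zh(q, a, ngrid=200):
    grid = np.linspace(0.05, zbar, ngrid)  # corner z_h -> 0 (pure pooling) omitted
    vals = [Pi_w(zh, q, a) for zh in grid]
    i = int(np.argmax(vals))
    return grid[i], vals[i]
\end{verbatim}
For instance, \texttt{best\_zh(1.5, 0.9)} should locate a maximizer near
$z_{h}=2.5$, consistent with the first worked example in the numerical
analysis below.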
\subsection{More efficient non-separating equilibria}
Before turning to numerical analysis, we show that the DM can always
increase the efficiency of the stronger monotone equilibrium by restricting
the set of feasible reactions when $q$ and $a$ are small.
Suppose that the DM fixes the lower bound of feasible reactions such that $%
z_{\ell }=0$ and hence $s_{\ell }(z_{\ell })=0$ (no sender stays out of the
market). In this case, the belief function $\tilde{\mu}(s)$ allows for the
closed-form expression of its inverse, which is the sender's equilibrium
action function in the separating equilibrium:%
\begin{equation}
\tilde{\sigma}(z)=\left( \frac{Ak}{2\beta }\frac{aq+a+2}{q+2}\right) ^{\frac{%
1}{2-a}}z^{\frac{q+2}{2-a}}\text{ for }z<z_{h}. \notag
\end{equation}%
The aggregate net surplus in the well-behaved stronger monotone separating
equilibrium is
\begin{multline*}
\Pi _{w}(0,z_{h},q,a,G)= \\
\left( \left( Ak\right) ^{\frac{2}{2-a}}\left( \frac{aq+a+2}{2\beta \left(
q+2\right) }\right) ^{\frac{a}{2-a}}-\beta \left( \frac{Ak}{2\beta }\frac{%
aq+a+2}{q+2}\right) ^{\frac{2}{2-a}}\right) \int_{0}^{z_{h}}z^{\frac{2q+2+a}{%
2-a}}dG(z) \\
+Aks_{h}(z_{h})^{a}\mathbb{E}[z|z\geq z_{h}]\int_{z_{h}}^{\bar{z}%
}z^{q}g(z)dz-\beta s_{h}(z_{h})^{2}\int_{z_{h}}^{\bar{z}}\frac{1}{z}g(z)dz.
\end{multline*}%
As $z_{h}$ approaches $\bar{z}$, the maximum of the support of $G,$ $\Pi
_{w}(0,z_{h},q,a,G)$ becomes the aggregate net surplus $\Pi _{s}(q,a,G)=\Pi
_{w}(0,\bar{z},q,a,G)$ with no restrictions on feasible reactions
(i.e., $\Pi _{s}(q,a,G)$ is the aggregate net surplus in the baseline
separating equilibrium). We show that, when the relative heterogeneity of
receiver types ($q$) and the productivity of the sender action ($a$) are not
too large, there is a strictly well-behaved equilibrium, with only the upper
bound of feasible reactions binding, that is more efficient than the
separating equilibrium without any restrictions on the feasible reactions
regardless of $G.$
\begin{theorem}
\label{thm_well_behaved_design}There are $\hat{q},\hat{a}>0$ such that for
any given $(q,a)\in \lbrack 0,\hat{q}]\times \lbrack 0,\hat{a}]$, the DM can
set up the interval of feasible reactions $[0,\hat{t}]$ that induces a
unique strictly well-behaved stronger monotone equilibrium, which is more
efficient than the stronger monotone separating equilibrium with no
restrictions on feasible reactions. Given $[0,\hat{t}]$, it is a unique
stronger monotone equilibrium.
\end{theorem}
\noindent \textbf{Proof}. First, we construct a (unique) strictly
well-behaved stronger monotone equilibrium with $0=z_{\ell }<z_{h}<\bar{z}$,
where $\bar{z}$ is the supremum of the support of $G.$ Let $s_{h}(z_{h},a,q)$ be the value of $%
s_{h} $ that solves (\ref{s_h_choice}) at every $a$ and $q.$ Because
functions in (\ref{s_h_choice}) are continuous in $a$ and $q$, $%
s_{h}(z_{h},a,q)$ is continuous in $a$ and $q.$ Given (\ref{s_h_choice}), we
have
\begin{multline*}
\lim_{q,a\rightarrow 0}\left( As_{h}(z_{h},q,a)^{a}kz_{h}{}^{q}\mathbb{E}%
\left[ z|z\geq z_{h}\right] -\beta \frac{s_{h}(z_{h},q,a)^{2}}{z_{h}}\right)
\\
= \lim_{q,a\rightarrow 0}\left( Ak\tilde{\sigma}\left( z_{h}\right)
^{a}z_{h}{}^{q+1}-\beta \frac{\tilde{\sigma}\left( z_{h}\right) ^{2}}{z_{h}}%
\right).
\end{multline*}%
Therefore, we have that $\lim_{q,a\rightarrow 0}s_{h}(z_{h},q,a)=\sqrt{%
z_{h}{}^{2}Ak\left( \mathbb{E}\left[ z|z\geq z_{h}\right] -1\right) /\beta }%
. $ This implies that
\begin{multline*}
\lim_{q,a\rightarrow 0}\Pi _{w}(0,z_{h},q,a,G)=\int_{0}^{z_{h}}\frac{Akz}{2}%
dG(z)+\int_{z_{h}}^{\bar{z}}Ak\mathbb{E}\left[ z|z\geq z_{h}\right] dG(z) \\
-z_{h}{}^{2}Ak\left( \mathbb{E}\left[ z|z\geq z_{h}\right] -1\right)
\int_{z_{h}}^{\bar{z}}\frac{1}{z}dG(z).
\end{multline*}%
Taking the limit of $\lim_{q,a\rightarrow 0}\Pi _{w}(0,z_{h},q,a,G)$ as $z_{h}\rightarrow 0$ yields
\begin{equation*}
\lim_{z_{h}\rightarrow 0}\left[ \lim_{q,a\rightarrow 0}\Pi
_{w}(0,z_{h},q,a,G)\right] =\int_{0}^{\bar{z}}AkzdG(z)=Ak\mu _{z},
\end{equation*}%
where $\mu _{z}$ is the unconditional mean of the sender type $z.$
On the other hand, the limit of the aggregate net surplus in the stronger
monotone separating equilibrium is
\begin{equation*}
\lim_{q,a\rightarrow 0}\Pi _{s}(q,a,G)=\int_{0}^{\bar{z}}AkzdG(z)-\int_{0}^{%
\bar{z}}\frac{Akz}{2}dG(z)=\frac{Ak\mu _{z}}{2}.
\end{equation*}%
Therefore, we have that
\begin{equation}
\lim_{z_{h}\rightarrow 0}\left[ \lim_{q,a\rightarrow 0}\Pi
_{w}(0,z_{h},q,a,G)\right] -\lim_{q,a\rightarrow 0}\Pi _{s}(q,a,G)=\frac{%
Ak\mu _{z}}{2}>0. \label{well_behaved_zero}
\end{equation}
Because $\Pi _{w}(0,z_{h},q,a,G)$ and $\Pi _{s}(q,a,G)$ are continuous,
there exist $\hat{q}>0$, $\hat{a}>0$, and $\hat{z}_{h}(\hat{q},\hat{a}%
)\in \mathrm{Int}\ Z$ such that for every $(q,a)\in \lbrack 0,\hat{q}]\times
\lbrack 0,\hat{a}]$ and every $z_{h}\in (0,\hat{z}_{h}(\hat{q},\hat{a})]$, $%
\Pi _{w}(0,z_{h},q,a,G)>\Pi _{s}(q,a,G)$. We can retrieve $t_{h}$ given $%
z_{h}\in (0,\hat{z}_{h}(\hat{q},\hat{a})]$. $\blacksquare $
In the well-behaved stronger monotone equilibrium constructed above, a small
fraction of senders and receivers on the low end of the type distribution
follow their equilibrium sender actions, reactions, and assortative matching
that would have occurred in the stronger monotone separating equilibrium.
The remaining senders and receivers are matched randomly because the
remaining senders all choose the same action. We can also establish the same result
for a stronger monotone pooling equilibrium as represented in Theorem \ref%
{thm_eq_design} below.
\begin{theorem}
\label{thm_eq_design}There are $\tilde{q},\tilde{a}>0$ such that, for any
given $(q,a)\in \lbrack 0,\tilde{q}]\times \lbrack 0,\tilde{a}]$, the DM can
induce a unique stronger monotone pooling equilibrium that is more efficient
than the stronger monotone separating equilibrium without restrictions on
feasible reactions.
\end{theorem}
The intuition behind Theorem \ref{thm_eq_design} is the same as that behind
Theorem \ref{thm_well_behaved_design}. A major difference is that a pooling
equilibrium forces a small fraction of senders and receivers on the low end
of the type distribution to stay out of the market even though they can
produce positive net surplus, whereas everyone is matched in the strictly
well-behaved equilibrium identified in Theorem \ref{thm_well_behaved_design}.
Theorems \ref{thm_well_behaved_design} and \ref{thm_eq_design} in fact show
that a separating equilibrium is not optimal in the classical Spencian model
(Spence 1973) of pure signaling with no heterogeneity of receivers (i.e., $%
a=q=0$).
\subsection{Numerical analysis}
We turn our attention to numerical analysis. To make the numerical analysis
concrete, one can think of senders as workers and receivers as firms. A
sender's type can then be viewed as unobservable skill and her action as
observable skill. A firm's type can be viewed as its size. In
macro/development, firm size is measured by the amount of labour it
employs; in finance, it is measured by the firm's market value if it is
publicly traded. In an entry-level job market, a worker's unobservable skill
could be her ability to understand a task given to her and to figure out how
to complete it. In a managerial job market, a worker's unobservable skill
could be her ability to come up with new business ideas or innovations.
Poschke (2018) documented that the mean and variance of the firm size
distribution are larger in rich countries and increased over time for US
firms when firm size is measured by the number of workers employed in a
firm. While Poschke (2018) showed that a frictionless general equilibrium
model of occupational choice with skill-biased change accounts for key
aspects of the US experience, his model is mute to (a) the implication of
such changes in the firm size distribution on efficiency and (b) the
effectiveness of the DM's policy choice over the feasible wages on improving
efficiency. We are interested in addressing these questions, examining the
impact of the distributional changes in firm size on workers' investment in
their observable skill, $s.$
For numerical illustrations, we consider specific distributions $G$ with
various combinations of $q$ and $a$. The support of the worker's
unobservable skill $z$ is set to be $[0,3]$: $z$ is generated from the beta
distribution multiplied by 3, with the following shape parameters:
\begin{equation*}
\{(1,1),(5,5),(3,5),(5,3)\}.
\end{equation*}%
Figure~\ref{fg:beta-pdf} shows the probability density functions with
different shape parameter values. Note that Beta(1,1) corresponds to the
uniform distribution and Beta(5,5) to a symmetric bell-shaped distribution.
Beta(3,5) and Beta(5,3) correspond to right-skewed and left-skewed
unobservable skill distributions, respectively. The model parameters $q$ and
$a$ vary over $\{0,0.1,\ldots ,2\}$ and $\{0,0.1,\ldots ,0.9\}$,
respectively. Therefore, we compute the optimal well-behaved equilibrium
(i.e., optimal stronger monotone equilibrium) for 840 $(=4\times 21\times
10) $ different specifications in total. For the remaining parameters, we
set $A=1$, $k=1$, and $\beta =0.5$. Note that both the mean and the variance
of firm size $x=z^{q}$ increase in $q$ across all these settings, which
reflects the empirical findings in Poschke (2018). Finally, we set
the effective zero to $10^{-6}$.
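The sweep over all 840 specifications can be organized as in the following
sketch, assuming the routine \texttt{best\_zh} from the earlier snippet has
been generalized to accept the density $g$ and the truncated mean
$\mathbb{E}[z|z\geq z_{h}]$ as arguments (that generalization is an
assumption of this sketch); the SciPy beta density is rescaled to the
support $[0,3]$.
\begin{verbatim}
import numpy as np
from scipy.stats import beta as beta_dist
from scipy.integrate import quad

shapes = [(1, 1), (5, 5), (3, 5), (5, 3)]
q_grid = np.round(np.arange(0.0, 2.0 + 1e-9, 0.1), 1)   # 21 values
a_grid = np.round(np.arange(0.0, 0.9 + 1e-9, 0.1), 1)   # 10 values

def make_G(s1, s2, zbar=3.0):
    # Density and truncated mean for z = zbar * Beta(s1, s2).
    g = lambda z: beta_dist.pdf(z / zbar, s1, s2) / zbar
    def Ez(zh):
        tail = quad(g, zh, zbar)[0]
        return quad(lambda z: z * g(z), zh, zbar)[0] / max(tail, 1e-12)
    return g, Ez

results = {}
for s1, s2 in shapes:
    g, Ez = make_G(s1, s2)
    for q in q_grid:
        for a in a_grid:
            # best_zh assumed generalized to take (g, Ez); see earlier sketch
            results[(s1, s2, q, a)] = best_zh(q, a, g=g, Ez=Ez)
\end{verbatim}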
\begin{figure}[tbhp]
\begin{threeparttable}
\caption{Probability Density Functions of the Beta Distribution}
\label{fg:beta-pdf}\centering
\begin{tabular}{cccc}
~~Beta(1,1) & ~~Beta(5,5) & ~~Beta(3,5) & ~~Beta(5,3) \\
\includegraphics[scale=0.19]{figures/pdf_beta_s1_1_s2_1} & %
\includegraphics[scale=0.19]{figures/pdf_beta_s1_5_s2_5} & %
\includegraphics[scale=0.19]{figures/pdf_beta_s1_3_s2_5} & %
\includegraphics[scale=0.19]{figures/pdf_beta_s1_5_s2_3}
\end{tabular}
\begin{tablenotes}
\footnotesize
\item Notes. The sender type variable $z$ is generated by $3\cdot Beta(\cdot,\cdot).$
\end{tablenotes}
\end{threeparttable}
\end{figure}
\subsubsection{Optimal stronger monotone equilibrium}
Figures~\ref{fg:sol-well-behaved}--\ref{fg:surplus_gains} show the optimal
solution paths and the relative surplus gains of the well-behaved
equilibrium, respectively. It turns out that $z_{\ell }=0$ in all
specifications. Thus, in Figure \ref{fg:sol-well-behaved}, we report only $%
z_{h}$ that solves the optimization problem in each design. In the graph,
the horizontal axis denotes the parameter value of $q$. To make the graph
readable, we report the solution paths of $z_{h}$ for four different values
of $a=0,0.3,0.6$, and $0.9$.
\begin{figure}[tbhp]
\begin{threeparttable}
\caption{Solutions $z_h$ for Different Parameter Specifications}\label{fg:sol-well-behaved}
\centering
\begin{tabular}{c c}
~~Beta(1,1) & ~~Beta(5,5) \\
\includegraphics[scale=0.23]{figures/beta_shape1_1_shape2_1} &
\includegraphics[scale=0.23]{figures/beta_shape1_5_shape2_5} \\
\\
~~Beta(3,5) & ~~Beta(5,3) \\
\includegraphics[scale=0.23]{figures/beta_shape1_3_shape2_5} &
\includegraphics[scale=0.23]{figures/beta_shape1_5_shape2_3} \\
\end{tabular}
\begin{tablenotes}
\footnotesize
\item Notes. We show only solutions $z_h$ since $z_l=0$ for all designs. Each line represents solutions over $q \in [0,2]$ for each $a$ value. For example, the circle point at $(1.5, 2.5)$ in Beta(1,1) denotes the best well-behaved equilibrium of $(z_l,z_h)=(0, 2.5)$ when $z$ follows $3\times Beta(1,1)$ and $(q,a)=(1.5, 0.9)$.
\end{tablenotes}
\end{threeparttable}
\end{figure}
We illustrate the well-behaved equilibrium with two examples. When $G$
follows Beta(1,1) and $(q,a)=(1.5,0.9)$, the optimal well-behaved
equilibrium is achieved at $(z_{\ell },z_{h})=(0,2.5)$ as denoted by a
circle point on the vertical line in the top-left graph of Figure \ref%
{fg:sol-well-behaved}. This implies that every worker enters the market and
that the workers with $0\leq z<2.5$ differentiate themselves with unique
observable skill choices. Furthermore, we can compute from \eqref{s_h_choice}
that those workers with $z\geq 2.5$ choose pooled observable skill $%
s_{h}=27.1$. The upper threshold unobservable skill $z_{h}=2.5$ is induced
when the DM sets the upper bound of feasible reactions $t_{h}=894.6$ (We can
derive the value of $t_{h}$ from (\ref{jumping_sellers}) or (\ref%
{jumping_buyers}) given $z_{h}$ and $s_{h}$). From Theorem \ref%
{thm_optimal_design}, we can conclude that this is a unique optimal stronger
monotone equilibrium that maximizes the aggregate net surplus and that it is
reached when the DM sets the set of feasible wages as $[t_{\ell
},t_{h}]=[0,894.6].$\footnote{%
With no restrictions on the set of feasible reactions, the action chosen by
the highest sender type $\bar{z}=3$ is $\tilde{\sigma}(3)=39.3$ and the
receiver's feasible reaction for her is $\tilde{\tau}(\tilde{\sigma}%
(3))=3355.5>t_{h}=894.6.$ The aggregate net surplus in the separating
equilibrium is $26.2$, whereas the aggregate net surplus in the optimal
well-behaved equilibrium is $26.5$. Therefore, the optimal well-behaved
equilibrium increases the aggregate net surplus by 1.1\%.} The second
example is in case that $G$ follows Beta(1,1) and $(q,a)=(0.2,0.2)$. Then,
the optimal well-behaved equilibrium is achieved at $(z_{\ell
},z_{h})=(0,0.3)$. Those workers with $z\geq 0.3$ choose pooled observable
skill $s_{h}=0.8$. This equilibrium is reached when the DM sets the set of
feasible wages as $[t_{\ell },t_{h}]=[0,1.3]$.\footnote{%
The aggregate net surplus in the optimal well-behaved equilibrium is $1.3$,
whereas the aggregate net surplus in the separating equilibrium is $1.$
Therefore, the optimal well-behaved equilibrium increases the aggregate net
surplus by 30.5\%.}%
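The numbers in the first example can be checked directly. The following
short sketch for the uniform case (where $\mathbb{E}[z|z\geq
z_{h}]=(z_{h}+3)/2$) solves \eqref{s_h_choice} by root-finding; it should
reproduce $s_{h}\approx 27.1$ as well as the footnoted value $\tilde{\sigma}%
(3)=39.3$, up to numerical tolerance.
\begin{verbatim}
from scipy.optimize import brentq

A, k, beta, q, a, zh, zbar = 1.0, 1.0, 0.5, 1.5, 0.9, 2.5, 3.0
C = (A*k/(2*beta) * (a*q + a + 2)/(q + 2)) ** (1/(2 - a))
sigma = lambda z: C * z ** ((q + 2)/(2 - a))   # separating action, z_l = 0
Ez = 0.5 * (zh + zbar)                         # E[z | z >= zh], uniform case

rhs = A*k * sigma(zh)**a * zh**(1 + q) - beta * sigma(zh)**2 / zh
f = lambda s: A*k * s**a * zh**q * Ez - beta * s**2 / zh - rhs
sh = brentq(f, sigma(zh)*(1 + 1e-12), sigma(zbar))
print(sh, sigma(zh), sigma(zbar))   # expect roughly 27.1, 22.0, 39.3
\end{verbatim}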
We have some remarks on these numerical results. First, as we have discussed
above, $z_{\ell }$ is equal to zero for all specifications. In the case of
the separating equilibrium, it is clear that $z_{\ell }=0$ is optimal since
any positive $z_{\ell }$ does not improve efficiency. With $z_{\ell }>0$, we
lose some positive surplus that could have been created by lower matches. At
the same time, it further increases the inefficiently high equilibrium
action of every worker with unobservable skill $z>z_{\ell }$. However, this
is not certain in the well-behaved equilibrium with $z_{h}<3.$ In this case,
raising $z_{\ell }$ leads to an increase in the pooled observable skill $%
s_{h}$ chosen by workers with unobservable skill above $z_{h}.$ Because $%
s_{h}$ is lower than the equilibrium observable skill chosen by the
worker with the highest unobservable skill level in a separating
equilibrium, it is possible that $s_{h}$ may be even lower than the
efficient level of observable skill for some workers on the top end of the
unobservable skill distribution. In this case, raising $s_{h}$ through
raising $z_{\ell }$ increases efficiency for those workers while decreasing
efficiency for the other workers. Our numerical analysis shows that, when $%
z_{\ell }>0$, the efficiency loss by lower types dominates any possible
efficiency gains by higher types across all designs considered.
\begin{figure}[tp]
\begin{threeparttable}
\caption{Relative Surplus Gains of the Well-Behaved Equilibrium}\label{fg:surplus_gains}
\centering
\begin{tabular}{c c}
~~Beta(1,1) & ~~Beta(5,5) \\
\includegraphics[scale=0.33]{figures/rel_sur_gains_beta_shape1_1_shape2_1} &
\includegraphics[scale=0.33]{figures/rel_sur_gains_beta_shape1_5_shape2_5} \\
\\
~~Beta(3,5) & ~~Beta(5,3) \\
\includegraphics[scale=0.33]{figures/rel_sur_gains_beta_shape1_3_shape2_5} &
\includegraphics[scale=0.33]{figures/rel_sur_gains_beta_shape1_5_shape2_3} \\
\end{tabular}
\begin{tablenotes}
\footnotesize
\item Notes.
We compute the aggregate net surplus gains of the optimal well-behaved equilibrium compared to the separating equilibrium by $100\times (\Pi_w - \Pi_s)/\Pi_s$, where $\Pi_w$ and $\Pi_s$ are the aggregate net surpluses of the well-behaved and separating equilibria, respectively.
\end{tablenotes}
\end{threeparttable}
\end{figure}
Second, $z_{h}$ is strictly increasing in $a$ for any given $q$. Also, $%
z_{h}$ is strictly increasing in $q$ for any given $a$ except $a=0$. When $%
a=0$ (i.e.,~the worker's observable skill is not productive at all), we
observe $z_{h}=0$ in a range of $q$ values. This result implies that the
optimal equilibrium becomes the pooling equilibrium. To make the
well-behaved equilibrium deviate from the pooling equilibrium, the relative
spacing parameter $q$ has to be larger than a threshold that depends on the
distribution of $z$. Otherwise, the inefficiency associated with the high
costs of separation among senders with lower unobservable skill dominates
the matching efficiency created by separation, and it is optimal for the DM
to force every worker to choose no observable skill by setting $z_{h}=0$.
Third, the optimal well-behaved equilibrium converges to the separating
equilibrium as $a$ converges to 1 and $q$ increases. For example, when we
conduct an additional analysis with Beta(1,1), $a=0.99$ and $q=25$, the
optimal well-behaved equilibrium is $(z_{\ell },z_{h})=(0,2.99)$, which is
very close to the separating equilibrium with $z_{\ell }=0$ and $z_{h}=3$.
However, this occurs only with extreme parameter values. The strictly
well-behaved equilibrium is still the optimal equilibrium in a wide range of
$(q,a)$; that is, the cost savings associated with the pooled observable
skill choice by workers above $z_{h}$ outweigh the decrease in matching
efficiency due to random matching in the pooling part of the equilibrium.
Fourth, the different shapes of the probability density function affect the
curvature of the $z_{h}$ paths for all $a$ and the threshold of $q$ that
makes the well-behaved equilibrium the optimal equilibrium for $a=0$.
Finally, we show the relative surplus gains from the optimal well-behaved
equilibrium in Figure~\ref{fg:surplus_gains}. For each design, we compute
the aggregate net surpluses for the optimal well-behaved and separating
equilibria. Then, we compute the relative gain of the well-behaved
equilibrium by $100\times (\Pi _{w}-\Pi _{s})/\Pi _{s}$. For example, under
Beta(1,1), the relative gains are 52.8\% and 0.7\% when $(a,q)$ are
(0.1,0.1) and (0.9, 2.0), respectively. We also find that the surplus gains
become larger as there are relatively more workers with high unobservable
skill levels, i.e.,~more density weight on higher $z$ values (see, for
example, Beta(3,5) and Beta(5,3) in Figure \ref{fg:surplus_gains}).
To elaborate on this final point, we order the beta distributions
according to first-order stochastic dominance and check whether the
efficiency gains are larger in the case of a stochastically dominating
distribution. Note that Beta(5,3) dominates Beta(5,5), which again dominates
Beta(3,5), as shown in Figure \ref{fg:beta-cdf} in the Appendix. In Figure %
\ref{fg:diff_surplus_gains_short}, we show the differences of relative
surplus gains between the paired beta distributions. For example, in the
figure of \textquotedblleft Beta(3,5) vs.\ Beta(5,5)\textquotedblright , we
subtract the heat map of Beta(3,5) from that of Beta(5,5). More specifically, let $%
R(\beta _{i}):=100\times (\Pi _{w}(\beta _{i})-\Pi _{s}(\beta _{i}))/\Pi
_{s}(\beta _{i})$ be the ratio of the net surplus gains given a beta
distribution denoted by $\beta _{i}$. Then, $R(\beta _{1})-R(\beta _{2})$ is
the difference of ratios (ratio\_diff) for two beta distributions $\beta
_{1} $ and $\beta _{2}$, where $\beta _{1}$ first-order stochastically
dominates $\beta _{2}$. The results show that the differences are positive
across all $(a,q)$. This implies that the efficiency gains by the
stochastically dominating distribution are \emph{uniformly} larger than those
by the dominated one.
\begin{figure}[tp]
\begin{threeparttable}
\caption{Differences of Relative Surplus Gains }\label{fg:diff_surplus_gains_short}
\centering
\begin{tabular}{c c}
~~Beta(3,5) vs.~Beta(5,5) & ~~Beta(5,5) vs.~Beta(5,3) \\
\includegraphics[scale=0.33]{figures/diff_RSG_3_5_vs_5_5} &
\includegraphics[scale=0.33]{figures/diff_RSG_5_5_vs_5_3} \\
\end{tabular}
\begin{tablenotes}
\footnotesize
\item Notes. Each cell in the graph shows the difference of efficiency gains in the corresponding cell of beta distributions. Specifically, let $R(\beta_1):=100\times (\Pi_w(\beta_1) - \Pi_s(\beta_1))/\Pi_s(\beta_1)$ be the ratio of the net surplus gains given a beta distribution denoted by $\beta_1$. Then, the difference of ratios (ratio\_diff) is defined as $R(\beta_1) - R(\beta_2)$ for two beta distributions $\beta_1$ and $\beta_2$, where $\beta_1$ first-order stochastically dominates $\beta_2$.
\end{tablenotes}
\end{threeparttable}
\end{figure}
\begin{comment}
We now consider a larger set of beta distributions with the stochastic
dominance relationship. From the cdfs in Figure \ref{fg:beta-cdf}, we
confirm that all beta distributions therein except beta(1,1) show the
stochastic dominance relationship. In Figure \ref{fg:diff_surplus_gains_long}%
, we conduct the same exercise above across all sequential pairs. We again
find that all the differences are positive in each $(a,q)$ cell. Therefore,
this numerical evidence supports that the relative efficiency gains could be
larger uniformly over $(a,q)$ as one beta distribution stochastically
dominates the other.
\end{comment}
\subsubsection{Underlying efficiency and relative surplus gains}
We now examine the change in the efficiency of the baseline separating
equilibrium and the change in the relative net surplus gain in the optimal
well-behaved equilibrium as $q$ increases (i.e., as the mean and variance of
the firm size distribution increase). To define the efficiency measure, we
first derive the maximum aggregate net surplus under complete information.
It is based on the efficient choice of observable skill $s^{\ast }(z)$ by a
worker with unobservable skill $z$ matched with a firm of size $n(z)=kz^{q}$ as
her employer: $s^{\ast }\left( z\right) \in \arg \max \left[
Aks^{a}z^{q+1}-\beta \frac{s^{2}}{z}\right] .$ We have a unique $s^{\ast
}\left( z\right) =\left[ \frac{aAkz^{q+2}}{2\beta }\right] ^{\frac{1}{2-a}}$%
. Then the maximum aggregate net surplus is%
\begin{equation*}
\Pi ^{\ast }\left( a,q,G\right) =\int_{\underline{z}}^{\overline{z}%
}\left[ v(n(z),s^{\ast }\left( z\right) ,z)-c(s^{\ast }\left( z\right) ,z)\right] dG(z).
\end{equation*}%
On the other hand, the aggregate net surplus in the separating equilibrium
without the DM's intervention is $\Pi _{s}(a,q,G)=\Pi _{w}\left( \underline{z},\overline{z},q,a,G\right) .$
The efficiency of the separating equilibrium is defined as
\begin{equation*}
E\left( a,q,G\right) =\frac{\Pi _{s}\left( a,q,G\right) }{\Pi ^{\ast }\left(
a,q,G\right) }.
\end{equation*}%
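A minimal sketch of this computation for the uniform case $z\sim 3\cdot
\mathrm{Beta}(1,1)$ follows (the density line is the only piece that changes
across the four designs; the parameter values are the illustrative ones from
above):
\begin{verbatim}
from scipy.integrate import quad

A, k, beta, zbar = 1.0, 1.0, 0.5, 3.0
g = lambda z: 1.0 / zbar           # uniform density; swap in a beta pdf otherwise

def net(z, s, q, a):
    # Net surplus v - c of a type-z worker with action s at firm n(z) = k z^q.
    return A*k * s**a * z**(q + 1) - beta * s**2 / z

def Pi_star(q, a):
    # First-best surplus: s*(z) maximizes net surplus pointwise.
    s_star = lambda z: ((a*A*k * z**(q + 2) / (2*beta)) ** (1/(2 - a))
                        if a > 0 else 0.0)
    return quad(lambda z: net(z, s_star(z), q, a) * g(z), 1e-9, zbar)[0]

def Pi_sep(q, a):
    # Baseline separating equilibrium with z_l = 0.
    C = (A*k/(2*beta) * (a*q + a + 2)/(q + 2)) ** (1/(2 - a))
    return quad(lambda z: net(z, C * z**((q + 2)/(2 - a)), q, a) * g(z),
                1e-9, zbar)[0]

efficiency = lambda q, a: Pi_sep(q, a) / Pi_star(q, a)
print(efficiency(1.0, 0.3))        # E(a, q, G) for one illustrative cell
\end{verbatim}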
Figure \ref{fg:Efficiency_measure_1} shows the efficiency of the separating
equilibrium in the first column and the relative net surplus gain of the
optimal well-behaved equilibrium in the second as we change the value of $%
a\in \{0,0.3,0.6,0.9\}$.\footnote{%
Figures \ref{fg:Efficiency_measure_all_1}--\ref{fg:Efficiency_measure_all_3}
in the Appendix provide the full graphs based on all values of $a$.} Given any
value of $a$ and any unobservable skill distribution, the efficiency of the
separating equilibrium increases as $q$ increases. This may suggest that the
efficiency in rich countries is higher than in poor countries and that it also
increases over time in the U.S., given the empirical findings in Poschke
(2018). As the efficiency of the separating equilibrium increases in
response to an increase in $q$, the relative net surplus gain in the optimal
well-behaved equilibrium decreases. In addition, as the value of $a$
increases, the rate of increase in efficiency with respect to $q$ becomes
smaller and the relative gain is smaller at every
value of $q$. For example, when $a=0,$ the relative net surplus gain is
above 20\% at $q=1$ regardless of the beta distribution, whereas, with $%
a=0.9,$ it is less than 10\% at $q=1$. As $a$ and $q$ both increase, the
relative net surplus gain quickly converges to zero.
\begin{figure}[p]
\begin{threeparttable}
\caption{Efficiency Measures and Relative Surplus Gains}\label{fg:Efficiency_measure_1}
\centering
\begin{tabular}{c c}
~~Eff. Measures & ~~Rel. Sur. Gains \\
\multicolumn{2}{c}{\underline{$a=0.0$}} \\
\includegraphics[scale=0.35]{figures/Fig_eff_measure_over_q_and_FSD_a_0} &
\includegraphics[scale=0.35]{figures/Fig_rel_sur_gain_over_q_and_FSD_a_0} \\
\multicolumn{2}{c}{\underline{$a=0.3$}} \\
\includegraphics[scale=0.35]{figures/Fig_eff_measure_over_q_and_FSD_a_3} &
\includegraphics[scale=0.35]{figures/Fig_rel_sur_gain_over_q_and_FSD_a_3} \\
\multicolumn{2}{c}{\underline{$a=0.6$}} \\
\includegraphics[scale=0.35]{figures/Fig_eff_measure_over_q_and_FSD_a_6} &
\includegraphics[scale=0.35]{figures/Fig_rel_sur_gain_over_q_and_FSD_a_6} \\
\multicolumn{2}{c}{\underline{$a=0.9$}} \\
\includegraphics[scale=0.35]{figures/Fig_eff_measure_over_q_and_FSD_a_9} &
\includegraphics[scale=0.35]{figures/Fig_rel_sur_gain_over_q_and_FSD_a_9} \\
\end{tabular}
\begin{tablenotes}
\footnotesize
\item Notes. The efficiency measures are computed as $\Pi_s/\Pi^*$, where $\Pi^*$ is the maximum aggregate net surplus under complete information. The relative surplus gains are computed as $100\times (\Pi_w - \Pi_s)/\Pi_s$, where $\Pi_w$ is the aggregate net surplus of the well-behaved equilibrium.
\end{tablenotes}
\end{threeparttable}
\end{figure}
Our result suggests that in terms of efficiency improvement, the DM's
equilibrium design is most effective (i) when the firm size distribution has
the smallest mean and variance; and (ii) when the productivity parameter of
observable skill ($a$ in our notation) has the lowest value. As the mean and
variance of the firm size distribution increase or the productivity
parameter of observable skill increases, our result shows that the DM's
equilibrium design quickly loses its effectiveness. This highlights how the
trade-off between matching efficiency and (net) signaling costs, and hence
the optimal equilibrium design, changes as the firm size distribution
becomes more spread out in terms of its mean and variance and as the direct
productivity of observable skill increases.
\section{Concluding remarks\label{sec_discussion}}
In this paper, we generalize Spencian competitive signaling (Spence, 1973)
with two-sided matching. A decision maker can choose a set of feasible
reactions before senders and receivers move. We characterize a unique
stronger monotone equilibrium (unique D1 equilibrium) given each set of
feasible reactions. We propose a general method that the DM can use for the
design of an optimal unique stronger monotone equilibrium and study the
optimal equilibrium design in various settings. Our analysis sheds light on
the impact of the trade-off between matching efficiency and signaling costs
on optimal equilibrium design and on how this trade-off depends on the relative
heterogeneity of receiver types to sender types, the distribution of sender
types, and the productivity of the sender's action.
\clearpage
\section{Introduction}
If a singularity is not covered by a black hole horizon, it can be seen by distant observers and is called a naked singularity. The weak ``cosmic censorship'' conjecture states that naked singularities cannot be formed by gravitational collapse with physically reasonable matter \cite{penrose}. A precise statement of this conjecture was given in \cite{wald-cos}. Although a general proof of this conjecture has not been given, evidence in favor of it has been found and discussed in the past few decades. One way of testing the cosmic censorship conjecture is to see whether the black hole horizon can be destroyed by an object falling into the black hole. In a seminal work, Wald \cite{wald72} proved that a test particle cannot destroy the horizon of an extremal Kerr-Newman black hole. This work has been revisited and extended by a number of authors in the last decade \cite{hubeny}-\cite{vitor4}. It is worth mentioning that gravitational lensing by naked singularities has been studied in the past decade \cite{vir1,vir2}, making observational tests of the cosmic censorship possible.
There were two crucial assumptions in Wald's treatment. First, the existing black hole is extremal. Second, only linear terms in the particle's energy, charge and angular momentum are kept in the analysis. By relaxing these two assumptions, Hubeny showed that a nearly extremal Reissner-Nordstrom (RN) black hole can be overcharged by a test particle. Recently, Jacobson and Sotiriou showed that a nearly extremal Kerr black hole can be overspun. These results apparently indicate violations of the cosmic censorship; at least, they point out that the test particle assumption may not be valid and the radiative and self-force effects should be considered.
Note that the results in \cite{hubeny, ted} agree with Wald's in the extremal limit. So it seems that the cosmic censorship holds anyway when one tries to overcharge or overspin extremal black holes. However, an overlooked fact is that the authors of \cite{hubeny} and \cite{ted} only considered the RN black hole and the Kerr black hole, respectively, while Wald considered the combination, i.e., the Kerr-Newman (KN) black hole. To distinguish from RN and Kerr solutions, we shall refer to KN black holes as those with nonvanishing charge and angular momentum. By reexamining Wald's arguments, we find that counterexamples can be found if higher-order terms are included in the calculation (higher-order terms were considered in \cite{hubeny, ted} for RN and Kerr black holes, but caused no violation of the cosmic censorship in the extremal cases). This tells us that the cosmic censorship is not safe even for extremal black holes. We further find that the allowed range of the particle's energy is very small, which means that the particle's parameters must be finely tuned. This suggests that radiative and self-force effects are necessary for a complete proof of the cosmic censorship. Although it is difficult to perform a full analysis of these effects, notable progress has been made recently. Barausse, Cardoso and Khanna \cite{vitor-prl, vitor-prd} showed that, for some orbits, the conservative self-force may have the right sign to prevent the violation of the cosmic censorship. Most recently, Zimmerman, Vega, Poisson and Haas \cite{poisson} incorporated the particle's electromagnetic self-force, and their numerical results have provided strong evidence supporting the cosmic censorship.
\section{Review of Wald's proof} \label{review}
In this section, we review the gedanken experiment on extremal charged Kerr black holes proposed by Wald \cite{wald72}. Consider the charged Kerr solution,
\bean
ds^2=g_{tt}dt^2+g_{rr}dr^2+g_{\theta\theta}d\theta^2+g_{\phi\phi}d\phi^2+2g_{t\phi}
dtd\phi\,.
\eean
Assume that the vector potential takes the form
\bean
A_{a}=A_t dt_a+A_\phi d\phi_a\,.
\eean
A charged particle with mass $m$ and charge $q$ moves in the spacetime with four-velocity
\bean
u^a=\dot t\ppa{t}{a}+\dot r\ppa{r}{a}+\dot \theta\ppa{\theta}{a}+\dot \phi\ppa{\phi}{a} \,.
\eean
The conserved energy and angular momentum are
\bean
E\eqn -t^a(m u_a+q A_a) \,,\label{ex}\\
L\eqn \phi^a(m u_a+q A_a) \,.\label{le}
\eean
Solving \eqs{ex} and \meq{le} for $\dot t$ and $\dot \phi$, we have
\bean
\dot t\eqn\frac{E g_{\phi\phi}+g_{t\phi}L+A_t g_{\phi\phi} q-A_\phi g_{t\phi} q}{
m(g_{t\phi}^2-g_{\phi\phi} g_{tt})} \,,\\
\dot\phi\eqn-\frac{E g_{t\phi}+g_{tt}L+A_t g_{t\phi} q-A_\phi g_{tt} q}{
m(g_{t\phi}^2-g_{\phi\phi} g_{tt})}\,.
\eean
Substituting the two formulas into
\bean
g_{ab}u^au^b=-1
\eean
and solving the quadratic equation for $E$, we find
\bean
E\eqn \frac{-g_{t\phi}L-q A_t g_{\phi\phi}+q A_\phi g_{t\phi}}{g_{\phi\phi}}\nonumber\\
&\pm& \frac{1}{g_{\phi\phi}}\sqrt{(g_{t\phi}^2-g_{\phi\phi}g_{tt})[L^2-2qLA_\phi+q^2 A_\phi^2+m^2g_{\phi\phi}(1+g_{rr}\dot r^2+g_{\theta\theta}\dot \theta^2)]} \,. \label{epm}
\eean
Note that $u^a$ is future pointing, which implies $\dot t>0$. Therefore, we should take the plus sign in front of the square root in \eq{epm}.
Consequently,
\bean
E\geq \frac{-g_{t\phi}L-q A_t g_{\phi\phi}+q A_\phi g_{t\phi}}{g_{\phi\phi}} \,.\label{egq}
\eean
The Kerr-Newman metric is given by \cite{waldbook},
\bean
g_{tt}\eqn -\frac{\Delta-a^2 \sin^2\theta}{\Sigma} \,,\\
g_{t\phi}\eqn-\frac{a\sin^2\theta(r^2+a^2-\Delta)}{\Sigma}\,,\\
g_{\phi\phi}\eqn\frac{(r^2+a^2)^2-\Delta a^2\sin^2\theta}{\Sigma}\sin^2\theta \,,\\
A_t\eqn-\frac{Qr}{\Sigma},\ \ \ A_\phi=\frac{Qr}{\Sigma}a\sin^2\theta \,,\\
g_{rr}\eqn\frac{\Sigma}{\Delta} \,,\\
g_{\theta\theta}\eqn\Sigma\,,
\eean
with
\bean
\Sigma\eqn r^2+a^2\cos^2\theta \,,\\
\Delta\eqn r^2+a^2+Q^2-2Mr \,.
\eean
Then at the horizon $r=r_+$, \eq{epm} is written as
\bean
E=\frac{aL+qQr_+}{a^2+r_+^2}+m\sqrt{\frac{(a^2+2r_+^2+a^2\cos(2\theta))^2}{4(a^2+r_+^2)^2}\dot r^2}\label{ees} \,,
\eean
and thus
\bean
E\geq \frac{aL+qQr_+}{a^2+r_+^2}\,.
\eean
For an extremal black hole $r_+=M$, we have
\bean
E\geq \frac{aL+qQM}{a^2+M^2} \label{e1} \,.
\eean
On the other hand, to destroy the black hole horizon with $M^2=Q^2+a^2$, the particle must satisfy
\bean
(E+M)^2<(Q+q)^2+\left(\frac{aM+L}{M+E}\right) ^2 \label{e2}\,.
\eean
Expanding the last term around $E=0$, we have
\bean
E^2+M^2+2ME< Q^2+q^2+2qQ+\frac{(L+aM)^2}{M^2}-\frac{2(L+aM)^2E}{M^3}\,.
\eean
Using $M^2=Q^2+a^2$ and keeping only terms linear in $q$, $E$, and $L$, we have
\bean
E<\frac{aL+MqQ}{M^2+a^2}\,,
\eean
which contradicts \eq{e1}. Thus, the cosmic censorship is upheld if higher-order terms are neglected. In the next section, we shall see that higher-order terms do not change this result if one attempts to destroy an extremal Kerr or RN black hole.
\section{Kerr and RN cases}
The above result is derived from a Kerr-Newman black hole. Now let us consider the following two reduced cases:
1. Pure Kerr ($Q=q=0$, $M=a$)
\eq{e1} reduces to
\bean
E\geq \frac{L}{2M}\,,
\eean
and \eq{e2} reduces to
\bean
E+M<\frac{M^2+L}{E+M}\,,
\eean
i.e.,
\bean
E^2+2ME<L\,,
\eean
and hence
\bean
E<\frac{L}{2M}-\frac{E^2}{2M}<\frac{L}{2M}\,,
\eean
so no solution can be found.
2. Pure RN ($a=L=0$, $M=Q$)
\eq{e1} reduces to
\bean
E\geq q\,,
\eean
and \eq{e2} reduces to
\bean
E+M<Q+q \,,
\eean
i.e.,
\bean
E<q \,.
\eean
Obviously, there is no solution.
Thus, there is no violation of the cosmic censorship for either the Kerr black hole or the RN black hole, agreeing with the results of Hubeny, Jacobson, and Sotiriou \cite{hubeny,ted}. Differing from the treatment in Section \ref{review}, no linear approximation has been made in the above proof.
\section{Violation of the cosmic censorship for extremal KN black holes}
From the last section, we see that the cosmic censorship conjecture has passed the test of gedanken experiments in extremal RN or Kerr black holes, even without linear approximation. However, it is unknown whether higher-order terms can lead to a different conclusion for extremal KN black holes ($Q\neq 0$ and $a\neq 0$). We first show that the two inequalities \meq{e1} and \meq{e2} can be simplified and combined into one.
Define
\bean
W=(M+E)^2\,,
\eean
and rewrite \eq{e2} as
\bean
W^2-(Q+q)^2W-(aM+L)^2<0 \,.
\eean
This means
\bean
W_1<W<W_2\,,
\eean
with
\bean
W_{1,2}=\frac{(Q+q)^2\pm\sqrt{(Q+q)^4+4(aM+L)^2}}{2} \label{w12} \,.
\eean
From \eq{e1} we have
\bean
W>\left(\frac{aL+qQM}{a^2+M^2}+M\right)^2\equiv W_3 \,.
\eean
Obviously, $W_1<0$ and $W_2, W_3>0$, so the allowed window is $W_3<W<W_2$.
Therefore, the necessary and sufficient condition for inequalities \meq{e1} and \meq{e2} to admit a common solution is
\bean
W_2>W_3 \,,
\eean
i.e.,
\bean
s&\equiv& W_2-W_3 \\
&=& \frac{(Q+q)^2+\sqrt{(Q+q)^4+4(aM+L)^2}}{2}-\left(\frac{aL+qQM}{a^2+M^2}+M\right)^2
\label{s2} \\
&>&0 \label{s} \,.
\eean
Expanding \eq{s2} to second order in $q$ and $L$ (the zeroth- and first-order terms cancel identically for an extremal black hole), we find
\bean
\frac{2a^2M^2(3M^2-a^2)}{(a^2+M^2)^3}q^2+\frac{M^2(-3a^2+M^2)}{(a^2+M^2)^3}L^2-\frac{2aM
Q(3M^2-a^2)}{(a^2+M^2)^3}qL>0 \,.\label{wql}
\eean
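This expansion is straightforward to verify symbolically. The following SymPy sketch (ours, for verification only) checks that the zeroth- and first-order terms of $s$ vanish on the extremal family $M^2=Q^2+a^2$ and that the second-order term reproduces \eq{wql}:
\begin{verbatim}
import sympy as sp

M, a, q, L, eps = sp.symbols('M a q L epsilon', positive=True)
Q = sp.sqrt(M**2 - a**2)            # extremality: M^2 = Q^2 + a^2

qe, Le = eps*q, eps*L               # bookkeeping parameter for the joint expansion
W2 = ((Q + qe)**2 + sp.sqrt((Q + qe)**4 + 4*(a*M + Le)**2)) / 2
W3 = ((a*Le + qe*Q*M)/(a**2 + M**2) + M)**2
s = sp.series(W2 - W3, eps, 0, 3).removeO()

print(sp.simplify(s.coeff(eps, 0)))   # expect 0
print(sp.simplify(s.coeff(eps, 1)))   # expect 0
target = (2*a**2*M**2*(3*M**2 - a**2)*q**2 + M**2*(M**2 - 3*a**2)*L**2
          - 2*a*M*Q*(3*M**2 - a**2)*q*L) / (a**2 + M**2)**3
print(sp.simplify(s.coeff(eps, 2) - target))   # expect 0
\end{verbatim}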
Now we can estimate the allowed range of $E$. From
\bean
W_3<W<W_2 \,,
\eean
we see the allowed range of $E$, denoted by $\Delta E$, satisfies
$2M\Delta E \sim W_2-W_3 $. Then \eq{wql} suggests that $\Delta E$ is of order
$q^2/M$ or $L^2/M^3$.
Note that the first term in \eq{wql} is always positive since $M^2\geq a^2$ for a KN black hole. So
\eq{wql} shows that as long as $Q\neq0$, $a\neq 0$ and $q\neq 0$, there always exist solutions if $L$ is sufficiently small. To be specific, we choose the parameter set $M=100$, $a=90$, and then $Q=\sqrt{M^2-a^2}\approx 43.6$. We further choose $q=0.1$ such that the test-body condition $q\ll Q$ is met. Now $s$ in \eq{s} can be treated as a function of $L$. The plot in \fig{fig-sL} confirms that small values of $L$ always lead to positive $s$.
\begin{figure}[htmb]
\centering \scalebox{0.5} {\includegraphics{sL.eps}}
\caption{Plot of $s$ versus $L$. For small values of $L$, $s$ is always positive.} \label{fig-sL}
\end{figure}
For illustration, we take $L=5$ and find $4.8944\times 10^{-2}<E<4.8964\times 10^{-2}$. So $\Delta E\sim 2\times 10^{-5}$, which is comparable to $q^2/M=10^{-4}$ and $L^2/M^3=2.5\times 10^{-5}$, as expected.
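This window is easy to reproduce numerically; a minimal sketch evaluating $W_2$, $W_3$ and the resulting bounds $\sqrt{W_3}-M<E<\sqrt{W_2}-M$ reads:
\begin{verbatim}
import numpy as np

M, a = 100.0, 90.0
Q = np.sqrt(M**2 - a**2)            # extremal KN: Q = 43.589...
q, L = 0.1, 5.0

W2 = ((Q + q)**2 + np.sqrt((Q + q)**4 + 4*(a*M + L)**2)) / 2
W3 = ((a*L + q*Q*M) / (a**2 + M**2) + M)**2

E_min = np.sqrt(W3) - M             # from W > W_3 with W = (M+E)^2
E_max = np.sqrt(W2) - M             # from W < W_2
print(E_min, E_max)                 # ~4.8944e-2 and ~4.8964e-2
\end{verbatim}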
\begin{figure}[htmb]
\centering \scalebox{0.5} {\includegraphics{vr.eps}}
\caption{The effective potential is negative for all $r>r_+$. } \label{vr}
\end{figure}
Next, we show that such a particle can be released from rest at infinity and fall all the way into the black hole. Since the spacetime is symmetric about the equatorial plane $\theta=\pi/2$, there exist orbits lying entirely in this plane. For such an orbit, one can solve \eq{epm} for $\dot r^2$ and obtain
\bean
\dot r^2=-V(r) \,,
\eean
where the effective potential $V(r)$ is given by
\bean
V(r)\eqn -\frac{1}{m^2 r^4} (a^4 E^2 - 2 a^3 E L + q^2 Q^2 r^2 - 2 E q Q r^3 +
E^2 r^4 - L^2 \Delta - m^2 r^2 \Delta \non
&+&
2 a L (q Q r + E (-r^2 + \Delta)) +
a^2 (L^2 + E (-2 q Q r + 2 E r^2 - E \Delta)))\,.\non
\label{vvr}
\eean
We still choose $M=100$, $a=90$, $q=0.1$, $L=5$ as above, and $m=E=0.048955$ such that $E$ lies in the allowed range. Numerical calculation shows that $V(r)$ is negative for all $r\geq r_+$ (see \fig{vr}). It is easy to check that the choice $m=E$ means that the particle starts at rest relative to a stationary observer at infinity, so this initial condition is realizable in practice.
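The numerical check is elementary; a short sketch evaluating \eq{vvr} on a grid of radii (for verification only) is:
\begin{verbatim}
import numpy as np

M, a = 100.0, 90.0
Q = np.sqrt(M**2 - a**2)
q, L = 0.1, 5.0
m = E = 0.048955                    # energy inside the allowed window

def V(r):
    # equatorial effective potential of Eq. (vvr); dot{r}^2 = -V(r)
    Delta = r**2 + a**2 + Q**2 - 2*M*r
    return -(a**4*E**2 - 2*a**3*E*L + q**2*Q**2*r**2 - 2*E*q*Q*r**3
             + E**2*r**4 - L**2*Delta - m**2*r**2*Delta
             + 2*a*L*(q*Q*r + E*(-r**2 + Delta))
             + a**2*(L**2 + E*(-2*q*Q*r + 2*E*r**2 - E*Delta))) / (m**2*r**4)

r = np.linspace(M, 50*M, 200001)    # extremal horizon r_+ = M
print(V(r).max())                   # expect a (small) negative value
\end{verbatim}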
\section{Discussion and Conclusions}
We have shown that, without taking into account radiative and self-force effects, a test particle may destroy the horizon of an extremal Kerr-Newman black hole, resulting in an apparent violation of cosmic censorship. The violation is generic for any extremal KN black hole. As shown by Wald \cite{wald72}, there would be no violation if higher-order terms were neglected. We also show that the energy of the particle must be finely tuned, i.e., the allowed range of energy $\Delta E$ is of order $q^2/M$ or $L^2/M^3$. A similar fine tuning has been pointed out and discussed in \cite{ted2} for nearly extremal Kerr black holes. Smith and Will \cite{charge-sch} showed that a charged particle in Schwarzschild spacetime feels a repulsive electrostatic self-force induced by the spacetime curvature. Consequently, the particle has an additional self-interaction energy of magnitude $Mq^2/r^2$ \cite{hod}. If we use this result to estimate the magnitude of the self-force correction to the energy of a particle outside a RN black hole, it becomes $q^2/M$ at the extremal black hole horizon $r=M$, which is of the same order as the $\Delta E$ discussed above. This indicates that the self-force effect is important in testing cosmic censorship. Besides the self-force effect, there is another open issue related to this
scenario. A hidden assumption in the above argument is that once the black hole absorbs the particle, it will settle down to a new stationary state. However, this is not guaranteed by current theories \cite{ted2}. So far, all results can only be taken as an indication that cosmic censorship might fail.
\section*{Acknowledgements}
This research was supported by NSFC Grants No. 10605006, 10975016 and 11235003.
\section{Introduction}
\IEEEPARstart{D}{igital} images are subject to a wide variety of degradations, which in most cases can be modeled as
\begin{equation}
\mathbf{Z} = \boldsymbol{D} \mathbf{C} + \mathbf{N},
\label{eq:modelNewI}
\end{equation}
where $\mathbf{Z}$ is the observation, $\boldsymbol{D}$ is the degradation operator, $\mathbf{C}$ is the underlying ground-truth image and $\mathbf{N}$ is additive noise. Different settings of the degradation matrix $\boldsymbol{D}$ model different problems such as zooming, deblurring or missing pixels. Different versions of the noise term $\mathbf{N}$ include the classical additive Gaussian noise with constant variance or more complicated and realistic models such as signal dependent noise.
Due to the inherent ill-posedness of such inverse problems, standard approaches impose some prior on the image, in either variational or Bayesian approaches. Popular image models have been proposed through the total variation~\cite{rudin1992nonlinear}, wavelet decompositions~\cite{portilla2003image} or the sparsity of image patches~\cite{elad2006image}.
Buades et al.~\cite{buades05} introduced the use of patches and the self-similarity hypothesis to the denoising problem leading to a new era of patch-based image restoration techniques.
A major step forward in fully exploiting the potential of patches was achieved by several state-of-the-art restoration methods with the introduction of patch prior models, in a Bayesian framework. Some methods are devoted to the denoising problem~\cite{lyu09,chatterjee12,lebrun13,wang13}, while others propose a more general framework for the solution of image inverse problems~\cite{zoran11,yu12}, including inpainting, deblurring and zooming. The work by Lebrun et al.~\cite{lebrun12,lebrun13} presents a thorough analysis of several recent restoration methods, revealing their common roots and their relationship with the Bayesian approach.
Among the state-of-the-art restoration methods, two noticeable approaches are the patch-based Bayesian approach by Yu et al.~\cite{yu12}, namely the piece-wise linear estimators (PLE), and the non-local Bayes (NLB) algorithm by Lebrun et al.~\cite{lebrun13}. PLE is a general framework for the solution of image inverse problems under Model~\eqref{eq:modelNewI}, while NLB is a denoising method ($\boldsymbol{D} = Id$). Both methods use a Gaussian patch prior learnt from image patches through iterative procedures. In the case of PLE, patches are modeled according to a Gaussian Mixture Model (GMM), with a relatively small number of classes (19 in all their experiments), whose parameters are learnt from all image patches\footnote{Actually, the authors report the use of $128 \times 128$ image sub-regions in their experiments, so we may consider PLE as a semi-local approach.}. In the case of NLB, each patch is associated with a single Gaussian model, whose parameters are computed from similar patches chosen from a local neighbourhood, i.e., a search window centered at the patch. We refer hereafter to this kind of per-patch modelling as \textit{local}.
Zoran and Weiss~\cite{zoran11} (EPLL) follow a similar approach, but instead of iteratively updating the GMM from image patches, they use a larger number of classes that are fixed and learnt from a large database of natural image patches. Wang and Morel~\cite{wang13} claim that, in the case of denoising, it is better to have fewer models that are updated with the image patches (as in PLE) than having a large number of fixed models (as in EPLL).
All of the previous restoration approaches share a common Bayesian framework based on Gaussian patch priors. Relying on local priors~\cite{lebrun13,wang13} has proven more accurate for the task of image denoising than relying on a mixture of a limited number of Gaussian models~\cite{zoran11,yu12}. In particular, NLB outperforms PLE for this task~\cite{wang13c}, mostly due to its local model estimation. On the other hand, PLE yields state-of-the-art results in other applications such as interpolation of missing pixels (especially with high masking rates), deblurring and zooming.
As a consequence we are interested in taking advantage of a local patch modelling for more general inverse problems than denoising. The main difficulty lies in the estimation of the models, especially when the image degradations involve a high rate of missing pixels, in which case the estimation is seriously ill-posed.
In this work we propose to model image patches according to a Gaussian prior, whose parameters, the mean $\boldsymbol{\mu}$ and the covariance matrix $\S$, will be estimated locally from similar patches. In order to tackle this problem, we include prior knowledge on the model parameters making use of a hyperprior, i.e. a probability distribution on the parameters of the prior. In Bayesian statistics, $\boldsymbol{\mu}$ and $\S$ are known as hyperparameters, while the prior on them is called a hyperprior. Such a framework is often called hierarchical Bayesian modelling~\cite{gelman2014bayesian}. Its application to inverse problems in imaging is not new. In particular, in the field of image restoration, this methodology was proposed by Molina et al.~\cite{Molina1994,Molina1999}, and was more recently applied to image unmixing problems~\cite{dobigeon2008semi} and to image deconvolution and the estimation of the point spread function of a camera~\cite{Orieux2010}. However, to our knowledge, this is the first time that such a hierarchical Bayesian methodology is used to reduce ill-posedness in patch-based image restoration. In this context, the use of a hyperprior
compensates for the missing information in the patches.
There are two main contributions of this work:
First, as described above, we propose a robust framework enabling the use of Gaussian local priors on image patches for solving a useful family of restoration problems by drawing on a hierarchical Bayesian approach.
The second advantage of the proposed framework is its ability to deal with signal-dependent noise, which makes it well adapted to realistic digital photography applications.
Experiments on both synthetic and real data show that the approach is well suited to various problems involving a diagonal degradation operator. First, we show state-of-the-art results in image restoration problems such as denoising, zooming and interpolation of missing pixels. Then we consider the generation of high dynamic range (HDR) images from a single snapshot using spatially varying pixel exposures~\cite{nayar00} and demonstrate that our approach significantly outperforms existing methods to deal with this inverse problem. It is worth mentioning that modified sensors enabling such approaches have been recently made available by Sony but are not yet fully exploited by available smartphones and digital cameras.
The article is organized as follows. Section~\ref{sec:newMethod} introduces the proposed approach while Section~\ref{sec:implDetails} presents the main implementation aspects. Supportive experiments are presented in Section~\ref{ssec:expsNewMethod}. Section~\ref{sec:HDR} is devoted to the application of the proposed framework to the HDR imaging problem. Last, conclusions are summarized in Section~\ref{sec:conclusions}.
\section{Hyperprior Bayesian Estimator}
\label{sec:newMethod}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{aguer001.pdf}
\caption{Diagram of the proposed iterative approach.}
\label{fig:methodDiagram}
\end{figure}
The proposed restoration method, called Hyperprior Bayesian Estimator (HBE), assumes a Gaussian prior for image patches, with parameters $\boldsymbol{\mu}$ and $\S$. A \textbf{joint maximum a posteriori} formulation is used to estimate both the image patches and the parameters $\boldsymbol{\mu}$ and $\S$, thanks to a Bayesian hyperprior model on these parameters, stabilizing the local estimation of the Gaussian statistics. As a consequence, we can exploit the accuracy of local model estimation for general restoration problems, in particular with missing values (e.g. for interpolation or zooming). Figure~\ref{fig:methodDiagram} illustrates the proposed approach which is described in detail in the following.
\subsection{Patch degradation model}
\label{ssec:degradation}
The observed image $\z$ is decomposed into $I$ overlapping patches $\{\z_i\}_{i=1,\dots,I}$ of size $\sqrt{n}\times\sqrt{n}$. Each patch $\z_i \in \sR^{n \times 1}$ is considered to be a realization of the random variable $\mathbf{Z}_i$ given by
\begin{equation}
\mathbf{Z}_i = \boldsymbol{D}_i \mathbf{C}_i + \mathbf{N}_i,
\label{eq:modelNew}
\end{equation}
where $\boldsymbol{D}_i \in \sR^{n \times n}$ is a degradation operator, $\mathbf{C}_i \in \sR^{n \times 1}$ is the original patch we seek to estimate and $\mathbf{N}_i \in \sR^{n \times 1}$ is an additive noise term, modeled by a Gaussian distribution $\mathbf{N}_i\sim \mathcal{N}(0,\S_{N_i})$.
Therefore, the distribution of $\mathbf{Z}_i$ given $\mathbf{C}_i$ can be written as
\begin{flalign}
&p(\mathbf{Z}_i \;|\; \mathbf{C}_i) \sim \mathcal{N}(\boldsymbol{D}_i\mathbf{C}_i,\S_{N_i})& \nonumber\\
&\propto|\S_{N_i}^{-1}|^{\frac 1 2} \exp\left(-\frac 1 2 (\mathbf{Z}_i - \boldsymbol{D}_i
\mathbf{C}_i)^T \S_{N_i}^{-1} (\mathbf{Z}_i - \boldsymbol{D}_i \mathbf{C}_i)\right).&
\label{eq:modelPatch}
\end{flalign}
In this noise model, the matrix $\S_{N_i}$ is only assumed to be diagonal (the noise is uncorrelated). It can represent a constant variance, spatially variable variances or even variances dependent on the pixel value (to approximate Poisson noise).
This degradation model is deliberately generic. We will see in Section~\ref{ssec:expsNewMethod} that keeping a broad noise model is essential to properly tackle the problem of HDR imaging from a single image. The model also includes the special case of multiplicative noise.
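For concreteness, a minimal simulation of the patch model~\eqref{eq:modelNew} with a random mask and a signal-dependent diagonal noise covariance might read as follows; all numerical values are hypothetical and chosen for illustration only:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 64                                   # an 8x8 patch stacked as a vector

C = rng.uniform(50.0, 200.0, size=n)     # hypothetical ground-truth patch
mask = rng.random(n) > 0.7               # keep ~30% of the pixels
D = np.diag(mask.astype(float))          # degradation operator (random mask)

# diagonal, signal-dependent noise: var = g*C + sigma_r^2 (hypothetical values)
g, sigma_r2 = 0.5, 4.0
N = rng.normal(0.0, np.sqrt(g * C + sigma_r2))

Z = D @ C + N                            # observed patch Z_i = D_i C_i + N_i
\end{verbatim}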
\subsection{Joint Maximum A Posteriori }
\label{ssec:jointmap}
We assume a Gaussian prior for each patch, with unknown mean $\boldsymbol{\mu}$ and covariance matrix $\S$, $p(\mathbf{C}_i \;|\; \boldsymbol{\mu},\S) \sim \mathcal{N}(\boldsymbol{\mu},\S).$
To simplify calculations we work with the precision matrix $\L = \S^{-1}$. As it is usual when considering hyperpriors, we assume that the parameters $\boldsymbol{\mu}$ and $\S$ follow a conjugate distribution. In our case, that boils down to assuming a Normal-Wishart\footnote{The Normal-Wishart distribution is the conjugate prior of a multivariate normal distribution with unknown mean and covariance matrix. $\mathcal{W}$ denotes the Wishart distribution~\cite{nla.cat-vn751926}.} prior for the couple ($\boldsymbol{\mu},\L$),
\begin{flalign}
\label{eq:pmuL}
p(\boldsymbol{\mu},\L) &= \mathcal{N}(\boldsymbol{\mu} | \boldsymbol{\mu}_0,(\k\L)^{-1})\mathcal{W}(\L|(\nu \S_0)^{-1},\nu)\\
&\propto |\L|^{1/2} \exp{\left( -\frac{\k}{2}(\boldsymbol{\mu} - \boldsymbol{\mu}_0) \L (\boldsymbol{\mu} - \boldsymbol{\mu}_0)^T \right)} \nonumber \\
&\phantom{\propto } |\L|^{(\nu-n-1)/2} \exp{ \left( -\frac{1}{2} \text{tr}(\nu \S_0 \L) \right) }, \nonumber
\end{flalign}
with parameters $\boldsymbol{\mu}_0$, $\S_0$, scale parameter $\k > 0$ and $\nu > n - 1$ degrees of freedom.
Now, assume that we observe a group $\{\mathbf{Z}_i\}_{i=1,\dots,M}$ of similar patches and that we want to recover the restored patches $\{\mathbf{C}_i\}_{i=1,\dots,M}$. If these unknown $\{\mathbf{C}_i\}$ are independent\footnote{We rely on the classical independence assumption made in the patch-based literature, even though it does not strictly hold when patches overlap.} and follow the same Gaussian model, we can compute the joint maximum a posteriori
\begin{flalign}
&\argmax\limits_{\{\mathbf{C}_i\},\boldsymbol{\mu},\L} \;\;p( \{\mathbf{C}_i\},\boldsymbol{\mu},\L \;|\;\{\mathbf{Z}_i\}) = \\
\;\;\;\;=&\;\;p(\{\mathbf{Z}_i\} \;|\; \{\mathbf{C}_i\},\boldsymbol{\mu},\L)\; \; p(\{\mathbf{C}_i\} \;|\; \boldsymbol{\mu},\L)\; \; p(\boldsymbol{\mu},\L)& \nonumber\\
\;\;\;\;=&\;\;p(\{\mathbf{Z}_i\} \;|\; \{\mathbf{C}_i\})\; \; p(\{\mathbf{C}_i\} \;|\; \boldsymbol{\mu},\L)\; \; p(\boldsymbol{\mu},\L).& \nonumber
\end{flalign}
In this product, the first term is given by the noise model~\eqref{eq:modelPatch}, the second one is the Gaussian prior on the set of patches $\{\mathbf{C}_i\}$ and the third one is the hyperprior~\eqref{eq:pmuL}. In the last equality we omit the explicit dependence on $\boldsymbol{\mu}$ and $\L$ in $p(\{\mathbf{Z}_i\} \;|\; \{\mathbf{C}_i\},\boldsymbol{\mu},\L)$, since these parameters are completely determined by the set $\{\mathbf{C}_i\}$.
\subsection{Optimality conditions}
Computing the joint maximum a posteriori amounts to minimizing
{\small
\begin{eqnarray*}
\label{eq:pmuLAp}
f(\{\mathbf{C}_i\},\boldsymbol{\mu},\L) &:=& -\log p( \{\mathbf{C}_i\},\boldsymbol{\mu},\L \;|\;\{\mathbf{Z}_i\}) \\
&=&\frac 1 2 \sum_{i=1}^M (\mathbf{Z}_i - \boldsymbol{D}_i
\mathbf{C}_i)^T \S_{N_i}^{-1} (\mathbf{Z}_i - \boldsymbol{D}_i \mathbf{C}_i)\\
&-& {\frac{\nu -n +M}{2}}\log |\L|\\
&+&\frac 1 2 \sum_{i=1}^M (\mathbf{C}_i - \boldsymbol{\mu})^T \L (\mathbf{C}_i - \boldsymbol{\mu}) \\
&+&\frac {\kappa}{2}(\boldsymbol{\mu}-\boldsymbol{\mu}_0)^T\L(\boldsymbol{\mu}-\boldsymbol{\mu}_0)+\frac {1}{2}\mathrm{trace}[\nu\S_0 \L ],
\end{eqnarray*}
} over the set $\mathbb{R}^{nM} \times \mathbb{R}^n \times S_n^{++}(\mathbb{R})$, with $S_n^{++}(\mathbb{R})$ the set of real symmetric positive definite matrices of size $n$.
The function $f$ is biconvex in the variables $(\{\mathbf{C}_i\},\boldsymbol{\mu})$ and $\L$. To minimize this energy for a given set of hyperparameters $(\boldsymbol{\mu}_0,\S_0,\kappa,\nu)$, we use an alternating convex minimization scheme. At each iteration, $f$ is first minimized with respect to $(\{\mathbf{C}_i\},\boldsymbol{\mu})$ with $\L$ fixed, then vice versa.
Differentiating $f$ with respect to each variable, we get explicit optimality equations for the minimization scheme. The proofs of the following propositions are straightforward and available in the supplementary material.
\begin{proposition}
Assume that $\L$ is fixed and that the covariance $\S_{N_i}$ does not depend on the $\{\mathbf{C}_i\}$. The function $(\{{\mathbf{C}_i}\},\boldsymbol{\mu}) \mapsto f(\{{\mathbf{C}_i}\},\boldsymbol{\mu},\L)$ is convex on $\mathbb{R}^{n(M+1)}$ and attains its minimum at $(\{\hat{\mathbf{C}_i}\},\hat{\boldsymbol{\mu}})$, given by
\begin{align}
\widehat{\boldsymbol{\mu}} &= \left(\kappa \mathrm{Id} + \sum_{i=1}^M \mathbf{A}_i\boldsymbol{D}_i \right)^{-1} \left(\sum_{i=1}^M \mathbf{A}_i \mathbf{Z}_i +\kappa \boldsymbol{\mu}_0 \right).
\label{eq:muhat}
\end{align}
\begin{align}
\widehat{\mathbf{C}_i} &= \mathbf{A}_i (\mathbf{Z}_i - \boldsymbol{D}_i \hat{\boldsymbol{\mu}}) + \hat{\boldsymbol{\mu}}, \;\;\;\;\;\;\forall i \in \{1,\dots M\}
\label{eq:Ci1}
\end{align}
with $\mathbf{A}_i = \L^{-1} \boldsymbol{D}_i^T( \boldsymbol{D}_i \L^{-1} \boldsymbol{D}_i^T + \S_{N_i})^{-1}$.
\end{proposition}
\begin{proposition}
Assume that the variables $(\{{\mathbf{C}_i}\},\boldsymbol{\mu})$ are fixed. The function $\L \rightarrow f(\{{\mathbf{C}_i}\},\boldsymbol{\mu},\L)$ is convex on $S_n^{++}(\mathbb{R})$ and attains its minimum at $\hat{\L}$ such that
{\small \begin{equation}
\hat{\L}^{-1} = \frac{\nu \S_0 + \kappa(\boldsymbol{\mu} - \boldsymbol{\mu}_0) (\boldsymbol{\mu} - \boldsymbol{\mu}_0)^T + \sum_{i=1}^M (\mathbf{C}_i - \boldsymbol{\mu}) (\mathbf{C}_i - \boldsymbol{\mu})^T}{\nu +M-n}.
\label{eq:Lhat}
\end{equation}}
\end{proposition}
The expression of $\widehat{\mathbf{C}}_i$ in~\eqref{eq:Ci1} is obtained under the hypothesis that the noise covariance matrix $\mathbf{\Sigma}_{N_i}$ does not depend on $\mathbf{C}_i$. Under the somewhat weaker hypothesis that the noise $N_i$ and the signal $\mathbf{C}_i$ are uncorrelated, this estimator is also the affine estimator $\tilde{\mathbf{C}}_i$ that minimizes the Bayes risk $\mathbb{E}[(\tilde{\mathbf{C}}_i - \mathbf{C}_i)^2]$ (cf. supplementary material).
The uncorrelatedness of $N_i$ and $C_i$ is a reasonable hypothesis in practice. This includes various noise models, such as $N_i = f(C_i) \varepsilon_i$ with $\varepsilon_i$ independent of $C_i$, which approximates CMOS and CCD raw data noise~\cite{aguerrebere12}.
From~\eqref{eq:muhat}, we find that the MAP estimator of $\boldsymbol{\mu}$ is a weighted average of two terms: the mean estimated from the similar restored patches and the prior $\boldsymbol{\mu}_0$. The parameter $\k$ controls the confidence level we have on the prior $\boldsymbol{\mu}_0$. With the same idea, we observe that the MAP estimator for $\L$ is a combination of the prior $\L_0$ on $\L$, the covariance imposed by ${\boldsymbol{\mu}}$ and the covariance matrix estimated from the patches ${\mathbf{C}}_i$.
\subsection{Alternating convex minimization of $f$}
The previous propositions imply that we can derive an explicit alternating convex minimization scheme for $f$, presented in Algorithm~\ref{algo:alternate}. Starting with a given value of $\L$, at each step, $\boldsymbol{\mu}^l$ and $\mathbf{C}^l$ are computed according to Equations~\eqref{eq:muhat} and ~\eqref{eq:Ci1}, then $\L^l$ is updated according to~\eqref{eq:Lhat}.
\begin{algorithm}
\KwIn{$\Z$, $\boldsymbol{D}$, $\boldsymbol{\mu}_0, \S_0, \kappa, \nu$}
\KwOut{$\{\hat{\mathbf{C}_i}\}$, $\hat{\boldsymbol{\mu}}$, $\hat{\L}$}
\textbf{Initialization:} Set $\L^0=\S_0^{-1}$\\
\For{l = 1 to $\text{maxIts}$}{
Compute $({\mathbf{C}^{l}},{\boldsymbol{\mu}}^{l}) = \argmin_{(\mathbf{C},\boldsymbol{\mu})} f(\mathbf{C},\boldsymbol{\mu},\L^{l-1})$ by equations~\eqref{eq:muhat} and~\eqref{eq:Ci1}
Compute ${\L}^{l} = \argmin_{\L} f({\mathbf{C}}^l,\boldsymbol{\mu}^l,\L)$ by Eq.~\eqref{eq:Lhat}}
$\{\hat{\mathbf{C}_i} = \mathbf{C}_i^{\text{maxIts}}\}$, $\hat{\boldsymbol{\mu}} = {\boldsymbol{\mu}}^{\text{maxIts}}$, $\hat{\L} = \L^{\text{maxIts}}$
\caption{Alternating convex minimization for $f$}
\label{algo:alternate}
\end{algorithm}
We show in Appendix~\ref{ap:mapEst} the following convergence result for the previous algorithm. The proof adapts the arguments in~\cite{Gorski2007} to our particular case.
\begin{proposition}
The sequence $f( \{\mathbf{C}_i^l\},\boldsymbol{\mu}^l,\L^l )$ converges monotonically when $l\rightarrow +\infty$. The sequence $\{\{\mathbf{C}_i^l\},\boldsymbol{\mu}^l,\L^l\}$ generated by the alternate minimization scheme has at least one accumulation point. The set of its accumulation points forms a connected and compact set of partial optima and stationary points of $f$, and they all have the same function value.
\end{proposition}
In practice, we observe in our experiments that the algorithm always converges after a few iterations.
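For illustration, a minimal NumPy sketch of Algorithm~\ref{algo:alternate} is given below. It is a direct transcription of equations~\eqref{eq:muhat}, \eqref{eq:Ci1} and~\eqref{eq:Lhat}, not the optimized implementation used in our experiments:
\begin{verbatim}
import numpy as np

def hbe_minimize(Z, D, Sigma_N, mu0, S0, kappa, nu, max_its=5):
    """Alternating convex minimization of f.

    Z, D, Sigma_N: lists of M observed patches (n,), degradation
    matrices (n,n) and positive diagonal noise covariances (n,n).
    mu0, S0, kappa, nu: Normal-Wishart hyperparameters.
    """
    M, n = len(Z), len(mu0)
    Lam_inv = S0.copy()                 # Lambda^0 = S0^{-1}, so Lambda^{-1} = S0
    for _ in range(max_its):
        # A_i = Lambda^{-1} D_i^T (D_i Lambda^{-1} D_i^T + Sigma_Ni)^{-1}
        A = [Lam_inv @ Di.T @ np.linalg.inv(Di @ Lam_inv @ Di.T + Sn)
             for Di, Sn in zip(D, Sigma_N)]
        # mu update, Eq. (7)-style closed form (eq:muhat)
        lhs = kappa * np.eye(n) + sum(Ai @ Di for Ai, Di in zip(A, D))
        rhs = sum(Ai @ Zi for Ai, Zi in zip(A, Z)) + kappa * mu0
        mu = np.linalg.solve(lhs, rhs)
        # patch updates (eq:Ci1)
        C = [Ai @ (Zi - Di @ mu) + mu for Ai, Zi, Di in zip(A, Z, D)]
        # precision update (eq:Lhat), written directly on Lambda^{-1}
        scatter = sum(np.outer(Ci - mu, Ci - mu) for Ci in C)
        Lam_inv = (nu * S0 + kappa * np.outer(mu - mu0, mu - mu0)
                   + scatter) / (nu + M - n)
    return C, mu, Lam_inv
\end{verbatim}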
\subsection{Full restoration algorithm}
\label{ssec:pasosAlgo}
The full restoration algorithm used in our experiments is summarized in Algorithm~\ref{algo:newMethod} and illustrated by Figure~\ref{fig:methodDiagram}. It alternates between two stages: the minimization of $f$ using Algorithm~\ref{algo:alternate}, and the estimation of the hyper-parameters $\boldsymbol{\mu}_0, \S_0$. In order to estimate these parameters, we rely on an \textit{oracle image} computed by aggregation of all the patches estimated on the first stage (details are provided in Section~\ref{ssec:initialization}).
\begin{algorithm}
\KwIn{$\Z$, $\boldsymbol{D}$, $\boldsymbol{\mu}_0, \S_0, \kappa, \nu$ (see details in Section~\ref{ssec:paramsAnalisis})}
\KwOut{$\tilde{\mathbf{C}}$}
Decompose $\Z$ and $\boldsymbol{D}$ into overlapping patches. \\
\textbf{Initialization:} Compute first oracle image $\mathbf{C}_\mathrm{oracle}$ (see details in Section~\ref{ssec:initialization})\\
\For{it = 1 to $\text{maxIts}_2$}{
\For{all patches not yet restored}{
Find patches similar ($L^2$ distance) to the current $\z_i$ in $\mathbf{C}_\mathrm{oracle}$ (see details in Section~\ref{sec:implDetailsSearch}).\\
Compute $\boldsymbol{\mu}_0$ and $\S_0$ from $\mathbf{C}_\mathrm{oracle}$ (see details in Section~\ref{ssec:paramsAnalisis}).\\
Compute $(\{\hat{\mathbf{C}}_i\},\hat{\boldsymbol{\mu}},\hat{\S})$ following Algorithm~\ref{algo:alternate}.\\
}
Perform aggregation to restore the image. \\
Set $\mathbf{C}_\mathrm{oracle} = \tilde{\mathbf{C}}$.\\
}
\caption{HBE algorithm.}
\label{algo:newMethod}
\end{algorithm}
\section{Implementation details}
\label{sec:implDetails}
\subsection{Search for similar patches}
\label{sec:implDetailsSearch}
The similar patches are all patches within a search window centered at the current patch,
whose $L_2$ distance to the central patch is less than a given threshold. This threshold is given by a tolerance parameter $\varepsilon$ times the distance to the nearest neighbour (the most similar one). In all our experiments, the search window was set to size $25 \times 25$ (with a patch size of $8\times 8$) and $\varepsilon=1.5$. The patch comparison is performed in an oracle image (i.e. the result of the previous iteration), so all pixels are known. However, it may be useful to assign different confidence levels to the known pixels and to those originally missing and then restored. For all the experimental results presented in Section~\ref{ssec:expsNewMethod}, the distance between patches $\c_p$ and $\c_q$ in the oracle image $\mathbf{C}_\mathrm{oracle}$ is computed as
\begin{equation}
d(p,q) = \frac{\sum_{j=1}^n (\c^j_p - \c^j_q)^2 \omega^j_{p,q}}{\sum_{j=1}^n \omega^j_{p,q}},
\end{equation}
where $j$ indexes the pixels in the patch, $\omega^j_{p,q}=1$ if $\boldsymbol{D}^j_p = \boldsymbol{D}^j_q = 1$ (known pixel) and $\omega^j_{p,q}=0.01$ otherwise (originally missing then restored pixel)~\cite{arias12}. With this formulation, known pixels are assigned a much higher priority than unknown ones. Variations on these weights could be explored.
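In code, this weighted distance is a one-liner; a sketch:
\begin{verbatim}
import numpy as np

def patch_distance(cp, cq, Dp, Dq, w_restored=0.01):
    """Weighted L2 distance between two oracle patches cp, cq;
    Dp, Dq are the binary masks of originally known pixels."""
    w = np.where((Dp == 1) & (Dq == 1), 1.0, w_restored)
    return np.sum(w * (cp - cq)**2) / np.sum(w)
\end{verbatim}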
\subsection{Optional speed-up by Collaborative Filtering}
\label{ssec:colabFilt}
The proposed method computes one Gaussian model per image patch according to Equations~\eqref{eq:muhat} and~\eqref{eq:Lhat}. In order to reduce the computational cost, we can rely on the collaborative filtering idea previously introduced for patch-based denoising techniques~\cite{lebrun13,dabov07}. Based on the hypothesis that similar patches share the same model, we assign the same model to all patches in the set of similar patches (as defined in Section~\ref{sec:implDetailsSearch}).
The number of similar patches jointly restored depends on the image and the tolerance parameter $\varepsilon$, but it is often much smaller than what would result from the patch clustering performed by methods that use global GMMs such as PLE or EPLL. Performance degradation is observed in practice when using a very large tolerance parameter ($\varepsilon>3$), showing that mixing more patches than needed is detrimental. The collaborative filtering strategy helps accelerate the algorithm up to a certain point, but a trade-off with performance needs to be considered.
\subsection{Parameter choices}
\label{ssec:paramsAnalisis}
The four parameters of the Normal-Wishart distribution: $\k$, $\nu$, the prior mean $\boldsymbol{\mu}_0$ and the prior covariance matrix $\S_0$, must be set in order to compute $\boldsymbol{\mu}$ and $\S$.
\paragraph{Choice of $\k$ and $\nu$}
The computation of $\boldsymbol{\mu}$ according to~\eqref{eq:muhat} combines the mean $\sum_{i=1}^M \mathbf{A}_i \mathbf{Z}_i $ estimated from the similar patches and the prior mean $\boldsymbol{\mu}_0$. The parameter $\k$ is related to the degree of confidence we have on the prior $\boldsymbol{\mu}_0$. Hence, its value should be a trade-off between the confidence we have in the prior accuracy vs. the one we have in the information provided by the similar patches.
The latter improves when both $M$ (\emph{i.e.} the number of similar patches) and $P=\operatorname{trace}(\boldsymbol{D}_i)$ (\emph{i.e.} the number of known pixels in the current patch) increase. These intuitive insights suggest the following rule to set $\kappa$:
\begin{equation}
\kappa = M\alpha, \quad \alpha = \left\{
\begin{array}{rl}
\alpha_L & \text{if $P$ and $M$} > \text{threshold} \\
\alpha_H & \text{otherwise}.
\end{array} \right.
\end{equation}
A similar reasoning leads to the same rule for $\nu$,
\begin{equation}
\nu = M\alpha + n
\end{equation}
where the addition of $n$ ensures the condition $\nu > n - 1$ required by the Normal-Wishart prior to be verified.
This rule is used to obtain the experimental results presented in Section~\ref{ssec:expsNewMethod}, and proved to be a consistently good choice despite its simplicity. However, setting these parameters in a more general setting is not a trivial task and should be the subject of further study. In particular, we could explore a more continuous dependence of $\alpha$ on $P$, $M$, and possibly a third term $Q=\sum_{i=1}^n S_{ii}$ where $S = \sum_{j=1}^M \L^{-1} \boldsymbol{D}_j \L^{*}_j \boldsymbol{D}_j$. This term estimates to what extent similar patches cover the missing pixels in the current patch.
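For reference, a literal transcription of the rule reads as follows; the threshold value is an assumption chosen for illustration, since its precise value depends on the experimental setting:
\begin{verbatim}
def set_kappa_nu(M, P, n, alpha_L=0.5, alpha_H=1.0, threshold=32):
    # threshold chosen arbitrarily for illustration
    alpha = alpha_L if (P > threshold and M > threshold) else alpha_H
    return M * alpha, M * alpha + n   # kappa, nu (the +n ensures nu > n-1)
\end{verbatim}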
\paragraph{Setting of $\boldsymbol{\mu}_0$ and $\S_0$} Assuming an oracle image $\mathbf{C}_\mathrm{oracle}$ is available (see details in Section~\ref{ssec:pasosAlgo}), $\boldsymbol{\mu}_0$ and $\S_0$ can be computed using the classical MLE estimators from a set of similar patches $(\tilde{\c}_1,\dots,\tilde{\c}_M)$ taken from $\mathbf{C}_\mathrm{oracle}$
\begin{equation}
\boldsymbol{\mu}_0 = \frac{1}{M} \sum_{j=1}^{M} \tilde{\c}_j, \quad \S_0 = \frac{1}{M-1} \sum_{j=1}^{M} (\tilde{\c}_j - \boldsymbol{\mu}_0)(\tilde{\c}_j - \boldsymbol{\mu}_0)^T.
\label{eq:musec}
\end{equation}
This is the same approach followed by Lebrun et al.~\cite{lebrun13} to estimate the patch model parameters in the case of denoising.
\subsection{Initialization}
\label{ssec:initialization}
A good initialization is crucial since we aim at solving a non-convex problem through an iterative procedure. To initialize the proposed algorithm we follow the approach proposed by Yu et al.~\cite{yu12} (described in detail in Appendix A in the supplementary material). They propose to initialize the PLE algorithm by learning the $K$ GMM covariance matrices from synthetic images of edges with different orientations as well as from the DCT basis to represent isotropic patterns. As they state, in dictionary learning, the most prominent atoms represent local edges which are useful to represent and restore contours. Hence, this initialization helps to correctly restore corrupted patches even in quite extreme cases. The oracle of the first iteration of the proposed approach is the output of the first iteration of the PLE algorithm.
\subsection{Computational complexity}
With the original per-patch strategy, the complexity of the algorithm is given by step 3 in Algorithm 1: $[(4n^3 + n^3/3)M + n^3/3]$, so the total complexity is $[(4n^3 + n^3/3)M + n^3/3] \times maxIts \times maxIts_2 \times T$ (where $T =$ total number of patches to be restored and assuming the Cholesky factorization is used for matrix inversion). The collaborative filtering strategy reduces this value by a factor that depends on the number of groups of similar patches, which depends on the image contents and the distance tolerance parameter $\varepsilon$. The main difference with the PLE algorithm complexity ($(3n^3 + n^3/3) \times its_{PLE} \times T$) is a factor given by the number of groups defined by the collaborative filtering approach and the ratio between $its_{PLE}$ and $maxIts \times maxIts_2$. As mentioned by Yu et al.~\cite{yu12}, computational complexity can be further reduced in the case of binary masks by removing the zero rows and inverting a matrix of size $n/S \times n/S$ instead of $n \times n$, where $S$ is the masking ratio. Moreover, the proposed algorithm can be run in parallel in different image subregions thus allowing for even further acceleration in multiple-core architectures.
The complexity comparison to NLB needs to be made in the case where the degradation is additive noise with constant variance (translation invariant degradation), which is the task performed by NLB. In that case, the complexity of the proposed approach (without considering collaborative filtering nor parallelization, which are both done also in NLB) is $11n^3/3 \times maxIts \times maxIts_2 \times T$ whereas that of NLB is $2 \times (4n^3/3) \times T$.
\section{Image Restoration Experiments}
\label{ssec:expsNewMethod}
In this section we illustrate the ability of the proposed method to solve several image inverse problems. Both synthetic (i.e., where the degradation was added artificially) and real data (i.e., issued from a real acquisition process) are used. The considered problems are: interpolation, combined interpolation and denoising, denoising, and zooming. The reported values of peak signal-to-noise ratio ($PSNR{} = 20\log_{10}(255/\sqrt{MSE})$) are averaged over 10 realizations for each experiment (variance is below 0.1 for interpolation, and below 0.05 for combined interpolation and denoising as well as for denoising alone). Similar results are obtained with the structural similarity index (SSIM), which is included in the supplementary material (Appendix B).
\begin{table*}
\footnotesize
\setlength{\tabcolsep}{2.2pt}
\centering
\begin{tabular}[h]{c c c c c c c c c c c c c c c c c c c c c}
\toprule
& \multicolumn{12}{c}{\textbf{Interpolation - PSNR (dB)}} & \multicolumn{8}{|c}{\textbf{Interpolation \& Denoising - PSNR (dB)}} \\
\cmidrule{1-21}
& \multicolumn{4}{c}{50\%} & \multicolumn{4}{c}{70\%} & \multicolumn{4}{c|}{90\%} & \multicolumn{4}{c}{70\%} & \multicolumn{4}{c}{90\%}\\
\cmidrule{2-21}
& HBE & PLE & EPLL & \multicolumn{1}{c|}{E-PLE} & HBE & PLE & EPLL & \multicolumn{1}{c|}{E-PLE} & HBE & PLE & EPLL & \multicolumn{1}{c|}{E-PLE} & HBE & PLE & EPLL & \multicolumn{1}{c|}{E-PLE} & HBE & PLE & EPLL & E-PLE\\
\cmidrule{2-21}
barbara & \textbf{39.11} & 36.93 & 32.99 & \multicolumn{1}{c|}{35.43} & \textbf{34.69} & 32.50 & 27.96 & \multicolumn{1}{c|}{28.77} & \textbf{24.86} & 23.62 & 23.30 & \multicolumn{1}{c|}{23.26} & \textbf{33.34} & 31.99 & 27.63 & \multicolumn{1}{c|}{27.75} & \textbf{24.57} & 23.53 & 23.27 & 23.20 \\
boat & \textbf{34.92} & 34.32 & 34.21 & \multicolumn{1}{c|}{33.59} & \textbf{31.37} & 30.74 & 30.38 & \multicolumn{1}{c|}{30.26} & \textbf{25.96} & 25.35 & 24.72 & \multicolumn{1}{c|}{25.43} & \textbf{30.61} & 30.41 & 30.15 & \multicolumn{1}{c|}{29.54} & \textbf{25.78} & 25.45 & 24.71 & 25.47 \\
traffic & 30.17 & 30.12 & \textbf{30.19} & \multicolumn{1}{c|}{28.86} & \textbf{27.27} & 27.12 & 27.13 & \multicolumn{1}{c|}{26.64} & \textbf{22.84} & 22.34 & 21.85 & \multicolumn{1}{c|}{22.27} & 26.99 & 26.98 & \textbf{27.05} & \multicolumn{1}{c|}{26.35} & \textbf{22.87} & 22.43 & 22.21 & 22.35 \\
\toprule
& \multicolumn{12}{c|}{\textbf{Denoising - PSNR (dB)}} & & \multicolumn{5}{c}{\textbf{Zooming - PSNR (dB)}} \\
\cmidrule{1-21}
$\sigma^2$ & \multicolumn{3}{c}{10} & \multicolumn{3}{c}{30} & \multicolumn{3}{c}{50} & \multicolumn{3}{c|}{80} & & \multicolumn{5}{c}{$\times 2$}\\
\cmidrule{1-21}
& HBE & NLB & \multicolumn{1}{c|}{EPLL} & HBE & NLB & \multicolumn{1}{c|}{EPLL} & HBE & NLB & \multicolumn{1}{c|}{EPLL} & HBE & NLB & \multicolumn{1}{c|}{EPLL} & & HBE & PLE & EPLL & E-PLE & Lanczos\\
\cmidrule{2-21}
barbara & \textbf{41.26} & 41.20 & \multicolumn{1}{c|}{40.56} & \textbf{38.40} & 38.26 & \multicolumn{1}{c|}{37.32} & \textbf{37.13} & 36.94 & \multicolumn{1}{c|}{35.84} & \textbf{35.96} & 35.73 & \multicolumn{1}{c|}{34.51} & & \textbf{38.17} & 37.11 & 31.34 & 36.51 & 28.01 & & \\
boat & \textbf{40.05} & 39.99 & \multicolumn{1}{c|}{39.47} & 36.71 & \textbf{36.76} & \multicolumn{1}{c|}{36.34} & 35.41 & \textbf{35.46} & \multicolumn{1}{c|}{35.13} & 34.30 & \textbf{34.33} & \multicolumn{1}{c|}{34.12} & & \textbf{32.35} & 31.96 & 31.95 & 32.08 & 29.60 & & \\
traffic & 40.73 & \textbf{40.74} & \multicolumn{1}{c|}{40.55} & \textbf{37.03} & 36.99 & \multicolumn{1}{c|}{36.86} & \textbf{35.32} & 35.26 & \multicolumn{1}{c|}{35.20} & \textbf{33.78} & 33.70 & \multicolumn{1}{c|}{33.72} & & 25.05 & 24.78 & \textbf{25.17} & 24.91 & 21.89 & & \\
\toprule
\end{tabular}
\caption{Results of the interpolation, combined interpolation and denoising, denoising and zooming tests described in Section~\ref{ssec:synthExps}. Patch size of $8 \times 8$ for all methods in all tests. Parameter setting for interpolation, combined interpolation and denoising, and zooming, HBE: $\alpha_H = 1$, $\alpha_L = 0.5$, PLE: $\sigma = 3$, $\varepsilon = 30$, $K = 19$~\cite{yu12}, EPLL: default parameters~\cite{zoran11_web}, E-PLE: parameters set as specified in~\cite{wang13b}. Parameter setting for denoising, HBE: $\alpha_H = \alpha_L = 100$, NLB: code provided by the authors~\cite{lebrun13IPOL} automatically sets parameters from input $\sigma^2$, EPLL: default parameters for the denoising example~\cite{zoran11_web}.}
\label{tab:psnrInterp}
\end{table*}
\subsection{Synthetic degradation}
\label{ssec:synthExps}
\paragraph{Interpolation} Random masks with 50\%, 70\% and 90\% missing pixels are applied to the tested ground-truth images. The interpolation performance of the proposed method is compared to that of PLE~\cite{yu12}, EPLL~\cite{zoran11} and E-PLE~\cite{wang13b} using a patch size of $8 \times 8$ for all methods. PLE parameters are set as indicated in~\cite{yu12} ($\sigma = 3$, $\varepsilon = 30$, $K = 19$). We used the EPLL code provided by the authors~\cite{zoran11_web} with default parameters and the E-PLE code available in~\cite{wang13b} with the parameters set as specified in the companion demo. The parameters for the proposed method are set to $\alpha_H = 1$, $\alpha_L = 0.5$ ($\alpha_H$ and $\alpha_L$ define the values for $\kappa$ and $\nu$, see Section~\ref{ssec:paramsAnalisis}). The PSNR{} results are shown in Table~\ref{tab:psnrInterp}. Figure~\ref{fig:syntheticExpsInterp} shows some extracts of the obtained results, the PSNR{} values for the extracts and the corresponding difference images with respect to the ground-truth. The proposed method gives sharper results than the other considered methods. This is especially noticeable on the reconstruction of the texture of the fabric of Barbara's trousers shown in the first row of Figure~\ref{fig:syntheticExpsInterp} or on the stripes that appear through the car's window shown in the second row of the same figure.
\begin{figure*}
\centering
\subfigure[Ground-truth]{\includegraphics[width=0.17\linewidth]{barbara/interp/ext_10/pdf/aguer002.pdf}}\subfigure[HBE (\textbf{30.01 dB})]{\includegraphics[width=0.17\linewidth]{barbara/interp/ext_10/pdf/aguer003.pdf}}\subfigure[PLE (26.78 dB)]{\includegraphics[width=0.17\linewidth]{barbara/interp/ext_10/pdf/aguer004.pdf}}\subfigure[EPLL (21.12 dB)]{\includegraphics[width=0.17\linewidth]{barbara/interp/ext_10/pdf/aguer005.pdf}}\subfigure[E-PLE (23.12 dB)]{\includegraphics[width=0.17\linewidth]{barbara/interp/ext_10/pdf/aguer006.pdf}}
\includegraphics[width=0.17\linewidth]{barbara/interp/ext_10/pdf/aguer007.pdf}\includegraphics[width=0.17\linewidth]{barbara/interp/ext_10/pdf/aguer008.pdf}\includegraphics[width=0.17\linewidth]{barbara/interp/ext_10/pdf/aguer009.pdf}\includegraphics[width=0.17\linewidth]{barbara/interp/ext_10/pdf/aguer010.pdf}\includegraphics[width=0.17\linewidth]{barbara/interp/ext_10/pdf/aguer011.pdf}\\
\subfigure[Ground-truth]{\includegraphics[width=0.17\linewidth]{traffic/interp/ext_7/pdf/aguer012.pdf}}\subfigure[HBE (\textbf{30.20 dB})]{\includegraphics[width=0.17\linewidth]{traffic/interp/ext_7/pdf/aguer013.pdf}}\subfigure[PLE (27.89 dB)]{\includegraphics[width=0.17\linewidth]{traffic/interp/ext_7/pdf/aguer014.pdf}}\subfigure[EPLL (27.83 dB)]{\includegraphics[width=0.17\linewidth]{traffic/interp/ext_7/pdf/aguer015.pdf}}\subfigure[E-PLE (26.79 dB)]{\includegraphics[width=0.17\linewidth]{traffic/interp/ext_7/pdf/aguer016.pdf}}\\
\includegraphics[width=0.17\linewidth]{traffic/interp/ext_7/pdf/aguer018.pdf}\includegraphics[width=0.17\linewidth]{traffic/interp/ext_7/pdf/aguer019.pdf}\includegraphics[width=0.17\linewidth]{traffic/interp/ext_7/pdf/aguer020.pdf}\includegraphics[width=0.17\linewidth]{traffic/interp/ext_7/pdf/aguer021.pdf}\includegraphics[width=0.17\linewidth]{traffic/interp/ext_7/pdf/aguer022.pdf}\\
\caption{\textbf{Synthetic data. Interpolation with 70\% of randomly missing pixels}. \textbf{Left to right:} (first row) Ground-truth (extract of barbara), result by HBE, PLE, EPLL, E-PLE. (second row) input image, difference with respect to the ground-truth of each of the corresponding results. (third and fourth row) Idem for an extract of the traffic image. See Table~\ref{tab:psnrInterp} for the PSNR{} results for the complete images. Please see the digital copy for better reproduction of details.}
\label{fig:syntheticExpsInterp}
\end{figure*}
\paragraph{Combined interpolation and denoising} For this experiment, the ground-truth images are corrupted with additive Gaussian noise with variance 10, and a random mask with 70\% and 90\% of missing pixels. The parameters for all methods are set as in the previous interpolation-only experiment. Table~\ref{tab:psnrInterp} summarizes the PSNR{} values obtained by each method. Figure~\ref{fig:syntheticExpsInterpDeno1} shows some extracts of the obtained results, the PSNR{} values for the extracts and the corresponding difference images with respect to the ground-truth. Once again, the results show that the proposed approach outperforms the others. Fine structures, such as the mast and the ropes of the ship, as well as textures, as in Barbara's headscarf, are much better preserved.
\begin{figure*}
\centering
\subfigure[Ground-truth]
{\includegraphics[width=0.17\linewidth]{barbara/intDeno/ext_10/pdf/aguer023.pdf}}\subfigure[HBE (\textbf{26.20 dB})]
{\includegraphics[width=0.17\linewidth]{barbara/intDeno/ext_10/pdf/aguer024.pdf}}\subfigure[PLE (24.76 dB)]{\includegraphics[width=0.17\linewidth]{barbara/intDeno/ext_10/pdf/aguer025.pdf}}\subfigure[EPLL (23.84 dB)]{\includegraphics[width=0.17\linewidth]{barbara/intDeno/ext_10/pdf/aguer026.pdf}}\subfigure[E-PLE (23.60 dB)]{\includegraphics[width=0.17\linewidth]{barbara/intDeno/ext_10/pdf/aguer027.pdf}}
\includegraphics[width=0.17\linewidth]{barbara/intDeno/ext_10/pdf/aguer028.pdf}\includegraphics[width=0.17\linewidth]{barbara/intDeno/ext_10/pdf/aguer029.pdf}\includegraphics[width=0.17\linewidth]{barbara/intDeno/ext_10/pdf/aguer030.pdf}\includegraphics[width=0.17\linewidth]{barbara/intDeno/ext_10/pdf/aguer031.pdf}\includegraphics[width=0.17\linewidth]{barbara/intDeno/ext_10/pdf/aguer032.pdf}\\
\subfigure[Ground-truth]{\includegraphics[width=0.17\linewidth]{boat/intDeno/ext_4/pdf/aguer033.pdf}}\subfigure[HBE (\textbf{28.34 dB})]{\includegraphics[width=0.17\linewidth]{boat/intDeno/ext_4/pdf/aguer034.pdf}}\subfigure[PLE (27.50 dB)]{\includegraphics[width=0.17\linewidth]{boat/intDeno/ext_4/pdf/aguer035.pdf}}\subfigure[EPLL (27.27 dB)]{\includegraphics[width=0.17\linewidth]{boat/intDeno/ext_4/pdf/aguer036.pdf}}\subfigure[E-PLE (26.83 dB)]{\includegraphics[width=0.17\linewidth]{boat/intDeno/ext_4/pdf/aguer037.pdf}}
\includegraphics[width=0.17\linewidth]{boat/intDeno/ext_4/pdf/aguer038.pdf}\includegraphics[width=0.17\linewidth]{boat/intDeno/ext_4/pdf/aguer039.pdf}\includegraphics[width=0.17\linewidth]{boat/intDeno/ext_4/pdf/aguer040.pdf}\includegraphics[width=0.17\linewidth]{boat/intDeno/ext_4/pdf/aguer041.pdf}\includegraphics[width=0.17\linewidth]{boat/intDeno/ext_4/pdf/aguer042.pdf}\\
\caption{\textbf{Synthetic data. Combined interpolation and denoising with 70\% of randomly missing pixels and additive Gaussian noise ($\sigma^2 = 10$)}. \textbf{Left to right:} (first row) Ground-truth (extract of barbara), result by HBE, PLE, EPLL, E-PLE. (second row) input image, difference with respect to the ground-truth of each of the corresponding results. (third and fourth row) Idem for an extract of the boat image. See Table~\ref{tab:psnrInterp} for the PSNR{} results for the complete images. Please see the digital copy for better reproduction of details.}
\label{fig:syntheticExpsInterpDeno1}
\end{figure*}
\paragraph{Denoising} For the denoising task the proposed approach should perform very similarly to the state-of-the-art denoising algorithm NLB~\cite{lebrun13}. The following experiments are conducted in order to verify this.
The ground-truth images are corrupted with additive Gaussian noise with variance $\sigma^2=10, 30, 50, 80$. The code provided by the authors~\cite{lebrun13IPOL} automatically sets the NLB parameters from the input $\sigma^2$ and the patch size, in this case $8 \times 8$. For this experiment, there are no unknown pixels to interpolate (the mask $\boldsymbol{D}$ is the identity matrix).
The results of both methods are very similar if HBE is initialized with the output of the first step of NLB~\cite{lebrun13} (instead of using the initialization described in Section~\ref{ssec:pasosAlgo}) and the parameters $\kappa$ and $\nu$ are large enough. In this case, $\boldsymbol{\mu}_0$ and $\S_0$ are prioritized in equations~\eqref{eq:muhat} and~\eqref{eq:Lhat} and both algorithms are almost the same. That is what we observe in practice with $\alpha_H = \alpha_L = 100$, as demonstrated in the results summarized in Table~\ref{tab:psnrInterp}. The denoising performance of HBE is degraded for small $\kappa$ and $\nu$ values. This is due to the fact that $\boldsymbol{\mu}_0$ and $\S_0$, as well as $\boldsymbol{\mu}$ and $\S$ in NLB, are computed from an oracle image resulting from the first restoration step. This restoration includes not only the denoising of each patch, but also an aggregation step that greatly improves the final result. Therefore, the contribution of the first term of~\eqref{eq:muhat} to the computation of $\hat{\boldsymbol{\mu}}$ degrades the result compared to that of using $\boldsymbol{\mu}_0$ only (i.e. using a large $\kappa$).
\paragraph{Zooming} In order to evaluate the zooming capacity of the proposed approach, ground-truth images are downsampled by a factor 2 (no anti-aliasing filter is used) and the zoomed result is compared to the ground-truth. The results are compared with PLE, EPLL, E-PLE and Lanczos interpolation. Table~\ref{tab:psnrInterp} summarizes the obtained PSNR{} values. Figure~\ref{fig:syntheticZooming1} shows extracts from the obtained results, the PSNR{} values for the extracts and the corresponding difference images with respect to the ground-truth. Again, HBE yields a sharper reconstruction than the other methods.
\begin{figure*}
\centering
\subfigure[Ground-truth]{\includegraphics[width=0.166\linewidth]{nl_ple/barbara/zoom/aguer043.pdf}}\subfigure[HBE (\textbf{38.17} dB)]{\includegraphics[width=0.166\linewidth]{{nl_ple/barbara/zoom/aguer044}.pdf}}\subfigure[PLE (37.11 dB)]{\includegraphics[width=0.166\linewidth]{nl_ple/barbara/zoom/aguer045.pdf}}\subfigure[EPLL (31.34 dB)]{\includegraphics[width=0.166\linewidth]{nl_ple/barbara/zoom/aguer046.pdf}}\subfigure[E-PLE (36.51 dB)]{\includegraphics[width=0.166\linewidth]{nl_ple/barbara/zoom/aguer047.pdf}}\subfigure[Lanczos (28.01 dB)]{\includegraphics[width=0.166\linewidth]{nl_ple/barbara/zoom/aguer048.pdf}}
\includegraphics[width=0.166\linewidth]{nl_ple/barbara/zoom/aguer049.pdf}\includegraphics[width=0.166\linewidth]{{{nl_ple/barbara/zoom/aguer050}.pdf}}\includegraphics[width=0.166\linewidth]{nl_ple/barbara/zoom/aguer051.pdf}\includegraphics[width=0.166\linewidth]{nl_ple/barbara/zoom/{aguer052}.pdf}\includegraphics[width=0.166\linewidth]{nl_ple/barbara/zoom/{aguer053}.pdf}\includegraphics[width=0.166\linewidth]{nl_ple/barbara/zoom/aguer054.pdf}
\caption{\textbf{Synthetic data. Zooming $\times 2$. Left to right:} (first row) Ground-truth high resolution image (extract of barbara). Result by HBE, PLE, EPLL, E-PLE, Lanczos interpolation. (second row) Input low-resolution image, difference with respect to the ground-truth of each of the corresponding results. Please see the digital copy for better reproduction of details.}
\label{fig:syntheticZooming1}
\end{figure*}
\subsection{Real data}
\begin{figure}
\centering
\begin{minipage}[c]{.45\linewidth}
\begin{center}
\includegraphics[width=\linewidth]{aguer055.pdf}
\end{center}
\end{minipage}
\begin{minipage}[c]{.5\linewidth}
\includegraphics[width=.4\linewidth]{nl_ple/barbara/zoom/aguer056.png}\hspace{.5pt}\includegraphics[width=.4\linewidth]{nl_ple/barbara/zoom/aguer057.png}
\vspace{.5pt}
\includegraphics[width=.4\linewidth]{nl_ple/barbara/zoom/aguer058.png}
\end{minipage}
\caption{\textbf{Left.} \textbf{Real data.} JPEG version of the raw image used in the experiments presented in Section~\ref{exps:realData}. The boxes show the extracts displayed in Figure~\ref{fig:realDataZooming}. \textbf{Right.} \textbf{Synthetic data.} Ground-truth images used in the experiments presented in Section~\ref{ssec:synthExps}. The green boxes indicate the extracts used for the zooming experiments.}
\label{fig:gtruthRealData}
\end{figure}
\label{exps:realData}
For this experiment, we use raw images captured with a Canon 400D camera (set to ISO 400 and exposure time 1/160 seconds). The main noise sources for CMOS sensors are: the Poisson photon shot noise, which can be approximated by a Gaussian distribution with equal mean and variance; the thermally generated readout noise, which is modeled as additive Gaussian noise; and the spatially varying gain given by the photo response non uniformity (\textsc{prnu}{})~\cite{aguerrebere12,aguerrebere13}. We thus consider the following noise model for the non saturated raw pixel value $\Zp(p)$ at position $p$
\begin{equation}
\Zp(p) \sim \mathcal{N}(\a a_p \tau \mathrm{C}(p) + \mean_R, \a^2 a_p \tau \mathrm{C}(p) + \sigma^2_R),
\label{eq:modelZOrig}
\end{equation}
where $\a$ is the camera gain, $a_p$ models the \textsc{prnu}{} factor, $\tau$ is the exposure time, $\mathrm{C}(p)$ is the irradiance reaching pixel $p$, $\mean_R$ and $\sigma^2_R$ are the readout noise mean and variance. The camera parameters have to be estimated by a calibration procedure~\cite{aguerrebere12}. The noise covariance matrix $\S_{N}$ is thus diagonal with entries that depend on the pixel value $(\S_{N})_p= \a^2 a_p \tau \mathrm{C}(p) + \sigma^2_R$.
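In code, the per-pixel moments of this model (and hence the diagonal entries of $\S_N$) can be computed directly; a sketch, where \texttt{g} stands for the gain $\a$, \texttt{a\_prnu} for the \textsc{prnu}{} factor $a_p$, and all camera parameters are assumed to come from calibration:
\begin{verbatim}
import numpy as np

def raw_noise_moments(C, g, a_prnu, tau, mu_R, sigma_R2):
    """Per-pixel mean and variance of the non-saturated raw value,
    following the model above."""
    mean = g * a_prnu * tau * C + mu_R
    var = g**2 * a_prnu * tau * C + sigma_R2   # diagonal entries of Sigma_N
    return mean, var
\end{verbatim}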
In order to evaluate the interpolation capacity of the proposed approach, we consider the pixels of the green channel only (i.e. 50\% of the pixels in the RGGB Bayer pattern) and interpolate the missing values. We compare the results to those obtained using an adaptation of PLE to images degraded with noise of variable variance (PLEV)~\cite{aguerrebere14ICCP}. The results for the EPLL and E-PLE methods are not presented here since these methods are not suited for this kind of noise. Figure~\ref{fig:realDataZooming} shows extracts of the obtained results (see Figure~\ref{fig:gtruthRealData} for a JPEG version of the raw image showing the location of the extracts). As already observed in the synthetic data experiments, fine details and edges are better preserved. Compare, for example, the reconstruction of the balcony edges and the wall structure in the first row of Figure~\ref{fig:realDataZooming}, as well as the structure of the roof and the railing in the second row of the same image.
\begin{figure*}
\centering
\fboxsep=0pt\fboxrule=1pt\fcolorbox{myViolet}{white}{\includegraphics[width=0.17\linewidth]{ndame/intDeno/ext_4/pdf/aguer059.pdf}}\includegraphics[width=0.17\linewidth]{{{ndame/intDeno/ext_4/pdf/aguer060}.pdf}}\includegraphics[width=0.17\linewidth]{{{ndame/intDeno/ext_4/pdf/aguer061}.pdf}}\includegraphics[width=0.17\linewidth]{ndame/intDeno/ext_4/pdf/aguer062.pdf}\includegraphics[width=0.17\linewidth]{ndame/intDeno/ext_4/pdf/aguer063.pdf} \\
\fboxsep=0pt\fboxrule=1pt\fcolorbox{myRed}{white}{\includegraphics[width=0.17\linewidth]{ndame/intDeno/ext_3/pdf/aguer064.pdf}}\includegraphics[width=0.17\linewidth]{{{ndame/intDeno/ext_3/pdf/aguer065}.pdf}}\includegraphics[width=0.17\linewidth]{{{ndame/intDeno/ext_3/pdf/aguer066}.pdf}}\includegraphics[width=0.17\linewidth]{ndame/intDeno/ext_3/pdf/aguer067.pdf}\includegraphics[width=0.17\linewidth]{ndame/intDeno/ext_3/pdf/aguer068.pdf} \\
\fboxsep=0pt\fboxrule=1pt\fcolorbox{myGreen}{white}{\includegraphics[width=0.17\linewidth]{ndame/intDeno/ext_5/pdf/aguer069}}\includegraphics[width=0.17\linewidth]{{{ndame/intDeno/ext_5/pdf/aguer070}.pdf}}\includegraphics[width=0.17\linewidth]{{{ndame/intDeno/ext_5/pdf/aguer071}.pdf}}\includegraphics[width=0.17\linewidth]{ndame/intDeno/ext_5/pdf/aguer072.pdf}\includegraphics[width=0.17\linewidth]{ndame/intDeno/ext_5/pdf/aguer073.pdf}
\scriptsize{Ground-truth\hspace{55pt} HBE \hspace{65pt} PLEV \hspace{60pt} Bicubic \hspace{60pt} Lanczos}
\caption{\textbf{Real data. Zooming $\times 2$.} Interpolation of the green channel of a raw image (RGGB). \textbf{Left to right:} Input low-resolution image, result by HBE, PLEV~\cite{aguerrebere14ICCP}, bicubic and Lanczos interpolation.}
\label{fig:realDataZooming}
\end{figure*}
\subsection{Discussion}
In all the previous experiments, the results obtained with HBE outperform or are very close to those obtained by the other evaluated methods. Visually, details are better reconstructed and images are sharper.
We interpret this as the inability of a fixed set of patch classes (19 for PLE) to accurately represent patches that seldom appear in the image, such as edges or textures (as in Barbara's trousers). The fact that methods such as PLE are actually semi-local (classes are estimated on $128 \times 128$ regions~\cite{yu12}) does not solve this issue. On the contrary, a local model estimation such as the one performed by HBE correctly handles those cases.
The performance difference is much more remarkable for the higher masking rates. In such cases, the robustness of the estimation is crucial. Indeed, the proposed method performs the estimation from similar patches in a local window. Since the self-similarity hypothesis is reinforced by considering local neighbourhoods, such a strategy restricts the possible models, therefore making the estimation more robust. Furthermore, the local model estimation, previously shown to be successful at describing patches~\cite{lebrun13}, gives a better reconstruction even when a very large part of the patch is missing.
EPLL uses more mixture components (200 components are learnt from $2 \times 10^6$ patches of natural images~\cite{zoran11}) in its GMM model than PLE. It was observed in~\cite{wang13} that this strategy is less efficient than PLE for the denoising task. In this work, we also observe that the proposed approach outperforms EPLL, not only in denoising, but also in inpainting and zooming. However, here it is harder to tell if the improvement is due to the local model estimation performed from similar patches or to the different restoration strategies followed by these methods.
\section{High dynamic range imaging from a single snapshot}
\label{sec:HDR}
In this section, we apply the proposed framework to generate HDR images from a single shot. HDR imaging aims at reproducing an extended dynamic range of luminosity compared to what can be captured using a standard digital camera, which is often not enough to produce an accurate representation of real scenes. In the case of a static scene and a static camera, the combination of multiple images with different exposure levels is a simple and efficient solution~\cite{debevec97,granados10,aguerrebere13}. However, several problems arise when either the camera or the elements in the scene move~\cite{aguerrebere13ICCP,sidibe2009ghost}.
An alternative to the HDR from multiple frames is to use a single image with spatially varying pixel exposures (SVE)~\cite{nayar00}. An optical mask with spatially varying transmittance is placed adjacent to a conventional image sensor, thus controlling the amount of light that reaches each pixel (see Figure~\ref{fig:HDR_synth})~\cite{nayar00,yasuma10,schoberl13}.
The greatest advantage of this acquisition method is that it avoids the need for image alignment and motion estimation. Another advantage is that the saturated pixels are not organized in large regions. Indeed, some recent multi-image methods tackle motion problems by taking a reference image and then by estimating motion or reconstructing the image relative to this reference~\cite{sen12,aguerrebere13ICCP}. A problem encountered by these approaches is the need to inpaint very large saturated and underexposed regions in the reference frame. The SVE acquisition strategy avoids this problem since, in general, all scene regions are sampled by at least one of the exposures.
Taking advantage of the ability of the proposed framework to simultaneously estimate missing pixels and denoise well-exposed ones, we propose a novel approach to generate HDR images from a single shot acquired with spatially varying pixel exposures. The proposed approach shows significant improvements over existing methods.
\subsection{Spatially varying exposure acquisition model}
\label{sec:model}
An optical mask with spatially varying transmittance is placed adjacent to a conventional image sensor to give different exposure levels to the pixels. This optical mask does not change the acquisition process of the sensor. Hence, the noise model~\eqref{eq:modelZOrig} can be adapted to the SVE acquisition by including the per-pixel SVE gain $o_p$\footnote{Some noise sources not modeled here, such as blooming, might have an impact in the SVE acquisition strategy and should be considered in a more accurate image modeling.}:
\begin{equation}
\Zp(p) \sim \mathcal{N}(\a o_p a_p \tau \mathrm{C}(p) + \mean_R, \a^2 o_p a_p \tau \mathrm{C}(p) + \sigma^2_R).
\label{eq:modelZ}
\end{equation}
In the approach proposed by Nayar and Mitsunaga~\cite{nayar00}, the varying exposures follow a regular pattern. Motivated by the aliasing problems of regular sampling patterns, Sch\"oberl et al.~\cite{schoberl12} propose to use spatially varying exposures on a non-regular pattern. Figure~\ref{fig:HDR_synth} shows examples of both acquisition patterns. This observation led us to choose the non-regular pattern in the proposed approach.
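For illustration purposes only, the following Python sketch generates both kinds of gain masks for four exposure levels. The $2 \times 2$ tiling of the regular pattern and the uniform per-pixel sampling of the non-regular one are assumptions made here for the sake of the example, not the exact layouts used in~\cite{nayar00,schoberl12}.
\begin{verbatim}
import numpy as np

def sve_masks(h, w, gains=(1, 8, 64, 512), seed=0):
    """Return a regular and a non-regular SVE gain mask of size h x w."""
    g = np.asarray(gains, dtype=float)
    # Regular pattern: the four gains tile the image in 2x2 blocks.
    regular = np.tile(g.reshape(2, 2), (h // 2 + 1, w // 2 + 1))[:h, :w]
    # Non-regular pattern: each pixel draws one gain uniformly at random.
    rng = np.random.default_rng(seed)
    nonregular = rng.choice(g, size=(h, w))
    return regular, nonregular
\end{verbatim}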
\begin{figure*}
\centering
\begin{minipage}[c]{.16\linewidth}
\begin{center}
\includegraphics[width=.85\linewidth]{aguer074.png}
\vspace{.4em}
\includegraphics[width=0.48\linewidth]{aguer075.png} \includegraphics[width=0.48\linewidth]{aguer076.png}
\end{center}
\end{minipage}
\begin{minipage}[c]{.83\linewidth}
\begin{center}
\fboxsep=0pt\fboxrule=.8pt\fcolorbox{myGreen}{white}{\includegraphics[width=0.16\linewidth]{../images/nl_ple/eglise1/ext_1/png/aguer077.png}}\hspace{.5pt}\fboxsep=0pt\fboxrule=.8pt\fcolorbox{myRed}{white}{\includegraphics[width=0.16\linewidth]{../images/nl_ple/eglise1/ext_1/png/aguer078.png}}\hspace{.5pt}\includegraphics[width=0.16\linewidth]{../images/nl_ple/eglise1/ext_1/png/aguer079.png}\hspace{.5pt}\includegraphics[width=0.16\linewidth]{../images/nl_ple/eglise1/ext_1/png/aguer080.png}\hspace{.5pt}\includegraphics[width=0.16\linewidth]{../images/nl_ple/eglise1/ext_1/png/aguer081.png}\hspace{.5pt}\includegraphics[width=0.16\linewidth]{../images/nl_ple/eglise1/ext_1/png/aguer082.png}\\
\fboxsep=0pt\fboxrule=.8pt\fcolorbox{myGreen}{white}{\includegraphics[width=0.16\linewidth]{../images/nl_ple/eglise1/ext_6/png/aguer083.png}}\hspace{.5pt}\fboxsep=0pt\fboxrule=.8pt\fcolorbox{myRed}{white}{\includegraphics[width=0.16\linewidth]{../images/nl_ple/eglise1/ext_6/png/aguer084.png}}\hspace{.5pt}\includegraphics[width=0.16\linewidth]{../images/nl_ple/eglise1/ext_6/png/aguer085.png}\hspace{.5pt}\includegraphics[width=0.16\linewidth]{../images/nl_ple/eglise1/ext_6/png/aguer086.png}\hspace{.5pt}\includegraphics[width=0.16\linewidth]{../images/nl_ple/eglise1/ext_6/png/aguer087.png}\hspace{.5pt}\includegraphics[width=0.16\linewidth]{../images/nl_ple/eglise1/ext_6/png/aguer088.png}
\end{center}
\scriptsize{\hspace{30pt} Input \hspace{40pt} Ground-truth \hspace{43pt} HBE{} \hspace{50pt} PLEV \hspace{35pt} Sch\"oberl{} et al. \hspace{10pt} Nayar \& Mitsunaga}
\end{minipage}
\caption{\textbf{Synthetic data.} \textbf{Left:} (\textbf{top}) Tone mapped version of the ground-truth image used for the experiments in Section~\ref{ssec:expSynthHDR}.
(\textbf{bottom}) Regular (left) and non-regular (right) optical masks for an example of 4 different filters. \textbf{Right:} Results for extracts 1 and 6. From left to right: Input image with random pattern, ground-truth, results by HBE{}, PLEV~\cite{aguerrebere14ICCP}, Sch\"oberl{} et al.~\cite{schoberl12}, Nayar and Mitsunaga~\cite{nayar00}. 50\% missing pixels (for both the random and the regular pattern). See PSNR{} values for these extracts in Table~\ref{tab:HDRpsnr}. Please see the digital copy for better detail reproduction.}
\label{fig:HDR_synth}
\end{figure*}
\subsection{Hyperprior Bayesian Estimator for Single Shot High Dynamic Range Imaging}
\label{sec:irrEst}
\subsubsection{Problem statement}
In order to reconstruct the dynamic range of the scene we need to solve an inverse problem. We want to estimate the irradiance image $C$ from the SVE image $\Z$, knowing the exposure levels of the optical mask and the camera parameters. For this purpose we map the raw pixel values to the irradiance domain $\Yp$ with
\begin{equation}
\Yp(p) = \frac{\Zp(p) - \mean_R}{\a o_p a_p \tau}.
\label{eq:Y}
\end{equation}
We take into account the effect of saturation and under-exposure by introducing the exposure degradation matrix $\boldsymbol{D}$, a diagonal matrix given by
\begin{equation}
(\boldsymbol{D})_p = \left\{
\begin{array}{rl}
1 & \text{if } \mean_R < \Zp(p) < z_{sat}, \\
0 & \text{otherwise},
\end{array} \right.
\label{eq:U}
\end{equation}
with $z_{sat}$ equal to the pixel saturation value. From~\eqref{eq:modelZ} and~\eqref{eq:U}, $\Yp(p)$ can be modeled as
\begin{equation}
\Yp(p) | (\boldsymbol{D})_p \sim \mathcal{N} \left( (\boldsymbol{D})_p\mathrm{C}(p),\frac{\a^2 o_p a_p \tau (\boldsymbol{D})_p\mathrm{C}(p) + \sigma^2_R}{(\a o_p a_p \tau)^2} \right).
\label{eq:modelIrr}
\end{equation}
Notice that~\eqref{eq:modelIrr} is the distribution of $\Yp(p)$ for a given exposure degradation factor $(\boldsymbol{D})_p$, since $(\boldsymbol{D})_p$ is itself a random variable that depends on $\Zp(p)$. The exposure degradation factor must be included in~\eqref{eq:modelIrr} since the variance of the over- or underexposed pixels no longer depends on the irradiance $\mathrm{C}(p)$ but is only due to the readout noise $\sigma^2_R$. From~\eqref{eq:modelIrr} we have
\begin{equation}
\Yp = \boldsymbol{D} \mathbf{C} + \mathbf{N},
\label{eq:Ymodel}
\end{equation}
where $\mathbf{N}$ is zero-mean Gaussian noise with diagonal covariance matrix $\S_{\mathbf{N}}$ given by
\begin{equation}
(\S_{\mathbf{N}})_p = \frac{\a^2 o_p a_p \tau (\boldsymbol{D})_p\mathrm{C}(p) + \sigma^2_R}{(\a o_p a_p \tau)^2}.
\end{equation}
Then the problem of irradiance estimation can be stated as retrieving $C$ from $\Yp$, which implies denoising the well-exposed pixel values ($(\boldsymbol{D})_p = 1$) and estimating the unknown ones ($(\boldsymbol{D})_p = 0$).
\subsubsection{Proposed solution}
From~\eqref{eq:Ymodel}, image $\Yp$ satisfies the hypotheses of the HBE framework introduced in Section~\ref{sec:newMethod}. The proposed HDR imaging algorithm consists of the following steps: \textbf{1)} generate $\boldsymbol{D}$ from $\Zp$ according to~\eqref{eq:U}, \textbf{2)} obtain $\Yp$ from $\Zp$ according to~\eqref{eq:Y}, \textbf{3)} apply the HBE approach to $\Yp$ with the given $\boldsymbol{D}$ and $\S_{\mathbf{N}}$.
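For clarity, we summarize these three steps in the following Python sketch. Here \texttt{hbe} stands for the HBE restoration routine of Section~\ref{sec:newMethod} and is treated as a black box, and $\boldsymbol{D}\Yp$ is used as a plug-in estimate of $\mathbf{C}$ when evaluating the noise variance; both are simplifying assumptions made for the sake of the illustration.
\begin{verbatim}
import numpy as np

def hdr_from_sve(z, o, a, alpha, tau, mu_R, sigma2_R, z_sat, hbe):
    # hbe: the HBE restoration routine, here a given black box.
    gain = alpha * o * a * tau                  # per-pixel total gain
    # 1) Exposure degradation mask D: 1 = well exposed, 0 = unknown.
    D = ((z > mu_R) & (z < z_sat)).astype(float)
    # 2) Map raw values to the irradiance domain.
    y = (z - mu_R) / gain
    # Diagonal noise variance of y, with D*y as plug-in for C(p).
    var_N = (alpha * gain * D * np.maximum(y, 0.0) + sigma2_R) / gain**2
    # 3) Denoise well-exposed pixels and estimate the unknown ones.
    return hbe(y, D, var_N)
\end{verbatim}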
\subsection{Experiments}
\label{sec:exps}
The proposed reconstruction method was thoroughly tested on several synthetic and real data examples. A brief summary of the results is presented in this section.
\subsubsection{Synthetic data}
\label{ssec:expSynthHDR}
Sample images are generated according to Model~\eqref{eq:Ymodel} using the HDR image in Figure~\ref{fig:HDR_synth} as the ground-truth. Both a random and a regular pattern with four equiprobable exposure levels $o = \{ 1,8,64,512\}$ are simulated. The exposure time is set to $\tau=1/200$ seconds and the camera parameters are those of a Canon 7D camera set to ISO 200 ($\a=0.87$, $\sigma^2_R=30$, $\mean_R=2048$, $z_{sat}=15000$)~\cite{aguerrebere13}.
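As an indication of how such synthetic samples can be drawn according to~\eqref{eq:modelZ}, consider the following sketch, where the fill factor $a_p$ is assumed to be $1$ (an assumption made here for simplicity):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
alpha, sigma2_R, mu_R, z_sat = 0.87, 30.0, 2048.0, 15000.0
tau, a = 1.0 / 200.0, 1.0      # a: fill factor, assumed 1 here

def simulate_sve(C, o):
    """Draw a raw SVE image from irradiance C and a gain mask o."""
    mean = alpha * o * a * tau * C + mu_R
    var = alpha**2 * o * a * tau * C + sigma2_R
    z = rng.normal(mean, np.sqrt(var))
    return np.clip(z, 0.0, z_sat)  # clipping models saturation
\end{verbatim}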
Figure~\ref{fig:HDR_synth} shows extracts of the results obtained by the proposed method, by PLEV~\cite{aguerrebere14ICCP} (basically an adaptation of PLE to the same single image framework) and by Sch\"oberl{} et al.~\cite{schoberl12} for the random pattern, and by Nayar and Mitsunaga~\cite{nayar00} using the regular pattern. The percentage of unknown pixels in the considered extracts is 50\% (it is nearly the same for both the regular and the non-regular pattern). Table~\ref{tab:HDRpsnr} shows the PSNR values obtained in each extract marked in Figure~\ref{fig:HDR_synth}. The proposed method manages to correctly reconstruct the irradiance on the unknown pixels. Moreover, its denoising performance is much better than that of Sch\"oberl{} et al. and Nayar and Mitsunaga, and its results are sharper than those of PLEV.
\begin{table}
\footnotesize
\setlength{\tabcolsep}{3pt}
\centering
\begin{tabular}[h]{c c c c c c c}
\toprule
& \multicolumn{6}{c}{\textbf{PSNR (dB)}}\\
\cmidrule{2-7}
& 1 (Fig.~\ref{fig:HDR_synth}) & 2 (Fig.~\ref{fig:HDR_synth}) & 3 & 4 & 5 & 6 \\
\cmidrule{1-7}
HBE{} & \textbf{33.08} & \textbf{33.87} & 22.95 & \textbf{35.10} & \textbf{36.80} & \textbf{35.66} \\
PLEV & 29.65 & 30.82 & 22.77 & 33.99 & 36.42 & 34.73 \\
Sch\"oberl{} et al. & 30.38 & 31.16 & 21.39 & 30.04 & 32.84 & 31.02 \\
Nayar and Mitsunaga & 29.39 & 30.10 & \textbf{23.24} & 25.83 & 30.26 & 26.90 \\
\bottomrule
\end{tabular}
\caption{PSNR{} values for the extracts shown in Figure~\ref{fig:HDR_synth}.}
\label{tab:HDRpsnr}
\end{table}
\begin{figure*}
\centering
\begin{minipage}[c]{.24\linewidth}
\begin{center}
\includegraphics[width=.99\linewidth]{../images/nl_ple/palomas/aguer089.jpg}
\vspace{1pt}
\includegraphics[width=.99\linewidth]{../images/nl_ple/palomas/aguer090.png}
\end{center}
\end{minipage}
\begin{minipage}[c]{.75\linewidth}
\begin{center}
\includegraphics[height=2.5cm]{../images/nl_ple/palomas/aguer091.png}\hspace{.5pt}\includegraphics[height=2.5cm]{../images/nl_ple/palomas/aguer092.png}\hspace{.5pt}\includegraphics[height=2.5cm]{../images/nl_ple/palomas/aguer093.png}\hspace{.5pt}\includegraphics[height=2.5cm]{../images/nl_ple/palomas/aguer094.png}\hspace{.5pt}\includegraphics[height=2.5cm]{../images/nl_ple/palomas/aguer095.png}\\
\vspace{1pt}
\includegraphics[height=2.5cm]{../images/nl_ple/palomas/aguer096.png}\hspace{.5pt}\includegraphics[height=2.5cm]{../images/nl_ple/palomas/aguer097.png}\hspace{.5pt}\includegraphics[height=2.5cm]{../images/nl_ple/palomas/aguer098.png}\hspace{.5pt}\includegraphics[height=2.5cm]{../images/nl_ple/palomas/aguer099.png}\hspace{.5pt}\includegraphics[height=2.5cm]{../images/nl_ple/palomas/aguer100.png}\\
\end{center}
\end{minipage}
\caption{\textbf{Real data.} \textbf{Left:} Tone mapped version of the HDR{} image obtained by the proposed approach and its corresponding mask of unknown (black) and well-exposed (white) pixels. \textbf{Right:} Comparison of the results obtained by the proposed approach (first row) and PLEV (second row) in the extracts indicated in the top image. Please see the digital copy for better details reproduction.}
\label{fig:realExpsPalomas}
\end{figure*}
\subsubsection{Real data}
The feasibility of the SVE random pattern has been shown in~\cite{schoberl13} and that of the SVE regular pattern in~\cite{yasuma10}. Nevertheless, these acquisition systems are still not available for general usage.\footnote{
While writing the last version of this article, the authors became aware of Sony's latest sensor IMX378.
This sensor has a special mode called SME-HDR, which is a variation of the SVE acquisition principle.
Although this sensor was adopted by the Google Pixel smartphone in 2016, the special SME-HDR mode is never activated by the phone, according to experts from the company DxO, and we found no way to activate it and access the raw image.
}
However, as stated in Section~\ref{sec:model}, the only difference between the classical and the SVE acquisition is the optical filter. Hence, the noise at a pixel captured using SVE with an optical gain factor $o_p$ and exposure time $\tau/o_p$ should be very close to that at a pixel captured with a classical camera using exposure time $\tau$. We take advantage of this fact in order to evaluate the reconstruction performance of the proposed approach using real data.
For this purpose, we generate an SVE image $\z_{sve}$ from four raw images $\{ \z_{raw}^i \}_{i=1,\dots,4}$ acquired with different exposure times. The four different exposure times simulate four different filters of the SVE optical mask. The value at position $(x,y)$ in $\z_{sve}$ is chosen at random among the four available values at that position $\{ \z_{raw}^i(x,y) \}_{i=1,\dots,4}$. Notice that the Bayer pattern is kept on $\z_{sve}$ by construction. The images $\{ \z_{raw}^i \}_{i=1,\dots,4}$ are acquired using a remotely controlled camera and a tripod so as to be perfectly aligned.
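The per-pixel random selection can be written compactly as follows (a sketch, with \texttt{z\_raw} stacking the four aligned raw images; variable names are ours):
\begin{verbatim}
import numpy as np

def sve_from_exposures(z_raw, seed=0):
    """z_raw: array of shape (4, H, W) with four aligned raw images.
    Each output pixel is picked at random among the four exposures,
    so the Bayer pattern is preserved by construction."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, z_raw.shape[0], size=z_raw.shape[1:])
    return np.take_along_axis(z_raw, idx[None], axis=0)[0]
\end{verbatim}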
This protocol does not allow us to capture scenes with moving objects. Let us emphasize, however, that with a real SVE device neither moving objects nor camera motion would be an issue.
Figures~\ref{fig:realExpsPalomas} and~\ref{fig:realExpsTelecom} show the results obtained from two real scenes, together with the masks of well-exposed (white) and unknown (black) pixels (the SVE raw images are included in Appendix B in the supplementary material). Recall that among the unknown pixels, some are saturated and some are underexposed. Square patches of size 6 and 8 were used for the examples in Figures~\ref{fig:realExpsTelecom} and~\ref{fig:realExpsPalomas}, respectively. Demosaicing~\cite{hamilton97} and tone mapping~\cite{mantiuk08} are used for displaying purposes.
We compare the results to those obtained by PLEV~\cite{aguerrebere14ICCP}. A comparison against the methods by Nayar and Mitsunaga and Sch\"oberl et al. is not presented since they do not specify how to treat raw images with a Bayer pattern. The proposed method manages to correctly reconstruct the unknown pixels even in extreme conditions where more than $70\%$ of the pixels are missing, as for example the last extract in Figure~\ref{fig:realExpsTelecom}.
These examples show the suitability of the proposed approach to reconstruct the irradiance information in both very dark and bright regions simultaneously. See for instance the example in Figure~\ref{fig:realExpsTelecom}, where the dark interior of the building (which can be seen through the windows) and the highly illuminated part of another building are both correctly reconstructed (see the electronic version of the article for better visualization).
\begin{figure*}
\centering
\begin{minipage}[c]{.24\linewidth}
\begin{center}
\includegraphics[width=.99\linewidth]{../images/nl_ple/telecom/aguer101.jpg}
\vspace{1pt}
\includegraphics[width=0.99\linewidth]{../images/nl_ple/telecom/aguer102-small.png}
\end{center}
\end{minipage}
\begin{minipage}[c]{.75\linewidth}
\begin{center}
\includegraphics[height=2.5cm]{../images/nl_ple/telecom/aguer103.png} \includegraphics[height=2.5cm]{../images/nl_ple/telecom/aguer104.png} \includegraphics[height=2.5cm]{../images/nl_ple/telecom/aguer105.png} \includegraphics[height=2.5cm]{../images/nl_ple/telecom/aguer106.png}\\
\vspace{2pt}
\includegraphics[height=2.5cm]{../images/nl_ple/telecom/aguer107.png} \includegraphics[height=2.5cm]{../images/nl_ple/telecom/aguer108.png} \includegraphics[height=2.5cm]{../images/nl_ple/telecom/aguer109.png} \includegraphics[height=2.5cm]{../images/nl_ple/telecom/aguer110.png}
\end{center}
\end{minipage}
\caption{\textbf{Real data.} \textbf{Left:} Tone mapped version of the HDR{} image obtained by the proposed approach and its corresponding mask of unknown (black) and well-exposed (white) pixels. \textbf{Right:} Comparison of the results obtained by the proposed approach (first row) and PLEV (second row) in the extracts indicated in the top image. Please see the digital copy for better details reproduction.}
\label{fig:realExpsTelecom}
\end{figure*}
\section{Conclusions}
\label{sec:conclusions}
In this work we have presented a novel image restoration framework. It has the benefits of local patch characterization (which was key to the success of NLB as a state-of-the-art denoising method), but manages to extend its use to more general restoration problems where the linear degradation operator is diagonal, by combining local estimation with Bayesian restoration based on hyperpriors. This includes problems such as zooming, inpainting and interpolation. In this way, all these restoration problems are set under the same framework. It does not include image deblurring or deconvolution, since there the degradation operator is no longer diagonal. Correctly addressing deconvolution with large kernels with patch-based approaches and Gaussian prior models is a major challenge that will be the subject of future work.
We have presented a large series of experiments, both on synthetic and real data, that confirm the robustness of the proposed strategy based on hyperpriors. These experiments show that for a wide range of image restoration problems HBE{} outperforms several state-of-the-art restoration methods.
This work opens several perspectives. The first one concerns the relevance of the Gaussian patch model and its relation to the underlying manifold of image patches. While this linear approximation has proven successful for image restoration, its full relevance in other areas remains to be explored, especially in domains requiring the comparison of image patches.
Another important related question is that of estimating the degradation model in images jointly degraded by noise, missing pixels, blur, etc. Restoration approaches generally rely on precise knowledge of this model and of its parameters. In practice, however, we often deal with images for which the acquisition process is unknown, and which have possibly been affected by post-processing. In such cases, blind restoration remains an unsolved challenge.
Finally, we have presented a novel application of the proposed general framework to the generation of HDR images from a single spatially varying exposure (SVE) snapshot. The SVE acquisition strategy allows the creation of HDR images from a single shot without the drawbacks of multi-image approaches, such as the need for global alignment and motion estimation to avoid ghosting problems. The proposed method manages to simultaneously denoise and reconstruct the missing pixels, even in the presence of (possibly complex) motions, improving the results obtained by existing methods. Examples with real data acquired in conditions very similar to those of the SVE acquisition show the capabilities of the proposed approach.
\appendices
\section{Alternate minimization scheme convergence}
\label{ap:mapEst}
We study in the following the convergence of the alternate minimization scheme of Algorithm~\ref{algo:alternate}.
To show the main convergence result, we need the following lemma.
\begin{lemma}
The function $f$ is coercive on $\mathbb{R}^{n(M+1)}\times S_n^{++}(\mathbb{R})$.
\end{lemma}
\begin{proof}
We need to show that
\begin{align*}
\lim_{\|(\{\mathbf{C}_i\},\boldsymbol{\mu},\L )\|\rightarrow +\infty}f(\{\mathbf{C}_i\},\boldsymbol{\mu},\L ) &= +\infty.
\end{align*}
Now, $\|(\{\mathbf{C}_i\},\boldsymbol{\mu},\L )\|\rightarrow +\infty$ if and only if $\|{\mathbf{C}_i}\| \rightarrow +\infty$ or $\|\boldsymbol{\mu}\|\rightarrow +\infty$ or $\|\L\|\rightarrow +\infty$.
The matrix $\L$ being positive-definite, the terms $\frac 1 2 \sum_{i=1}^M (\mathbf{C}_i - \boldsymbol{\mu})^T \L (\mathbf{C}_i - \boldsymbol{\mu})$ and $\frac {\kappa}{2}(\boldsymbol{\mu}-\boldsymbol{\mu}_0)^T\L(\boldsymbol{\mu}-\boldsymbol{\mu}_0)$ are both positive. Thus
\begin{eqnarray*}
f( \{\mathbf{C}_i\},\boldsymbol{\mu},\L )
&\geq& -{\frac{\nu -n +M}{2}}\log |\L|\\
&+&\frac {1}{2}\mathrm{trace}[\nu\S_0 \L ].
\end{eqnarray*}
Now, this function of $\L$ is convex and coercive on $S_n^{++}(\mathbb{R})$, which implies that $f( \{\mathbf{C}_i\},\boldsymbol{\mu},\L )\rightarrow +\infty$ as soon as $\|\L\|\rightarrow +\infty$. It also follows that the previous function of $\L$ has a global minimum that we denote by $m_{\L}$.
We can now write
\begin{align*}
f(\{\mathbf{C}_i\},\boldsymbol{\mu},\L ) &\geq m_{\L} +\frac 1 2 \sum_{i=1}^M (\mathbf{C}_i - \boldsymbol{\mu})^T \L (\mathbf{C}_i - \boldsymbol{\mu}) \\
&+\frac {\kappa}{2}(\boldsymbol{\mu}-\boldsymbol{\mu}_0)^T\L(\boldsymbol{\mu}-\boldsymbol{\mu}_0)
\end{align*}
and this function of $(\{\mathbf{C}_i\},\boldsymbol{\mu})$ clearly tends towards $+\infty$ as soon as $\|{\mathbf{C}_i}\| \rightarrow +\infty$ or $\|\boldsymbol{\mu}\|\rightarrow +\infty$.
\end{proof}
We now show the main convergence result for our alternate minimization algorithm. The proof adapts the arguments in~\cite{Gorski2007} to our case.
\addtocounter{proposition}{-1}
\begin{proposition}
The sequence $f( \{\mathbf{C}_i^l\},\boldsymbol{\mu}^l,\L^l )$ converges monotonically when $l\rightarrow +\infty$. The sequence $\{\{\mathbf{C}_i^l\},\boldsymbol{\mu}_l,\L^l\}$ generated by the alternate minimization scheme has at least one accumulation point. The set of its accumulation points forms a connected and compact set of partial optima and stationary points of $f$, all having the same function value.
\end{proposition}
\begin{proof}
The sequence $f( \{\mathbf{C}_i^l\},\boldsymbol{\mu}^l,\L^l )$ obviously decreases at each step by construction. The coercivity and continuity of $f$ imply that this sequence is also bounded from below, and thus converges. The convergence of $f( \{\mathbf{C}_i^l\},\boldsymbol{\mu}^l,\L^l )$ implies that the sequence $\{\{\mathbf{C}_i^l\},\boldsymbol{\mu}_l,\L^l\}$ is bounded. It follows that it has at least one accumulation point $(\{\mathbf{C}_i^{\star}\},\boldsymbol{\mu}^{\star},\L^{\star})$ and that there exists a strictly increasing sequence $(l_k)_{k\in \sN}$ of integers such that $\{\{\mathbf{C}_i^{l_k}\},\boldsymbol{\mu}^{l_k},\L^{l_k}\}_{k\in \sN}$ converges to $\{\{\mathbf{C}_i^{\star}\},\boldsymbol{\mu}^{\star},\L^{\star}\}$.
Now, we can show that such an accumulation point is a partial optimum of $f$, \textit{i.e.} that $f(\{\mathbf{C}_i^{\star}\},\boldsymbol{\mu}^{\star},.)$ attains its minimum at $\L^*$ and $f(.,.,\L^*)$ attains its minimum at $(\{\mathbf{C}_i^{\star}\},\boldsymbol{\mu}^{\star})$. By construction,
\begin{align*}
f(\{\mathbf{C}^{l_{k}}\},\boldsymbol{\mu}^{l_{k}},\L^{l_{k}}) \leq f(\{\mathbf{C}^{l_{k}}\},\boldsymbol{\mu}^{l_{k}},\L), \;\;\;\forall \L \in S_n^{++}(\mathbb{R})
\end{align*}
which implies by continuity of $f$ that
\begin{align}
\label{eq:partial1}
f(\{\mathbf{C}^{*}\},\boldsymbol{\mu}^{*},\L^{*}) = \min_{\L \in S_n^{++}(\mathbb{R})} f(\{\mathbf{C}^{*}\},\boldsymbol{\mu}^{*},\L).
\end{align}
Let us denote $G(\{\mathbf{C}\},\boldsymbol{\mu},\L) = (\{\mathbf{C}'\},\boldsymbol{\mu}',\L')$ with
\begin{align*}
(\{\mathbf{C}'\},\boldsymbol{\mu}') &= \argmin_{(\{\mathbf{C}\},\boldsymbol{\mu})} f(\{\mathbf{C}\},\boldsymbol{\mu},\L)\\
\L' &= \argmin_{\L} f(\{\mathbf{C}'\},\boldsymbol{\mu}',\L).
\end{align*}
The alternate minimization scheme consists in applying $G$ at each iteration.
From Equations~\eqref{eq:muhat},~\eqref{eq:Ci1}, and~\eqref{eq:Lhat}, we see that $G$ is explicit and continuous. Since $\{l_k\}_{k\in \sN}$ is strictly increasing, for each $k\in{\sN}^*$, $l_k \geq l_{k-1}+1$. The sequence $\{f( \{\mathbf{C}_i^l\},\boldsymbol{\mu}^l,\L^l )\}_{l\in\sN}$ decreases, so
\begin{align*}
f(G(\{\mathbf{C}^{l_{k-1}}\},\boldsymbol{\mu}^{l_{k-1}},\L^{l_{k-1}})) &=
f(\{\mathbf{C}^{l_{k-1}+1}\},\boldsymbol{\mu}^{l_{k-1}+1},\L^{l_{k-1}+1}) \\
&\geq f(\{\mathbf{C}^{l_{k}}\},\boldsymbol{\mu}^{l_{k}},\L^{l_{k}})\\
&\geq f(G(\{\mathbf{C}^{l_{k}}\},\boldsymbol{\mu}^{l_{k}},\L^{l_{k}})).
\end{align*}
Therefore, as $k\rightarrow +\infty$, since $G$ is continuous, it follows that
\begin{align*}
f(G(\{\mathbf{C}^*\},\boldsymbol{\mu}^*,\L^*)) &= f(\{\mathbf{C}^*\},\boldsymbol{\mu}^*,\L^*).
\end{align*}
Now, writing $(\{\mathbf{C}^{**}\},\boldsymbol{\mu}^{**},\L^{**}) = G(\{\mathbf{C}^*\},\boldsymbol{\mu}^*,\L^*)$, we get
\begin{align*}
f(\{\mathbf{C}^*\},\boldsymbol{\mu}^*,\L^*) &\geq \min_{(\{\mathbf{C}\},\boldsymbol{\mu})} f(\{\mathbf{C}\},\boldsymbol{\mu},\L^*)=f(\{\mathbf{C}^{**}\},\boldsymbol{\mu}^{**},\L^{*})\\
&\geq \min_{\L} f(\{\mathbf{C}^{**}\},\boldsymbol{\mu}^{**},\L)= f(\{\mathbf{C}^{**}\},\boldsymbol{\mu}^{**},\L^{**}).
\end{align*}
We can conclude that all these terms are equal and in particular
\begin{align}
\label{eq:partial2}
f(\{\mathbf{C}^*\},\boldsymbol{\mu}^*,\L^*) &= f(\{\mathbf{C}^{**}\},\boldsymbol{\mu}^{**},\L^{*}) = \min_{(\{\mathbf{C}\},\boldsymbol{\mu})} f(\{\mathbf{C}\},\boldsymbol{\mu},\L^{*}).
\end{align}
From~\eqref{eq:partial1} and~\eqref{eq:partial2} we deduce that the accumulation point $(\{\mathbf{C}^{\star}\},\boldsymbol{\mu}^{\star},\L^{\star})$ is a partial optimum of $f$ and since $f$ is differentiable, it is also a stationary point of $f$.
Moreover, since $f(.,.,\L^*)$ is strictly convex and has a unique minimum, it follows from~\eqref{eq:partial2} that $(\{\mathbf{C}^{**}\},\boldsymbol{\mu}^{**}) = (\{\mathbf{C}^*\},\boldsymbol{\mu}^*)$. As a consequence,
$\L^{**} = \argmin_{\L} f(\{\mathbf{C}^{**}\},\boldsymbol{\mu}^{**},\L) = \argmin_{\L} f(\{\mathbf{C}^{*}\},\boldsymbol{\mu}^{*},\L) = \L^{*}$. Therefore, the accumulation point $(\{\mathbf{C}^{\star}\},\boldsymbol{\mu}^{\star},\L^{\star})$ (and actually any accumulation point of the sequence) is also a fixed point of function $G$.
We have shown that accumulation points of the sequence $\{\{\mathbf{C}^l\},\boldsymbol{\mu}^l,\L^l \}$ are partial optima of $f$ and fixed points of the function $G$. The set of accumulation points is obviously compact.
Let us show that it is also a connected set. First, observe that for any norm $\|\cdot\|$, the sequence $\|(\{\mathbf{C}^{l+1}\},\boldsymbol{\mu}^{l+1},\L^{l+1} ) - (\{\mathbf{C}^l\},\boldsymbol{\mu}^l,\L^l)\|$ converges to $0$ when $l\rightarrow \infty$. If this were not the case, one could extract a subsequence $(\{\mathbf{C}^{l_k}\},\boldsymbol{\mu}^{l_k},\L^{l_k} )$ converging to an accumulation point $(\{\mathbf{C}^{*}\},\boldsymbol{\mu}^{*},\L^{*} )$ while $(\{\mathbf{C}^{l_k+1}\},\boldsymbol{\mu}^{l_k+1},\L^{l_k+1} )$ converges to a different accumulation point $(\{\mathbf{C}^{'}\},\boldsymbol{\mu}^{'},\L^{'} )$; but this is impossible, since $(\{\mathbf{C}^{l_k+1}\},\boldsymbol{\mu}^{l_k+1},\L^{l_k+1} ) = G(\{\mathbf{C}^{l_k}\},\boldsymbol{\mu}^{l_k},\L^{l_k} )$ would also tend toward $(\{\mathbf{C}^{*}\},\boldsymbol{\mu}^{*},\L^{*} )$ by continuity of $G$. The sequence $(\{\mathbf{C}^{l}\},\boldsymbol{\mu}^{l},\L^{l} )$ being bounded and such that $\|(\{\mathbf{C}^{l+1}\},\boldsymbol{\mu}^{l+1},\L^{l+1} ) - (\{\mathbf{C}^l\},\boldsymbol{\mu}^l,\L^l)\|$ converges to $0$, the set of its accumulation points is connected (see~\cite{ostrowski1960solution}).
The fact that all accumulation points have the same function value is obvious since the sequence $\{f( \{\mathbf{C}^l\},\boldsymbol{\mu}^l,\L^l )\}_{l\in\sN}$ decreases.
\end{proof}
\section*{Acknowledgement}
We would like to thank the reviewers for their thorough and fruitful contributions, as well as Mila Nikolova and Alasdair Newson for their insightful comments and the authors of~\cite{lebrun13IPOL,wang13b,zoran11_web} for kindly providing their code.
This work has been partially funded by the French Research Agency (ANR) under grant nro ANR-14-CE27-001 (MIRIAM), and by the ``Investissement d'avenir'' project, reference ANR-11-LABX-0056-LMH.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\subsubsection*{Author contributions}
J.B.: Developed \& deployed model, analyzed results, wrote manuscript.\\ \setlength\parskip{2pt}
G.B.: Developed idea, approved data, reviewed manuscript.\\
M.C.: Developed idea (medical application), analyzed results, wrote manuscript.\\
M.D.: Collected and processed data, contributed to the writing.\\
J.G.: Contributed to the background orientation, the data collection, the interpretation of the statistical analysis of the model, and the writing.\\
J.R.: Developed the web application and the architecture of the project.\\
N.W.: Analyzed results, deployed model, pre-processed data, wrote manuscript.\\
\end{scriptsize}
\section{Introduction}
To date, SARS-CoV-2 has infected several millions and killed hundreds of thousands around the globe.
Due to its long incubation time, fast, accurate and reliable techniques for early disease diagnosis are key in successfully fighting the spread~\cite{li2020early}.
The standard genetic test, reverse transcription polymerase chain reaction (RT-PCR), is characterized by high reliability (at least in most countries) but a relatively long processing time (more than an hour).
Alternatively, fast serology tests are in early stages of development, and are based on antibodies that the immune system only produces in an advanced stage of the disease.
It is of great concern, however, that several publications have reported the problem of false negatives thrown by molecular genetics and immunological tests~\cite{Imm1, RNA1, RNA2,kanne2020essentials}.
Overall, global containment efforts suffer from bottlenecks in diagnosis due to partially unreliable tests and the limited availability of the necessary testing equipment; a situation exacerbated by often asymptomatic, yet infected patients that are not properly managed due to the lack of precision in the overall testing process~\cite{Imm1}.
\begin{comment}
\subsection{SARS-CoV-2 molecular virology}
Coronavirus (SARS-CoV-2) is a positive sense single stranded ribonucleic acid enveloped virus \cite{li2020molecular}. It received the appellation coronavirus because of the spatial configuration of its particles \cite{zu2020coronavirus}, with the spikes anchored to the membrane forming a structure like a corona \cite{li2020molecular}. The viral genome (RNA) contains at least 10 open reading frames (ORF). The first two (ORF1a and ORF1b), that also represent almost 66\% of the total genome, are responsible for the production of the viral replicase transcriptase complex \cite{fehr2015coronaviruses}. This complex will intervene in the production of new copies of the viral genome through transcription and replication \cite{li2020molecular}. The other ORFs, that account for the residual 33\%, are responsible for the production of the viral structural proteins such us spike (S), envelope (E), nucleocapsid (N) and membrane (M) proteins along with others with not yet known roles \cite{li2020molecular}, \cite{zu2020coronavirus}. Considerable evidence from studies of the virus suggests that, to produce the infection, it needs to bind to the angiotensin-converting enzyme 2 (ACE2) in human cells first. That will allow the virus to enter the cell, more specifically alveolar epithelial cells, and complete its replication inside \cite{sun2020understanding}.
\subsection{Diagnostic routine}
To be able to determine if a person is infected constitutes a very challenging process.
In symptomatic cases, typical indicators including fever, non-productive cough, dyspnea, myalgia, fatigue and normal or decreased leukocyte counts among others factors \cite{huang2020clinical,li2020molecular}.
We can also determine if there is viral RNA in the patient by employing a molecular biology technique that basically retro-transcribes the RNA to DNA. Once the DNA is produced, a probe that is directed to viral genes (commonly OFR1 and N) gets amplified with sections of the viral genes. For the amplification to occur, first the probe needs to bind with high specificity to a sequence of one of the previously mentioned target genes \cite{chu2020molecular}.
There is also research in the area of IGM and IGG detection as a supporting diagnostic tool. Since the production of these two immunoglobulins starts to rise once the host reacts to the viral infection, they were also proposed as complementary biomedical tools. Several tests were performed with ELISA and gold immunochromatographic assay (GICA) for IGM and IGG. Both methods registered sensitivities around 80 percent, without significant differences between them \cite{xiang2020evaluation}.
\end{comment}
\paragraph{Biomedical imaging.}
In this context, biomedical imaging techniques have great potential to complement the conventional diagnostic techniques of COVID-19 such as molecular biology (RT-PCR) and immune (IGM/G) assays.
Specifically, imaging can be a viable tool in the detection process by providing a fast assessment of patients in order to guide the selection of subsequent molecular and immunological tests.
Indeed, two studies reported that CT scans can detect COVID-19 at a higher sensitivity (98\% and 88\%, respectively) than RT-PCR (71\% and 59\%) in cohorts of 51~\cite{fang2020sensitivity} and 1014 patients~\cite{CThigh}.
Note that sensitivity of RT-PCR varies heavily across countries, ranging from 65\%~\cite{kanne2020essentials} to 96\%~\cite{mossa2020radiology}.
While CT scans are the gold standard for pneumonia detection~\cite{bourcier2014performance}, X-ray scans are still the first line examination, despite their low specificity and sensitivity~\cite{niederman2001guidelines,weinstock2020chest}, whereas ultrasound (US) has only received growing attention in the last few years~\cite{gehmacher1995ultrasound} and achieved promising results~\cite{gazon2011agreement}.
In this contribution, we emphatically make the case for a more prominent role of the latter, based on clear evidence for the diagnostic value of lung ultrasound (US), which is provided in much detail in a comparison of CT, X-ray and US in~\autoref{sec:related}.
\paragraph{Lung point-of-care ultrasound (POCUS).}
Note first that lung ultrasound is already an established method for monitoring pneumonia and related lung diseases~\cite{gehmacher1995ultrasound,pagano2015lung,chavez2014lung}.
It has been suggested as the preferred diagnosis technique for lung infections, especially in resource limited settings such as emergency situations or low-income countries~\cite{amatya2018diagnostic} and it has started to replace X-ray as first-line examination~\cite{bourcier2014performance,gazon2011agreement,lichtenstein2004comparative,bourcier2016lung}.
Although literature on the applicability of ultrasound for COVID-19 is still scarce, a growing body of evidence for disease-specific patterns in US has lead to advocacy for an amplified role of US in the research community~\cite{buonsenso2020covid,soldati2020there,smith2020point,sofia2020thtoracic}.
The strengths of POCUS are numerous and include its simplicity of execution, its ease of repeatability, its non-invasiveness, its execution without relocation and its ease of disinfection at the bedside.
The devices are small and portable and can be wrapped in single-use plastics to reduce the risk of contamination and to facilitate sterilization procedures.
Moreover, US is very cost-effective, with an estimated \$140 for a US examination compared to \$370 for a chest X-ray \cite{jones2016feasibility} and \$675--\$8600 for a chest CT \cite{chestCTsource}. The low price of the device itself, starting from \$2000,
facilitates the distribution to hospitals and primary care centers \cite{soldati2020proposal}.
The diagnostic routine can be accelerated by connecting the device to a cloud service and uploading the recordings automatically.
As a tool that is ubiquitously available even in sparsely equipped medical facilities and that can serve not only for diagnosis but also for monitoring disease evolution on a daily basis, POCUS is the ideal biomedical imaging tool in the current crisis.
The growing expectations of POCUS are perhaps best exemplified by the NIH's recently launched large-scale initiative on "Point Of Care UltraSonography (POCUS) for risk stratification of COVID-19 patients" (accessible at: \url{https://clinicaltrials.gov/ct2/show/NCT04338100}).
However, this initiative is launched without the availability of an automatic tool that can assist in COVID-19 detection and patient stratification.
Therefore, there is an evident need for an open-source framework that pools COVID-19 ultrasound scans from worldwide sources and exploits the power of deep learning to develop a system that can complement the work of physicians in a timely manner.
\paragraph{Automatic detection.}
In the last months, a myriad of preprints attempting to use machine learning for biomedical image analysis for COVID-19 has appeared, but to the best of our knowledge they exclusively focus on X-ray or CT (for reviews see~\cite{shi2020review,ulhaq2020computer,kalkreuth2020covid,pham2020artificial}) and neither a publication nor a preprint has used ultrasound data for automatic COVID-19 detection.
Here, we aim to close this gap with a first approach of training a deep learning model to detect COVID-19 on POCUS images.
It is crucial to note that medical doctors must be trained thoroughly to reliably differentiate COVID-19 from pneumonia and that the relevant patterns are hard to discern for the human eye~\cite{ng2020imaging}.
Therefore, automatic detection is highly relevant as it has been shown to reduce the time doctors invest to make a diagnosis~\cite{shan2020lung}.
\paragraph{Our contribution.}
In this work, we propose the first framework for automatized detection of COVID-19 on US images.
Our study is in line with others demonstrating that deep learning can be a promising tool to detect COVID-19 from CT~\cite{li2020artificial} or X-ray~\cite{wang2020covid}.
Our contributions can be summarized in the following three steps:
\begin{enumerate}[leftmargin=*,align=left]
\item We publish the first dataset of lung POCUS recordings of COVID-19, pneumonia and healthy patients. The collected data is heterogeneous, but was pre-processed manually to remove artifacts and checked by a medical doctor for its quality.
\item We trained a convolutional neural network (that we dub \texttt{POCOVID-Net}) on the available data and evaluated it in 5-fold cross validation.
We report a classification accuracy of 89\% and a sensitivity to detect COVID-19 of 96\%.
The model demonstrates the diagnostic value of the collected data and the applicability of deep learning for US images.
\item We offer a free web service that first promotes clinical data collection by giving users the possibility to upload data, and secondly provides an interface to our trained model.
\end{enumerate}
\section{Related work}\label{sec:related}
To outline the background of the application of biomedical imaging techniques for COVID-19 diagnosis, we first compare the information content of the three main methods, and then report of previous attempts to automatize the detection.
\subsection{Biomedical imaging for COVID-19}
Three biomedical imaging sources are considered of interest for screening, diagnostics and management of COVID-19:
\paragraph{Chest X-Ray.}
Although chest X-rays have traditionally been heavily used for diagnosing lung conditions, they are not well suited to detect COVID-19 at early stages~\cite{chen2020epidemiological}; for example, \cite{weinstock2020chest} found that 89\% of 493 COVID-19 patients had normal or only mildly abnormal X-ray scans.
Instead, it is a reliable tool to evidence bilateral multifocal consolidation, partially fused into massive consolidation with small pleural effusions and ``white lung''~\cite{chinese2020radiological}.
However, multiple studies demonstrated the superiority of ultrasound imaging in detecting pneumonia and related lung conditions~\cite{bourcier2014performance,bourcier2016lung,reali2014can,claes2017performance}.
\paragraph{Computed tomography (CT).}
CT presents a more viable technique for early COVID-19 detection and has been the most promoted screening tool so far~\cite{bao2020covid,lee2020covid}.
Reviews report high detection rates among symptomatic individuals~\cite{bao2020covid, salehi2020coronavirus}, as CT can unveil air space consolidation, traction bronchiectasis, paving appearance and bronchovascular thickening~\cite{wang2020clinical,pan2020time}. Also, (multifocal) ground glass opacities (GGO) were observed especially frequently, often bilateral and with consolidations and prominent peripherally subpleaural distribution \cite{kanne2020chest}.
GGO are zones of increased attenuation that usually appear in several interstitial and alveolar processes with conservation of the bronchial and vascular margins \cite{franquet2011imaging}.
However, CT involves evident practical downsides such as exposing the patient to excessive radiation, high cost, the availability of sophisticated equipment (only $\sim$30,000 machines exist globally~\cite{castillo2012industry}), the need for extensive sterilization~\cite{fiala2020brief,mossa2020radiology} and patient mobilisation.
\paragraph{Ultrasound.}
Ultrasound can evidence pleural and interstitial thickening, subpleural consolidation and other physiological phenomena linked to changes in lung structure when the infection is in early stages
\cite{buonsenso2020covid}. Studies report abnormalities in bilateral B-lines (hydroaeric comet-tail artifacts arising from the pleural line), as well as identifiable lesions in the bilateral lower lobes as the main characteristics to enable COVID-19 detection~\cite{huang2020preliminary, peng2020findings}. In~\autoref{fig:overview} an example of B-lines visible in the US image of a COVID-19 patient is shown.
In a review, \cite{fiala2020brief} observed great agreement between ultrasound and CT when monitoring COVID-19 patients (especially between B-lines in ultrasound and GGOs in CT~\cite{poggiali2020can}) and concluded a high potential for ultrasound in evaluating early lung patients, especially to guide subsequent testing in triage situations.
Others reported concordance between US and chest recordings in 7/8 monitored adolescents with COVID-19, while the last patient had a normal radiography despite irregular US~\cite{sofia2020thtoracic}.
For a review on the timeline of US findings in relation to CT see~\cite{fiala2020ultrasound}.
\subsection{Automatic detection of
COVID-19}
While to the best of our knowledge no work has been published so far on the detection on ultrasound images, various work exists on automatic inference on CT and X-Ray scans.
\paragraph{Data collection initiatives.}
Perhaps the most significant initiative is from Cohen et al.~\cite{cohen2020covid}, who started building an open-access database of X-ray (and now also CT) images that, to date, contains $\sim$150 COVID-19 images.
Using deep convolutional neural networks, many have claimed strong performances on the X-ray data of Cohen et al., ranging from
91\% up to 98\%~\cite{luz2020towards,hall2020finding,narin2020automatic,alqudahcovid,zhang2020covid,abbas2020classification,bukhari2020diagnostic}.
Besides that, the \texttt{COVID-Net} open source initiative presented the \texttt{COVIDx} dataset of chest radiography (X-ray) data~\cite{wang2020covid} that was assembled from~\cite{cohen2020covid} and other sources, resulting in 183 COVID-19 images across a total of $\sim$13k samples. While the authors report 92\% accuracy~\cite{wang2020covid}, others reported higher numbers with more refined models~\cite{farooq2020covid, afshar2020covid}.
Regarding CT imaging,~\cite{zhao2020covid} published a database of 275 COVID-19 CT scans and reported an accuracy of 85\%.
In light of these seemingly convincing performances, it needs to be emphasized that these databases still contain rather limited number of samples with suboptimal quality. \cite{wynants2020prediction} review 13 models proposed for the detection of COVID-19 on CT scans, and conclude that most are at high risk of bias due to qualitative and quantitative pitfalls in their data.
Since at the same time the amount of proprietary samples is rising quickly, we encourage all responsible hospitals and decision makers to contribute their data to open-access initiatives.
\input{table_data}
\vspace{-2mm}
\paragraph{Reports on propriertary data.}
Further preprints gathered their CT data independently and reported accuracy between 80\% and 90\% on 400-2700 slices~\cite{wang2020deep,xu2020deep,shi2020large} or even accuracy up to 95\% on $\sim$2000 or more slices~\cite{song2020deep,wu2020jcs,gozes2020rapid}.
Apart from simple classification models, segmentation techniques were also employed to identify infected areas and to infer the disease progression state, accelerating the inspection time of doctors~\cite{shan2020lung, chen2020deep}.
Similarly,~\cite{li2020artificial} collected a dataset of 4,356 chest CT scans from 3,322 patients and trained a deep convolutional neural network (\texttt{COVNet}) that could differentiate COVID-19 from community-acquired pneumonia and regular scans with a ROC-AUC of 0.96 (sensitivity 90\%, specificity 96\%).
However, to date, the data underlying all these efforts remain unavailable to the public.
\section{A lung US dataset for COVID-19 detection}
We assembled and pre-processed a dataset of a total of 64 lung POCUS video recordings, divided into 39 videos of COVID-19, 14 videos of (typical bacterial) pneumonia and 11 videos of healthy patients. Note that despite the novelty of the disease, COVID-19 images account for 60\% of our dataset.
So far, we have restricted ourselves to convex ultrasound probes.
Linear and convex probes are the most standard ones in medical services, and we are focusing on the latter because more data was available.
The linear probe is a higher-frequency probe and gives better resolution images, but with less tissue penetration and therefore more superficial images. The convex probe is more suitable for deep organs (abdomen, fetal ultrasound, etc.) or for obese patients. Usually, linear probes are preferred for lung ultrasound, but in practice most medical facilities are equipped with a curved probe that can be used for everything, which explains why more convex-probe images and videos were found online. However, available linear-probe recordings were collected as well, such that the model can be trained on both types of images once more data becomes available.
\autoref{tab:data_overview} gives an overview of our sources, comprising community platforms, open medical repositories, health-tech companies and other scientific literature. Main sources of data were
\href{https://www.grepmed.com}{grepmed.com},
\href{https://thepocusatlas.com}{thepocusatlas.com},
\href{https://www.butterflynetwork.com}{butterflynetwork.com},
\href{https://radiopaedia.org}{radiopaedia.org}, while individual samples were retrieved from \href{https://everydayultrasound.com}{everydayultrasound.com} and \href{https://nephropocus.com}{nephropocus.com}, amongst others. We provide more details on the dataset in an extensive list in Appendix~\ref{data_appendix}: First, the exact source URL is given for each single video/image file. Second, technical details are listed, comprising the image resolution, the frame rate and the number of frames for each video after processing.
Importantly, all samples of our database were observed by a medical doctor and notes on the visible patterns in each video (e.g. B-Lines or consolidations) were added.
They confirm that in all collected videos of COVID-19 and pneumonia disease-specific patterns are visible.
However, in order to train a machine learning model, a larger dataset of images instead of videos was required.
\paragraph{Data processing.}
Since the 64 videos were taken from various sources, the format and illumination differ significantly.
In order to generate a diverse and still sufficiently large dataset, images were selected from the videos at a frame rate of 3Hz, with a maximum of 30 frames per video.
This resulted in an average of 17$\pm6$ frames per video and a total of 1103 images (654 COVID-19, 277 bacterial pneumonia, 172 healthy).
To homogenize the dataset, we cropped the images with a quadratic window excluding measure bars and texts visible on the sides or top of the videos.
Five videos were manually processed with \texttt{Adobe After Effect} in order to remove measure scales and other artifacts that were overlaying the US recording.
Examples of the cropped images are shown in~\autoref{fig:overview}. We are, however, aware that the heterogeneity of the data remains problematic, and we are constantly searching for more data to prevent the model from overfitting on the specific properties of the available recordings.
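As an illustration of this processing step, the following simplified Python sketch based on OpenCV samples and crops the frames. Note that the actual cropping windows were chosen manually per video to exclude overlays, whereas the sketch applies a plain center crop:
\begin{verbatim}
import cv2

def extract_frames(video_path, out_rate=3, max_frames=30):
    """Sample frames at roughly 3 Hz (at most 30 per video) and crop
    them to a square; the real crops were set manually per video."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or float(out_rate)
    step = max(int(round(fps / out_rate)), 1)
    frames, i = [], 0
    while len(frames) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        if i % step == 0:
            h, w = frame.shape[:2]
            s = min(h, w)
            y0, x0 = (h - s) // 2, (w - s) // 2
            frames.append(frame[y0:y0 + s, x0:x0 + s])
        i += 1
    cap.release()
    return frames
\end{verbatim}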
\section{Classification with \texttt{POCOVID-Net}}
\subsection{Methods}
We propose a convolutional neural network that we name \texttt{POCOVID-Net} to tackle the present computer vision task.
First, we use the convolutional part of \texttt{VGG-16} \cite{Simonyan15}, an established deep convolutional neural network that has been demonstrated to be successful on various image types. It is followed by one hidden layer of 64 neurons with \texttt{ReLU} activation, dropout of 0.5~\cite{srivastava2014dropout} and batch normalization~\cite{ioffe2015batch}; and further by the output layer with \texttt{softmax} activation.
The model was pre-trained on \texttt{Imagenet} to extract image features such as shapes and textures.
All images are resized to $224 \times 224$ and fed through the convolutional layers of the model.
During training, only the weights of the last three layers were fine-tuned, while the other ones were frozen to the values from pre-training.
This results in a total of $2,392,963$ trainable and $12,355,008$ non-trainable parameters.
The model is trained with the cross-entropy loss on the \texttt{softmax} outputs, and optimized with \texttt{Adam}~\cite{kingma2014adam} with an initial learning rate of $1\mathrm{e}{-4}$.
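The following Keras sketch is consistent with this description and with the parameter counts given above. The global average pooling after the convolutional base and the fine-tuning of the last convolutional layer (\texttt{block5\_conv3}) are assumptions inferred from the reported numbers of trainable and non-trainable parameters, not a verbatim copy of our training script:
\begin{verbatim}
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False,
             input_shape=(224, 224, 3))
# Freeze the pretrained base except its last convolutional layer;
# this reproduces the trainable/non-trainable parameter counts.
for layer in base.layers:
    layer.trainable = (layer.name == "block5_conv3")

x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dense(64, activation="relu")(x)
x = layers.Dropout(0.5)(x)
x = layers.BatchNormalization()(x)
out = layers.Dense(3, activation="softmax")(x)

model = models.Model(base.input, out)
model.compile(optimizer=optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
\end{verbatim}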
Furthermore, we use data augmentation techniques to diversify the dataset.
Specifically, the \texttt{Keras ImageDataGenerator} is used, which applies a series of random transformations on each image when generating a batch (in-place augmentation).
Here, we allow transformations of the following types: Rotations of up to 10 degrees, horizontal and vertical flips, and shifts of up to 10\% of the image height or width respectively. As such transformations can naturally occur with diverse ultrasound devices and recording parameters, augmentation adds valuable and realistic diversity that helps to prevent overfitting.
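A sketch of the corresponding generator configuration reads as follows (the batch size in the commented usage line is illustrative):
\begin{verbatim}
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=10,       # rotations of up to 10 degrees
    horizontal_flip=True,    # horizontal and vertical flips
    vertical_flip=True,
    width_shift_range=0.1,   # shifts of up to 10% of width/height
    height_shift_range=0.1,
)
# batches = augmenter.flow(X_train, y_train, batch_size=16)
\end{verbatim}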
\subsection{Results}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/roc_curves.pdf}
\caption{\textbf{Class-wise performances of \texttt{POCOVID-Net}.} Multi-class ROC curves of COVID-19 detection models from ultrasound images are depicted as averages across a 5-fold cross-validation. The shaded area shows the standard deviation of scores across folds.
Pneumonia is detected reliably, while the ROC-AUC for COVID-19 is 0.94 and the scores for the healthy class vary significantly. The point of highest accuracy is visualized as a coloured circle for each class.}
\label{fig:roc}
\end{figure}
All reported results were obtained with 5-fold cross validation.
It was ensured that the frames of a single video are present within a single fold only, such that train and test data are completely disjoint recordings.\footnote{As a consequence though, the sizes of each fold vary, for example with 110 COVID-19 images in the smallest and 183 in the largest split.}
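Such a video-disjoint split can be obtained, for instance, with scikit-learn's \texttt{GroupKFold}, using one group id per source video. The sketch below uses placeholder arrays, and the additional per-class balancing described next is not reproduced here:
\begin{verbatim}
import numpy as np
from sklearn.model_selection import GroupKFold

# X: frames, y: labels, video_ids: one id per frame, so that all
# frames of a recording fall into the same fold.
X, y = np.zeros((1103, 224, 224, 3)), np.zeros(1103)
video_ids = np.repeat(np.arange(64), 18)[:1103]  # placeholder ids

for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, video_ids):
    pass  # fit on X[train_idx], evaluate on X[test_idx]
\end{verbatim}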
As mentioned above, the model was trained to classify frames as COVID-19, pneumonia or healthy. When splitting the data, it was assured that the number of samples per class is similar in all folds. The performance of the proposed model on the frame-wise classification task is visualized in~\autoref{fig:roc}, depicting the ROC curve for each class.
Clearly, the model learns to classify the images, with all ROC-AUC scores above or equal to 0.94. Pneumonia seems to appear very distinctive, whereas the performance on COVID-19 and regular images is lower, and varies across folds. Nevertheless, the ROC-AUC score of COVID-19 detection is 0.94.
\autoref{fig:roc} also depicts where the accuracy is maximal for each class. It can be observed that the rate of false positives at the maximal-accuracy point is larger for COVID-19 than for pneumonia and healthy patients. In a clinical setting where false positives are less problematic than false negatives, this property is desired.
Furthermore, \autoref{fig:confusion} provides more details on the predictions in the form of three confusion matrices: one with absolute values and two normalized along each axis.
\begin{figure*}[!htb]
\includegraphics[width=\textwidth]{figures/confusion_matrix.pdf}
\caption{
\textbf{Confusion matrices of POCOVID-Net on lung ultrasound images.} \textit{Left:} Absolute number of predictions.
\textit{Middle:} Relative values normalized by the number of predictions, such that precision scores can be read off the diagonal.
\textit{Right:} Normalized by the number of ground truth members of each class, where the diagonal depicts sensitivity.
Most importantly, the sensitivity of recognizing COVID-19 is 96\%.
}
\label{fig:confusion}
\end{figure*}
Most importantly, 628 out of 654 COVID-19 images were classified correctly, leading to a sensitivity or recall of 96\%. From the confusion matrices it becomes clear that pneumonia can be distinguished best with a sensitivity of 93\% and precision of 95\%. Note that only three frames of pneumonia were classified as healthy patients, showing - at the very least - the model's ability to recognize strong irregularities in lung images. Nevertheless, it is clear that further work is necessary to improve the false negative rate of COVID-19, as 75 images are classified as healthy lungs. We believe that a main reason for that is the low number of lung POCUS recordings of healthy subjects compared to the number of COVID-19 images. Thus, model performance might be improved significantly with more data being collected, for example in the form of collaborations with ultrasound companies or hospitals.
To demonstrate the effectiveness of the proposed model, we compare the results to the performance of a model called COVID-Net that has recently been proposed for the classification of X-Ray images~\cite{wang2020covid}.
Training COVID-Net on our data, it achieves an accuracy of 81\% (averaged across all folds), whereas our model has 89\% accuracy. Factoring in the balanced accuracy score over the three classes (82\% vs 63\%), \texttt{POCOVID-Net} clearly outperforms \texttt{COVID-Net}.
Apart from the \texttt{COVID-Net} model, we also tested an architecture following \cite{li2020artificial} on our data. The authors employ \texttt{Res-Net}~\cite{he2016deep} instead of \texttt{VGG-16}.
In our experiments we observed that using \texttt{Res-Net} or \texttt{NasNet}~\cite{zoph2018learning} (a more recent pretrained model) resulted in significantly worse results.
\begin{table*}[t]
\centering
\begin{tabular}{llcccccc}
\toprule
{} & \textbf{Class} & \textbf{Sensitivity} & \textbf{Specificity} & \textbf{Precision} & \textbf{F1-score} & \textbf{Frames} & \textbf{Videos/Images} \\
\midrule\\
\multirow{3}{*}{\pbox{20cm}{\textbf{POCOVID-Net}\\ \small{Acc.: 0.89} \\ Bal. Acc.: 0.82}
} & COVID-19 & 0.96 & 0.79 & 0.88 & 0.92 & 654 & 39 \\
& Pneumonia & 0.93 & 0.98 & 0.95 & 0.94 & 277 & 14 \\
& Healthy & 0.55 & 0.98 & 0.78 & 0.62 & 172 & 11 \\
\\\hline\\
\multirow{3}{*}{\pbox{20cm}{\textbf{COVID-Net}\\ \small{Acc.: 0.81} \\ Bal. Acc.: 0.63}
} & COVID-19 & 0.98 & 0.57 & 0.77 & 0.86 & 654 & 39 \\
& Pneumonia & 0.89 & 0.98 & 0.95 & 0.92 & 277 & 14 \\
& Healthy & 0.01 & 1.00 & 0.20 & 0.01 & 172 & 11 \\
\bottomrule
\end{tabular}
\caption{\textbf{Performance comparison.} Comparison of both classification models in 5-fold cross validation for each class. Acc. abbreviates accuracy and Bal. Acc. abbreviates balanced accuracy. The proposed model outperforms COVID-Net with respect to F1-scores. In particular, COVID-Net fails to handle unbalanced classes (sensitivity for healthy patients is close to zero), leading to a lower specificity for COVID-19.}
\label{tab:resultsclasses}
\end{table*}
Furthermore,~\autoref{tab:resultsclasses} breaks down the comparison between \texttt{COVID-Net} and our model in more detail. The low balanced accuracy of \texttt{COVID-Net} is explained by the model's apparent inability to deal with unbalanced data: it almost never predicts the healthy class. We were not able to obtain better results when training the model on our data with the implementation provided by the authors. Although \texttt{COVID-Net} is able to differentiate between COVID-19 and pneumonia to some extent (specificity of 0.98 for pneumonia), our model is superior in all scores except the sensitivity for COVID-19, which \texttt{COVID-Net} achieves only at the cost of a much higher false positive rate.
In the future, it might nevertheless be beneficial to combine several different models with ensemble methods.
Finally, in an eventual clinical deployment of \texttt{POCOVID-Net} it would be desirable to classify a whole video instead of single frames. We therefore aggregated the frame-wise scores into a video class with two different methods: first, a majority vote over the predicted frame classes, and second, averaging the class probabilities predicted by the network and selecting the class with the highest average probability. Both methods correctly classified 92\% of the videos, with a balanced video accuracy of 84\%.
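Both aggregation rules can be written in a few lines (a sketch assuming the frame-wise softmax outputs of a video are available as an array of shape $n_{\mathrm{frames}}\times 3$; the function name is ours and purely illustrative):
\begin{verbatim}
import numpy as np

def video_class(frame_probs):
    # frame_probs: array of shape (n_frames, 3) with softmax outputs per frame
    frame_probs = np.asarray(frame_probs)
    majority = np.bincount(frame_probs.argmax(axis=1), minlength=3).argmax()
    averaged = frame_probs.mean(axis=0).argmax()
    return int(majority), int(averaged)  # both rules scored 92% in our runs
\end{verbatim}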
In summary, \texttt{POCOVID-Net} is able to learn a frame-wise classification into COVID-19, pneumonia and healthy, where the sensitivity for COVID-19 is already very high at 96\%.
It was demonstrated that the proposed architecture outperforms \texttt{COVID-Net}, and that aggregating the frame-wise predictions into a video classification yields an overall accuracy of 92\%. Last, it was argued that further work is required to reduce the number of healthy lungs falsely predicted as COVID-19.
\section{Web service (\texttt{POCOVIDScreen})}
The dataset and proposed model constitute a very preliminary first step toward the detection of COVID-19 from ultrasound data.
Envisioning an interface that simplifies the data sharing process for medical practitioners, and thereby aiming to foster collaborations with data scientists to serve the global need for rapid development of tools to alleviate the COVID-19 crisis, we have built a web platform accessible at:~\url{https://pocovidscreen.org}.
The platform (see preview in~\autoref{fig:preview-web}) is open-access and was designed for two purposes: First, users can contribute to the open-access dataset by uploading their US recordings (i.e. images or videos from COVID-19, pneumonia or healthy controls).
The collected lung ultrasound data will be continuously reviewed and approved by medical doctors, carefully processed and then integrated into our database on \texttt{GitHub}.
This strategy is chosen in order to simplify the data sharing process as much as possible.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/pocovidscreen.png}
\caption{\textbf{Preview of our web-service \texttt{POCOVIDScreen}}. Users can upload images or videos to contribute to the dataset, or test the model \texttt{POCOVID-Net} on their own images.}
\label{fig:preview-web}
\end{figure}
Secondly, users can access our trained model to perform a rapid screening of their own (unlabeled) data.
Best performance is to be expected if the image is cropped to a quadratic section of the relevant part, similar to our data.
Subsequently, the prediction is performed by evaluating all five models trained during cross validation.
The output scores are averaged and the predicted class, in conjunction with a probability, is displayed to the user.
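This prediction step can be sketched as follows (assuming Keras-style models with a \texttt{predict} method; the function and variable names are ours and purely illustrative):
\begin{verbatim}
import numpy as np

CLASSES = ["COVID-19", "pneumonia", "healthy"]

def screen(image, fold_models):
    # average the class probabilities of the five cross-validation models
    probs = np.mean([m.predict(image[None])[0] for m in fold_models], axis=0)
    k = int(np.argmax(probs))
    return CLASSES[k], float(probs[k])  # predicted class and its probability
\end{verbatim}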
In short, we hope that this tool can serve as a starting point which will lead to the development of better prediction models.
Our web-service facilitates the process of data collection, transforming the task into a community effort. Additionally, it gives users the opportunity to run our model on their own data; at this stage the model is of course preliminary, and its outputs should not be assigned any clinical significance.
\section{Discussion}
In the current global pandemic, it is more important than ever that the research community pools its expertise to provide solutions in the near future.
Our contribution is the exploration of automating COVID-19 detection from lung ultrasound imaging in order to provide a quick assessment of whether a person may be infected with COVID-19.
\paragraph{Summary}
Our first step towards this goal is to release a collection of POCUS images and videos, that were gathered and pre-processed from the referenced sources.
The videos are reliably labeled and can be split to generate a dataset of more than a thousand images.
We would like to invite researchers to contribute to our database, e.g.\ by pointing to new publications or by making lung ultrasound recordings available.
We will constantly update this dataset on:~\url{https://github.com/jannisborn/covid19_pocus_ultrasound}.
Second, the proposed machine learning model, \texttt{POCOVID-Net}, is intended as a proof-of-concept showing the predictive power of the data, as well as the capabilities of a computer vision model to recognize certain patterns in US data as they appear in COVID-19 and pneumonia.
Our model, based on a pre-trained convolutional network, achieved a frame-wise detection accuracy of 89\% and a video accuracy of 92\%.
With a high sensitivity rate of 96\% for the COVID-19 class (specificity 79\%), we provide evidence that automatic detection of COVID-19 is a promising future endeavour.
Despite these encouraging results, readers are reminded that we have presented very preliminary results herein and that we do not claim any diagnostic performance.
We also do not consider the detection on US data as a replacement for other test methods and neither as an alternative to extensive training of doctors in the usage of ultrasound for the detection task.
Last, with the presented web service we provide an interface that makes our model publicly available to researchers and hospitals around the world.
Most importantly, this interface offers a user-friendly way to contribute lung ultrasound data to our open-access initiative.
\subsection*{Conclusion}
We believe that POCUS can provide a quick, easy and low-cost method to assess the possibility of a SARS-CoV-2 infection.
We hope to have opened a branch for automatic detection of COVID-19 from ultrasound data and envision our method as a first step toward an auxiliary tool that can assist medical doctors.
In case of a positive first-line examination, further testing is required for corroboration (e.g. CT scan or RT-PCR).
Lung ultrasound may not only take a key role in disease diagnosis, but can be utilized to monitor disease evolution through regular checks performed non-invasively and without the need of relocation.
Regarding the patient journey, we envision \texttt{POCOVID-Net} as a preliminary test for random screening and a step toward a complementary test to PCR for COVID-19 suspicions.
If a patient is suspected of COVID-19, POCUS could be used to assess the presence of lung lesions that may not yet have clinical repercussions, and could thus help detect people at risk of lung complications, allowing a more dedicated focus on high-risk patients.
For random screening, if \texttt{POCOVID-Net} identifies the patient with COVID-19 lung symptoms, further tests such as RT-PCR, CT-scan and medical check-up should be conducted.
As POCUS devices are easily transportable, this test could take place in a primary care center where a physician or technical specialist can go to perform several screenings and avoid contamination in hospitals.
It should be noted that with one device, it is possible to perform 4 to 5 lung screenings per hour, taking into account the time needed for installation and cleaning.
From the machine learning perspective, several improvements to \texttt{POCOVID-Net} are possible and should be considered once more data becomes available.
First, an evident improvement of the framework would be to perform inference directly on the videos (e.g. temporal CNNs) instead of the current frame based image analysis.
While we did not perform this herein explicitly (due to the lack of sufficient data), we report a promising indication, namely that video classification based on a frame-wise majority vote reduced the error rate by more than 25\% (from 89\% to 92\% accuracy).
Secondly, the benefit of pre-training the network on large image databases could be enhanced by training the model on (non-lung) ultrasound samples instead of \texttt{ImageNet}, a database of real-life objects.
This pre-training may help detecting ultrasound specific patterns such as B-Lines.
In addition, to exploit the higher availability of CT or X-ray scans, transfer learning strategies could be adopted as in \cite{zahangir2020covid_mtnet, apostolopoulos2020covid}.
Furthermore, generative models could help to complement the scarce data about COVID-19 as recently proposed in~\cite{loey2020within}.
We aim to extend the functionality of the website in the future, to further encourage the community effort of researchers, doctors and companies to build a dataset of POCUS images that can leverage the predictive power of automatic detection systems, and thereby also the value of ultrasound imaging for COVID-19 in general.
If the approach turns out to be successful, we plan to build an app as suggested in~\cite{li2020covid} that can enable medical doctors to draw inference from
their ultrasound images with unprecedented ease, convenience and speed.
\section*{Acknowledgements}
We would like to thank Jasmin Frieden for help in data processing and Moritz Gruber and J. Leo van Hemmen for feedback on the manuscript.
\section*{License}
The example images in~\autoref{fig:overview} are available via creative commons license (CC BY-NC 4.0) from:~\url{thepocusatlas.com} (access date: 17.04.2020).
\bibliographystyle{unsrt}
\section{Introduction}
\setcounter{equation}{0}
\setcounter{thm}{0}
Consider a family of immersions $F:M^n\times [0,T)\to{\mathbb R}^{n+1}$ of $n$-dimensional hypersurfaces in ${\mathbb R}^{n+1}$. We say that $M_t=F_t(M^n)$, $F_t(x)=F(x,t)$, moves by the inverse mean curvature flow if
\begin{equation*}
\frac{\partial}{\partial t}F(x,t)=\frac{\nu}{H}\quad\forall x\in M^n, 0<t<T
\end{equation*}
where $H(x,t)>0$ and $\nu$ are the mean curvature and unit exterior normal of the surface
$F_t$ at the point $F(x,t)$. Note that when $M_t$ is the graph $\overline{F}(x,t)=(x,u(x,t))$ of some function $u:{\mathbb R}^n\times (0,T) \to {\mathbb R}$, $n\ge 1$, then
\begin{equation*}
\nu=\left(\frac{\nabla u}{\sqrt{1+|\nabla u|^2}},\frac{-1}{\sqrt{1+|\nabla u|^2}}\right).
\end{equation*}
Recently there has been a lot of study of the inverse mean curvature flow in the compact case by
C.~Gerhardt, G.~Huisken, T.~Ilmanen, K.~Smoczyk, J.~Urbas and others \cite{G}, \cite{HI1}, \cite{HI2}, \cite{HI3}, \cite{S}, \cite{U}. There has also been a lot of progress on the non-compact case recently by B.~Allen, P.~Daskalopoulos, G.~Huisken, B.~Lambert, T.~Marquardt and J.~Scheuer \cite{A}, \cite{DH}, \cite{LS}, \cite{M1}, \cite{M2}.
As observed by P.~Daskalopoulos and G.~Huisken in \cite{DH}, if $M_t$ is the graph $\overline{F}(x,t)=(x,u(x,t))$ of some function $u:{\mathbb R}^n\times (0,T) \to {\mathbb R}$, $n\ge 1$, then $u$ satisfies
\begin{equation}\label{imcf-graph-eqn}
u_t=-\sqrt{1+|\nabla u|^2}\left(\mbox{div}\,\left(\frac{\nabla u}{\sqrt{1+|\nabla u|^2}}\right)\right)^{-1}
\end{equation}
and if $f:{\mathbb R}^n\to {\mathbb R}$ is solution of
\begin{equation}\label{imcf-graph-elliptic-eqn}
\mbox{div}\,\left(\frac{\nabla f}{\sqrt{1+|\nabla f|^2}}\right)=\frac{1}{\lambda}\cdot\frac{\sqrt{1+|\nabla f|^2}}{x\cdot\nabla f-f}\quad\mbox{ in }{\mathbb R}^n,
\end{equation}
then for any $\lambda>0$, the function
\begin{equation*}
u(x,t)=e^{\lambda t}f(e^{-\lambda t}x), \quad (x,t)\in{\mathbb R}^n\times{\mathbb R}
\end{equation*}
is a self-similar solution of \eqref{imcf-graph-eqn} in ${\mathbb R}^n\times {\mathbb R}$. In \cite{DH} and reference [7] of \cite{DH} P.~Daskalopoulos, G.~Huisken and J.R.~King also stated the existence of a radially symmetric solution of \eqref{imcf-graph-elliptic-eqn} for any $n\ge 2$, $\lambda>\frac{1}{n-1}$ and $\mu:=f(0)<0$. Note that if $f$ is a radially symmetric solution of \eqref{imcf-graph-elliptic-eqn}, then $f$ satisfies
\begin{equation}\label{imcf-graph-ode}
f_{rr}+\frac{n-1}{r}\cdot(1+f_r^2)f_r-\frac{1}{\lambda}\cdot\frac{(1+f_r^2)^2}{rf_r-f}=0\quad\forall r>0
\end{equation}
and $f_r(0)=0$.
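Indeed, for a radially symmetric function $f$ we have $|\nabla f|=|f_r|$ and
\begin{equation*}
\mbox{div}\,\left(\frac{\nabla f}{\sqrt{1+|\nabla f|^2}}\right)=\frac{f_{rr}}{(1+f_r^2)^{3/2}}+\frac{n-1}{r}\cdot\frac{f_r}{(1+f_r^2)^{1/2}},
\end{equation*}
so that \eqref{imcf-graph-ode} follows from \eqref{imcf-graph-elliptic-eqn} after multiplying both sides by $(1+f_r^2)^{3/2}$.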
Since there is no proof of this result in \cite{DH}, in this paper we will give a detailed proof of the existence of a solution of \eqref{imcf-graph-ode}. More precisely we will prove the following existence result.
\begin{thm}\label{existence_soln-thm}
For any $n\ge 2$, $\lambda>\frac{1}{n-1}$ and $\mu<0$, the equation
\begin{equation}\label{imcf-graph-ode-initial-value-problem}
\left\{\begin{aligned}
&f_{rr}+\frac{n-1}{r}\cdot(1+f_r^2)f_r-\frac{1}{\lambda}\cdot\frac{(1+f_r^2)^2}{rf_r-f}=0\quad\forall r>0\\
&f(0)=\mu,\quad f_r(0)=0\end{aligned}\right.
\end{equation}
has a unique solution $f\in C^1([0,\infty))\cap C^2(0,\infty)$ which satisfies
\begin{equation}\label{f-structure-ineqn}
rf_r(r)>f(r)\quad\forall r\ge 0
\end{equation}
and
\begin{equation}\label{fr-+ve}
f_r(r)>0\quad\forall r>0.
\end{equation}
\end{thm}
We also obtain the following large time behavior solution of \eqref{imcf-graph-ode-initial-value-problem}.
\begin{thm}\label{asymptotic-behaviour-time-infty-thm}
Let $n\ge 2$, $\lambda>\frac{1}{n-1}$, $\mu<0$ and $f\in C^1([0,\infty))\cap C^2(0,\infty)$ be the unique solution of \eqref{imcf-graph-ode-initial-value-problem}. Then
\begin{equation}\label{growth-rate}
\mbox{$\lim_{r\to\infty}$}\frac{rf_r(r)}{f(r)}=\frac{\lambda (n-1)}{\lambda (n-1)-1}.
\end{equation}
\end{thm}
\begin{rmk}
Note that the condition $\mu<0$ is imposed to ensure the positivity of the denominator of the third term of \eqref{imcf-graph-ode-initial-value-problem} so that one can obtain the convexity of the solution $f$ of \eqref{imcf-graph-ode-initial-value-problem} which is stated in Corollary \ref{f-to-infty-cor}.
\end{rmk}
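Although not needed for the proofs, the behaviour described in Theorem \ref{existence_soln-thm} and Theorem \ref{asymptotic-behaviour-time-infty-thm} can be observed numerically. The following minimal sketch (in Python, assuming \texttt{scipy} is available; the sample parameters, starting radius and tolerances are illustrative choices only) integrates \eqref{imcf-graph-ode-initial-value-problem} starting from the Taylor expansion $f(r)\approx\mu+\frac{r^2}{2n\lambda|\mu|}$ near the origin (cf.\ \eqref{f-rr} below) and compares $rf_r(r)/f(r)$ with the limit in \eqref{growth-rate}:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

n, lam, mu = 3, 1.0, -1.0          # sample values with lam > 1/(n-1)
alpha0 = lam*(n-1)/(lam*(n-1)-1)   # predicted limit of r f'(r)/f(r)

def rhs(r, y):
    f, fp = y                      # y = (f, f_r)
    fpp = (1+fp**2)**2/(lam*(r*fp-f)) - (n-1)*(1+fp**2)*fp/r
    return [fp, fpp]

r0 = 1e-6                          # start just away from the singular point r=0
y0 = [mu + r0**2/(2*n*lam*abs(mu)), r0/(n*lam*abs(mu))]
sol = solve_ivp(rhs, [r0, 1e4], y0, rtol=1e-10, atol=1e-12)

r = sol.t[-1]
f, fp = sol.y[:, -1]
print(r*fp/f, alpha0)              # the two values should nearly agree
\end{verbatim}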
The plan of the paper is as follows. In section 2 we will prove Theorem \ref{existence_soln-thm}. In section 3 we will prove Theorem \ref{asymptotic-behaviour-time-infty-thm}.
\section{Existence of solution}
\setcounter{equation}{0}
\setcounter{thm}{0}
In this section we will prove Theorem \ref{existence_soln-thm}. We will first use a fixed point argument to prove the existence of a solution of \eqref{imcf-graph-ode-initial-value-problem} in a small interval around the origin. The local solution is then extended to a global solution of \eqref{imcf-graph-ode-initial-value-problem} by a continuity argument using another fixed point argument. We first start with a lemma.
\begin{lem}\label{local-existence-lem}
For any $n\ge 2$, $\lambda>0$ and $\mu<0$, there exists a constant $R_0>0$ such that the equation
\begin{equation}\label{imcf-graph-ode-initial-value-problem2}
\left\{\begin{aligned}
&f_{rr}+\frac{n-1}{r}\cdot(1+f_r^2)f_r-\frac{1}{\lambda}\cdot\frac{(1+f_r^2)^2}{rf_r-f}=0\quad\mbox{ in }(0,R_0)\\
&f(0)=\mu,\quad f_r(0)=0\end{aligned}\right.
\end{equation}
has a unique solution $f\in C^1([0,R_0))\cap C^2(0,R_0)$ which satisfies
\begin{equation}\label{f-structure-ineqn2}
rf_r(r)-f(r)>0\quad\mbox{ in }[0,R_0).
\end{equation}
\end{lem}
\begin{proof}
Uniqueness of solution of \eqref{imcf-graph-ode-initial-value-problem2} follows from standard ODE theory. Hence we only need to prove existence of solution of \eqref{imcf-graph-ode-initial-value-problem2}. We first observe that if $f$ satisfies \eqref{imcf-graph-ode-initial-value-problem2} and \eqref{f-structure-ineqn2} for some constant $R_0>0$, then by multiplying \eqref{imcf-graph-ode-initial-value-problem2} by $r$ and integrating over $(0,r)$,
we get
\begin{equation*}
\int_0^rsf_{rr}(s)\,ds+(n-1)\int_0^r(1+f_r(s)^2)f_r(s)\,ds=\frac{1}{\lambda}\int_0^r\frac{s(1+f_r(s)^2)^2}{sf_r(s)-f(s)}\,ds\quad \forall 0<r<R_0.
\end{equation*}
Hence
\begin{equation}\label{imcf-graph-ode-integral}
rf_r(r)+(n-2)\int_0^rf_r(s)\,ds=\frac{1}{\lambda}\int_0^r\frac{s(1+f_r(s)^2)^2}{sf_r(s)-f(s)}\,ds-(n-1)\int_0^rf_r(s)^3\,ds\quad \forall 0<r<R_0.
\end{equation}
Let
\begin{equation}\label{big-h-defn}
H(r)=\int_0^rf_r(s)\,ds
\end{equation}
and
\begin{equation}\label{e-defn}
E(r)=\frac{1}{\lambda}\int_0^r\frac{s(1+f_r(s)^2)^2}{sf_r(s)-f(s)}\,ds-(n-1)\int_0^rf_r(s)^3\,ds.
\end{equation}
Then \eqref{imcf-graph-ode-integral} is equivalent to
\begin{equation}\label{H-E-eqn0}
rH_r(r)+(n-2)H(r)=E(r)\quad \forall 0<r<R_0.
\end{equation}
Hence
\begin{equation}\label{H-E-eqn}
H(r)=\frac{1}{r^{n-2}}\int_0^r\rho^{n-3}E(\rho)\,d\rho\quad \forall 0<r<R_0.
\end{equation}
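Indeed, multiplying \eqref{H-E-eqn0} by $r^{n-3}$ gives $\left(r^{n-2}H(r)\right)_r=r^{n-3}E(r)$, and \eqref{H-E-eqn} follows upon integration since $r^{n-2}H(r)\to 0$ as $r\to 0$.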
Then by \eqref{H-E-eqn0} and \eqref{H-E-eqn},
\begin{align}\label{f'-eqn}
f_r(r)=&H_r(r)=\frac{1}{r}(E(r)-(n-2)H(r))\notag\\
=&\frac{1}{r}\left\{\frac{1}{\lambda}\int_0^r\frac{s(1+f_r(s)^2)^2}{sf_r(s)-f(s)}\,ds-(n-1)\int_0^rf_r(s)^3\,ds\right.\notag\\
&\qquad -\frac{(n-2)}{r^{n-2}}\int_0^r\rho^{n-3}\left[\frac{1}{\lambda}\int_0^{\rho}\frac{s(1+f_r(s)^2)^2}{sf_r(s)-f(s)}\,ds\right.
-(n-1)\left.\left.\int_0^{\rho}f_r(s)^3\,ds\right]\,d\rho\right\}
\end{align}
which suggests using a fixed point argument to prove the existence of a solution of \eqref{imcf-graph-ode-initial-value-problem2}.
Let $0<\varepsilon<1$. We now define the Banach space
$$
{\mathcal X}_\varepsilon:=\left\{(g,h): g, h\in C\left( [0,\varepsilon]; {\mathbb R}\right) \,\,\mbox{ such that }\,\, s^{-1/2}h(s)\in L^{\infty}(0,\varepsilon) \right\}
$$
with a norm given by
$$||(g,h)||_{{\mathcal X}_\varepsilon}=\max\left\{\|g\|_{L^\infty([0, \varepsilon])} ,\|s^{-1/2}h(s)\|_{L^{\infty}(0,\varepsilon)}\right\}.$$
For any $(g,h)\in {\mathcal X}_\varepsilon,$ we define
$$\Phi(g,h):=\left(\Phi_1(g,h),\Phi_2(g,h)\right),$$
where for $0<r\leq\varepsilon,$
\begin{equation}\label{eq-existence-contraction-map}
\left\{\begin{aligned}
&\Phi_1(g,h)(r):=\mu+\int_0^r h(s)\,ds,\\
&\Phi_2(g,h)(r):=\frac{1}{r}\left\{E(g,h)(r)-\frac{(n-2)}{r^{n-2}}\int_0^r\rho^{n-3}E(g,h)(\rho)\,d\rho\right\}
\end{aligned}\right.
\end{equation}
with
\begin{equation*}
E(g,h)(r)=\frac{1}{\lambda}\int_0^r\frac{s(1+h(s)^2)^2}{sh(s)-g(s)}\,ds-(n-1)\int_0^rh(s)^3\,ds.
\end{equation*}
For any $0<\eta\le |\mu|/4$, let $${\mathcal D}_{\varepsilon,\eta}:=\left\{ (g,h)\in {\mathcal X}_\varepsilon: ||(g,h)-(\mu,0)||_{{\mathcal X}_{\varepsilon}}\leq \eta\right\}.$$
Note that ${\mathcal D}_{\varepsilon,\eta}$ is a closed subspace of ${\mathcal X}_\varepsilon$. We will show that if $\varepsilon\in(0,1)$ is sufficiently small, the map $(g,h)\mapsto\Phi(g,h)$ will have a unique fixed point in ${\mathcal D}_{\varepsilon,\eta}$.
We first prove that $\Phi({\mathcal D}_{\varepsilon,\eta})\subset {\mathcal D}_{\varepsilon,\eta}$ if $\varepsilon\in(0,1)$ is sufficiently small. Let $(g,h)\in {\mathcal D}_{\varepsilon,\eta}$. Then
\begin{equation*}
|s^{-1/2}h(s)|\le\eta\le |\mu|/4\quad\mbox{ and } \quad|g(s)-\mu|\le |\mu|/4\quad\forall 0<s\le\varepsilon.
\end{equation*}
Hence
\begin{equation}\label{h-g-bd}
|h(s)|\le \eta s^{1/2}\le (|\mu|/4)s^{1/2}\quad\mbox{ and } \quad \frac{5\mu}{4}\le g(s)\le\frac{3\mu}{4}\quad\forall 0\le s\le\varepsilon.
\end{equation}
Thus
\begin{equation}\label{h-g-lower-bd}
sh(s)-g(s)\ge\frac{3|\mu|}{4}-\frac{|\mu|}{4}=\frac{|\mu|}{2}>0\quad\forall 0\le s\le\varepsilon.
\end{equation}
Then
\begin{equation}\label{phi1-bd}
|\Phi_1(g,h)(r)-\mu|\le\int_0^r |h(s)|\,ds\le\eta\varepsilon\le\eta\quad\forall 0\le r\le\varepsilon.
\end{equation}
Now by \eqref{h-g-bd} and \eqref{h-g-lower-bd},
\begin{align}\label{E-upper-bd}
\left|E(g,h)(r)\right|\le&\left|\frac{1}{\lambda}\int_0^r\frac{s(1+h(s)^2)^2}{sh(s)-g(s)}\,ds\right|+(n-1)\left|\int_0^rh(s)^3\,ds\right|\notag\\
\le&\frac{2\left(1+(|\mu|^2/16)\right)^2}{\lambda|\mu|}\int_0^rs\,ds+(n-1)\left(\frac{|\mu|}{4}\right)^3\int_0^rs^{3/2}\,ds\notag\\
\le&c_1(r^2+r^{5/2})\quad\forall 0\le r\le\varepsilon
\end{align}
where
\begin{equation*}
c_1=\max\left(\frac{\left(1+(|\mu|^2/16)\right)^2}{\lambda|\mu|},\frac{2(n-1)}{5}\left(\frac{|\mu|}{4}\right)^3\right).
\end{equation*}
Then by \eqref{E-upper-bd},
\begin{align}\label{E-integral-upper-bd}
\frac{(n-2)}{r^{n-2}}\int_0^r\rho^{n-3}|E(g,h)(\rho)|\,d\rho
\le&\frac{(n-2)c_1}{r^{n-2}}\int_0^r\rho^{n-3}(\rho^2+\rho^{5/2})\,d\rho\notag\\
\le&(n-2)c_1(r^2+r^{5/2})\quad\forall 0<r\le\varepsilon.
\end{align}
By \eqref{eq-existence-contraction-map}, \eqref{E-upper-bd} and \eqref{E-integral-upper-bd},
\begin{equation}\label{phi2-bd}
\left|r^{-1/2}\Phi_2(g,h)(r)\right|\le (n-1)c_1 (r^{1/2}+r)\le 2(n-1)c_1r^{1/2}\le\eta\quad\forall 0<r\le\varepsilon
\end{equation}
if $0<\varepsilon\le\varepsilon_1$ where
$$
\varepsilon_1=\min \left(1,\frac{\eta^2}{4(n-1)^2c_1^2}\right).
$$
Thus by \eqref{phi1-bd} and \eqref{phi2-bd}, $\Phi({\mathcal D}_{\varepsilon,\eta})\subset {\mathcal D}_{\varepsilon,\eta}$ for any $0<\varepsilon\le\varepsilon_1$.
We now let $0<\varepsilon\le\varepsilon_1$. Let $(g_1,h_1),(g_2,h_2)\in {\mathcal D}_{\varepsilon,\eta}$ and $\delta:=||(g_1,h_1)-(g_2,h_2)||_{{\mathcal X}_\varepsilon}$. Then
\begin{equation}\label{h12-g12-difference}
\left\{\begin{aligned}
&s^{-1/2}|h_1(s)-h_2(s)|\le\delta\quad\,\,\forall 0<s\le\varepsilon\\
&|g_1(s)-g_2(s)|\le\delta\qquad\quad\forall 0<s\le\varepsilon.
\end{aligned}\right.
\end{equation}
By \eqref{h-g-bd} and \eqref{h-g-lower-bd},
\begin{equation}\label{hi-gi-bd}
|h_i(s)|\le \eta s^{1/2}\le (|\mu|/4)s^{1/2}, \, \frac{5\mu}{4}\le g_i(s)\le\frac{3\mu}{4}\,\mbox{ and }\, sh_i(s)-g_i(s)\ge\frac{|\mu|}{2}>0\,\,\forall 0\le s\le\varepsilon, i=1,2.
\end{equation}
Now by \eqref{h12-g12-difference},
\begin{equation}\label{phi1-contraction}
|\Phi_1(g_1,h_1)(r)-\Phi_1(g_2,h_2)(r)|\le\int_0^r |h_1(s)-h_2(s)|\,ds\le\delta\int_0^r s^{1/2}\,ds\le\frac{2\varepsilon^{3/2}}{3}\delta \le\frac{2}{3}\delta\quad\forall 0\le r\le\varepsilon
\end{equation}
and
\begin{align}\label{phi2-difference}
&|\Phi_2(g_1,h_1)(r)-\Phi_2(g_2,h_2)(r)|\notag\\
\le&\frac{1}{r}\left\{|E(g_1,h_1)(r)-E(g_2,h_2)(r)|+\frac{(n-2)}{r^{n-2}}\int_0^r\rho^{n-3}|E(g_1,h_1)(\rho)-E(g_2,h_2)(\rho)|\,d\rho\right\}\quad\forall 0< r\le\varepsilon.
\end{align}
By \eqref{h12-g12-difference} and \eqref{hi-gi-bd},
\begin{align}\label{h-g-bd20}
&\left|\frac{(1+h_1(s)^2)^2}{sh_1(s)-g_1(s)}-\frac{(1+h_2(s)^2)^2}{sh_2(s)-g_2(s)}\right|\notag\\
\le &4\frac{\left|(1+h_1(s)^2)^2(sh_2(s)-g_2(s))-(1+h_2(s)^2)^2(sh_1(s)-g_1(s))\right|}{|\mu|^2}\quad\forall 0\le r\le\varepsilon
\end{align}
and
\begin{align}\label{h-g-bd21}
&\left|(1+h_1(s)^2)^2(sh_2(s)-g_2(s))-(1+h_2(s)^2)^2(sh_1(s)-g_1(s))\right|\notag\\
\le&\left|(1+h_1(s)^2)^2-(1+h_2(s)^2)^2\right||sh_2(s)-g_2(s)|+(1+h_2(s)^2)^2|sh_2(s)-g_2(s)-sh_1(s)+g_1(s)|\notag\\
\le&|h_1(s)-h_2(s)||h_1(s)+h_2(s)|\left|2+h_1(s)^2+h_2(s)^2\right|(|sh_2(s)|+|g_2(s)|)\notag\\
&\qquad+(1+h_2(s)^2)^2(s|h_2(s)-h_1(s)|+|g_2(s)-g_1(s)|)\notag\\
\le &\delta s^{1/2}\cdot 2\eta(2+2\eta^2)\left(\frac{|\mu|}{4}+\frac{5|\mu|}{4}\right)+(1+\eta^2)^2(s^{3/2}+1)\delta\notag\\
\le&c_2\delta\quad\forall 0\le s\le\varepsilon
\end{align}
where
\begin{equation*}
c_2=6\eta (1+\eta^2)|\mu|+2(1+\eta^2)^2.
\end{equation*}
By \eqref{h-g-bd20} and \eqref{h-g-bd21},
\begin{equation}\label{integral-ineqn1}
\int_0^r\left|\frac{s(1+h_1^2)^2}{sh_1(s)-g_1(s)}-\frac{s(1+h_2^2)^2}{sh_2(s)-g_2(s)}\right|\,ds\le\frac{2c_2r^2}{|\mu|^2}\delta\quad\forall 0\le r\le\varepsilon
\end{equation}
and by \eqref{h12-g12-difference} and \eqref{hi-gi-bd},
\begin{align}\label{integral-ineqn2}
\int_0^r|h_1(s)^3-h_2(s)^3|\,ds\le&\int_0^r|h_1(s)-h_2(s)||h_1(s)^2+h_1(s)h_2(s)+h_2(s)^2|\,ds\notag\\
\le&3\eta^2\delta\int_0^rs^{3/2}\,ds\notag\\
\le&\frac{6\eta^2r^{5/2}}{5}\delta\quad\forall 0\le r\le\varepsilon.
\end{align}
By \eqref{integral-ineqn1} and \eqref{integral-ineqn2},
\begin{equation}\label{E-difference}
|E(g_1,h_1)(r)-E(g_2,h_2)(r)|\le c_3(r^2+r^{5/2})\delta\quad\forall 0\le r\le\varepsilon
\end{equation}
where
\begin{equation*}
c_3=\max\left(\frac{2c_2}{|\mu|^2\lambda},\frac{6(n-1)\eta^2}{5}\right).
\end{equation*}
Hence
\begin{align}\label{E-difference-integral}
\frac{(n-2)}{r^{n-2}}\int_0^r\rho^{n-3}|E(g_1,h_1)(\rho)-E(g_2,h_2)(\rho)|\,d\rho
\le&\frac{(n-2)c_3\delta}{r^{n-2}}\int_0^r\rho^{n-3}(\rho^2+\rho^{5/2})\,d\rho\notag\\
\le&(n-2)c_3(r^2+r^{5/2})\delta
\quad\forall 0\le r\le\varepsilon.
\end{align}
By \eqref{phi2-difference}, \eqref{E-difference} and \eqref{E-difference-integral},
\begin{equation}\label{phi2-contraction}
r^{-1/2}|\Phi_2(g_1,h_1)(r)-\Phi_2(g_2,h_2)(r)|\le(n-1)c_3(r^{1/2}+r)\delta\le 2(n-1)c_3r^{1/2}\delta\quad\forall 0<r\le\varepsilon.
\end{equation}
We now let
\begin{equation*}
\varepsilon_2=\min\left(\varepsilon_1,\frac{1}{9(n-1)^2c_3^2}\right)
\end{equation*}
and $0<\varepsilon\le\varepsilon_2$.
Then by \eqref{phi1-contraction} and \eqref{phi2-contraction},
\begin{equation*}
\|\Phi(g_1,h_1)-\Phi(g_2,h_2)\|_{{\mathcal X}_\varepsilon}\le\frac{2}{3}\|(g_1,h_1)-(g_2,h_2)\|_{{\mathcal X}_\varepsilon}\quad\forall (g_1,h_1),(g_2,h_2)\in {\mathcal D}_{\varepsilon,\eta}.
\end{equation*}
Hence $\Phi$ is a contraction map on ${\mathcal D}_{\varepsilon,\eta}$. Then by the Banach fixed point theorem the map $\Phi$ has a unique fixed point. Let $(g,h)\in {\mathcal D}_{\varepsilon,\eta}$ be the unique fixed point of the map $\Phi$. Then
\begin{equation*}
\Phi(g,h)=(g,h).
\end{equation*}
Hence
\begin{equation*}
g(r)=\mu+\int_0^r h(s)\,ds\quad\forall 0<r<\varepsilon\quad
\mbox{ and }\quad g(0)=\mu
\end{equation*}
which implies
\begin{equation}\label{g-eqn}
g_r(r)=h(r)\quad\forall 0<r<\varepsilon\quad
\mbox{ and }\quad g(0)=\mu
\end{equation}
and
\begin{equation*}
h(r)=\frac{1}{r}\left\{E(g,h)(r)-\frac{(n-2)}{r^{n-2}}\int_0^r\rho^{n-3}E(g,h)(\rho)\,d\rho
\right\}\quad\forall 0<r<\varepsilon.
\end{equation*}
Thus
\begin{equation}\label{h-formula}
r^{n-1}h(r)=r^{n-2}E(g,h)(r)-(n-2)\int_0^r\rho^{n-3}E(g,h)(\rho)\,d\rho \quad\forall 0<r<\varepsilon.
\end{equation}
Differentiating \eqref{h-formula} with respect to $r$, $\forall 0<r<\varepsilon$,
\begin{equation*}
(n-1)r^{n-2}h(r)+r^{n-1}h_r(r)=r^{n-2}\frac{\partial}{\partial r} E(g,h)(r)=r^{n-2}\left\{\frac{1}{\lambda}\frac{r(1+h(r)^2)^2}{rh(r)-g(r)}-(n-1)h(r)^3\right\}.
\end{equation*}
Hence
\begin{equation}\label{h-eqn}
h_r(r)+(n-1)\frac{(h(r)+h(r)^3)}{r}=\frac{1}{\lambda}\frac{(1+h(r)^2)^2}{rh(r)-g(r)}\quad\forall 0<r<\varepsilon.
\end{equation}
By \eqref{h-g-lower-bd}, \eqref{g-eqn}, \eqref{h-formula} and \eqref{h-eqn}, $g\in C^1([0,\varepsilon))\cap C^2(0,\varepsilon)$ satisfies \eqref{imcf-graph-ode-initial-value-problem2} and \eqref{f-structure-ineqn2} with $R_0=\varepsilon$ and the lemma follows.
\end{proof}
\begin{lem}\label{local-existence-extension-lem}
Let $n\ge 2$, $\lambda>0$, $r_0'\ge r_1\ge r_0>0$, $a_1>0$ and $a_0, b_0\in{\mathbb R}$, $|a_0|, |b_0|\le M$ for some constant $M>0$ be such that
\begin{equation}\label{a0-b0-positivity-relation}
r_1b_0-a_0\ge a_1.
\end{equation}
Then there exists a constant $\delta_1>0$ depending on $a_1$, $r_0$, $r_0'$ and $M$, but independent of $r_1$, such that there exists a unique solution $f\in C^2([r_1,r_1+\delta_1))$ of
\begin{equation}\label{imcf-graph-ode-bdary-value-problem}
\left\{\begin{aligned}
&f_{rr}+\frac{n-1}{r}\cdot(1+f_r^2)f_r-\frac{1}{\lambda}\cdot\frac{(1+f_r^2)^2}{rf_r-f}=0\quad\mbox{ in }(r_1,r_1+\delta_1)\\
&f(r_1)=a_0,\quad f_r(r_1)=b_0
\end{aligned}\right.
\end{equation}
which satisfies
\begin{equation}\label{f-structure-ineqn10}
rf_r(r)>f(r)\quad\forall r\in [r_1,r_1+\delta_1).
\end{equation}
\end{lem}
\begin{proof}
Uniqueness of solution of \eqref{imcf-graph-ode-bdary-value-problem} follows from standard ODE theory. Hence we only need to prove existence of solution of \eqref{imcf-graph-ode-bdary-value-problem}. We first observe that if $f$ satisfies \eqref{imcf-graph-ode-bdary-value-problem} and \eqref{f-structure-ineqn10} for some constant $\delta_1>0$, then by multiplying \eqref{imcf-graph-ode-bdary-value-problem} by $r$ and integrating over $(r_1,r)$,
we get $\forall r_1<r<r_1+\delta_1$,
\begin{equation*}
\int_{r_1}^rsf_{rr}(s)\,ds+(n-1)\int_{r_1}^r(1+f_r(s)^2)f_r(s)\,ds=\frac{1}{\lambda}\int_{r_1}^r\frac{s(1+f_r(s)^2)^2}{sf_r(s)-f(s)}\,ds.
\end{equation*}
Hence $\forall r_1<r<r_1+\delta_1$,
\begin{equation*}
rf_r(r)-r_1b_0+(n-2)\int_{r_1}^rf_r(s)\,ds=\frac{1}{\lambda}\int_{r_1}^r\frac{s(1+f_r(s)^2)^2}{sf_r(s)-f(s)}\,ds-(n-1)\int_{r_1}^rf_r(s)^3\,ds.
\end{equation*}
Thus $\forall r_1<r<r_1+\delta_1$,
\begin{equation*}
f_r(r)=\frac{1}{r}\left\{\frac{1}{\lambda}\int_{r_1}^r\frac{s(1+f_r(s)^2)^2}{sf_r(s)-f(s)}\,ds-(n-1)\int_{r_1}^rf_r(s)^3\,ds-(n-2)\int_{r_1}^rf_r(s)\,ds\right\}+\frac{r_1}{r}b_0
\end{equation*}
which suggests using a fixed point argument to prove the existence of a solution of \eqref{imcf-graph-ode-bdary-value-problem}.
Let $\varepsilon_1=\min\left(\frac{1}{3},\frac{a_1}{4(M+r_0'+1)}\right)$ and $0<\varepsilon\le\varepsilon_1$. We now define the Banach space
$$
{\mathcal X}_\varepsilon':=\left\{(g,h): g, h\in C\left( [r_1,r_1+\varepsilon]; {\mathbb R}\right)\right\}
$$
with a norm given by
$$||(g,h)||_{{\mathcal X}_\varepsilon'}=\max\left\{\|g\|_{L^\infty(r_1,r_1+\varepsilon)} ,\|h(s)\|_{L^{\infty}(r_1,r_1+\varepsilon)}\right\}.$$
For any $(g,h)\in {\mathcal X}_\varepsilon',$ we define
$$\Phi(g,h):=\left(\Phi_1(g,h),\Phi_2(g,h)\right),$$
where for $r_1<r<r_1+\varepsilon,$
\begin{equation}\label{eq-extension-existence-contraction-map}
\left\{
\begin{aligned}
&\Phi_1(g,h)(r):=a_0+\int_{r_1}^r h(s)\,ds,\\
&\Phi_2(g,h)(r):=\frac{1}{r}\left\{\frac{1}{\lambda}\int_{r_1}^r\frac{s(1+h(s)^2)^2}{sh(s)-g(s)}\,ds-(n-1)\int_{r_1}^rh(s)^3\,ds-(n-2)\int_{r_1}^rh(s)\,ds\right\}+\frac{r_1}{r}b_0.
\end{aligned}\right.
\end{equation}
For any $0<\eta\le\varepsilon_1$, let
$${\mathcal D}_{\varepsilon,\eta}':=\left\{ (g,h)\in {\mathcal X}_\varepsilon': ||(g,h)-(a_0,b_0)||_{{\mathcal X}_{\varepsilon}'}\leq \eta\right\}.
$$
Note that ${\mathcal D}_{\varepsilon,\eta}'$ is a closed subspace of ${\mathcal X}_\varepsilon'$. We will show that if $\varepsilon\in(0,\varepsilon_2)$ is sufficiently small where $\varepsilon_2=\min (\varepsilon_1,\eta/(M+1))$, the map $(g,h)\mapsto\Phi(g,h)$ will have a unique fixed point in ${\mathcal D}_{\varepsilon,\eta}'$.
We first prove that $\Phi({\mathcal D}_{\varepsilon,\eta}')\subset {\mathcal D}_{\varepsilon,\eta}'$ if $\varepsilon\in (0,\varepsilon_2)$ is sufficiently small. Let $(g,h)\in {\mathcal D}_{\varepsilon,\eta}'$. Then
\begin{equation*}
|h(s)-b_0|\le\eta\quad\mbox{ and } \quad|g(s)-a_0|\le \eta\quad\forall r_1<s<r_1+\varepsilon.
\end{equation*}
Hence
\begin{equation}\label{h-g-bd4}
|h(s)|\le |b_0|+1\quad\mbox{ and } \quad |g(s)|\le |a_0|+1\quad\forall r_1< s< r_1+\varepsilon.
\end{equation}
Thus
\begin{align}\label{h-g-lower-bd5}
sh(s)-g(s)=&r_1b_0-a_0+(s-r_1)h(s)+r_1(h(s)-b_0)+(a_0-g(s))\notag\\
\ge&a_1-(1+|b_0|)\varepsilon-r_1\eta-\eta\notag\\
\ge&a_1-\frac{a_1}{4}-\frac{a_1}{4}\notag\\
\ge&\frac{a_1}{2}>0\quad\forall r_1\le s\le r_1+\varepsilon
\end{align}
and
\begin{equation}\label{phi1-contraction-map-5}
|\Phi_1(g,h)(r)-a_0|\le\int_{r_1}^r |h(s)|\,ds\le (1+|b_0|)\varepsilon\le\eta\quad\forall r_1\le r\le r_1+\varepsilon.
\end{equation}
Now by \eqref{h-g-bd4} and \eqref{h-g-lower-bd5},
\begin{align}\label{phi2-contraction-map-5}
&\left|\Phi_2(g,h)(r)-b_0\right|\notag\\
\le&\left|\frac{1}{\lambda r_0}\int_{r_1}^r\frac{s(1+h(s)^2)^2}{sh(s)-g(s)}\,ds\right|+\frac{(n-1)}{r_0}\left|\int_{r_1}^rh(s)^3\,ds\right|+\frac{(n-2)}{r_0}\left|\int_{r_1}^rh(s)\,ds\right|+\frac{|r_1-r|}{|r|}|b_0|\notag\\
\le&\frac{\left(1+(1+|b_0|)^2)\right)^2}{a_1r_0\lambda}\left|r^2-r_1^2\right|+\frac{(n-1)(1+|b_0|)^3}{r_0}|r-r_1|+\frac{(n-2)(1+|b_0|)}{r_0}|r-r_1|+\frac{|r_1-r|}{r_0}|b_0|\notag\\
\le&a_2\varepsilon\quad\forall r_1\le r\le r_1+\varepsilon
\end{align}
where
\begin{equation*}
a_2:=\frac{\left(1+(M+1)^2)\right)^2}{a_1r_0\lambda}(2r_0'+1)+\frac{(n-1)(M+1)^3}{r_0}+\frac{(n-2)(M+1)}{r_0}+\frac{M}{r_0}.
\end{equation*}
Let $\varepsilon_3=\min (\varepsilon_2,\eta/a_2)$ and $0<\varepsilon\le\varepsilon_3$. Then by \eqref{phi2-contraction-map-5},
\begin{equation}\label{phi2-contraction-map-6}
\left|\Phi_2(g,h)(r)-b_0\right|\le\eta\quad \forall r_1\le r\le r_1+\varepsilon.
\end{equation}
By \eqref{phi1-contraction-map-5} and \eqref{phi2-contraction-map-6}, $\Phi({\mathcal D}_{\varepsilon,\eta}')\subset {\mathcal D}_{\varepsilon,\eta}'$ for all $0<\varepsilon\le\varepsilon_3$.
We now let $0<\varepsilon\le\varepsilon_3$. Let $(g_1,h_1),(g_2,h_2)\in {\mathcal D}_{\varepsilon,\eta}'$ and $\delta:=||(g_1,h_1)-(g_2,h_2)||_{{\mathcal X}_\varepsilon'}$. Then
\begin{equation}\label{h-g-diff50}
\left\{\begin{aligned}
&|h_1(s)-h_2(s)|\le\delta\quad\forall r_1<s<r_1+\varepsilon\\
&|g_1(s)-g_2(s)|\le\delta\quad\forall r_1<s<r_1+\varepsilon
\end{aligned}\right.
\end{equation}
and
\begin{equation*}
|h_i(s)-b_0|\le\eta\quad\mbox{ and } \quad|g_i(s)-a_0|\le \eta\quad\forall r_1<s<r_1+\varepsilon, i=1,2.
\end{equation*}
Hence
\begin{equation}
|h_i(s)|\le |b_0|+1\quad\mbox{ and } \quad |g_i(s)|\le |a_0|+1\quad\forall r_1< s< r_1+\varepsilon, i=1,2.\label{h-g-bd4'}
\end{equation}
Thus
\begin{equation}\label{phi1-contraction-map-10}
|\Phi_1(g_1,h_1)(r)-\Phi_1(g_2,h_2)(r)|\le\int_{r_1}^r |h_1(s)-h_2(s)|\,ds\le\varepsilon\delta
\le\frac{\delta}{3}\quad \forall r_1\le r\le r_1+\varepsilon
\end{equation}
and
\begin{align}\label{phi2-contraction-map-10}
&|\Phi_2(g_1,h_1)(r)-\Phi_2(g_2,h_2)(r)|\notag\\
\le&\frac{1}{r_0\lambda}\int_{r_1}^r\left|\frac{(1+h_1(s)^2)^2}{sh_1(s)-g_1(s)}-\frac{(1+h_2(s)^2)^2}{sh_2(s)-g_2(s)}\right|s\,ds+\frac{(n-1)}{r_0}\int_{r_1}^r\left|h_1(s)^3-h_2(s)^3\right|\,ds\notag\\
&\qquad +\frac{(n-2)}{r_0}\int_{r_1}^r|h_1(s)-h_2(s)|\,ds\quad \forall r_1\le r\le r_1+\varepsilon.
\end{align}
Now by \eqref{h-g-lower-bd5},
\begin{equation}\label{hg12-diff-lower-bd20}
sh_i(s)-g_i(s)\ge\frac{a_1}{2}>0\quad\forall r_1\le s\le r_1+\varepsilon,i=1,2.
\end{equation}
Hence by \eqref{hg12-diff-lower-bd20},
\begin{align}\label{h-g-ratio-difference}
&\left|\frac{(1+h_1(s)^2)^2}{sh_1(s)-g_1(s)}-\frac{(1+h_2(s)^2)^2}{sh_2(s)-g_2(s)}\right|\notag\\
\le &4\frac{\left|(1+h_1(s)^2)^2(sh_2(s)-g_2(s))-(1+h_2(s)^2)^2(sh_1(s)-g_1(s))\right|}{a_1^2}\quad\forall r_1\le s\le r_1+\varepsilon.
\end{align}
By \eqref{h-g-diff50} and \eqref{h-g-bd4'},
\begin{align}\label{polynomial-difference10}
&\left|(1+h_1(s)^2)^2(sh_2(s)-g_2(s))-(1+h_2(s)^2)^2(sh_1(s)-g_1(s))\right|\notag\\
\le&\left|(1+h_1(s)^2)^2-(1+h_2(s)^2)^2\right||sh_2(s)-g_2(s)|+(1+h_2(s)^2)^2|sh_2(s)-g_2(s)-sh_1(s)+g_1(s)|\notag\\
\le&|h_1(s)-h_2(s)||h_1(s)+h_2(s)|\left|2+h_1(s)^2+h_2(s)^2\right|(|sh_2(s)|+|g_2(s)|)\notag\\
&\qquad+(1+h_2(s)^2)^2(s|h_2(s)-h_1(s)|+|g_2(s)-g_1(s)|)\notag\\
\le&a_3\delta\quad\forall r_1\le s\le r_1+\varepsilon
\end{align}
where
\begin{equation*}
a_3=8(M+1)^2(1+(M+1)^2)+2(1+(M+1)^2)^2.
\end{equation*}
Now let
\begin{equation*}
\varepsilon_4:=\min\left(\varepsilon_3,\frac{a_1^2r_0\lambda}{18a_3(2r_0'+1)},\frac{r_0}{27(n-1)(1+M)^2}\right)
\end{equation*}
and let $0<\varepsilon\le\varepsilon_4$.
Then by \eqref{h-g-diff50}, \eqref{h-g-bd4'}, \eqref{h-g-ratio-difference} and \eqref{polynomial-difference10}, $\forall r_1\le r\le r_1+\varepsilon$,
\begin{equation}\label{h-g-polyn-diff-integral-upper-bd}
\frac{1}{r_0\lambda}\int_{r_1}^r\left|\frac{(1+h_1(s)^2)^2}{sh_1(s)-g_1(s)}-\frac{(1+h_2(s)^2)^2}{sh_2(s)-g_2(s)}\right|s\,ds\le\frac{2a_3\delta}{a_1^2r_0\lambda}|r^2-r_1^2|\le\frac{2a_3(2r_0'+1)\varepsilon}{a_1^2r_0\lambda}\delta\le\frac{\delta}{9},
\end{equation}
\begin{align}\label{h-cubic-diff-integral-upper-bd}
\frac{(n-1)}{r_0}\int_{r_1}^r\left|h_1(s)^3-h_2(s)^3\right|\,ds=&\frac{(n-1)}{r_0}\int_{r_1}^r|h_1(s)-h_2(s)|\left|h_1(s)^2+h_1(s)h_2(s)+h_2(s)^2\right|\,ds\notag\\
\le&\frac{3(n-1)(1+M)^2\varepsilon}{r_0}\delta\le\frac{\delta}{9}\quad\forall r_1\le r\le r_1+\varepsilon,
\end{align}
and
\begin{equation}\label{h-diff-integral-upper-bd}
\frac{(n-2)}{r_0}\int_{r_1}^r|h_1(s)-h_2(s)|\,ds\le\frac{(n-2)\varepsilon}{r_0}\delta\le\frac{\delta}{9}\quad\forall r_1\le r\le r_1+\varepsilon.
\end{equation}
By \eqref{phi2-contraction-map-10}, \eqref{h-g-polyn-diff-integral-upper-bd}, \eqref{h-cubic-diff-integral-upper-bd} and \eqref{h-diff-integral-upper-bd},
\begin{equation}\label{phi2-contraction-map-11}
|\Phi_2(g_1,h_1)(r)-\Phi_2(g_2,h_2)(r)|\le\frac{\delta}{3}\quad \forall r_1\le r\le r_1+\varepsilon.
\end{equation}
By \eqref{phi1-contraction-map-10} and \eqref{phi2-contraction-map-11},
\begin{equation*}
\|\Phi(g_1,h_1)-\Phi(g_2,h_2)\|_{{\mathcal X}_\varepsilon'}\le\frac{1}{3}\|(g_1,h_1)-(g_2,h_2)\|_{{\mathcal X}_\varepsilon'}\quad\forall (g_1,h_1),(g_2,h_2)\in {\mathcal D}_{\varepsilon,\eta}'.
\end{equation*}
Hence $\Phi$ is a contraction map on ${\mathcal D}_{\varepsilon,\eta}'$. Then by the Banach fixed point theorem the map $\Phi$ has a unique fixed point. Let $(g,h)\in {\mathcal D}_{\varepsilon,\eta}'$ be the unique fixed point of the map $\Phi$. Then
\begin{equation*}
\Phi(g,h)=(g,h).
\end{equation*}
Hence
\begin{equation*}
g(r)=a_0+\int_{r_1}^r h(s)\,ds\quad\forall r_1\le r\le r_1+\varepsilon
\end{equation*}
which implies
\begin{equation}\label{g-eqn10}
g_r(r)=h(r)\quad\forall r_1\le r\le r_1+\varepsilon\quad
\mbox{ and }\quad g(r_1)=a_0
\end{equation}
and $\forall r_1\le r\le r_1+\varepsilon$,
\begin{equation*}
h(r)=\frac{1}{r}\left\{\frac{1}{\lambda}\int_{r_1}^r\frac{s(1+h(s)^2)^2}{sh(s)-g(s)}\,ds-(n-1)\int_{r_1}^rh(s)^3\,ds-(n-2)\int_{r_1}^rh(s)\,ds\right\}+\frac{r_1}{r}b_0.
\end{equation*}
Thus
\begin{equation}\label{h-formula10}
\left\{\begin{aligned}
&rh(r)=\frac{1}{\lambda}\int_{r_1}^r\frac{s(1+h(s)^2)^2}{sh(s)-g(s)}\,ds-(n-1)\int_{r_1}^rh(s)^3\,ds-(n-2)\int_{r_1}^rh(s)\,ds+r_1b_0\\
&h(r_1)=b_0\end{aligned}\right.
\end{equation}
Differentiating \eqref{h-formula10} with respect to $r$,
\begin{equation}\label{h-eqn10}
rh_r(r)+h(r)=\frac{r(1+h(r)^2)^2}{\lambda(rh(r)-g(r))}-(n-1)h(r)^3-(n-2)h(r)\quad\forall r_1\le r\le r_1+\varepsilon.
\end{equation}
By \eqref{h-g-lower-bd5}, \eqref{g-eqn10}, \eqref{h-formula10} and \eqref{h-eqn10}, $g\in C^2([r_1,r_1+\varepsilon))$ satisfies \eqref{imcf-graph-ode-bdary-value-problem} and \eqref{f-structure-ineqn10} with $\delta_1=\varepsilon$ and the lemma follows.
\end{proof}
\begin{lem}\label{f-monotone-lemma}
Let $n\ge 2$, $\lambda>0$, $\mu<0$ and $R_0>0$. Suppose $f\in C^1([0,R_0))\cap C^2(0,R_0)$ is the solution of \eqref{imcf-graph-ode-initial-value-problem2} which satisfies \eqref{f-structure-ineqn2}. Then
\begin{equation}\label{f-rr}
\mbox{$\lim_{r\to 0}$} f_{rr}(r)=\frac{1}{n\lambda|\mu|}
\end{equation}
and
\begin{equation}\label{f-derivative-+ve}
f_r(r)=\frac{1}{\lambda h(r)}\int_0^r\frac{h(s)(1+f_r(s)^2)^2}{sf_r(s)-f(s)}\,ds>0\quad\forall 0<r<R_0
\end{equation}
where
\begin{equation}\label{h-defn}
h(r)=r^{n-1}\mbox{exp}\left((n-1)\int_0^rs^{-1}f_r(s)^2\,ds\right)
\end{equation}
and there exists a constant $\delta_2>0$ such that
\begin{equation}\label{f-structure-ineqn5}
rf_r(r)-f(r)\ge\delta_2\quad\mbox{ in }[0,R_0).
\end{equation}
\end{lem}
\begin{proof}
Let $H(r)$ and $E(r)$ be given by \eqref{big-h-defn} and \eqref{e-defn}. In order to prove \eqref{f-rr} we first observe that by the proof of Lemma \ref{local-existence-lem} and \eqref{phi2-bd}, \eqref{f'-eqn} holds and there exist constants $0<R_1<R_0$ and $C_1>0$ such that
\begin{equation}\label{f'-bd1}
\frac{|f_r(r)|}{r}\le C_1\quad\forall 0<r<R_1.
\end{equation}
By \eqref{f'-bd1} the function $h$ given by \eqref{h-defn} is well-defined.
Multiplying \eqref{imcf-graph-ode-initial-value-problem2} by $h$ and integrating over $(0,r)$, \eqref{f-derivative-+ve} follows.
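In more detail, by \eqref{h-defn} we have $h_r(r)=\frac{(n-1)(1+f_r(r)^2)}{r}\,h(r)$, so that \eqref{imcf-graph-ode-initial-value-problem2} can be rewritten as
\begin{equation*}
\left(h(r)f_r(r)\right)_r=\frac{h(r)(1+f_r(r)^2)^2}{\lambda(rf_r(r)-f(r))}\quad\forall 0<r<R_0,
\end{equation*}
and $h(r)f_r(r)\to 0$ as $r\to 0$ by \eqref{f'-bd1}.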
Let $\{r_k\}_{k=1}^{\infty}\subset (0,R_1)$ be a sequence such that $r_k\to 0$ as $k\to\infty$. By \eqref{f-derivative-+ve} and \eqref{f'-bd1} the sequence $\{r_k\}_{k=1}^{\infty}$ has a subsequence, which we may assume without loss of generality to be the sequence itself, such that $f_r(r_k)/r_k$ converges to some point $a_0\in [0,C_1]$ as $k\to\infty$. Then by \eqref{imcf-graph-ode-initial-value-problem2}, \eqref{f'-eqn} and the l'Hospital rule,
\begin{align*}
a_0=&\mbox{$\lim_{k\to\infty}$}\frac{f_r(r_k)}{r_k}
=\mbox{$\lim_{k\to\infty}$}\frac{E(r_k)-(n-2)H(r_k)}{r_k^2}
=\mbox{$\lim_{k\to\infty}$}\frac{E_r(r_k)-(n-2)H_r(r_k)}{2r_k}\notag\\
=&\frac{1}{2}\mbox{$\lim_{k\to\infty}$}\frac{\frac{1}{\lambda}\frac{r_k(1+f_r(r_k)^2)^2}{r_kf_r(r_k)-f(r_k)}-(n-1)f_r(r_k)^3-(n-2)f_r(r_k)}{r_k}\notag\\
=&\frac{1}{2}\left(\frac{1}{\lambda|\mu|}-(n-2)a_0\right)
\end{align*}
which implies that
\begin{equation*}
a_0=\frac{1}{n\lambda|\mu|}.
\end{equation*}
Since the sequence $\{r_k\}_{k=1}^{\infty}$ is arbitrary,
\begin{equation}\label{f'/r-limit}
\mbox{$\lim_{r\to 0}$}\frac{f_r(r)}{r}=\frac{1}{n\lambda|\mu|}.
\end{equation}
Letting $r\to 0$ in \eqref{imcf-graph-ode-initial-value-problem2}, by \eqref{f'/r-limit} we get
\begin{equation*}
\mbox{$\lim_{r\to 0}f_{rr}(r)$}+\frac{n-1}{n\lambda|\mu|}-\frac{1}{\lambda|\mu|}=0
\end{equation*}
and \eqref{f-rr} follows.
What is left to show is \eqref{f-structure-ineqn5}. Let $w(r)=rf_r(r)-f(r)$. By \eqref{imcf-graph-ode-initial-value-problem2} and a direct computation $w$ satisfies
\begin{equation}\label{w-derivative-eqn}
w_r(r)=r(1+f_r(r)^2)\left(\frac{1+f_r(r)^2}{\lambda w(r)}-\frac{(n-1)}{r^2}(w(r)+f(r))\right)\quad\forall 0<r<R_0.
\end{equation}
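Indeed, $w_r(r)=rf_{rr}(r)$, and \eqref{w-derivative-eqn} follows by substituting \eqref{imcf-graph-ode-initial-value-problem2} and using $rf_r(r)=w(r)+f(r)$.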
By \eqref{f-derivative-+ve},
\begin{equation}\label{a2-defn}
\mbox{$a_2:=\lim_{r\to R_0}f(r)\in (\mu,\infty]$}
\end{equation}
exists. We now divide the proof into 2 cases.
\noindent\textbf{Case 1}: $a_2\in (0,\infty]$
\noindent By \eqref{f-derivative-+ve} there exists $r_1\in (R_0/2,R_0)$ such that
\begin{equation}\label{f-lower-upper-bd}
f(r)>\min\left(\frac{a_2}{2},R_0\sqrt{(n-1)\lambda}\right)\quad\forall r_1<r<R_0.
\end{equation}
Let
\begin{equation}\label{a4-defn}
a_3=\min_{0\le r\le r_1}w(r)
\end{equation}
and
\begin{equation}\label{a5-defn-2}
a_4=\min\left(\frac{a_2}{8(n-1)\lambda},\frac{a_3}{2},\frac{R_0}{4\sqrt{(n-1)\lambda}}\right).
\end{equation}
Then $a_3>0$ and $a_4>0$.
Suppose there exists $r_2\in (r_1,R_0)$ such that $w(r_2)<a_4$. Let $(a,b)\in (0, R_0)$ be the maximal interval containing $r_2$ such that
\begin{equation}\label{w-upper-bd}
w(r)<a_4\quad\forall a<r<b
\end{equation}
holds. Since $w(r_1)\ge a_3>a_4$, $a>r_1$ and $w(a)=a_4$. By \eqref{f-lower-upper-bd}, \eqref{a5-defn-2} and \eqref{w-upper-bd}, we get
\begin{equation}\label{w-upper-bd-f-w-ineqn}
w(r)<\frac{R_0}{4\sqrt{(n-1)\lambda}}\quad\mbox{ and }\quad f(r)>4(n-1)\lambda w(r)\quad\forall a<r<b.
\end{equation}
Hence by \eqref{f-structure-ineqn2}, \eqref{f-lower-upper-bd} and \eqref{w-upper-bd-f-w-ineqn}, for any $a<r<b$ the right hand side of \eqref{w-derivative-eqn} is bounded below by
\begin{align*}
\ge&r(1+f_r(r)^2)\left(\frac{1+(f(r)/r)^2}{\lambda w(r)}-\frac{(n-1)}{r^2}(w(r)+f(r))\right)\notag\\
\ge&r(1+f_r(r)^2)\left(\frac{1+(f(r)/R_0)^2}{\lambda w(r)}-\frac{4(n-1)}{R_0^2}(w(r)+f(r))\right)\notag\\
\ge&r(1+f_r(r)^2)\left(\frac{1}{4\lambda w(r)}\left(1-\frac{16(n-1)\lambda}{R_0^2}w(r)^2\right)+
\frac{3}{4\lambda w(r)}+\frac{f(r)}{\lambda R_0^2w(r)}\left(f(r)-4(n-1)\lambda w(r)\right)\right)\\
\ge&\frac{3r_1}{4\lambda w(r)}.
\end{align*}
Hence
\begin{align}\label{w-lower-bd5}
&w_r(r)\ge\frac{3r_1}{4\lambda w(r)}\quad\forall a<r<b\notag\\
\Rightarrow\quad&w(r)>w(a)=a_4\quad\forall a<r<b
\end{align}
which contradicts \eqref{w-upper-bd}. Thus no such $r_2$ exists and $w(r)\ge a_4$ for all $r_1\le r<R_0$ and \eqref{f-structure-ineqn5} holds with $\delta_2=a_4$.
\noindent\textbf{Case 2}: $a_2\le 0$
Choose $r_1\in (R_0/2,R_0)$. Let $a_3$ be given by \eqref{a4-defn} and
\begin{equation}\label{a5-defn-3}
a_4=\min\left(\frac{R_0}{4\sqrt{(n-1)\lambda}},\frac{a_3}{2}\right).
\end{equation}
Then $a_3>0$ and $a_4>0$.
Suppose there exists $r_2\in (r_1,R_0)$ such that $w(r_2)<a_4$. Let $(a,b)\in (0, R_0)$ be the maximal interval containing $r_2$ such that \eqref{w-upper-bd} holds. Then $a>r_1$ and $w(a)=a_4$. By \eqref{f-derivative-+ve}, $f(r)<0$ for all $0<r<R_0$. Hence by \eqref{w-upper-bd}, for any $a<r<b$ the right hand side of \eqref{w-derivative-eqn} is bounded below by
\begin{align*}\label{w_r-lower-bd4}
\ge&r(1+f_r(r)^2)\left(\frac{1}{\lambda w(r)}-\frac{4(n-1)w(r)}{R_0^2}\right)\notag\\
\ge&r(1+f_r(r)^2)\left(\frac{1}{4\lambda w(r)}\left(1-\frac{16(n-1)\lambda}{R_0^2}w(r)^2\right)+
\frac{3}{4\lambda w(r)}\right)\notag\\
\ge&\frac{3r_1}{4\lambda w(r)}.
\end{align*}
Thus \eqref{w-lower-bd5} holds which contradicts \eqref{w-upper-bd}. Hence no such $r_2$ exists and $w(r)\ge a_4$ for all $r_1\le r<R_0$ and \eqref{f-structure-ineqn5} holds with $\delta_2=a_4$ and the lemma follows.
\end{proof}
\begin{lem}\label{f''-positive-lemma}
Let $n\ge 2$, $\lambda>0$, $\mu<0$ and $R_0>0$. Suppose $f\in C^1([0,R_0))\cap C^2(0,R_0)$ is the solution of \eqref{imcf-graph-ode-initial-value-problem2} which satisfies \eqref{f-structure-ineqn2}. Then
\begin{equation}\label{f''-positive}
f_{rr}(r)>0\quad\forall 0<r<R_0.
\end{equation}
\end{lem}
\begin{proof}
By \eqref{f-rr} there exists a constant $0<R_1<R_0$ such that
\begin{equation}\label{f''-local-positive}
f_{rr}(r)>0\quad\forall 0<r<R_1.
\end{equation}
Let $R_2=\max\{R\in (0,R_0):f_{rr}(r)>0\quad\forall 0<r<R\}$. Then $R_1\le R_2\le R_0$. Suppose $R_2<R_0$. Then
\begin{equation}\label{f''-sign-eqn}
f_{rr}(R_2)=0, \qquad f_{rr}(r)>0\quad\forall 0<r<R_2\quad\mbox{ and }\quad f_{rrr}(R_2)\le 0.
\end{equation}
On the other hand by differentiating \eqref{imcf-graph-ode-initial-value-problem2} with respect to $r$ and putting $r=R_2$ we have
\begin{align*}
f_{rrr}(R_2)=&\frac{n-1}{R_2^2}(f_r(R_2)+f_r(R_2)^3)-\frac{n-1}{R_2}(f_{rr}(R_2)+3f_r(R_2)^2f_{rr}(R_2))\notag\\
&\qquad+\frac{1}{\lambda}\left\{\frac{4(1+f_r(R_2)^2)f_r(R_2)f_{rr}(R_2)}{R_2f_r(R_2)-f(R_2)}-\frac{R_2(1+f_r(R_2)^2)^2f_{rr}(R_2))}{(R_2f_r(R_2)-f(R_2))^2}\right\}\notag\\
=&\frac{n-1}{R_2^2}(f_r(R_2)+f_r(R_2)^3)\notag\\
>&0
\end{align*}
which contradicts \eqref{f''-sign-eqn}. Hence $R_2=R_0$ and the lemma follows.
\end{proof}
\begin{lem}\label{f-derivative-sequence-bd-lemma}
Let $n\ge 2$, $\lambda>\frac{1}{n-1}$, $\mu<0$ and $R_0>0$. Suppose $f\in C^1([0,R_0))\cap C^2(0,R_0)$ is the solution of \eqref{imcf-graph-ode-initial-value-problem2} which satisfies \eqref{f-structure-ineqn2}. Then there exists a constant $M_1>0$ such that
\begin{equation}\label{f-derivative-locally-finite10}
0\le f_r(r)\le M_1\quad\forall 0\le r<R_0.
\end{equation}
\end{lem}
\begin{proof}
Let $a_2$ be given by \eqref{a2-defn}. By Lemma \ref{f''-positive-lemma},
$a_3:=\lim_{r\to R_0}f_r(r)\in (0,\infty]$
exists. Suppose $a_3=\infty$. We then claim that $a_2=\infty$. Suppose not. Then $a_2<\infty$ and $\mu<f(r)\le a_2$ for all $0<r<R_0$. By \eqref{imcf-graph-ode-initial-value-problem2},
\begin{align}\label{f''-f'3-ration-limit100}
\mbox{$\lim_{r\to R_0}$}\frac{f_{rr}(r)}{(1+f_r(r)^2)f_r(r)}=&\mbox{$\lim_{r\to R_0}$}\left(\frac{1}{\lambda}\cdot\frac{1+f_r(r)^2}{(rf_r(r)-f(r))f_r(r)}-\frac{n-1}{r}\right)\notag\\
=&\frac{1}{\lambda}\mbox{$\lim_{r\to R_0}$}\frac{f_r(r)^{-2}+1}{(r-(f(r)/f_r(r)))}-\frac{n-1}{R_0}\notag\\
=&\frac{1}{R_0}\left(\frac{1}{\lambda}-(n-1)\right)<0.
\end{align}
By \eqref{f''-f'3-ration-limit100} there exists $R_1\in (0,R_0)$ such that
\begin{equation*}
\frac{f_{rr}(r)}{(1+f_r(r)^2)f_r(r)}<0\quad\forall R_1\le r<R_0\quad\Rightarrow\quad
f_{rr}(r)<0\quad\forall R_1\le r<R_0
\end{equation*}
which contradicts \eqref{f''-positive}. Hence $a_2=\infty$ and we can choose a constant $0<R_2<R_0$ such that $f(r)>0$ for any $R_2\le r<R_0$.
We claim that there exists a constant $M_2>0$ such that
\begin{equation}\label{f'-f-ratio-bd}
f_r(r)\le M_2f(r)\quad\forall R_2\le r<R_0.
\end{equation}
Suppose \eqref{f'-f-ratio-bd} does not hold for any $M_2>0$. Then there exists a sequence $\{r_k\}_{k=1}^{\infty}\subset (R_2, R_0)$, $r_k\to R_0$ as $k\to\infty$, such that
\begin{equation}\label{f'-f-ratio-limit-infty}
\mbox{$\lim_{k\to\infty}$}\frac{f_r(r_k)}{f(r_k)}=\infty.
\end{equation}
By \eqref{imcf-graph-ode-initial-value-problem2} and \eqref{f'-f-ratio-limit-infty},
\begin{align}\label{f''-f'3-ration-limit}
\mbox{$\lim_{k\to\infty}$}\frac{f_{rr}(r_k)}{(1+f_r(r_k)^2)f_r(r_k)}=&\mbox{$\lim_{k\to\infty}$}\left(\frac{1}{\lambda}\cdot\frac{1+f_r(r_k)^2}{(r_kf_r(r_k)-f(r_k))f_r(r_k)}-\frac{n-1}{r_k}\right)\notag\\
=&\frac{1}{\lambda}\mbox{$\lim_{k\to\infty}$}\frac{f_r(r_k)^{-2}+1}{(r_k-(f(r_k)/f_r(r_k)))}-\frac{n-1}{R_0}\notag\\
=&\frac{1}{R_0}\left(\frac{1}{\lambda}-(n-1)\right)<0.
\end{align}
By \eqref{f''-f'3-ration-limit} there exists $k_0\in{\mathbb Z}^+$ such that
\begin{equation*}
\frac{f_{rr}(r_k)}{(1+f_r(r_k)^2)f_r(r_k)}<0\quad\forall k\ge k_0\quad\Rightarrow\quad
f_{rr}(r_k)<0\quad\forall k\ge k_0
\end{equation*}
which contradicts \eqref{f''-positive}. Hence there exists a constant $M_2>0$ such that \eqref{f'-f-ratio-bd} holds. Integrating \eqref{f'-f-ratio-bd} over $(R_2,R_0)$,
\begin{equation}\label{f-upper-bd11}
f(r)\le e^{M_2R_0}f(R_2)\quad\forall R_2\le r<R_0.
\end{equation}
By \eqref{f'-f-ratio-bd} and \eqref{f-upper-bd11},
\begin{equation*}
f_r(r)\le M_2e^{M_2R_0}f(R_2)\quad\forall R_2\le r<R_0
\end{equation*}
which contradicts the assumption that $a_3=\infty$. Hence $a_3<\infty$ and \eqref{f-derivative-locally-finite10} holds with $M_1=a_3$ and the lemma follows.
\end{proof}
We are now ready for the proof of Theorem \ref{existence_soln-thm}.
\noindent{\bf Proof of Theorem \ref{existence_soln-thm}}:
Since uniqueness of the solution of \eqref{imcf-graph-ode-initial-value-problem} follows by standard ODE theory, we only need to prove existence of a solution of \eqref{imcf-graph-ode-initial-value-problem}.
By Lemma \ref{local-existence-lem} there exists a constant $R_1>0$ such that the equation
\eqref{imcf-graph-ode-initial-value-problem2} has a unique solution $f\in C^1([0,R_1))\cap C^2(0,R_1)$ which satisfies \eqref{f-structure-ineqn2} in $(0,R_1)$. Let $(0,R_0)$, $R_0\ge R_1$, be the maximal interval of existence of solution $f\in C^1([0,R_0))\cap C^2(0,R_0)$ of \eqref{imcf-graph-ode-initial-value-problem2} which satisfies \eqref{f-structure-ineqn2}.
Suppose $R_0<\infty$.
By Lemma \ref{f-derivative-sequence-bd-lemma} there exists a constant $M_1>0$ such that \eqref{f-derivative-locally-finite10} holds.
By Lemma \ref{f-monotone-lemma} there exists a constant $\delta_2>0$ such that
\eqref{f-structure-ineqn5} holds. By \eqref{f-structure-ineqn2}, \eqref{f-derivative-+ve} and \eqref{f-derivative-locally-finite10},
\begin{equation}\label{f-rk-bd}
\mu<f(r)\le R_0M_1\quad\forall 0<r<R_0.
\end{equation}
By \eqref{f-derivative-+ve}, \eqref{f-structure-ineqn5}, \eqref{f-derivative-locally-finite10}, \eqref{f-rk-bd} and Lemma \ref{local-existence-extension-lem}, there exists a constant $\delta_1>0$ such that for any $r_1\in (R_0/2,R_0)$, there exists a unique solution $f_1\in C^2([r_1,r_1+\delta_1))$ of
\eqref{imcf-graph-ode-bdary-value-problem} which satisfies
\eqref{f-structure-ineqn10} in $(r_1,r_1+\delta_1)$ with $a_0=f(r_1)$ and $b_0=f_r(r_1)$. We now choose $r_1\in (R_0/2,R_0)$ such that $R_0-r_1<\delta_1/2$. We extend $f$ to a function on $[0,r_1+\delta_1)$ by setting $f(r)=f_1(r)$ for all $r\in (r_1,r_1+\delta_1)$. Then $f$ is a solution of \eqref{imcf-graph-ode-initial-value-problem} in $[0,r_1+\delta_1)$ which satisfies \eqref{f-structure-ineqn2} in $[0,r_1+\delta_1)$. Since $r_1+\delta_1>R_0$, this contradicts the choice of $R_0$. Hence $R_0=\infty$. By Lemma \ref{f-monotone-lemma}, \eqref{fr-+ve} holds and the theorem follows.
{\hfill$\square$\vspace{6pt}}
\section{Asymptotic behaviour of solution}
\setcounter{equation}{0}
\setcounter{thm}{0}
In this section we will prove Theorem \ref{asymptotic-behaviour-time-infty-thm}.
We first observe that by Lemma \ref{f''-positive-lemma} we have the following result.
\begin{cor}\label{f-to-infty-cor}
Let $n\ge 2$, $\lambda>\frac{1}{n-1}$, $\mu<0$ and $f$ be the unique solution of \eqref{imcf-graph-ode-initial-value-problem} which satisfies \eqref{f-structure-ineqn}.
Then
\begin{equation}\label{f''-positive-10}
f_{rr}(r)>0\quad\forall r>0
\end{equation}
and
\begin{equation}\label{f-tends-to-infty}
\mbox{$\lim_{r\to\infty}$}f(r)=\infty.
\end{equation}
\end{cor}
Note that by \eqref{f-tends-to-infty} there exists a constant $R_1>0$ such that
\begin{equation*}
f(r)>0\quad\forall r\ge R_1.
\end{equation*}
\begin{lem}\label{f'-to-infty-lem}
Let $n\ge 2$, $\lambda>\frac{1}{n-1}$, $\mu<0$ and $f$ be the unique solution of \eqref{imcf-graph-ode-initial-value-problem} which satisfies \eqref{f-structure-ineqn}.
Then
\begin{equation}\label{f'-go-to-infty-at-x=infty}
\mbox{$\lim_{r\to\infty}$}f_r(r)=\infty.
\end{equation}
\end{lem}
\begin{proof}
By \eqref{f-structure-ineqn},
\begin{equation}\label{r-f'/f-lower-bd=1}
\frac{rf_r(r)}{f(r)}>1\quad\forall r\ge R_1.
\end{equation}
By \eqref{f''-positive-10},
$a_3:=\lim_{r\to\infty}f_r(r)\in (0,\infty]$
exists. Suppose $a_3<\infty$. Then by \eqref{f-tends-to-infty} and the l'Hospital rule,
\begin{equation}\label{r-f'/f-ratio-go-to1}
\mbox{$\lim_{r\to\infty}$}\frac{rf_r(r)}{f(r)}=\frac{\lim_{r\to\infty}f_r(r)}{\lim_{r\to\infty}\frac{f(r)}{r}}=\frac{\lim_{r\to\infty}f_r(r)}{\lim_{r\to\infty}f_r(r)}=\frac{a_3}{a_3}=1.
\end{equation}
Then by \eqref{imcf-graph-ode-initial-value-problem2}, \eqref{r-f'/f-lower-bd=1} and \eqref{r-f'/f-ratio-go-to1},
\begin{align*}
\mbox{$\lim_{r\to\infty}$}\frac{rf_{rr}}{(1+f_r^2)f_r}
=&\frac{1}{\lambda}\mbox{$\lim_{r\to\infty}$}\frac{r(1+f_r(r)^2)}{(rf_r(r)-f(r))f_r(r)}-(n-1)\notag\\
=&\frac{1}{\lambda}\mbox{$\lim_{r\to\infty}$}\frac{\frac{rf_r(r)}{f(r)}\cdot(1+f_r(r)^{-2})}{\frac{rf_r(r)}{f(r)}-1}-(n-1)\notag\\
=&\infty.
\end{align*}
Hence there exists $R_2>R_1$ such that
\begin{equation*}
\frac{rf_{rr}(r)}{(1+f_r(r)^2)f_r(r)}>1\quad\forall r\ge R_2.
\end{equation*}
Thus
\begin{equation*}
\frac{f_{rr}}{f_r}>\frac{1}{r}\quad\forall r\ge R_2.
\end{equation*}
Therefore
\begin{equation*}
f_r(r)\ge\frac{f_r(R_2)}{R_2}r\quad\forall r\ge R_2.
\end{equation*}
Hence
\begin{equation*}
a_3=\mbox{$\lim_{r\to\infty}$}f_r(r)=\infty
\end{equation*}
and contradiction arises. Hence $a_3<\infty$ does not hold. Thus $a_3=\infty$ and the lemma follows.
\end{proof}
\noindent{\bf Proof of Theorem \ref{asymptotic-behaviour-time-infty-thm}}:
Let
\begin{equation*}
q(r)=\frac{rf_r(r)}{f(r)}\quad\forall r\ge R_1.
\end{equation*}
By \eqref{imcf-graph-ode-initial-value-problem2} and a direct computation $q$ satisfies
\begin{equation}\label{q-eqn}
q_r(r)=\frac{q(r)}{r}\left\{(1+f_r(r)^2)\left(\frac{q(r)(1+f_r(r)^{-2})}{\lambda(q(r)-1)}-(n-1)\right)+1-q(r)\right\}\quad\forall r>R_1.
\end{equation}
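Indeed, $q_r=\frac{f_r+rf_{rr}}{f}-\frac{rf_r^2}{f^2}=\frac{q}{r}\left(1+\frac{rf_{rr}}{f_r}-q\right)$, and substituting \eqref{imcf-graph-ode-initial-value-problem2} together with $rf_r-f=f(q-1)$ and $r=qf/f_r$ gives
\begin{equation*}
\frac{rf_{rr}}{f_r}=\frac{r(1+f_r^2)^2}{\lambda(rf_r-f)f_r}-(n-1)(1+f_r^2)=(1+f_r^2)\left(\frac{q(1+f_r^{-2})}{\lambda(q-1)}-(n-1)\right).
\end{equation*}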
Let $\alpha_0=\frac{\lambda (n-1)}{\lambda (n-1)-1}$, $0<\varepsilon<\min (1,\alpha_0-1)$, $a_{1,\varepsilon}=\alpha_0+\varepsilon$ and $a_{2,\varepsilon}=\alpha_0-\varepsilon$. Then
$a_{1,\varepsilon}>\alpha_0>a_{2,\varepsilon}>1$. Since $\alpha_0-1=\frac{1}{\lambda (n-1)-1}$ and the map $t\mapsto\frac{t}{\lambda (t-1)}$ is strictly decreasing on $(1,\infty)$, we have
\begin{equation}\label{a1-epsilon-alpha-0-ineqn}
\frac{a_{1,\varepsilon}}{\lambda (a_{1,\varepsilon}-1)}<\frac{\alpha_0}{\lambda (\alpha_0-1)}=n-1<\frac{a_{2,\varepsilon}}{\lambda (a_{2,\varepsilon}-1)}.
\end{equation}
By \eqref{a1-epsilon-alpha-0-ineqn} there exists $M_1>1$ such that
\begin{equation*}
\delta_1:=\left(n-1-\frac{a_{1,\varepsilon}(1+M_1^{-2})}{\lambda (a_{1,\varepsilon}-1)}\right)(1+M_1^2)-1>0
\end{equation*}
and
\begin{equation*}
\delta_1':=(1+M_1^2)\left(\frac{a_{2,\varepsilon}}{\lambda (a_{2,\varepsilon}-1)}-(n-1)\right)-\alpha_0>0.
\end{equation*}
By \eqref{f'-go-to-infty-at-x=infty} there exists a constant $R_2>R_1$ such that
\begin{equation}\label{f'-upper-bd5}
f_r(r)\ge M_1\quad\forall r\ge R_2.
\end{equation}
We will now prove that $q(r)$ is bounded above by $a_{1,\varepsilon}$ when $r$ is sufficiently large. Now either
\begin{equation}\label{q-upper-limit-bd}
q(r)\le a_{1,\varepsilon}\quad\forall r\ge R_2
\end{equation}
or
\begin{equation}\label{q-big}
\exists r_1>R_2\quad\mbox{ such that }q(r_1)>a_{1,\varepsilon}
\end{equation}
holds. Suppose \eqref{q-big} holds.
Let $R_3=\sup\{r_2>r_1:q(r)>a_{1,\varepsilon}\quad\forall r_1\le r<r_2\}$. Suppose $R_3=\infty$. By \eqref{q-eqn} and \eqref{f'-upper-bd5}, $\forall r>r_1$,
\begin{align}\label{q'-ineqn1}
q_r\le&\frac{q(r)}{r}\left\{(1+f_r(r)^2)\left(\frac{a_{1,\varepsilon}(1+M_1^{-2})}{\lambda(a_{1,\varepsilon}-1)}-(n-1)\right)+1\right\}\notag\\
\le&\frac{q(r)}{r}\left\{-(1+M_1^2)\left(n-1-\frac{a_{1,\varepsilon}(1+M_1^{-2})}{\lambda(a_{1,\varepsilon}-1)}\right)+1\right\}\notag\\
\le&-\delta_1\frac{q(r)}{r}.
\end{align}
Hence
\begin{equation}\label{q'-ineqn10}
\frac{q_r}{q}\le-\frac{\delta_1}{r}\quad\forall r>r_1.
\end{equation}
Integrating \eqref{q'-ineqn10} over $(r_1,r)$,
\begin{equation*}
q(r)\le q(r_1)(r_1/r)^{\delta_1}\quad\forall r>r_1.
\end{equation*}
Hence
\begin{equation*}
q(r)<\frac{a_{1,\varepsilon}}{2}\quad\forall r>\left(\frac{a_{1,\varepsilon}}{2q(r_1)}\right)^{-1/\delta_1}r_1
\end{equation*}
which contradicts the assumption that $R_3=\infty$. Hence $R_3<\infty$ and, by the continuity of $q$, $q(R_3)=a_{1,\varepsilon}$. By \eqref{q'-ineqn1}, $q_r(R_3)\le-\delta_1q(R_3)/R_3<0$. Hence there exists a constant $\delta_2>0$ such that $q(r)<a_{1,\varepsilon}$ for all $R_3<r<R_3+\delta_2$. Let $R_4=\sup\{r_4>R_3:q(r)<a_{1,\varepsilon}\quad\forall R_3<r<r_4\}$. Suppose $R_4<\infty$. Then $q(R_4)=a_{1,\varepsilon}$ and $q_r(R_4)\ge 0$. On the other hand, by an argument similar to the proof of \eqref{q'-ineqn1}, $q_r(R_4)\le -\delta_1q(R_4)/R_4<0$, a contradiction. Hence $R_4=\infty$. Thus
\begin{equation}\label{q-upper-limit-bd2}
q(r)\le a_{1,\varepsilon}\quad\forall r\ge R_3.
\end{equation}
By \eqref{q-upper-limit-bd} and \eqref{q-upper-limit-bd2} there always exists some constant $R_5(\varepsilon)>R_2$ such that
\begin{equation}\label{q-upper-limit-bd3}
q(r)\le a_{1,\varepsilon}=\alpha_0+\varepsilon\quad\forall r\ge R_5(\varepsilon).
\end{equation}
We will now prove that $q(r)$ is bounded below by $a_{2,\varepsilon}$ when $r$ is sufficiently large. Now either
\begin{equation}\label{q-lower-limit-bd}
q(r)\ge a_{2,\varepsilon}\quad\forall r\ge R_5(\varepsilon)
\end{equation}
or
\begin{equation}\label{q-small}
\exists r_1'>R_5(\varepsilon)\quad\mbox{ such that }q(r_1')<a_{2,\varepsilon}
\end{equation}
holds. Suppose \eqref{q-small} holds.
Let $R_3'=\sup\{r_2'>r_1':q(r)<a_{2,\varepsilon}\quad\forall r_1'<r<r_2'\}$. Suppose $R_3'=\infty$. By \eqref{q-eqn} and \eqref{q-upper-limit-bd3}, $\forall r>r_1'$,
\begin{equation}\label{q'-ineqn2}
q_r\ge\frac{q(r)}{r}\left\{(1+M_1^2)\left(\frac{a_{2,\varepsilon}}{\lambda(a_{2,\varepsilon}-1)}-(n-1)\right)-\alpha_0\right\}
\ge\delta_1'\frac{q(r)}{r}.
\end{equation}
Hence
\begin{equation}\label{q'-ineqn11}
\frac{q_r}{q}\ge\frac{\delta_1'}{r}\quad\forall r>r_1'.
\end{equation}
Integrating \eqref{q'-ineqn11} over $(r_1',r)$,
\begin{equation*}
q(r)\ge q(r_1')(r/r_1')^{\delta_1'}\quad\forall r>r_1'.
\end{equation*}
Hence
\begin{equation*}
q(r)>2a_{2,\varepsilon}\quad\forall r>\left(\frac{2a_{2,\varepsilon}}{q(r_1')}\right)^{1/\delta_1'}r_1'
\end{equation*}
which contradicts the assumption that $R_3'=\infty$. Hence $R_3'<\infty$ and, by the continuity of $q$, $q(R_3')=a_{2,\varepsilon}$. By \eqref{q'-ineqn2}, $q_r(R_3')\ge\delta_1'q(R_3')/R_3'>0$. Hence there exists a constant $\delta_2'>0$ such that $q(r)>a_{2,\varepsilon}$ for all $R_3'<r<R_3'+\delta_2'$. Let $R_4'=\sup\{r_4>R_3':q(r)>a_{2,\varepsilon}\quad\forall R_3'<r<r_4\}$. Suppose $R_4'<\infty$. Then $q(R_4')=a_{2,\varepsilon}$ and $q_r(R_4')\le 0$. On the other hand, by an argument similar to the proof of \eqref{q'-ineqn2}, $q_r(R_4')\ge\delta_1'q(R_4')/R_4'>0$, a contradiction. Hence $R_4'=\infty$. Thus
\begin{equation}\label{q-lower-limit-bd2}
q(r)\ge a_{2,\varepsilon}\quad\forall r\ge R_3'.
\end{equation}
By \eqref{q-lower-limit-bd} and \eqref{q-lower-limit-bd2} there always exists some constant $R_5'(\varepsilon)>R_5(\varepsilon)$ such that
\begin{equation}\label{q-lower-limit-bd3}
q(r)\ge a_{2,\varepsilon}=\alpha_0-\varepsilon\quad\forall r\ge R_5'(\varepsilon).
\end{equation}
Since $\varepsilon\in (0,\min (1,\alpha_0-1))$ is arbitrary, by \eqref{q-upper-limit-bd3} and \eqref{q-lower-limit-bd3} we get \eqref{growth-rate} and Theorem \ref{asymptotic-behaviour-time-infty-thm} follows.
{\hfill$\square$\vspace{6pt}}
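\noindent{\bf Remark}: The growth rate \eqref{growth-rate} can also be observed numerically. The following Python sketch (illustrative only, and not part of the proof) integrates $f_{rr}=(1+f_r^2)\big(\frac{1+f_r^2}{\lambda(rf_r-f)}-\frac{(n-1)f_r}{r}\big)$, a rearrangement of the expression for $\frac{rf_{rr}}{(1+f_r^2)f_r}$ used in the proof of Lemma \ref{f'-to-infty-lem}, from ad hoc initial data with $rf_r-f>0$ standing in for the actual solution of \eqref{imcf-graph-ode-initial-value-problem}, and monitors $q(r)=rf_r(r)/f(r)$; for $n=2$ and $\lambda=2$ one observes $q(r)\to\alpha_0=2$.
\begin{verbatim}
# Illustrative sanity check, not part of the proof: the ODE below is a
# rearrangement of the identity used in the proof of the lemma above,
# and the initial data are chosen ad hoc (subject to r*f'(r) - f(r) > 0).
from scipy.integrate import solve_ivp

n, lam = 2, 2.0
alpha0 = lam*(n - 1)/(lam*(n - 1) - 1)   # predicted limit of q(r)

def rhs(r, y):
    f, fp = y
    return [fp, (1 + fp**2)*((1 + fp**2)/(lam*(r*fp - f)) - (n - 1)*fp/r)]

sol = solve_ivp(rhs, (1.0, 1e4), [1.0, 3.0], rtol=1e-10, atol=1e-12)
r = sol.t[-1]
f, fp = sol.y[:, -1]
print(r*fp/f, alpha0)   # q(r) at r = 1e4 vs. its predicted limit
\end{verbatim}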
\section{Introduction}
Let $\ms G=(\ms V,\ms E)$ be a countably infinite connected graph
with uniformly bounded degrees and a distinguished vertex $0\in\ms V$, which we call the root.
For example, $\ms G$ could be the integer lattice $\mbb Z^d$,
any semiregular tessellation/honeycomb of $\mbb R^d$ that includes the
origin, or a much more general graph.
In this paper, we are interested in the spectral theory of
random Schr\"odinger-type operators of the form
\[Hf(v)=-H_Xf(v)+\big(V(v)+\xi(v)\big)f(v),\qquad v\in\ms V,~f:\ms V\to\mbb R,\]
where we assume that
\begin{enumerate}
\item $H_X$ is the infinitesimal generator of some continuous-time
Markov process $X$ on $\ms G$ (which need not be symmetric);
\item $\xi:\ms V\to\mbb R$ is a random noise (which may have long-range
dependence); and
\item $V:\ms V\to\mbb R\cup\{\infty\}$ is a deterministic potential with sufficient growth
at infinity (as measured by the size of $V(v)$ as $v$ grows farther away from
the root), ensuring that $H$ has a purely discrete spectrum.
\end{enumerate}
More specifically, we are interested in studying the {\it spatial conditioning} of the spectrum
of $H$, i.e., understanding the random configuration of $H$'s eigenvalues in some domain
$B\subset\mbb C$ conditional on the configuration of eigenvalues outside of $B$.
As a first step in this direction, we establish that under general assumptions on $H_X$, $\xi$, and $V$,
$H$'s spectrum is {\it number rigid} in the sense of Ghosh and Peres \cite{GP17}; that is,
the number of eigenvalues of $H$ in bounded domains $B\subset\mbb C$
is a measurable function of the configuration of $H$'s eigenvalues outside of $B$
(we point to Definition \ref{Definition: Rigidity} for a precise definition).
To the best of our knowledge, ours is the first work to study the occurrence of
such a phenomenon in the spectrum of random Schr\"odinger operators acting on discrete spaces.
The spectral theory of differential operators (including non-self-adjoint operators; e.g.,
\cite{AAD01,Bo17,davies1980,DN02,DHK09,FKV18,H17,KL18,LS09}) is among the
most prominent research programs in mathematical physics; see, for instance,
\cite{HislopSigal,Teschl}. In particular, starting from the pioneering
work of Anderson \cite{Anderson58}, the study of Schr\"odinger
operators perturbed by irregular noise has attracted a lot of attention; we refer
to \cite{AizenmanWarzel,CarLac} for general introductions to the subject. A particularly active program in this direction
is the work on {\it Anderson localization}, which concerns the appearance of
pure point spectrum and eigenfunction decay; see the survey articles \cite{H08,K08,St11}
for more details.
In contrast to localization and similar questions, in this paper we investigate the {\it transport
of spectral information} from one region to another,
whereby observing the configuration of $H$'s eigenvalues in some domain $D\subset\mbb C$
allows one to recover nontrivial information about the spectrum in $D$'s complement.
Such questions of spatial conditioning in general point processes
have long been of interest due to their
natural applications in mathematics and physics; see, e.g., \cite{ApplicationsPtProcess,Ka17}.
In recent years, there has been a renewed interest in such investigations coming from
the seminal work of Ghosh and Peres \cite{GP17} on {\it rigidity and tolerance},
culminating in a now active field of research (e.g.,
\cite{Buf16,Buf18,BDQ18,BNQ18,G15,SG16,GhoshKrishnapur,Ghosh17,PeresSly};
see also \cite{AM80}).
In \cite{GGL20}, we studied the occurrence of number rigidity in the
spectrum of a class of random Schr\"odinger operators on one-dimensional continuous space.
In this paper, we study a similar problem for discrete random Schr\"odinger operators.
\subsection{Organization}
In the remainder of this introduction, we provide an outline of our main results
and proof strategy, we compare the results in this paper to previous investigations
in a similar vein, and we discuss a few natural open questions raised by our work.
In Section \ref{Section: Outline}, we provide a high-level
outline of the proof of our main results. We take this opportunity to
explain how our technical assumptions arise from our computations.
In Section \ref{sec:Main Result}, we state our assumptions
and main results in full detail, namely,
Assumptions \ref{Assumption: Graph} and \ref{Assumption: Potential and Noise}
and Theorems \ref{Theorem: Upper}, \ref{Theorem: Rigidity}, and \ref{Theorem: Lower}.
Then, we prove Theorem \ref{Theorem: Upper} in Section \ref{sec: Proof of Upper},
we prove Theorem \ref{Theorem: Rigidity} in Sections \ref{sec: Multiplicity} and \ref{Section: Rigidity},
and we prove Theorem \ref{Theorem: Lower} in Section \ref{sec: Proof of Lower}.
\subsection{Outline of Main Results}
Let $\msf d$ denote the graph distance on $\ms G$.
For every $v\in\ms V$, we use $\msf c_n(v)$, $n\geq0$, to denote $v$'s coordination sequence
in $\ms G$; that is, for every $n\in\mbb N$,
$\msf c_n(v)$ is the number of vertices $u\in\ms V$ such that $\msf d(u,v)=n$. Stated informally,
our main result is as follows:
\begin{theorem}[Informal Statement]\label{thm:Informal}
Suppose that there exists $d\geq1$ such that
\begin{align}
\label{Equation: Coordination 1}
\sup_{v\in\ms V}\msf c_n(v)=O(n^{d-1})\qquad\text{as }n\to\infty.
\end{align}
Under mild technical assumptions on the Markov process $X$ and
the noise $\xi$,
there exists a constant $d/2\leq\al\leq d$ (which, apart from $d$, depends on
the range of the covariance in $\xi$) such that if $V(v)$ grows
faster than $\msf d(0,v)^\al$ as $\msf d(0,v)\to\infty$,
then
$H$'s eigenvalue point process is number rigid.
\end{theorem}
See Theorems \ref{Theorem: Upper}
and \ref{Theorem: Rigidity} for a formal statement.
Our technical assumptions are stated in Assumptions \ref{Assumption: Graph}
and \ref{Assumption: Potential and Noise}; roughly speaking,
our assumptions are that
\begin{enumerate}
\item the jump rates of $X$ (which may be site-dependent) are uniformly bounded; and
\item the tails of $\xi$ are not worse than exponential.
\end{enumerate}
In particular, our assumptions allow for $X$ to be non-symmetric
(hence, the operator $H$ need not be self-adjoint) and for $\xi$
to have a variety of covariance structures, including long-range
dependence.
\begin{remark}
\label{Remark: Dimension}
The constant $d$ in \eqref{Equation: Coordination 1}, which quantifies the growth rate of the number of vertices,
can be thought of as the {\it dimension} of $\ms G$ (or, at least, an upper bound of the dimension).
To illustrate this, if $\ms G$ is, for example, $\mbb Z^d$ or a semiregular tessellation of $\mbb R^d$,
then it is easy to see that $cn^{d-1}\leq \msf c_n(v)\leq Cn^{d-1}$ for some $C,c>0$. More generally,
the constant $d$ is closely related to the {\it intrinsic dimension} of $\ms G$, which is the minimal number
$k$ such that $\ms G$ can be embedded in $\mbb Z^k$. We refer to, e.g., \cite{KL07,LLR95} for more details.
\end{remark}
\begin{remark}
In Theorem \ref{Theorem: Lower}, we provide concrete examples showing
that the growth lower bound of $\msf d(0,v)^\al$ that we impose on $V$ to
get rigidity is the best general sufficient condition that can be obtained with our proof method.
The question of whether or not this is actually necessary for rigidity is addressed in
Section \ref{sec:new methods}.
\end{remark}
\subsection{Proof Strategy and Previous Results}
Our method to prove number rigidity follows the general scheme introduced by Ghosh and Peres
in \cite{GP17}: Let $\mc X=\sum_{k\in\mbb N}\de_{\la_k}$ be a point process on $\mbb C$.
As per \cite[Theorem 6.1]{GP17}, for any bounded set $B\subset\mbb C$, if there exists
a sequence of functions $(f_n)_{n\in\mbb N}$ such that, as $n\to\infty$,
\begin{enumerate}
\item $f_n\to1$ uniformly on $B$, and
\item the variances of the linear statistics
$\int f_n\d\mc X=\sum_{k\in\mbb N}f_n(\la_k)$ vanish,
\end{enumerate}
then $\mc X(B)$ is measurable
with respect to the configuration of $\mc X$ outside of $B$.
One of the main difficulties involved with carrying out the above program lies in
the computation of upper bounds for the variances of linear statistics $\mbf{Var}[\int f\d\mc X]$.
For this reason, much of the previous literature on number rigidity exploits special
properties that make the computations more manageable, such as
determinantal/Pfaffian or other integrable structure \cite{Buf16,BNQ18,G15,GL18,GP17}, translation
invariance and hyperuniformity \cite{GS19,Ghosh17}, and finite dimensional approximations \cite{RN18}.
Among those works, the only result that is related to the spectrum of random Schr\"odinger operators
is the proof of rigidity of the {\it Airy-2} point process in \cite{Buf16}. Thanks to the work of Edelman,
Ram{\'\i}rez, Rider, Sutton, and Vir\'ag \cite{EdelmanSutton,RamirezRiderVirag}, this implies that the spectrum
of the {\it stochastic Airy operator} with parameter $\be=2$ is number rigid. Given that the
method of proof in \cite{Buf16} relies crucially on special algebraic structure only present in that
one particular case, however, the result cannot be extended to general Schr\"odinger operators.
More recently, in \cite{GGL20} we proposed to study number rigidity in
the spectrum of random Schr\"odinger operators using a new {\it semigroup method}: Given that
the exponential functions $\mr e_n(z):=\mr e^{-z/n}$ converge uniformly to 1 on any bounded
set as $n\to\infty$, in order to prove number rigidity of any point process, it suffices to prove
that $\mbf{Var}[\int \mr e_n\d\mc X]\to0$ (though the requirement that $\int \mr e_n\d\mc X$
is finite imposes strong conditions on $\mc X$).
If $\mc X$ happens to be
the eigenvalue point process of a random Schr\"odinger operator $H$, then $\int \mr e_n\d\mc X$
is the trace of the operator $\mr e^{-H/n}$. Thus, in order to prove the number rigidity of the
spectrum of any random Schr\"odinger operator $H$, it suffices to prove that
\[\lim_{t\to0}\mbf{Var}\big[\mr{Tr}[\mr e^{-t H}]\big]=0.\]
The reason why this is a particularly attractive strategy to prove number rigidity of general
random Schr\"odinger operators is that, thanks to the Feynman-Kac formula, there
exists an explicit probabilistic representation of the semigroup $(\mr e^{-t H})_{t>0}$ in terms
of elementary stochastic processes, making the variance $\mbf{Var}\big[\mr{Tr}[\mr e^{-t H}]\big]$
amenable to computation.
In \cite{GGL20}, this strategy was used to prove number rigidity
for a class of random Schr\"odinger operators acting on one-dimensional continuous
space (i.e., an interval of the form $I=(a,b)$ with $-\infty\leq a<b\leq\infty$). In this paper,
we apply the same methodology to prove number rigidity for a general class of discrete
random Schr\"odinger operators.
Despite the fact that the general strategy of proof used in the present paper is the same as
\cite{GGL20}, the differences between the two settings are such that
virtually none of the work carried out in \cite{GGL20} can be directly
extended to the present paper. For example:
\begin{enumerate}
\item Since we consider operators acting on general graphs $\ms G$,
the treatment of the geometry of the space on which our operators are
defined requires a much more careful analysis than that carried out in
\cite{GGL20}. In particular (as per Remark
\ref{Remark: Dimension}), in this paper we uncover
that the dimension of the space plays an important role in the proof
of rigidity using the semigroup method.
\item In \cite{GGL20}, we only consider Schr\"odinger
operators whose kinetic energy operator is the standard Laplacian and whose noise
is a Gaussian process. As a result, the operators considered therein are all self-adjoint
and upper bounds of $\mbf{Var}\big[\mr{Tr}[\mr e^{-t H}]\big]$ can mostly be reduced to the analysis
of self-intersection local times of standard Brownian motion.
In contrast, in this paper we allow for much more general generators $H_X$ and noises $\xi$.
Most notably, the assumptions of this paper allow for non-self-adjoint operators, which
increases the technical difficulties involved (e.g., Sections \ref{sec: Multiplicity}
and \ref{Section: Rigidity}).
\end{enumerate}
\subsection{Future Directions}
Given that our main theorems apply to a very general class of operators,
the results of this paper provide substantial evidence of the universality
of number rigidity in discrete random Schr\"odinger operators.
That being said, we feel that our results raise a number of interesting
follow-up questions. We now discuss three such directions.
\subsubsection{New Methods}
\label{sec:new methods}
It is natural to wonder if the growth condition $V(v)\gg\msf d(0,v)^\al$
that we impose on the potential to get number rigidity is close to optimal. As we
show in Theorem \ref{Theorem: Lower}, our main result is optimal in the sense that
we can find concrete examples of operators such that
\begin{align}
\label{Equation: Non-Vanishing of Exponential}
\liminf_{t\to0}\mbf{Var}\big[\mr{Tr}[\mr e^{-t H}]\big]>0
\end{align}
when $V(v)\asymp\msf d(0,v)^\al$. That being said, the vanishing of the variance of the
trace of the semigroup is only a sufficient condition for number rigidity, and, in fact,
it was observed in \cite[Proposition 2.27]{GGL20} that there
exists at least one random Schr\"odinger operator whose spectrum is known to
be number rigid and such that \eqref{Equation: Non-Vanishing of Exponential}
holds. For example, the following simple question appears to be outside the scope
of the methods used in this paper:
\begin{problem}
\label{Problem: Rigidity vs. Dimension}
Suppose that $X$ is the simple symmetric random walk on $\ms G=\mbb Z^d$,
that $V(v)=\msf d(0,v)^\de$ for some $\de>0$, and that $\big(\xi(v)\big)_{v\in\mbb Z^d}$
are i.i.d. standard Gaussians (or any other simple distribution). Is $H$'s spectrum always number rigid in this case?
\end{problem}
More specifically, given that $\msf c_n(v)\asymp n^{d-1}$ on $\mbb Z^d$,
our main theorem only implies number rigidity in the above when $\de>d/2$.
We expect that solving Problem \ref{Problem: Rigidity vs. Dimension}
will require developing new methods to study number rigidity in random
Schr\"odinger operators.
\subsubsection{The Mechanism of Rigidity}
\label{Section: Mechanism}
Our main result implies that for every bounded measurable set $B\subset\mbb C$,
there exists a deterministic function $\mc N_B$ such that
the identity
\[\text{number of $H$'s eigenvalues in $B$}=\mc N_B(\text{configuration of $H$'s eigenvalues outside $B$})\]
holds with probability one. That being said, the argument that we use to prove the
existence of $\mc N_B$
gives little information on its exact form. In other words, the precise nature
of the mechanism that makes the number of eigenvalues in $B$ a deterministic function of the configuration
on the outside remains largely unknown. In light of this, an interesting future direction for investigation would be
along the following lines:
\begin{problem}
Let $B\subset\mbb C$ be a ``simple" bounded subset of the complex plane
(e.g., a closed or open ball).
Does $\mc N_B$ admit an explicit representation?
\end{problem}
We point to Remark \ref{Remark: Mechanism} for more details on the construction of $\mc N_B$.
\subsubsection{Spatial Conditioning Beyond Number Rigidity}
When $H$'s spectrum is number rigid, we know that if we condition
$H$ on having a specific eigenvalue configuration outside of a bounded set
$B$, then $H$'s spectrum inside of $B$ is a point process with a fixed total number
of points. It would be interesting to see if more can be learned about
the conditional distribution of the eigenvalues in $B$. For instance, the
following problem (related to the notion of tolerance introduced in
\cite{GP17}) might be a good starting point:
\begin{problem}
Suppose that, after conditioning on the outside configuration,
$H$ has $M\in\mbb N$ random eigenvalues in some bounded
set $B\subset\mbb C$. Let $\La\in\mbb C^M$ be the random
vector whose components are $H$'s random eigenvalues in $B$
(conditional on the configuration outside $B$),
taken in a uniformly random order. What is the support of $\La$'s
probability distribution on the set $B^M$?
\end{problem}
\section{Proof Outline}
\label{Section: Outline}
In this section, we present a sketch of the proof of
our main theorem in two simple special cases.
We take this opportunity to explain how our technical
assumptions arise in our computations.
For simplicity of exposition, we assume in this outline that $\ms G$ is the integer lattice
$\mbb Z^d$ (i.e., $(u,v)\in\ms E$ if and only if $\|u-v\|_\infty=1$, where $\|\cdot\|_\infty$ denotes the
usual $\ell^\infty$ norm), $X$ is the simple
symmetric random walk on $\mbb Z^d$, and $\xi$ is a centered stationary Gaussian process
with covariance function
\[\ga(v):=\mbf E[\xi(v)\xi(0)],\qquad v\in\mbb Z^d.\]
As alluded to in the introduction (and proved in Section \ref{Section: Rigidity}),
to prove that $H$'s eigenvalue point process is number rigid, it suffices to show that
$\mr{Tr}[\mr e^{-t H}]$'s variance vanishes as $t\to0$.
According to the Feynman-Kac formula, we have that
\[\mr{Tr}[\mr e^{-t H}]=\sum_{v\in\mbb Z^d}\mbf E_X\left[\exp\left(-\int_0^t V\big(X(s)\big)+\xi\big(X(s)\big)\d s\right)\mbf 1_{\{X(t)=X(0)\}}\bigg|X(0)=v\right],\]
where $\mbf E_X$ means that we are only averaging with respect to the randomness in the path of $X$,
and we assume that $X$ is independent of the noise $\xi$.
In order to ensure that $\mr e^{-t H}$ is trace class (or even bounded) in the general case,
we assume that $\ms G$ has uniformly bounded degrees; see Section \ref{Section: Boundedness}
for more details.
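To make the Feynman-Kac representation concrete, the following Python sketch (illustrative only: the unit jump rate, the dimension $d=1$, the potential $V(v)=|v|$, and the i.i.d.\ standard Gaussian noise are ad hoc choices, not part of our assumptions) estimates by Monte Carlo the $v$-summand of $\mr{Tr}[\mr e^{-tH}]$ above, for one frozen sample of $\xi$ (recall that the trace is computed conditionally on the noise):
\begin{verbatim}
# Hypothetical Monte Carlo sketch: rate-1 walk on Z, V(v) = |v|,
# i.i.d. N(0,1) noise sampled lazily and then frozen.
import numpy as np

rng = np.random.default_rng(0)
t, v0 = 0.1, 0
xi = {}                       # one frozen realization of the noise

def noise(v):
    if v not in xi:
        xi[v] = rng.standard_normal()
    return xi[v]

def one_path():
    s, v, phase = 0.0, v0, 0.0
    while True:
        dt = min(rng.exponential(1.0), t - s)   # time spent at v
        phase += dt * (abs(v) + noise(v))       # accumulates <L_t, V + xi>
        s += dt
        if s >= t:
            return np.exp(-phase) if v == v0 else 0.0
        v += rng.choice((-1, 1))                # symmetric jump

print(np.mean([one_path() for _ in range(10**4)]))
\end{verbatim}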
Our first step in the analysis of $\mr{Tr}[\mr e^{-t H}]$ is to note that if
$t$ is small, then the probability that there exists some $0\leq s\leq t$ such that $X(s)\neq X(0)$
is close to zero (for a unit jump rate, this probability is $1-\mr e^{-t}\sim t$). Thus, by working only with the complement of this event,
we have that
\begin{align}
\label{Equation: Heuristic 1}
\mr{Tr}[\mr e^{-t H}]\approx\sum_{v\in\mbb Z^d}\mr e^{-tV(v)-t\xi(v)}.
\end{align}
A rigorous version of this heuristic is carried out in
the proof of Lemma \ref{Lemma: Variance Upper Bound 3}.
The latter relies on controlling how far $X$ can travel from its initial value $X(0)$ after a small time
(e.g., the tail bound \eqref{Equation: Tail Bound}), which itself depends on the
assumptions that the jump rates of $X$ are uniformly bounded.
Our second step is to identify the leading order asymptotics in the variance
of the expression on the right-hand side of \eqref{Equation: Heuristic 1}.
In the special case where $\xi$ is a stationary Gaussian process with covariance $\ga$,
an application of Tonelli's theorem yields
\begin{align}
\nonumber
\mbf{Var}\left[\sum_{v\in\mbb Z^d}\mr e^{-tV(v)-t\xi(v)}\right]
&=\sum_{u,v\in\mbb Z^d}\mr e^{-tV(u)-tV(v)}\mbf{Cov}[\mr e^{-t\xi(u)},\mr e^{-t\xi(v)}]\\
\nonumber
&=\sum_{u,v\in\mbb Z^d}\mr e^{-tV(u)-tV(v)}\mr e^{t^2\ga(0)}\left(\mr e^{t^2\ga(u-v)}-1\right)\\
\label{Equation: Heuristic 2}
&\approx t^2\,\sum_{u,v\in\mbb Z^d}\mr e^{-tV(u)-tV(v)}\ga(u-v),
\end{align}
where the last line follows from a Taylor expansion.
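For completeness, the exact covariance identity in the second line follows from the moment generating function of a Gaussian vector: since $\xi(u)+\xi(v)$ is centered Gaussian with variance $2\ga(0)+2\ga(u-v)$ and $\mbf E[\mr e^{-t\xi(u)}]=\mr e^{t^2\ga(0)/2}$, we have
\begin{equation*}
\mbf{Cov}[\mr e^{-t\xi(u)},\mr e^{-t\xi(v)}]=\mr e^{t^2(\ga(0)+\ga(u-v))}-\mr e^{t^2\ga(0)/2}\cdot\mr e^{t^2\ga(0)/2}=\mr e^{t^2\ga(0)}\left(\mr e^{t^2\ga(u-v)}-1\right).
\end{equation*}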
A bound of this type can be achieved in the general case thanks to our assumption that
$\xi$'s tails are not worse than exponential. We refer to Proposition \ref{Proposition: Variance Formula}
for the general form of the variance formula. See Lemmas
\ref{Lemma: Variance Upper Bound 1} and \ref{Lemma: Variance Upper Bound 2}
for quantitative bounds on the vanishing of the covariance of the exponential random field $\mr e^{-t\xi}$
as $t\to0$ in terms of the strength of $\xi$'s covariance.
Our third and final step is to identify conditions such that the quantity
\begin{align}
\label{Equation: Heuristic 3}
\sum_{u,v\in\mbb Z^d}\mr e^{-tV(u)-tV(v)}\ga(u-v)
\end{align}
does not blow up at a faster rate than $t^{-2}$ as $t\to0$. As advertised
in our informal statement, this depends on the growth rate of the potential $V$
and the decay rate (if any) of the covariance $\ga$ at infinity. To give an
illustration of how this is carried out in this paper, we consider the two simplest
(and most extreme) cases of covariance structure:
\begin{enumerate}
\item $\big(\xi(v)\big)_{v\in\mbb Z^d}$ are i.i.d., i.e., $\ga(v)=0$ whenever $v\neq 0$; and
\item $\big(\xi(v)\big)_{v\in\mbb Z^d}$ are all equal to each other, i.e., $\ga(v)=\ga(0)$ for all $v\in\mbb Z^d$.
\end{enumerate}
The quantity \eqref{Equation: Heuristic 3} then becomes
\[\sum_{u,v\in\mbb Z^d}\mr e^{-tV(u)-tV(v)}\ga(u-v)=\begin{cases}
\displaystyle
\ga(0)\sum_{v\in\mbb Z^d}\mr e^{-2tV(v)}&\text{i.i.d. case,}
\vspace{10pt}\\
\displaystyle
\ga(0)\left(\sum_{v\in\mbb Z^d}\mr e^{-tV(v)}\right)^2&\text{all equal case.}
\end{cases}\]
If we assume that $V(v)\gg\msf d(0,v)^\al$ for some $\al>0$,
then for any $\theta>0$ we have that
\begin{align}
\label{Equation: Heuristic 4}
\sum_{v\in\mbb Z^d}\mr e^{-\theta tV(v)}\ll\sum_{v\in\mbb Z^d}\mr e^{-\theta t\msf d(0,v)^\al}=\sum_{n\in\mbb N\cup\{0\}}\msf c_n(0)\mr e^{-\theta tn^\al},
\end{align}
where we recall that $\msf c_n(0)$ denotes for every $n\in\mbb N$ the number of vertices in $\ms G$ such that $\msf d(0,v)=n$.
For the $d$-dimensional integer lattice $\mbb Z^d$, it is easy to check that there exists a constant $C>0$ such that $\msf c_n(0)\leq Cn^{d-1}$
for every $n\in\mbb N$, whence \eqref{Equation: Heuristic 4} yields
\begin{align}
\label{Equation: Heuristic 5}
\sum_{v\in\mbb Z^d}\mr e^{-\theta tV(v)}\ll\sum_{n\in\mbb N\cup\{0\}}n^{d-1}\mr e^{-\theta tn^\al}\approx\int_0^\infty x^{d-1}\mr e^{-\theta tx^\al}\d x=O(t^{-d/\al}).
\end{align}
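Here, the last estimate in \eqref{Equation: Heuristic 5} can be made explicit via the change of variables $y=\theta tx^\al$:
\begin{equation*}
\int_0^\infty x^{d-1}\mr e^{-\theta tx^\al}\d x=\frac{(\theta t)^{-d/\al}}{\al}\int_0^\infty y^{d/\al-1}\mr e^{-y}\d y=\frac{\Ga(d/\al)}{\al}\,(\theta t)^{-d/\al},
\end{equation*}
where $\Ga$ denotes the gamma function.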
Summarizing our argument so far in \eqref{Equation: Heuristic 1}--\eqref{Equation: Heuristic 5},
we are led to the $t\to0$ asymptotic
\[\mbf{Var}\big[\mr{Tr}[\mr e^{-t H}]\big]
\ll
\begin{cases}
t^{2-d/\al}&\text{i.i.d. case,}\\
t^{2-2d/\al}&\text{all equal case.}
\end{cases}\]
Thus, $H$'s eigenvalue point process is proved to be number rigid if $V(v)\gg\msf d(0,v)^{d/2}$ in the i.i.d. case
and $V(v)\gg\msf d(0,v)^d$ in the all equal case. If $\ga$ has a less extreme decay rate
(such as $\ga(v)=O(\msf d(0,v)^{-\be})$ as $\msf d(0,v)\to\infty$ for some $\be>0$), then $H$'s eigenvalue
point process is number rigid if $V(v)\gg\msf d(0,v)^\al$ for some $d/2\leq\al\leq d$, where the exact value
of $\al$ depends on $\ga$'s decay rate. We refer to Theorems \ref{Theorem: Upper} and \ref{Theorem: Rigidity}
for the details.
\section{Main Results}\label{sec:Main Result}
\subsection{Basic Definitions and Notations}
We begin by introducing basic/standard notations
that will be used throughout the paper.
\begin{notation}[Function Spaces]
We use $\ell^p(\ms V)$ to denote the space of real-valued
absolutely $p$-summable (or bounded if $p=\infty$) functions on $\ms V$; we
denote the associated norm by $\|\cdot\|_{p}$.
We use $\langle\cdot,\cdot\rangle$ to denote the inner
product on $\ell^2(\ms V)$.
\end{notation}
\begin{notation}[Operator Theory]\label{def:Multiplicities}
Given a linear operator $T$ on $\ell^2(\ms V)$ (or a dense domain $D(T)\subset\ell^2(\ms V)$), we use $\si(T)$ to denote its spectrum,
and $\si_p(T)\subset\si(T)$ to denote its point spectrum. If $T$ is bounded, we denote its operator norm by
\[\|T\|_{\mr{op}}:=\sup_{\|f\|_{2}=1}\|Tf\|_{2}.\]
We use $\mf R(z,T):=(T-z)^{-1}$
to denote the resolvent of $T$ for all $z\in\mbb C\setminus\si(T)$.
If $\la$ is an isolated eigenvalue of $T$, then we let
\[m_a(\la,T):=\dim\left(\mr{rg}\left(\frac1{2\pi\mr i}\oint_{\Ga_\la}\mf R(z,T)\d z\right)\right)\]
denote $\la$'s algebraic multiplicity, where $\dim$ denotes the dimension of a linear space,
$\mr{rg}$ denotes the range of an operator, and
$\Ga_\la$ denotes a Jordan curve that encloses $\la$ and excludes the remainder
of $T$'s spectrum.
\end{notation}
\begin{definition}[Rigidity]
\label{Definition: Rigidity}
Let $\mc X=\sum_{k\in\mbb N}\de_{\la_k}$ be an infinite point process on $\mbb C$.
We say that $\mc X$ is real-bounded below by a random variable $\om\in\mbb R$
if $\Re(\la_k)\geq\om$ almost surely for every $k\in\mbb N$.
We say that such a point process is number rigid if for every Borel set $B\subset\mbb C$
such that $B\subset(-\infty,\de]+\mr i[-\tilde \de,\tilde \de]$
for some $\de,\tilde\de>0$, the random variable $\mc X(B)$ is measurable with
respect to the sigma algebra generated by the set
\[\big\{\mc X(A):A\subset\mbb C\text{ is Borel and }B\cap A=\varnothing\big\}.\]
\end{definition}
\begin{remark}
In previous works in the literature, it is most common to define number rigidity
as the requirement that $\mc X(B)$ is measurable with respect to the configuration in $\mbb C\setminus B$
for every bounded Borel set $B$. This is in part due to the fact that most point processes
that have been proved to be number rigid thus far are such that $\mc X(B)=\infty$ almost surely
whenever $B$ is unbounded.
That being said, the fact that we are considering the spectrum of Schr\"odinger
operators whose potentials have a strong growth at infinity means that we are considering
eigenvalue point processes that are real-bounded below, in which case a more general
notion of number rigidity makes sense. We note that a similarly generalized notion of rigidity appeared
in the work of Bufetov on the stochastic Airy operator in \cite[Proposition 3.2]{Buf16}.
\end{remark}
\subsection{Markov Process}
Next, we introduce the Markov processes on the graph
$\ms G$ that generate
our random operators, as well as some of the notions we
need to describe them. We recall that $\ms G=(\ms V,\ms E)$ is a countably infinite
connected graph with uniformly bounded degrees and a root $0\in\ms V$.
\begin{definition}[Markov Process]
Let $\Pi:\ms V\times\ms V\to[0,1]$ be a matrix such that
\begin{enumerate}
\item $\Pi$ is stochastic, that is, for every $u\in \ms V$,
\[\sum_{v\in\ms V}\Pi(u,v)=1;\]
\item $\Pi(v,v)=0$ for all $v\in\ms V$; and
\item If $(u,v)\not\in\ms E$, then $\Pi(u,v)=\Pi(v,u)=0$.
\end{enumerate}
Let $q:\ms V\to(0,\infty)$ be a positive function (the jump rates).
We use $X:[0,\infty)\to\ms V$ to denote the continuous-time Markov process on $\ms V$
defined as follows: If $X$ is in
state $u\in\ms V$, it waits for a random time with
an exponential distribution with rate $q(u)$, and then jumps
to another state $v\neq u$ with probability $\Pi(u,v)$, independently of the wait time.
Once at the new state, $X$ repeats this procedure independently of all previous jumps.
\end{definition}
\begin{remark}
We note that condition (3) in the above definition implies that
$X$ is a Markov process on the graph $\ms G$, in the sense that
jumps can only occur between vertices that are connected by edges.
\end{remark}
\begin{notation}
\label{Notation: Markov Probability and Expectation}
For every $v\in\ms V$, we use
$X^v$ to denote the process $X$
conditioned on the starting point $X(0)=v$.
We use $\mbf P^v$ to denote the
law of $X^v$, and $\mbf E^v$ to denote expectation
with respect to $\mbf P^v$.
\end{notation}
We assume throughout that the Markov process $X$
and the graph $\ms G$ satisfy the following.
\begin{assumption}[Graph Geometry and Jump Rates]
\label{Assumption: Graph}
The following two conditions hold:
\begin{enumerate}
\item There exist constants $d\geq1$ and $\mf c>0$ such that
\begin{align}
\label{Equation: Coordination Sequence}
\sup_{v\in\ms V}\msf c_n(v):=\sup_{v\in\ms V}|\{u\in\ms V:\msf d(u,v)=n\}|\leq \mf c\,n^{d-1}\qquad\text{for all }n\in\mbb N,
\end{align}
recalling that $\msf d$ is the graph distance in $\ms G$, that is, $\msf d(u,v)$
is the length of the shortest path (in terms of number of edges) connecting
$u$ and $v$, and with the convention that $\msf d(v,v)=0$ for all $v\in\ms V$ (for $n=0$, one trivially has $\msf c_0(v)=1$).
\item $X$ has uniformly bounded jump rates, that is,
\[\displaystyle\mf q:=\sup_{v\in\ms V}q(v)<\infty.\]
\end{enumerate}
\end{assumption}
\begin{remark}
We note that the assumption \eqref{Equation: Coordination Sequence}
simultaneously takes care of the requirement that $\ms G$ has uniformly
bounded degrees (since $\msf c_1(v)=\deg(v)$) and of the asymptotic
growth rate \eqref{Equation: Coordination 1} stated in our informal theorem.
\end{remark}
\subsection{Feynman-Kac Kernel}
We are now in a position to introduce the central objects of study of this paper,
namely, the Feynman-Kac semigroups of the Schr\"odinger operators we
are interested in.
\begin{notation}[Local Time]
For every $t\geq0$, we let $L_t:\ms V\to[0,t]$ denote $X$'s
local time:
\[L_t(v):=\int_0^t\mbf 1_{\{X(s)=v\}}\d s,\qquad v\in\ms V.\]
\end{notation}
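In particular, $\langle L_t,g\rangle=\sum_{v\in\ms V}L_t(v)g(v)=\int_0^tg\big(X(s)\big)\d s$ whenever the sum is well defined; this is the form in which the local time enters the Feynman-Kac kernel below.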
\begin{definition}[Potential and Noise]
Let $V:\ms V\to\mbb R\cup\{\infty\}$ be a deterministic function,
and let $\xi:\ms V\to\mbb R$ be a random function.
We define the set
\begin{align}
\label{Equation: Dirichlet Set}
\ms Z:=\{v\in\ms V:V(v)=\infty\}
\end{align}
of sites on which the potential is infinite.
\end{definition}
Throughout, we make the following assumptions on the noise and potential.
\begin{assumption}[Potential Growth and Noise Tails]
\label{Assumption: Potential and Noise}
There exists $\al>0$ such that
\begin{align}
\label{Equation: Potential Growth}
\liminf_{\msf d(0,v)\to\infty}\frac{V(v)}{\msf d(0,v)^\al}=\infty.
\end{align}
Moreover, $\xi$ satisfies the following conditions:
\begin{enumerate}
\item $\mbf E[\xi(v)]=0$ for every $v\in\ms V$.
\item There exists $\mf m>0$ such that
for every $p\in\mbb N$,
\begin{align}
\label{Equation: Exponential Moments}
\sup_{v\in\ms V}\mbf E[|\xi(v)|^p]\leq p!\mf m^p.
\end{align}
\end{enumerate}
\end{assumption}
In the sequel, it will be useful to characterize noises in terms
of the decay rate of their covariances. For this purpose, we make
the following definition.
\begin{definition}[Covariance Decay]
\label{Definition: covariance decay}
We say that $\xi$ has covariance decay of order (at least) $\be>0$ if there
exists a constant $\mf C>0$ such that
\begin{align}
\label{Equation: covariance decay of Order Two}
\big|\mbf E[\xi(u)\xi(v)]\big|\leq\mf C\,\big(\msf d(u,v)+1\big)^{-\be}
\end{align}
for every $u,v\in\ms V$, and such that
\begin{align}
\label{Equation: covariance decay of Order Three}
\big|\mbf E[\xi(u)\xi(v)\xi(w)]\big|\leq\mf C\,\min_{a,b\in\{u,v,w\}}\big(\msf d(a,b)+1\big)^{-\be}
\end{align}
for every $u,v,w\in\ms V$.
\end{definition}
\begin{definition}[Feynman-Kac Kernel]
Define the Feynman-Kac kernel
\begin{align}
\label{Equation: Kernel}
K_t(u,v):=
\mbf E^u\left[\mr e^{-\langle L_t,V+\xi\rangle}\mbf 1_{\{X(t)=v\}}\right],\qquad u,v\in\ms V,
\end{align}
where we assume that $X$ is independent of $\xi$,
and that $\mbf E^v$ denotes the expectation with respect to
the Markov process $X^v$, conditional on $\xi$.
We denote the trace of $K_t$ as
\[\mr{Tr}[K_t]:=\sum_{v\in\ms V}K_t(v,v).\]
\end{definition}
\begin{remark}
In the above definition,
we use the convention that $\mr e^{-\infty}:=0$ whenever $V(v)=\infty$;
in particular, $K_t(u,v)=0$ whenever $u\in\ms Z$ or $v\in\ms Z$.
\end{remark}
\subsection{Main Results: Variance Upper Bound and Rigidity}
We now state our main results. First, we have the following sufficient condition
for the vanishing of the variance of the trace of $K_t$ as $t\to0$:
\begin{theorem}
\label{Theorem: Upper}
Suppose that Assumptions \ref{Assumption: Graph} and \ref{Assumption: Potential and Noise} hold.
In order to have
\[\lim_{t\to0}\mbf{Var}\big[\mr{Tr}[K_t]\big]=0,\]
it is sufficient that the constant $\al$ in \eqref{Equation: Potential Growth}
satisfies the following:
\begin{enumerate}
\item if $\xi$ has covariance decay of order $\be>0$, then
\begin{align}
\label{Equation: Polynomial Growth Condition}
\al\begin{cases}
\geq d/2&\text{when }\be>d,\\
>d/2&\text{when }\be=d,\\
\geq d-\be/2&\text{when }\be<d;
\end{cases}
\end{align}
\item otherwise, $\al\geq d$.
\end{enumerate}
\end{theorem}
As a consequence of the above theorem, we have the following result,
which states some properties of $K_t$'s infinitesimal generator, including number rigidity.
\begin{theorem}
\label{Theorem: Rigidity}
Suppose that Assumptions \ref{Assumption: Graph} and \ref{Assumption: Potential and Noise} hold,
and that we take the constant $\al$ in \eqref{Equation: Potential Growth} as in Theorem \ref{Theorem: Upper}.
The following conditions hold almost surely:
\begin{enumerate}
\item For every $t>0$, $K_t$ is a trace class linear operator
on $\ell^2(\ms V)$. There exists a random variable $\om\leq0$
such that $\|K_t\|_{\mr{op}}\leq\mr e^{-\om t}$ for all $t>0$.
\item The family of operators
$(K_t)_{t>0}$ is a strongly continuous semigroup
on $\ell^2(\ms V)$.
\item The infinitesimal generator
\begin{align}
\label{Equation: Infinitesimal Generator}
H:=\lim_{t\to0}\frac{K_0-K_t}{t}
\end{align}
is closed on some dense domain $D(H)\subset\ell^2(\ms V)$,
and its action on functions is given by the following matrix:
\begin{align}
\label{Equation: Schrodinger Generator}
H(u,v):=\begin{cases}
-q(u)\Pi(u,v)&\text{if }u\neq v\text{ and }u,v\not\in\ms Z,\\
q(u)+V(u)+\xi(u)&\text{if }u=v\text{ and }u\not\in\ms Z,\\
\infty&\text{if }u\in\ms Z\text{ or }v\in\ms Z.
\end{cases}
\end{align}
(In particular, if $f\in D(H)$, then $f(v)=0$ for every $v\in\ms Z$.)
\end{enumerate}
In particular, almost surely,
$H$ has a pure point spectrum without accumulation point,
and the eigenvalue point process (counting algebraic multiplicities)
\begin{align}
\label{Equation: Eigenvalue Point Process}
\mc X_H:=\sum_{\la\in\si(H)} m_a(\la,H)\,\de_\la
\end{align}
is real-bounded below by $\om$
and number rigid in the sense of Definition \ref{Definition: Rigidity}.
\end{theorem}
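\begin{remark}
The matrix \eqref{Equation: Schrodinger Generator} is consistent with the decomposition $H=-H_X+V+\xi$ (with a Dirichlet condition on $\ms Z$): the generator of $X$ acts on finitely supported functions as $H_Xf(u)=q(u)\sum_{v\in\ms V}\Pi(u,v)\big(f(v)-f(u)\big)$, so that, for $u\not\in\ms Z$,
\begin{equation*}
-H_Xf(u)+\big(V(u)+\xi(u)\big)f(u)=\big(q(u)+V(u)+\xi(u)\big)f(u)-\sum_{v\neq u}q(u)\Pi(u,v)f(v).
\end{equation*}
\end{remark}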
\subsection{Questions of Optimality}
\label{Section: Optimality}
In this section, we study the optimality of the growth assumptions
we make on $V$ in Theorem \ref{Theorem: Upper} by considering three
counterexamples.
\begin{theorem}\label{Theorem: Lower}
Suppose that $X$ is the nearest-neighbor symmetric random walk on the integer lattice $\mbb Z^d$,
that $V(v):=\msf d(0,v)^\de$ for some $\de>0$, and that $\xi$ is a centered stationary Gaussian
process whose covariance function $\ga(v):=\mbf E[\xi(v)\xi(0)]$ is nonnegative.
If one of the following conditions holds:
\begin{enumerate}
\item $\de\leq d/2$ and $\ga(v)=\mbf 1_{\{v=0\}}$;
\item $\de\leq d-\be/2$ for some $0<\be<d$, and there exists a constant $\mf L>0$
such that $\ga(v)\geq\mf L\big(\msf d(0,v)+1\big)^{-\be}$ for every $v\in\mbb Z^d$; or
\item $\de\leq d$ and $\inf_{v\in\mbb Z^d}\ga(v)>\mf L$ for some constant $\mf L>0$;
\end{enumerate}
then we have the variance lower bound
\[\liminf_{t\to0}\mbf{Var}\big[\mr{Tr}[K_t]\big]>0.\]
\end{theorem}
Thus, given that $\msf c_n(v)\asymp n^{d-1}$ as $n\to\infty$ on $\mbb Z^d$,
if one is interested in providing a general sufficient condition for number rigidity on graphs
using semigroups, then Theorem \ref{Theorem: Upper} is essentially the optimal result
one could hope for.
\begin{remark}
An examination of the proof of Theorem \ref{Theorem: Lower}
reveals that similar lower bounds can be proved for more general
examples with little effort; we restrict our attention to this elementary setting for
simplicity of exposition.
\end{remark}
\section{Proof of Theorem \ref{Theorem: Upper}}
\label{sec: Proof of Upper}
Throughout this section, we assume that Assumptions
\ref{Assumption: Graph} and \ref{Assumption: Potential and Noise} hold.
This section is organized as follows: In Section \ref{Section: Outline 2}, we outline the
main steps of the proof of Theorem \ref{Theorem: Upper}.
That is, we state a number of technical propositions and lemmas,
which we then use to prove Theorem~\ref{Theorem: Upper}. Then, in Sections
\ref{Section: Proof of Variance Formula}--\ref{sec:PrLem3}, we prove the technical results stated in
Section \ref{Section: Outline 2}, thus wrapping up the proof
of Theorem~\ref{Theorem: Upper}.
\subsection{Proof Outline}
\label{Section: Outline 2}
\subsubsection{Step 1. Variance Formula and First Bound}
We begin with some notations.
\begin{notation}
\label{Notation: Conditional Expectation}
Let us denote by $(\Om_\xi,\mbf P_\xi)$ the probability space on which $\xi$
is defined. Let $Y$ be any random element that is independent of $\xi$,
and let $F$ be any measurable function. We denote the random variable
\[\mbf E_\xi\big[F(\xi,Y)\big]:=\int_{\Om_\xi} F(x,Y)\d\mbf P_\xi(x);\]
that is, $\mbf E_\xi$ is the conditional expectation with respect to
$\xi$, given $Y$. Then, for measurable functions $F$ and $G$,
we denote the random variable
\[\mbf{Cov}_\xi\big[F(\xi,Y),G(\xi,Y)\big]:=\mbf E_\xi\big[F(\xi,Y)G(\xi,Y)\big]-\mbf E_\xi\big[F(\xi,Y)\big]\mbf E_\xi\big[G(\xi,Y)\big].\]
\end{notation}
Our main tool in the proof of Theorem \ref{Theorem: Upper} is the following variance formula:
\begin{proposition}
\label{Proposition: Variance Formula}
For every $u,v\in\ms V$,
we let $X^u$ and $\tilde X^v$ be independent
copies of the Markov process $X$ started from $u$ and $v$ respectively.
We assume that $X^u$ and $\tilde X^v$ are independent of the
noise $\xi$, and
we denote their local times as
\[L^u_t(w):=\int_0^t\mbf 1_{\{X^u(s)=w\}}\d s
\qquad\text{and}\qquad
\tilde L^v_t(w):=\int_0^t\mbf 1_{\{\tilde X^v(s)=w\}}\d s\]
for all $w\in\ms V$. It holds that
\[\mbf{Var}\big[\mr{Tr}[K_t]\big]=\sum_{u,v\in\ms V}\mbf E\left[\mr e^{-\langle L^u_t+\tilde L^v_t,V\rangle}
\mbf{Cov}_\xi\left[\mr e^{-\langle L^u_t,\xi\rangle},\mr e^{-\langle\tilde L^v_t,\xi\rangle}\right]\mbf 1_{\{X^u(t)=u,\tilde X^v(t)=v\}}\right].\]
\end{proposition}
The proof of this proposition, which we provide in Section \ref{Section: Proof of Variance Formula} below,
is essentially a direct consequence of the definition of $K_t$ in \eqref{Equation: Kernel}.
In order to find sufficient conditions for $\mbf{Var}\big[\mr{Tr}[K_t]\big]\to0$ as $t\to0$ using this formula, it is
convenient to control the contributions coming from $V$ and $\xi$ separately. To this end,
we use H\"older's inequality, as well as the elementary fact that $\mbf 1_E\leq 1$ for every event $E$,
which yields
\begin{multline*}
\mbf E\Big[\mr e^{-\langle L^u_t+\tilde L^v_t,V\rangle}
\mbf{Cov}_\xi\big[\mr e^{-\langle L^u_t,\xi\rangle},\mr e^{-\langle\tilde L^v_t,\xi\rangle}\big]\mbf 1_{\{X^u(t)=u,\tilde X^v(t)=v\}}\Big]\\
\leq\mbf E\Big[\mr e^{-2\langle L^u_t+\tilde L^v_t,V\rangle}
\Big]^{1/2}\mbf E\Big[\mbf{Cov}_{\xi}\Big[\mr e^{-\langle L^u_t,\xi\rangle},\mr e^{-\langle\tilde L^v_t,\xi\rangle}\Big]^2\Big]^{1/2}
\end{multline*}
for every fixed $u,v\in\ms V$.
Then, by summing both sides of the above inequality over $u,v \in \ms V$,
we obtain our first upper bound for the variance:
\begin{align}
\label{Equation: Variance Upper Bound Holder}
\mbf{Var}\big[\mr{Tr}[K_t]\big]\leq
\sum_{u,v\in\ms V}\mbf E\Big[\mr e^{-2\langle L^u_t+\tilde L^v_t,V\rangle}
\Big]^{1/2}\mbf E\Big[\mbf{Cov}_{\xi}\Big[\mr e^{-\langle L^u_t,\xi\rangle},\mr e^{-\langle\tilde L^v_t,\xi\rangle}\Big]^2\Big]^{1/2}.
\end{align}
\subsubsection{Step 2. Controlling the Contributions from $\xi$ and $V$}
We now state the technical results that we use to control the right-hand side of \eqref{Equation: Variance Upper Bound Holder}.
Our first such result is as follows:
\begin{lemma}
\label{Lemma: Variance Upper Bound 1}
Recall the definition of the constant $\mf m>0$ in \eqref{Equation: Exponential Moments}.
There exists a constant $C_1>0$ (which only depends on $\mf m$) such that
for every $t<1/C_1$, one has
\begin{align*}
\sup_{u,v\in\ms V}\mbf E\Big[\mbf{Cov}_{\xi}\big[\mr e^{-\langle L^u_t,\xi\rangle},\mr e^{-\langle\tilde L^v_t,\xi\rangle}\big]^2\Big]^{1/2}\leq C_1t^2.
\end{align*}
\end{lemma}
The proof of Lemma \ref{Lemma: Variance Upper Bound 1}, which we provide in Section \ref{sec:PrLem1},
follows from estimating expectations of the form $\mbf E^v\big[\mr e^{-\theta\langle L_t,\xi\rangle}\big]$
using our assumption that $\xi$'s tails are not worse than exponential (i.e., \eqref{Equation: Exponential Moments}).
Next, we have the following result, which provides a tighter decay rate
in the case where $\xi$ has covariance decay:
\begin{lemma}
\label{Lemma: Variance Upper Bound 2}
Suppose that $\xi$ has covariance decay of order $\be$,
as per Definition \ref{Definition: covariance decay}.
Recall the definitions of the constants $\mf q$, $\mf m$, and $\mf C$
in Assumption \ref{Assumption: Graph} (2), \eqref{Equation: Exponential Moments},
\eqref{Equation: covariance decay of Order Two}, and \eqref{Equation: covariance decay of Order Three}.
There exists a constant $C_2>0$
(which only depends on $\mf q$, $\mf m$, $\mf C$, and $\be$)
such that for every $t<1/C_2$ and $u,v\in\ms V$, one has
\[\mbf E\left[\mbf{Cov}_\xi\left[\mr e^{-\langle L^u_t,\xi\rangle},\mr e^{-\langle\tilde L^v_t,\xi\rangle}\right]^2\right]^{1/2}\leq C_2\left(t^2\big(\msf d(u,v)+1\big)^{-\be}+t^4\right).\]
\end{lemma}
Lemma \ref{Lemma: Variance Upper Bound 2} is proved in Section \ref{sec:PrLem2}.
The proof of this lemma is rather more subtle than that of Lemma \ref{Lemma: Variance Upper Bound 1},
and depends on a careful control of how much $X^u$ and $\tilde X^v$ deviate from their
respective starting points $u$ and $v$. We note that the uniform upper bound on $X$'s
jump rates in Assumption \ref{Assumption: Graph} (2) is crucial for this lemma.
\begin{remark}
The proofs of Lemmas \ref{Lemma: Variance Upper Bound 1} and \ref{Lemma: Variance Upper Bound 2} both rely on some elementary formulas and estimates of the moment generating functions of the noises and their covariances, which will be stated and proved in Section~\ref{Section: Moment Generating Functions}.
\end{remark}
With Lemmas \ref{Lemma: Variance Upper Bound 1} and \ref{Lemma: Variance Upper Bound 2}
in hand, it now only remains to control the contribution of the potential $V$ in \eqref{Equation: Variance Upper Bound Holder}.
For this, we have the following result:
\begin{lemma}
\label{Lemma: Variance Upper Bound 3}
Recall the definition of $d\geq1$ and $\mf c>0$ in \eqref{Equation: Coordination Sequence}.
Suppose that we can find some constants $\ka,\mu>0$ such that
\begin{align}
\label{Equation: V Pointwise Lower Bound}
V(v)\geq\big(\ka\,\msf d(0,v)\big)^\al-\mu,\qquad v\in\ms V.
\end{align}
Then, there exists a constant $C_3>0$
(which only depends on $\al$, $\be$, $d$, and $\mf c$)
such that
\begin{align}
\label{Equation: Variance Upper Bound 3.1}
&\limsup_{t\to0}t^{2d/\al}\sum_{u,v\in\ms V}
\mbf E\Big[\mr e^{-2\langle L^u_t+\tilde L^v_t,V\rangle}\Big]^{1/2}
\leq C_3\ka^{-2d};\\
\label{Equation: Variance Upper Bound 3.2}
&\limsup_{t\to0}t^{(2d-\be)/\al}\sum_{u,v\in\ms V}
\mbf E\Big[\mr e^{-2\langle L^u_t+\tilde L^v_t,V\rangle}\Big]^{1/2}\big(\msf d(u,v)+1\big)^{-\be}
\leq C_3\ka^{-2d+\be}
\end{align}
for every $0<\be<d$; and
\begin{align}
\label{Equation: Variance Upper Bound 3.3}
\limsup_{t\to0}t^{d/\al}\sum_{u,v\in\ms V}
\mbf E\Big[\mr e^{-2\langle L^u_t+\tilde L^v_t,V\rangle}\Big]^{1/2}\big(\msf d(u,v)+1\big)^{-\be}
\leq C_3\ka^{-d}
\end{align}
for every $\be>d$.
\end{lemma}
Lemma \ref{Lemma: Variance Upper Bound 3}, which is proved in Section \ref{sec:PrLem3},
follows the strategy outlined in \eqref{Equation: Heuristic 4} and \eqref{Equation: Heuristic 5}:
The first step of the proof of Lemma \ref{Lemma: Variance Upper Bound 3} relies on a rigorous
implementation of the intuition that, for very small $t>0$, one expects that
\begin{align}
\label{Equation: Lemma 3 Intuition}
\mbf E\big[\mr e^{-2\langle L^u_t+\tilde L^v_t,V\rangle}\big]^{1/2}\approx\mr e^{-tV(u)-tV(v)}.
\end{align}
This once again relies on controlling how much $X^u$ and $\tilde X^v$ deviate from their starting points.
Once a quantitative version of \eqref{Equation: Lemma 3 Intuition} is established,
we can then use \eqref{Equation: V Pointwise Lower Bound}, which allows to control
$\mbf E\big[\mr e^{-2\langle L^u_t+\tilde L^v_t,V\rangle}\big]^{1/2}$
in terms of quantities that only depend on the geometry of $\ms G$ (more precisely, the graph distance).
We then wrap up the proof of the lemma by using the upper bound on the coordination sequences in
\eqref{Equation: Coordination Sequence}, in similar fashion to \eqref{Equation: Heuristic 5}.
\subsubsection{Step 3. Conclusion of Proof}
We now combine the technical results stated above to conclude the proof of Theorem
\ref{Theorem: Upper}.
By applying Lemmas \ref{Lemma: Variance Upper Bound 1} and \ref{Lemma: Variance Upper Bound 2}
to our upper bound \eqref{Equation: Variance Upper Bound Holder}, we get that for every $t<1/C_1$, one has
\begin{align}
\label{Equation: Main Proof General}
\mbf{Var}\big[\mr{Tr}[K_t]\big]\leq C_1 t^2\sum_{u,v\in\ms V}\mbf E\Big[\mr e^{-2\langle L^u_t+\tilde L^v_t,V\rangle}\Big]^{1/2},
\end{align}
and if $\xi$ has covariance decay of order $\be>0$, then for every $t<1/C_2$, one has
\begin{multline}
\label{Equation: Main Proof Decay}
\mbf{Var}\big[\mr{Tr}[K_t]\big]\leq C_2 t^2\sum_{u,v\in\ms V}\mbf E\Big[\mr e^{-2\langle L^u_t+\tilde L^v_t,V\rangle}\Big]^{1/2}\big(\msf d(u,v)+1\big)^{-\be}\\
+C_2t^4\sum_{u,v\in\ms V}\mbf E\Big[\mr e^{-2\langle L^u_t+\tilde L^v_t,V\rangle}\Big]^{1/2}.
\end{multline}
Thanks to our growth assumption in \eqref{Equation: Potential Growth}, for any choice of $\ka>0$, we know that there exists
a large enough $\mu>0$ so that \eqref{Equation: V Pointwise Lower Bound} holds. We may then complete the proof
of Theorem \ref{Theorem: Upper} by an application of Lemma \ref{Lemma: Variance Upper Bound 3}. We do this on a case-by-case
basis:
Suppose first that $\xi$ has covariance decay of order $0<\be<d$ and that $\al\geq d-\be/2>d/2$. Then,
the fact that $2-(2d-\be)/\al\geq0$ implies by \eqref{Equation: Variance Upper Bound 3.2} that
\begin{multline*}
\limsup_{t\to0}t^2\sum_{u,v\in\ms V}\mbf E\Big[\mr e^{-2\langle L^u_t+\tilde L^v_t,V\rangle}\Big]^{1/2}\big(\msf d(u,v)+1\big)^{-\be}\\
=\limsup_{t\to0}t^{2-(2d-\be)/\al}t^{(2d-\be)/\al}\sum_{u,v\in\ms V}\mbf E\Big[\mr e^{-2\langle L^u_t+\tilde L^v_t,V\rangle}\Big]^{1/2}\big(\msf d(u,v)+1\big)^{-\be}
\leq C_3\ka^{-2d+\be};
\end{multline*}
and the fact that $4-2d/\al>0$ implies by \eqref{Equation: Variance Upper Bound 3.1} that
\begin{multline}
\label{Equation: Main Proof Decay 2}
\limsup_{t\to0}t^4\sum_{u,v\in\ms V}\mbf E\Big[\mr e^{-2\langle L^u_t+\tilde L^v_t,V\rangle}\Big]^{1/2}\\
=\limsup_{t\to0}t^{4-2d/\al}t^{2d/\al}\sum_{u,v\in\ms V}\mbf E\Big[\mr e^{-2\langle L^u_t+\tilde L^v_t,V\rangle}\Big]^{1/2}
=0.
\end{multline}
Combining this with \eqref{Equation: Main Proof Decay} implies that
\[\limsup_{t\to0}\mbf{Var}\big[\mr{Tr}[K_t]\big]\leq C_2C_3\ka^{-2d+\be},\]
where we recall that $C_2,C_3>0$ do not depend on $\ka$ or $\mu$. Since
\eqref{Equation: V Pointwise Lower Bound} holds for any choice of $\ka>0$, we can take
$\ka\to\infty$, which then yields $\mbf{Var}\big[\mr{Tr}[K_t]\big]\to0$ as $t\to0$.
Next, suppose that $\xi$ has covariance decay of order $\be=d$ and that $\al>d/2$.
We note that this implies that $\xi$ also has covariance
decay of order $\tilde\be$ for any choice of $0<\tilde\be<d$.
Since $\al>d/2$ implies that $2d-2\al<d$, we can choose $\tilde\be$ close enough to
$d$ so that
$2d-2\al<\tilde\be$, which we can rearrange into $2>(2d-\tilde\be)/\al$.
Thus, \eqref{Equation: Variance Upper Bound 3.2} implies that
\begin{multline*}
\limsup_{t\to0}t^2\sum_{u,v\in\ms V}\mbf E\Big[\mr e^{-2\langle L^u_t+\tilde L^v_t,V\rangle}\Big]^{1/2}\big(\msf d(u,v)+1\big)^{-\be}\\
\leq\limsup_{t\to0}t^{2-(2d-\tilde\be)/\al}t^{(2d-\tilde\be)/\al}\sum_{u,v\in\ms V}\mbf E\Big[\mr e^{-2\langle L^u_t+\tilde L^v_t,V\rangle}\Big]^{1/2}\big(\msf d(u,v)+1\big)^{-\tilde\be}
=0.
\end{multline*}
(Here we used that $\tilde\be<\be$ implies $\big(\msf d(u,v)+1\big)^{-\be}\leq\big(\msf d(u,v)+1\big)^{-\tilde\be}$.) Combining this with \eqref{Equation: Main Proof Decay 2}, we directly prove
that $\mbf{Var}\big[\mr{Tr}[K_t]\big]\to0$ as $t\to0$ in this case.
Suppose now that $\xi$ has covariance decay of order $\be>d$ and that $\al\geq d/2$.
Then, the fact that $2-d/\al\geq0$ implies by \eqref{Equation: Variance Upper Bound 3.3} that
\begin{multline*}
\limsup_{t\to0}t^2\sum_{u,v\in\ms V}\mbf E\Big[\mr e^{-2\langle L^u_t+\tilde L^v_t,V\rangle}\Big]^{1/2}\big(\msf d(u,v)+1\big)^{-\be}\\
=\limsup_{t\to0}t^{2-d/\al}t^{d/\al}\sum_{u,v\in\ms V}\mbf E\Big[\mr e^{-2\langle L^u_t+\tilde L^v_t,V\rangle}\Big]^{1/2}\big(\msf d(u,v)+1\big)^{-\be}
\leq C_3\ka^{-d};
\end{multline*}
and the fact that $4-2d/\al\geq0$ implies by \eqref{Equation: Variance Upper Bound 3.1} that
\begin{multline}
\label{Equation: Main Proof Decay 3}
\limsup_{t\to0}t^4\sum_{u,v\in\ms V}\mbf E\Big[\mr e^{-2\langle L^u_t+\tilde L^v_t,V\rangle}\Big]^{1/2}\\
=\limsup_{t\to0}t^{4-2d/\al}t^{2d/\al}\sum_{u,v\in\ms V}\mbf E\Big[\mr e^{-2\langle L^u_t+\tilde L^v_t,V\rangle}\Big]^{1/2}
\leq C_3\ka^{-2d}.
\end{multline}
Combining this with \eqref{Equation: Main Proof Decay} and taking $\ka\to\infty$ then implies that
$\mbf{Var}\big[\mr{Tr}[K_t]\big]\to0$ as $t\to0$.
Finally, consider the general case where we simply assume that $\al\geq d$.
Then, $2-2d/\al\geq0$, and thus \eqref{Equation: Variance Upper Bound 3.1} implies that
\begin{multline*}
\limsup_{t\to0} t^2\sum_{u,v\in\ms V}\mbf E\Big[\mr e^{-2\langle L^u_t+\tilde L^v_t,V\rangle}\Big]^{1/2}\\
=\limsup_{t\to0} t^{2-2d/\al}t^{2d/\al}\sum_{u,v\in\ms V}\mbf E\Big[\mr e^{-2\langle L^u_t+\tilde L^v_t,V\rangle}\Big]^{1/2}\leq C_3\ka^{-2d}.
\end{multline*}
Since the constants $C_1,C_3>0$ are independent of $\ka$ and $\mu$, combining this with \eqref{Equation: Main Proof General}
and taking $\ka\to\infty$ then implies that $\mbf{Var}\big[\mr{Tr}[K_t]\big]\to0$ as $t\to0$ in this case.
This then completes the proof of Theorem \ref{Theorem: Upper}.
\subsection{Proof of Proposition \ref{Proposition: Variance Formula}}
\label{Section: Proof of Variance Formula}
Since the Markov process $X$ is assumed to be independent of $\xi$, by applying Fubini's theorem
to the definition of $K_t$ in \eqref{Equation: Kernel},
we have that
\[\mbf E\big[\mr{Tr}[K_t]\big]=\sum_{v\in\ms V}\mbf E^v\left[\mr e^{-\langle L_t,V\rangle}\mbf E_\xi\left[\mr e^{-\langle L_t,\xi\rangle}\right]\mbf 1_{\{X(t)=v\}}\right],\]
where we recall the definition of $\mbf E_\xi$ in Notation \ref{Notation: Conditional Expectation}.
Taking the square of this expression, we then get once again by Fubini's theorem that
\[\mbf E\big[\mr{Tr}[K_t]\big]^2=\sum_{u,v\in\ms V}\mbf E\left[\mr e^{-\langle L^u_t+\tilde L^v_t,V\rangle}
\mbf E_\xi\left[\mr e^{-\langle L^u_t,\xi\rangle}\right]\mbf E_\xi\left[\mr e^{-\langle\tilde L^v_t,\xi\rangle}\right]\mbf 1_{\{X^u(t)=u,\tilde X^v(t)=v\}}\right].\]
Thanks to \eqref{Equation: Kernel}, it is easy to check that
\[\mr{Tr}[K_t]^2=\sum_{u,v\in\ms V}\mbf E\left[\left.\mr e^{-\langle L^u_t+\tilde L^v_t,V+\xi\rangle}\mbf 1_{\{X^u(t)=u,\tilde X^v(t)=v\}}\right|\xi\right],\]
where the conditional expectation given $\xi$ averages over the two independent walks $X^u$ and $\tilde X^v$.
Taking the expectation of this expression using Fubini's theorem then leads to
\[\mbf E\big[\mr{Tr}[K_t]^2\big]=\sum_{u,v\in\ms V}\mbf E\left[\mr e^{-\langle L^u_t+\tilde L^v_t,V\rangle}
\mbf E_\xi\left[\mr e^{-\langle L^u_t+\tilde L^v_t,\xi\rangle}\right]\mbf 1_{\{X^u(t)=u,\tilde X^v(t)=v\}}\right].\]
The proof of Proposition \ref{Proposition: Variance Formula} is then simply a matter of subtracting
$\mbf E\big[\mr{Tr}[K_t]\big]^2$ from the above expression for $\mbf E\big[\mr{Tr}[K_t]^2\big]$,
and using the definition of $\mbf{Cov}_\xi$ in Notation \ref{Notation: Conditional Expectation}.
\subsection{Auxiliary Results on Estimates of Moment Generating Functions}
\label{Section: Moment Generating Functions}
Before discussing the proofs of Lemmas~\ref{Lemma: Variance Upper Bound 1} and~\ref{Lemma: Variance Upper Bound 2} in the next two subsections, we record two simple propositions concerning the tail behavior of the moment generating functions of the noise and its covariances. The first result is a straightforward consequence of Taylor expansions and Assumption~\ref{Assumption: Potential and Noise} on the tails of the noise.
\begin{proposition}
\label{Proposition: Noise Decay 1}
Under Assumption \ref{Assumption: Potential and Noise},
for all finitely supported deterministic functions $f,g:\ms V\to\mbb R$ such that
$\|f+g\|_{1},\|f\|_{1},\|g\|_{1}\leq1/(2\mf m)$,
it holds that
\begin{align}\label{eq: NoiseDecay1}
\Big|\mbf E\big[\mr e^{\langle f,\xi\rangle}\big]-1\Big|\leq 2\mf m^2\|f\|_{1}^2
\end{align}
and
\begin{align}\label{eq: NoiseDecay2}
\big|\mbf{Cov}\big[\mr e^{\langle f,\xi\rangle},\mr e^{\langle g,\xi\rangle}\big]\Big|\leq2\mf m^2\big(\|f+g\|_{1}^2+\|f\|_{1}^2+\|g\|^2_{1}\big)+4\mf m^4\|f\|_{1}^2\|g\|_{1}^2.
\end{align}
\end{proposition}
\begin{proof}
For every deterministic function $f:\ms V\to\mbb R$, it follows from
a straightforward Taylor expansion of the exponential that
\begin{align}
\label{Equation: Exponential Moment Expansion}
\mbf E\left[\mr e^{\langle f,\xi\rangle}\right]=\sum_{p=0}^\infty\frac{1}{p!}\sum_{v_1,\ldots,v_p\in\ms V}\mbf E[\xi(v_1)\cdots\xi(v_p)]f(v_1)\cdots f(v_p),
\end{align}
with the convention that the term with $p=0$ above is equal to one.
Firstly, since $\mbf E[\xi(v)]=0$ for all $v$,
the term corresponding to $p=1$ in \eqref{Equation: Exponential Moment Expansion} is zero.
Secondly, thanks to our moment growth assumption $\mbf E[|\xi(v)|^p]\leq p!\mf m^p$, for every $p\geq2$ we have that
\begin{multline*}
\left|\sum_{v_1,\ldots,v_p\in\ms V}\mbf E[\xi(v_1)\cdots\xi(v_p)]f(v_1)\cdots f(v_p)\right|\\
\leq\sum_{v_1,\ldots,v_p\in\ms V}\mbf E[|\xi(v_1)|^p]^{1/p}\cdots\mbf E[|\xi(v_p)|^p]^{1/p}|f(v_1)|\cdots |f(v_p)|
\leq p!\big(\mf m\|f\|_{1}\big)^p.
\end{multline*}
Thus, if $\|f\|_{1}\leq1/2\mf m$, then we have that
\[\Big|\mbf E\left[\mr e^{\langle f,\xi\rangle}\right]-1\Big|\leq\sum_{p=2}^\infty(\mf m\|f\|_{1})^p=\frac{(\mf m\|f\|_{1})^2}{1-\mf m\|f\|_{1}}\leq2(\mf m\|f\|_{1})^2.\]
As for the claim regarding the covariance, for any two random variables $Y$ and $Z$,
writing $\mbf E[Y]\mbf E[Z]-1=(\mbf E[Y]-1)(\mbf E[Z]-1)+(\mbf E[Y]-1)+(\mbf E[Z]-1)$,
we have by the triangle inequality that
\begin{multline*}
|\mbf{Cov}[Y,Z]|
=|\mbf E[YZ]-\mbf E[Y]\mbf E[Z]|\\
\leq|\mbf E[YZ]-1|+|\mbf E[Y]-1||\mbf E[Z]-1|+|1-\mbf E[Y]|+|1-\mbf E[Z]|.
\end{multline*}
Thus, whenever $\|f+g\|_{1},\|f\|_{1},\|g\|_{1}\leq1/2\mf m$,
it follows from \eqref{eq: NoiseDecay1} that
\[\Big|\mbf{Cov}\big[\mr e^{\langle f,\xi\rangle},\mr e^{\langle g,\xi\rangle}\big]\Big|
\leq2\mf m^2\big(\|f+g\|_{1}^2+\|f\|_{1}^2+\|g\|^2_{1}\big)+4\mf m^4\|f\|_{1}^2\|g\|_{1}^2,\]
as desired.
\end{proof}
In cases where we need a more precise control on the covariance, we have the following power series expansion:
\begin{proposition}
\label{Proposition: General Covariance Formula}
Suppose that Assumption \ref{Assumption: Potential and Noise} holds.
For any two finitely supported deterministic functions $f,g:\ms V\to\mbb R$,
one has
\[\mbf{Cov}\left[\mr e^{\langle f,\xi\rangle},\mr e^{\langle g,\xi\rangle}\right]=\sum_{p=2}^\infty\frac{\mc A_p(f,g)}{p!},\]
where, for every $p\geq2$, we denote
\begin{multline}
\label{Equation: General Covariance Formula}
\mc A_p(f,g):=
\sum_{v_1,\ldots,v_p\in\ms V}\Bigg(\sum_{m=1}^{p-1}{p\choose m}
\mbf{Cov}[\xi(v_1)\cdots\xi(v_m),\xi(v_{m+1})\cdots\xi(v_p)]\\
\cdot f(v_1)\cdots f(v_m)g(v_{m+1})\cdots g(v_p)\Bigg).
\end{multline}
\end{proposition}
\begin{proof}
Using the same Taylor expansion as in \eqref{Equation: Exponential Moment Expansion},
we get, on the one hand,
\begin{align*}
&\mbf E\left[\mr e^{\langle f+g,\xi\rangle}\right]\\
&=\sum_{p=0}^\infty\frac{1}{p!}\sum_{v_1,\ldots,v_p\in\ms V}\mbf E[\xi(v_1)\cdots\xi(v_p)]\big(f(v_1)+g(v_1)\big)\cdots\big(f(v_p)+g(v_p)\big)\\
&=\sum_{p=0}^\infty\frac{1}{p!}\sum_{v_1,\ldots,v_p\in\ms V}\sum_{m=0}^p{p\choose m}\mbf E[\xi(v_1)\cdots\xi(v_p)]f(v_1)\cdots f(v_m)g(v_{m+1})\cdots g(v_p),
\end{align*}
and on the other hand
\begin{align*}
&\mbf E\left[\mr e^{\langle f,\xi\rangle}\right]\mbf E\left[\mr e^{\langle g,\xi\rangle}\right]\\
&=\sum_{m_1,m_2=0}^\infty\frac{1}{m_1!m_2!}\Bigg(\sum_{v_1,\ldots,v_{m_1+m_2}\in\ms V}
\mbf E[\xi(v_1)\cdots\xi(v_{m_1})]\mbf E[\xi(v_{m_1+1})\cdots\xi(v_{m_1+m_2})]\\
&\hspace{2.5in}\cdot f(v_1)\cdots f(v_{m_1})g(v_{m_1+1})\cdots g(v_{m_1+m_2})\Bigg)\\
&=\sum_{p=0}^\infty\sum_{m=0}^p\frac{1}{m!(p-m)!}\Bigg(\sum_{v_1,\ldots,v_{p}\in\ms V}
\mbf E[\xi(v_1)\cdots\xi(v_{m})]\mbf E[\xi(v_{m+1})\cdots\xi(v_{p})]\\
&\hspace{2.5in}\cdot f(v_1)\cdots f(v_{m})g(v_{m+1})\cdots g(v_{p})\Bigg)\\
&=\sum_{p=0}^\infty\frac1{p!}\sum_{v_1,\ldots,v_{p}\in\ms V}\Bigg(\sum_{m=0}^p{p\choose m}
\mbf E[\xi(v_1)\cdots\xi(v_{m})]\mbf E[\xi(v_{m+1})\cdots\xi(v_{p})]\\
&\hspace{2.5in}\cdot f(v_1)\cdots f(v_{m})g(v_{m+1})\cdots g(v_{p})\Bigg).
\end{align*}
We then get the result by subtracting these two expressions.
\end{proof}
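For instance, in the lowest-order case $p=2$, formula \eqref{Equation: General Covariance Formula} reduces to
\[\mc A_2(f,g)=2\sum_{w_1,w_2\in\ms V}\mbf{Cov}[\xi(w_1),\xi(w_2)]\,f(w_1)g(w_2),\]
which recovers the familiar bilinear covariance pairing of $f$ and $g$; it is this term that carries the covariance decay of $\xi$ in the proof of Lemma \ref{Lemma: Variance Upper Bound 2} below.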
\subsection{Proof of Lemma \ref{Lemma: Variance Upper Bound 1}}
\label{sec:PrLem1}
By definition of local time, $\|L_t^u\|_{1}=\|\tilde{L}_t^v\|_{1}= t$,
as well as $\|L_t^u+\tilde{L}_t^v\|_{1}=2t$.
Thus, by \eqref{eq: NoiseDecay2} in Proposition \ref{Proposition: Noise Decay 1}, if $t<1/4\mf m$, then
we have for any $u,v\in \ms V$ that
\[
\left|\mbf{Cov}_{\xi}\big[\mr e^{-\langle L^u_t,\xi\rangle},\mr e^{-\langle\tilde L^v_t,\xi\rangle}\big]\right|\leq2\mf m^2\big(4t^2+t^2+t^2\big)+4\mf m^4t^4=12\mf m^2t^2+4\mf m^4t^4.
\]
Since the right-hand side of this inequality is not random, the result then follows
by noting that $t^4\leq t^2$ when $t\leq1$ and taking $C_1:=\max\{1,4\mf m,12\mf m^2,4\mf m^4\}$.
\subsection{Proof of Lemma \ref{Lemma: Variance Upper Bound 2}}
\label{sec:PrLem2}
For every $u,v\in\ms V$ and $t>0$, let us denote by
\[\mf D^{u,v}_t:=\min_{\substack{a,b\in\ms V\\L_t^u(a),\tilde L_t^v(b)\neq0}}\msf d(a,b)\]
the distance between the ranges of $X^u$ and $\tilde X^v$ up to time $t$.
In Section \ref{Section: Covariance Decay Step 1} below we prove the following crude version of Lemma \ref{Lemma: Variance Upper Bound 2}:
For every $t<\min\{1,1/4\mf m\}$ and $u,v\in\ms V$,
\begin{align}
\label{Equation: covariance decay}
\left|\mbf{Cov}_\xi\left[\mr e^{-\langle L_t^u,\xi\rangle},\mr e^{-\langle\tilde L_t^v,\xi\rangle}\right]\right|\leq 2\mf Ct^2(\mf D_t^{u,v}+1)^{-\be}+64\mf m^4t^4.
\end{align}
With this in hand, by Minkowski's inequality, we have that
\begin{align}
\label{Equation: PrLem2 - 1}
\mbf E\left[\mbf{Cov}_\xi\left[\mr e^{-\langle L_t^u,\xi\rangle},\mr e^{-\langle\tilde L_t^v,\xi\rangle}\right]^2\right]^{1/2}\leq
2\mf Ct^2\mbf E\big[(\mf D_t^{u,v}+1)^{-2\be}\big]^{1/2}+64\mf m^4t^4
\end{align}
for every $t<\min\{1,1/4\mf m\}$ and $u,v\in\ms V$.
Next, we control $\mf D^{u,v}_t$ in terms of
$\msf d(u,v)$. We do this in two cases. Suppose first that $\msf d(u,v)<16$.
In this case, we have the trivial bound
\[\mbf E\big[(\mf D_t^{u,v}+1)^{-2\be}\big]^{1/2}\leq1\leq17^\be\big(\msf d(u,v)+1\big)^{-\be},\]
which, when combined with \eqref{Equation: PrLem2 - 1}, yields
\begin{align}
\label{Equation: PrLem2 - 2}
\mbf E\left[\mbf{Cov}_\xi\left[\mr e^{-\langle L_t^u,\xi\rangle},\mr e^{-\langle\tilde L_t^v,\xi\rangle}\right]^2\right]^{1/2}\leq
2\cdot17^\be\mf Ct^2\big(\msf d(u,v)+1\big)^{-\be}+64\mf m^4t^4
\end{align}
for every $t<\min\{1,1/4\mf m\}$ and $u,v\in\ms V$ such that $\msf d(u,v)<16$.
Suppose then that $\msf d(u,v)\geq16$.
For any $u,v\in \ms V$ and $t>0$, we introduce the event
\[
E^{u,v}_t := \left\{\sup_{0\leq s\leq t}\msf d\big(X^u(s),u\big)\leq \frac{\msf d(u,v)}{4}\quad\text{and}\quad\sup_{0\leq s\leq t}\msf d\big(\tilde{X}^v(s),v\big)\leq \frac{\msf d(u,v)}{4}\right\}.
\]
With this in hand, given that $(\mf D_t^{u,v}+1)^{-\be}\leq1$ and $\sqrt{x+y}\leq\sqrt x+\sqrt y$ for all $x,y\geq0$,
\[\mbf E\big[(\mf D_t^{u,v}+1)^{-2\be}\big]^{1/2}
\leq\mbf E\big[(\mf D_t^{u,v}+1)^{-2\be}\mbf 1_{E^{u,v}_t}\big]^{1/2}+\mbf P\big[(E^{u,v}_t)^c\big]^{1/2}.\]
For any outcome in the event $E^{u,v}_t$, we have by the triangle inequality that
\[
\msf d\big(X^u(s),\tilde{X}^v(\tilde s)\big)\geq \msf d(u,v)-\msf d\big(X^u(s),u\big)-\msf d\big(\tilde{X}^v(\tilde s),v\big)\geq \frac{\msf d(u,v)}{4}
\]
for every $0\leq s,\tilde s\leq t$. In particular, this means that
$\mf D^{u,v}_t\mbf 1_{E^{u,v}_t}\geq\msf d(u,v)/4$.
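In particular, since $x\mapsto(x+1)^{-\be}$ is decreasing, this yields
\[\mbf E\big[(\mf D_t^{u,v}+1)^{-2\be}\mbf 1_{E^{u,v}_t}\big]^{1/2}\leq\big(\msf d(u,v)/4+1\big)^{-\be}\leq4^\be\big(\msf d(u,v)+1\big)^{-\be},\]
which accounts for the factor $2\cdot4^\be$ in the bound \eqref{Equation: PrLem2 - 3} below.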
In Section \ref{Section: Covariance Decay Step 2} below, we prove that
if $t<\min\{4/\mf q,1/4\mf q\mr e\}$ and $\msf d(u,v)\geq16$,
then
\begin{align}
\label{Equation: Ranges Tail Bound}
\mbf P\big[(E^{u,v}_t)^c\big]^{1/2}\leq \frac{\sqrt 2\,\mf q^2\mr e^2 t^2}{16}.
\end{align}
Combining these bounds with \eqref{Equation: PrLem2 - 1}, we are led to
\begin{multline}
\label{Equation: PrLem2 - 3}
\mbf E\left[\mbf{Cov}_\xi\left[\mr e^{-\langle L_t^u,\xi\rangle},\mr e^{-\langle\tilde L_t^v,\xi\rangle}\right]^2\right]^{1/2}\\
\leq2\cdot 4^\be\mf Ct^2\big(\msf d(u,v)+1\big)^{-\be}+\left(\frac{\sqrt 2\,\mf C\mf q^2\mr e^2}{8}+64\mf m^4\right)t^4
\end{multline}
for all $t<\min\{1,1/4\mf m,4/\mf q,1/4\mf q\mr e\}$ and $u,v\in\ms V$ such that $\msf d(u,v)\geq16$.
With \eqref{Equation: PrLem2 - 2} and \eqref{Equation: PrLem2 - 3} in hand,
in order to prove Lemma \ref{Lemma: Variance Upper Bound 2}, it only remains
to establish \eqref{Equation: covariance decay} and \eqref{Equation: Ranges Tail Bound}.
We do this in the next two subsections.
\subsubsection{Proof of \eqref{Equation: covariance decay}}
\label{Section: Covariance Decay Step 1}
Our main tool to prove \eqref{Equation: covariance decay} consists of the power series expansion
proved in Proposition \ref{Proposition: General Covariance Formula}:
\begin{align}
\label{Equation: Covariance Expansion}
\mbf{Cov}_\xi\left[\mr e^{-\langle L_t^u,\xi\rangle},\mr e^{-\langle\tilde L_t^v,\xi\rangle}\right]=\sum_{p=2}^\infty\frac{\mc A_p(-L^u_t,-\tilde L^v_t)}{p!},
\end{align}
where the terms $\mc A_p$ are defined in \eqref{Equation: General Covariance Formula}.
Thanks to our moment growth assumptions in \eqref{Equation: Exponential Moments}, for every $p\geq4$ and $1\leq m\leq p-1$, we have that
\begin{align*}
&\big|\mbf{Cov}[\xi(v_1)\cdots\xi(v_m),\xi(v_{m+1})\cdots\xi(v_p)]\big|\\
&\leq\big|\mbf E[\xi(v_1)\cdots\xi(v_p)]\big|+\big|\mbf E[\xi(v_1)\cdots\xi(v_m)]\mbf E[\xi(v_{m+1})\cdots\xi(v_p)]\big|\\
&\leq\mbf E[|\xi(v_1)|^p]^{1/p}\cdots\mbf E[|\xi(v_p)|^p]^{1/p}\\
&\qquad+\mbf E[|\xi(v_1)|^m]^{1/m}\cdots\mbf E[|\xi(v_m)|^m]^{1/m}\mbf E[|\xi(v_{m+1})|^{p-m}]^{1/(p-m)}\cdots\mbf E[|\xi(v_p)|^{p-m}]^{1/(p-m)}\\
&\leq p!\mf m^p+m!(p-m)!\mf m^p\\
&\leq2p!\mf m^p.
\end{align*}
Therefore, by combining \eqref{Equation: General Covariance Formula}
with the fact that $\sum_{m=0}^p{p\choose m}=2^p$, one has
\[\frac{|\mc A_p(-L_t^u,-\tilde L_t^v)|}{p!}\leq2\mf m^p\sum_{m=1}^{p-1}{p\choose m}\|L_t^u\|_{1}^m\|\tilde L_t^v\|_{1}^{p-m}
\leq 2(2\mf m t)^p.\]
Next, if $\xi$ has covariance decay of order $\be$, then
\eqref{Equation: covariance decay of Order Two} implies that
\begin{multline*}
|\mc A_2(-L_t^u,-\tilde L_t^v)|\leq2\sum_{w_1,w_2\in\ms V}\big|\mbf{Cov}[\xi(w_1),\xi(w_2)]\big|L_t^u(w_1)\tilde L_t^v(w_2)\\
\leq2\mf C(\mf D_t^{u,v}+1)^{-\be}\|L_t^u\|_{1}\|\tilde L_t^v\|_{1}\leq2\mf C t^2(\mf D_t^{u,v}+1)^{-\be},
\end{multline*}
and similarly \eqref{Equation: covariance decay of Order Three} implies that
\[|\mc A_3(-L_t^u,-\tilde L_t^v)|\leq6\mf C t^3(\mf D_t^{u,v}+1)^{-\be}.\]
At this point, if we take $t<\min\{1,1/4\mf m\}$, then $t^3\leq t^2$ and $2\mf m t<1/2$,
and thus it follows from the expansion \eqref{Equation: Covariance Expansion}
and the estimates above that
\begin{multline*}
\left|\mbf{Cov}_\xi\left[\mr e^{-\langle L_t^u,\xi\rangle},\mr e^{-\langle\tilde L_t^v,\xi\rangle}\right]\right|
\leq2\mf Ct^2(\mf D_t^{u,v}+1)^{-\be}+2\sum_{p=4}^\infty(2\mf m t)^p\\
=2\mf Ct^2(\mf D_t^{u,v}+1)^{-\be}+\frac{32\mf m^4t^4}{1-2\mf m t}
\leq2\mf Ct^2(\mf D_t^{u,v}+1)^{-\be}+64\mf m^4t^4.
\end{multline*}
\subsubsection{Proof of \eqref{Equation: Ranges Tail Bound}}
\label{Section: Covariance Decay Step 2}
Let us denote by $\mc S_t(X)$ the number of jumps that $X$ makes
in the time interval $[0,t]$. For every $x>0$ and $v\in\ms V$, it is easy to see that
\begin{align}
\label{Equation: Jumps Tail Bound 1}
\mbf P^v\left[\max_{0\leq s\leq t} \msf d\big(v,X(s)\big)\geq x\right]
\leq\mbf P^v\big[\mc S_t(X)\geq x\big].
\end{align}
For every $v\in\ms V$ and $t\geq0$,
the number of jumps $\mc S_t(X)$ is stochastically dominated by
a Poisson random variable with parameter $t\mf q$.
Therefore, applying the Chernoff bound for the tails of Poisson
random variables, we obtain that
\begin{align}
\label{Equation: Tail Bound}
\sup_{v\in\ms V}\mbf P^v\left[\max_{0\leq s\leq t} \msf d\big(v,X(s)\big)\geq x\right]
\leq\sup_{v\in\ms V}\mbf P^v\big[\mc S_t(X)\geq x\big]
\leq\mr e^{-\mf q t}\left(\frac{\mf q\mr e t}{x}\right)^{x}
\end{align}
for every $x>\mf q t$.
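For completeness, we recall the standard computation behind this Chernoff bound: if $\mc P$ is a Poisson random variable with parameter $\rho>0$, then for every $\theta>0$, Markov's inequality gives
\[\mbf P[\mc P\geq x]\leq\mr e^{-\theta x}\mbf E\big[\mr e^{\theta\mc P}\big]=\mr e^{-\theta x+\rho(\mr e^\theta-1)},\]
and the choice $\theta=\log(x/\rho)$, which is positive whenever $x>\rho$, yields $\mbf P[\mc P\geq x]\leq\mr e^{-\rho}(\mr e\rho/x)^x$.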
In order to specialize this to \eqref{Equation: Ranges Tail Bound},
we use the parameter $x:=\msf d(u,v)/4$. If $t<\min\{4/\mf q,1/4\mf q\mr e\}$ and $\msf d(u,v)\geq16$,
then we have that $4\mf q\mr e t<1$ and $x>\mf qt$,
and thus it follows by a union bound that
\begin{multline*}
\mbf P\big[(E^{u,v}_t)^c\big]^{1/2}\leq\left(\mbf P^u\left[\mc S_t(X)
\geq\frac{\msf d(u,v)}{4}\right]+\mbf P^v\left[\mc S_t(X)\geq\frac{\msf d(u,v)}{4}\right]\right)^{1/2}\\
\leq \sqrt 2\mr e^{-\mf q t/2}\left(\frac{4\mf q\mr e t}{\msf d(u,v)}\right)^{\msf d(u,v)/8}\leq\frac{\sqrt 2\,\mf q^2\mr e^2 t^2}{16},
\end{multline*}
as desired, where the final inequality uses that $4\mf q\mr e t/\msf d(u,v)\leq\mf q\mr e t/4<1$ and $\msf d(u,v)/8\geq2$.
\subsection{Proof of Lemma \ref{Lemma: Variance Upper Bound 3}}
\label{sec:PrLem3}
\begin{notation}
Throughout this proof, we use $C>0$ to denote a constant
whose exact value may change from one display to the next.
If $C>0$ depends on some other parameters, this will be explicitly
stated.
\end{notation}
\subsubsection{Step 1. General Upper Bound}
Our first step in this proof is to provide a general upper bound for $\mbf E[\mr e^{-2\langle L^u_t+\tilde L^v_t,V\rangle}]^{1/2}$
that formalizes the intuition \eqref{Equation: Lemma 3 Intuition}.
To this effect, we claim that if \eqref{Equation: V Pointwise Lower Bound} holds, then
\begin{align}
\label{Equation: Stay Same Place Argument}
-\langle L^u_t,V\rangle
\leq-\big(\ka t^{1/\al}\msf d(0,u)\big)^{\min\{\al,1\}}+\max_{0\leq s\leq t}\Big(\ka t^{1/\al}\msf d\big(u,X^u(s)\big)\Big)^{\min\{\al,1\}}-1+\mu t
\end{align}
for every $u\in\ms V$ and $t>0$, and similarly for $-\langle\tilde L^v_t,V\rangle$.
To see this, we note that
\begin{align}\label{eq:VBound}
\nonumber
-\langle L^u_t,V\rangle
&\leq-\int_0^t\Big(\ka\,\msf d\big(0,X^u(s)\big)\Big)^\al\d s+\mu t\\
\nonumber
&=-\int_0^t\Big|\ka\Big(\msf d(0,u)-\msf d(0,u)+\msf d\big(0,X^u(s)\big)\Big)\Big|^\al\d s+\mu t\\
&=-\int_0^1\Big|\ka t^{1/\al}\Big(\msf d(0,u)-\msf d(0,u)+\msf d\big(0,X^u(rt)\big)\Big)\Big|^\al\d r+\mu t,
\end{align}
where the first line follows directly from \eqref{Equation: V Pointwise Lower Bound},
and the last line follows from a change of variables. For any $x,y\in \mathbb{R}$, combining the
elementary bound $z^{\al}\geq z^{\min\{\al,1\}}-1$ for $z\geq0$ with the subadditivity of $z\mapsto z^{\min\{\al,1\}}$, we get
\[|x-y|^{\al}\geq|x-y|^{\min\{\al,1\}}-1\geq|x|^{\min\{\al,1\}}-|y|^{\min\{\al,1\}}-1.\]
Applying this to \eqref{eq:VBound} yields
\[-\langle L^u_t,V\rangle\leq-\big(\ka t^{1/\al}\msf d(0,u)\big)^{\min\{\al,1\}}+\max_{0\leq s\leq t}\Big|\ka t^{1/\al}\Big(\msf d\big(0,X^u(s)\big)-\msf d(0,u)\Big)\Big|^{\min\{\al,1\}}-1+\mu t.\]
We then obtain \eqref{Equation: Stay Same Place Argument} by combining the fact that $x\mapsto x^{\min\{\al,1\}}$ is increasing for $x>0$
with the reverse triangle inequality $\big|\msf d\big(0,X^u(s)\big)-\msf d(0,u)\big|\leq\msf d\big(u,X^u(s)\big)$.
With \eqref{Equation: Stay Same Place Argument} in hand, we see that $\mbf E[\mr e^{-2\langle L^u_t+\tilde L^v_t,V\rangle}]^{1/2}$
is bounded above by
\begin{multline}
\label{Equation: Holder in Exponential Moment}
\mr e^{2(\mu t-1)-(\ka t^{1/\al}\msf d(0,u))^{\min\{\al,1\}}-(\ka t^{1/\al}\msf d(0,v))^{\min\{\al,1\}}}\\
\cdot\mbf E\left[\exp\left(\max_{0\leq s\leq t}\Big(\ka t^{1/\al}\msf d\big(u,X^u(s)\big)\Big)^{\min\{\al,1\}}+\max_{0\leq s\leq t}\Big(\ka t^{1/\al}\msf d\big(v,\tilde X^v(s)\big)\Big)^{\min\{\al,1\}}\right)\right]^{1/2}.
\end{multline}
On the one hand, $\mr e^{2(\mu t-1)}\to\mr e^{-2}$ as $t\to0$ for any choice of $\mu>0$. On the other hand,
thanks to the tail bound \eqref{Equation: Tail Bound}, we know that for every $\theta,\ka>0$, one has
\[\limsup_{t\to0}\sup_{u\in\ms V}\mbf E\left[\exp\left(\theta\max_{0\leq s\leq t}\Big(\ka t^{1/\al}\msf d\big(u,X^u(s)\big)\Big)^{\min\{\al,1\}}\right)\right]=1,\]
and similarly for $\tilde X$. Therefore, by a straightforward application of H\"older's inequality
on the second line of \eqref{Equation: Holder in Exponential Moment}, in order to prove Lemma \ref{Lemma: Variance Upper Bound 3}, it suffices to prove that
there exists a constant $C>0$ (which only depends on $\al$, $\be$, $d$, and $\mf c$) such that
\begin{align}
\label{Equation: Variance Upper Bound 3.1 in Lemma}
&\limsup_{t\to0}t^{2d/\al}\sum_{u,v\in\ms V}
\mr e^{-(\ka t^{1/\al}\msf d(0,u))^{\min\{\al,1\}}-(\ka t^{1/\al}\msf d(0,v))^{\min\{\al,1\}}}
\leq C\ka^{-2d};\\
\label{Equation: Variance Upper Bound 3.2 in Lemma}
&\limsup_{t\to0}t^{(2d-\be)/\al}\sum_{u,v\in\ms V}
\frac{\mr e^{-(\ka t^{1/\al}\msf d(0,u))^{\min\{\al,1\}}-(\ka t^{1/\al}\msf d(0,v))^{\min\{\al,1\}}}}{\big(\msf d(u,v)+1\big)^{\be}}
\leq C\ka^{-2d+\be}
\end{align}
for every $0<\be<d$; and
\begin{align}
\label{Equation: Variance Upper Bound 3.3 in Lemma}
&\limsup_{t\to0}t^{d/\al}\sum_{u,v\in\ms V}
\frac{\mr e^{-(\ka t^{1/\al}\msf d(0,u))^{\min\{\al,1\}}-(\ka t^{1/\al}\msf d(0,v))^{\min\{\al,1\}}}}{\big(\msf d(u,v)+1\big)^{\be}}
\leq C\ka^{-d}
\end{align}
for every $\be>d$.
We now prove these claims in two steps.
\subsubsection{Step 2. Proof of \eqref{Equation: Variance Upper Bound 3.1 in Lemma}}
\label{Section: Proof of 3.1}
Recalling the definition and upper bound of $\ms G$'s coordination sequences $\msf c_n(v)$ in
\eqref{Equation: Coordination Sequence}, we have that
\begin{align}
\label{Equation: 3.1 Proof}
\nonumber
&\sum_{u,v\in\ms V}\mr e^{-(\ka t^{1/\al}\msf d(0,u))^{\min\{\al,1\}}-(\ka t^{1/\al}\msf d(0,v))^{\min\{\al,1\}}}
=\left(\sum_{v\in\ms V}\mr e^{-(\ka t^{1/\al}\msf d(0,v))^{\min\{\al,1\}}}\right)^2\\
\nonumber
&=\left(\sum_{n\in\mbb N\cup\{0\}}\msf c_n(0)\,\mr e^{-(\ka t^{1/\al}n)^{\min\{\al,1\}}}\right)^2
\leq \mf c^2\left(\sum_{n\in\mbb N\cup\{0\}}n^{d-1}\mr e^{-(\ka t^{1/\al}n)^{\min\{\al,1\}}}\right)^2\\
&=\mf c^2t^{(-2d+2)/\al}\left(\sum_{n\in t^{1/\al}\mbb N\cup\{0\}}n^{d-1}\mr e^{-(\ka n)^{\min\{\al,1\}}}\right)^2.
\end{align}
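Here, the last equality follows from reindexing the sum via the substitution $n=t^{-1/\al}m$ with $m\in t^{1/\al}\mbb N\cup\{0\}$, which produces the prefactor $t^{-2(d-1)/\al}=t^{(-2d+2)/\al}$.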
By a Riemann sum, we have that
\begin{multline}
\label{Equation: 3.1 Proof 2}
\lim_{t\to0}t^{2/\al}\left(\sum_{n\in t^{1/\al}\mbb N\cup\{0\}}n^{d-1}\mr e^{-(\ka n)^{\min\{\al,1\}}}\right)^2\\
=\left(\int_0^\infty x^{d-1}\mr e^{-(\ka x)^{\min\{\al,1\}}}\d x\right)^2
=\frac{\ka^{-2 d} \Gamma \left(\frac{d}{\min\{1,\al\}}\right)^2}{\min\{1,\al^2\}}.
\end{multline}
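For the reader's convenience, let us note that the integral above can be evaluated explicitly: substituting $y=(\ka x)^{\min\{\al,1\}}$, a direct computation gives
\[\int_0^\infty x^{d-1}\mr e^{-(\ka x)^{\min\{\al,1\}}}\d x=\frac{\ka^{-d}}{\min\{\al,1\}}\,\Gamma\!\left(\frac{d}{\min\{\al,1\}}\right),\]
which, once squared, is precisely the right-hand side of \eqref{Equation: 3.1 Proof 2}.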
Combining this limit with \eqref{Equation: 3.1 Proof}
yields \eqref{Equation: Variance Upper Bound 3.1 in Lemma}, where, as shown
on the right-hand side of \eqref{Equation: 3.1 Proof 2}, the constant $C>0$
only depends on the parameters $\al$, $d$, and $\mf c$.
\subsubsection{Step 3. Proof of \eqref{Equation: Variance Upper Bound 3.2 in Lemma} and \eqref{Equation: Variance Upper Bound 3.3 in Lemma}}
We now conclude the proof of Lemma \ref{Lemma: Variance Upper Bound 3} by establishing
\eqref{Equation: Variance Upper Bound 3.2 in Lemma} and \eqref{Equation: Variance Upper Bound 3.3 in Lemma}.
We separate the analysis of the sum on the left-hand sides of
\eqref{Equation: Variance Upper Bound 3.2 in Lemma} and \eqref{Equation: Variance Upper Bound 3.3 in Lemma}
into two parts, namely, the terms $u,v\in\ms V$ such that $\msf d(u,v)>\ka^{-1}t^{-1/\al}$, and those such that
$\msf d(u,v)\leq\ka^{-1}t^{-1/\al}$.
We first consider the terms such that $\msf d(u,v)>\ka^{-1}t^{-1/\al}$. For these, we have the sequence of upper bounds
\begin{align*}
&\sum_{\substack{u,v\in\ms V\\\msf d(u,v)>\ka^{-1}t^{-1/\al}}}\frac{\mr e^{-(\ka t^{1/\al}\msf d(0,u))^{\min\{\al,1\}}-(\ka t^{1/\al}\msf d(0,v))^{\min\{\al,1\}}}}{
\big(\msf d(u,v)+1\big)^{\be}}\\
&\leq\sum_{\substack{u,v\in\ms V\\\msf d(u,v)>\ka^{-1}t^{-1/\al}}}\frac{\mr e^{-(\ka t^{1/\al}\msf d(0,u))^{\min\{\al,1\}}-(\ka t^{1/\al}\msf d(0,v))^{\min\{\al,1\}}}}{
\msf d(u,v)^{\be}}\\
&<\ka^\be t^{\be/\al}\sum_{\substack{u,v\in\ms V\\\msf d(u,v)>\ka^{-1}t^{-1/\al}}}\mr e^{-(\ka t^{1/\al}\msf d(0,u))^{\min\{\al,1\}}-(\ka t^{1/\al}\msf d(0,v))^{\min\{\al,1\}}}\\
&\leq\ka^\be t^{\be/\al}\left(\sum_{v\in\ms V}\mr e^{-(\ka t^{1/\al}\msf d(0,v))^{\min\{\al,1\}}}\right)^2.
\end{align*}
At this point, by replicating the arguments in Section \ref{Section: Proof of 3.1}, we get that there exists
a constant $C>0$ that only depends on $\al$, $d$, and $\mf c$, and such that
\begin{align}
\label{Equation: 3.2 Proof 1}
\limsup_{t\to0}t^{(2d-\be)/\al}\sum_{\substack{u,v\in\ms V\\\msf d(u,v)>\ka^{-1}t^{-1/\al}}}\frac{\mr e^{-(\ka t^{1/\al}\msf d(0,u))^{\min\{\al,1\}}-(\ka t^{1/\al}\msf d(0,v))^{\min\{\al,1\}}}}{
\big(\msf d(u,v)+1\big)^{\be}}\leq C\ka^{-2d+\be}
\end{align}
if $0<\be<d$; and
\begin{align}
\label{Equation: 3.3 Proof 1}
\lim_{t\to0}t^{d/\al}\sum_{\substack{u,v\in\ms V\\\msf d(u,v)>\ka^{-1}t^{-1/\al}}}\frac{\mr e^{-(\ka t^{1/\al}\msf d(0,u))^{\min\{\al,1\}}-(\ka t^{1/\al}\msf d(0,v))^{\min\{\al,1\}}}}{
\big(\msf d(u,v)+1\big)^{\be}}=0
\end{align}
if $\be>d$.
We now consider the terms such that $\msf d(u,v)\leq\ka^{-1}t^{-1/\al}$. For those terms,
we can reformulate the summands as follows:
\begin{align}
\label{Equation: 3.2 Proof 2}
&\sum_{\substack{u,v\in\ms V\\\msf d(u,v)\leq \ka^{-1}t^{-1/\al}}}\frac{\mr e^{-(\ka t^{1/\al}\msf d(0,u))^{\min\{\al,1\}}-(\ka t^{1/\al}\msf d(0,v))^{\min\{\al,1\}}}}{
\big(\msf d(u,v)+1\big)^{\be}}\\
\nonumber
&=\sum_{u\in\ms V}\mr e^{-(\ka t^{1/\al}\msf d(0,u))^{\min\{\al,1\}}}
\left(\sum_{\substack{v\in\ms V\\\msf d(u,v)\leq \ka^{-1}t^{-1/\al}}}\frac{\mr e^{-(\ka t^{1/\al}\msf d(0,v))^{\min\{\al,1\}}}}{
\big(\msf d(u,v)+1\big)^{\be}}\right).
\end{align}
For every $u,v\in\ms V$ such that $\msf d(u,v)\leq \ka^{-1}t^{-1/\al}$, we have
\[\big(\ka t^{1/\al}\msf d(0,v)\big)^{\min\{\al,1\}}\geq\big(\ka t^{1/\al}\msf d(u,v)\big)^{\min\{\al,1\}}-1:\]
indeed, this is immediate when $\msf d(0,v)\geq\msf d(u,v)$, and when $\msf d(0,v)<\msf d(u,v)$ it follows from $\ka t^{1/\al}\msf d(u,v)\leq1$ and $\msf d(0,v)\geq0$. This gives the upper bound
$\mr e^{-(\ka t^{1/\al}\msf d(0,v))^{\min\{\al,1\}}}\leq\mr e\,\mr e^{-(\ka t^{1/\al}\msf d(u,v))^{\min\{\al,1\}}}$. Putting this into the above equation, we then obtain that
\begin{align*}
\eqref{Equation: 3.2 Proof 2}&\leq\mr e\sum_{u\in\ms V}\mr e^{-(\ka t^{1/\al}\msf d(0,u))^{\min\{\al,1\}}}
\left(\sum_{\substack{v\in\ms V\\\msf d(u,v)\leq \ka^{-1}t^{-1/\al}}}\frac{\mr e^{-(\ka t^{1/\al}\msf d(u,v))^{\min\{\al,1\}}}}{
\big(\msf d(u,v)+1\big)^{\be}}\right)\\
&\leq\mr e\sum_{u\in\ms V}\mr e^{-(\ka t^{1/\al}\msf d(0,u))^{\min\{\al,1\}}}
\left(\sum_{n=0}^{\ka^{-1}t^{-1/\al}}\frac{\msf c_n(u)\,\mr e^{-(\ka t^{1/\al}n)^{\min\{\al,1\}}}}{
\big(n+1\big)^{\be}}\right).
\end{align*}
Thanks to the uniform bound in \eqref{Equation: Coordination Sequence},
we then have that
\begin{align}
\label{Equation: 3.2 Proof 3}
\nonumber
\eqref{Equation: 3.2 Proof 2}&\leq\mr e \mf c\,\left(\sum_{u\in\ms V}\mr e^{-(\ka t^{1/\al}\msf d(0,u))^{\min\{\al,1\}}}\right)
\left(\sum_{n=0}^{\ka^{-1}t^{-1/\al}}\frac{n^{d-1}\mr e^{-(\ka t^{1/\al}n)^{\min\{\al,1\}}}}{
\big(n+1\big)^{\be}}\right)\\
\nonumber
&\leq \mr e^{1+(\ka t^{1/\al})^{\min\{\al,1\}}} \mf c\left(\sum_{u\in\ms V}\mr e^{-(\ka t^{1/\al}\msf d(0,u))^{\min\{\al,1\}}}\right)\\
&\nonumber\hspace{1.5in}
\cdot\left(\sum_{n\in\mbb N\cup\{0\}}(n+1)^{d-1-\be}\mr e^{-(\ka t^{1/\al}(n+1))^{\min\{\al,1\}}}\right)\\
&=\mr e^{1+o(1)} \mf c\,\left(\sum_{u\in\ms V}\mr e^{-(\ka t^{1/\al}\msf d(0,u))^{\min\{\al,1\}}}\right)
\left(\sum_{n\in \mbb N}n^{d-1-\be}\mr e^{-(\ka t^{1/\al} n)^{\min\{\al,1\}}}\right).
\end{align}
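In the second inequality above, we used that $n^{d-1}\leq(n+1)^{d-1}$ together with the subadditivity of $x\mapsto x^{\min\{\al,1\}}$ on $[0,\infty)$, which gives
\[\big(\ka t^{1/\al}(n+1)\big)^{\min\{\al,1\}}\leq\big(\ka t^{1/\al}n\big)^{\min\{\al,1\}}+\big(\ka t^{1/\al}\big)^{\min\{\al,1\}},\]
and hence $\mr e^{-(\ka t^{1/\al}n)^{\min\{\al,1\}}}\leq\mr e^{(\ka t^{1/\al})^{\min\{\al,1\}}}\,\mr e^{-(\ka t^{1/\al}(n+1))^{\min\{\al,1\}}}$.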
We now analyze the two sums on the right-hand side of \eqref{Equation: 3.2 Proof 3}.
Looking at the first sum, the same analysis as that carried out in Section \ref{Section: Proof of 3.1} implies that
\begin{align*}
\limsup_{t\to0}t^{d/\al}\sum_{u\in\ms V}\mr e^{-(\ka t^{1/\al}\msf d(0,u))^{\min\{\al,1\}}}\leq C\ka^{-d}
\end{align*}
for some $C$ that only depends on $\al$, $d$, and $\mf c$.
Next, the second sum in \eqref{Equation: 3.2 Proof 3} is analyzed differently depending on whether $0<\be<d$ or $\be>d$:
On the one hand, if $\be<d$, then by a Riemann sum we have that
\begin{multline*}
\lim_{t\to0}t^{(d-\be)/\al}\sum_{n\in \mbb N}n^{d-1-\be}\mr e^{-(\ka t^{1/\al} n)^{\min\{\al,1\}}}
=\lim_{t\to0}t^{1/\al}\sum_{n\in t^{1/\al}\mbb N}n^{d-1-\be}\mr e^{-(\ka n)^{\min\{\al,1\}}}\\
=\int_0^\infty x^{d-1-\be}\mr e^{-(\ka x)^{\min\{\al,1\}}}\d x
=\frac{\kappa ^{-d+\be} \Gamma \left(\frac{d-\beta }{\min\{\al,1\}}\right)}{\min\{\al,1\}}.
\end{multline*}
On the other hand, if $\be>d$, then we have by dominated convergence that
\[\lim_{t\to0}\sum_{n\in \mbb N}n^{d-1-\be}\mr e^{-(\ka t^{1/\al} n)^{\min\{\al,1\}}}
=\sum_{n\in \mbb N}n^{d-1-\be};\]
we know that the sum on the right-hand side is convergent since $\be>d$.
Putting these two limits back into \eqref{Equation: 3.2 Proof 3}, we then get that there exists a constant $C>0$
(which only depends on $\al$, $d$, $\be$, and $\mf c$) such that
\[\limsup_{t\to0}t^{(2d-\be)/\al}\sum_{\substack{u,v\in\ms V\\\msf d(u,v)\leq \ka^{-1}t^{-1/\al}}}\frac{\mr e^{-(\ka t^{1/\al}\msf d(0,u))^{\min\{\al,1\}}-(\ka t^{1/\al}\msf d(0,v))^{\min\{\al,1\}}}}{
\big(\msf d(u,v)+1\big)^{\be}}\leq C\ka^{-2d+\be}\]
when $\be<d$, and such that
\[\limsup_{t\to0}t^{d/\al}\sum_{\substack{u,v\in\ms V\\\msf d(u,v)\leq \ka^{-1}t^{-1/\al}}}\frac{\mr e^{-(\ka t^{1/\al}\msf d(0,u))^{\min\{\al,1\}}-(\ka t^{1/\al}\msf d(0,v))^{\min\{\al,1\}}}}{
\big(\msf d(u,v)+1\big)^{\be}}\leq C\ka^{-d}\]
when $\be>d$.
Combining this with \eqref{Equation: 3.2 Proof 1} and \eqref{Equation: 3.3 Proof 1} concludes the proof of \eqref{Equation: Variance Upper Bound 3.2 in Lemma}
and \eqref{Equation: Variance Upper Bound 3.3 in Lemma}.
With this in hand, we have now completed the proof of Lemma \ref{Lemma: Variance Upper Bound 3}.
\section{Spectral Mapping and Multiplicity}
\label{sec: Multiplicity}
A crucial aspect of the proof of Theorem \ref{Theorem: Rigidity}
is the ability to relate exponential linear statistics of the eigenvalue point process
\eqref{Equation: Eigenvalue Point Process} to the trace of $K_t$ via the identities
\begin{align}
\label{Equation: Trace Identity}
\mr{Tr}[K_t]=\sum_{\mu\in\si(K_t)\setminus\{0\}}m_a(\mu,K_t)\,\mu=\sum_{\la\in\si(H)}m_a(\la,H)\,\mr e^{-t\la}\in(0,\infty).
\end{align}
Though we expect that such a result is known (or at least folklore) in the operator theory
community, we were not able to locate any reference that contains all of the precise statements
that we need to prove \eqref{Equation: Trace Identity}.
(This is especially so since the level of generality in this paper allows for non-self-adjoint
operators.)
As such, our purpose in this section
is to provide a general criterion for an identity of the form \eqref{Equation: Trace Identity}
to hold (as well as a few more properties), which we then use in Section \ref{Section: Rigidity}
to wrap up the proof of Theorem \ref{Theorem: Rigidity}.
We begin this section with a definition:
\begin{definition}
\label{Definition: Finite Dimensional}
We say that a linear operator $T$ on $\ell^2(\ms V)$
is finite-dimensional if there exists a finite set $\ms U\subset\ms V$
such that $T(u,v)=0$ whenever $(u,v)\not\in\ms U\times\ms U$.
In particular, if we enumerate the set $\ms U=\{u_1,\ldots,u_{|\ms U|}\}$, then $T$
has the same spectrum as the $|\ms U|\times |\ms U|$ matrix $M_T$ with entries
\begin{align}
\label{Equation: Matrix Representation}
M_T(i,j):=T(u_i,u_j),\qquad 1\leq i,j\leq |\ms U|.
\end{align}
\end{definition}
The result that we prove in this section is as follows:
\begin{proposition}
\label{Proposition: Operator Theory}
Let $(T_t)_{t>0}$ be a strongly continuous semigroup of trace class operators on $\ell^2(\ms V)$
such that $\|T_t\|_{\mr{op}}\leq\mr e^{-\om t}$ for some $\om<0$, and
let $G$ be its infinitesimal generator.
The following holds:
\begin{enumerate}
\item $G$ is closed and densely defined on $\ell^2(\ms V)$.
\item $\si(G)=\si_p(G)$, and $\Re(\la)\geq\om$ for all $\la\in\si(G)$.
\item For every $t>0$, $\si(T_t)\setminus\{0\}=\{\mr e^{-t\la}:\la\in\si(G)\}$.
\end{enumerate}
Moreover, if there exists a sequence of finite-dimensional
operators $(G_n)_{n\in\mbb N}$ such that
\begin{align}
\label{Equation: Resolvent Convergence Assumption}
\lim_{n\to\infty}\|\mf R(z,G_n)-\mf R(z,G)\|_{\mr{op}}=0
\end{align}
for at least one $z\in\mbb C\setminus\si(G)$ and such that
\begin{align}
\label{Equation: Semigroup Convergence Assumption}
\lim_{n\to\infty}\|\mr e^{-t G_n}-T_t\|_{\mr{op}}=0,
\end{align}
then for every $t>0$ and $\mu\in\si(T_t)\setminus\{0\}$,
\begin{align}
\label{Equation: Multiplicity Identity}
m_a(\mu,T_t)=\sum_{\la\in\si(G):~\mr e^{-t\la}=\mu}m_a(\la,G).
\end{align}
\end{proposition}
As a direct consequence of the above proposition, we have that
\[\mr{Tr}[T_t]=\sum_{\mu\in\si(T_t)\setminus\{0\}}m_a(\mu,T_t)\,\mu=\sum_{\la\in\si(G)}m_a(\la,G)\,\mr e^{-t\la}\in\mbb C\]
for all $t>0$, which is precisely the kind of statement that we are looking for.
The remainder of this section is now devoted to the proof of Proposition \ref{Proposition: Operator Theory}.
\subsection{Step 1. Closed Generator and Spectral Mapping}
We begin with the more straightforward aspects of the statement
of Proposition \ref{Proposition: Operator Theory}, namely, items (1)--(3).
Since $(T_t)_{t>0}$ is strongly continuous and $\|T_t\|_{\mr{op}}\leq\mr e^{-\om t}$,
it follows from the Hille-Yosida theorem (e.g., \cite[Chapter II, Corollary 3.6]{EngelNagel})
that $G$ is closed and densely defined on $\ell^2(\ms V)$.
Moreover, $\Re(\la)\geq\om$ for every $\la\in\si(G)$.
Given that the $T_t$ are trace class, we know that $\si(T_t)=\si_p(T_t)$ and that
\[\mr{Tr}[T_t]=\sum_{\mu\in\si(T_t)\setminus\{0\}}m_a(\mu,T_t)\mu\in\mbb C\]
by Lidskii's theorem
(e.g., \cite[Sections 3.6 and 3.12]{Simon}). Next,
by the spectral mapping theorem (e.g., \cite[Chapter IV, (3.7) and (3.16)]{EngelNagel}),
we know that for every $t>0$,
\begin{align}
\label{Equation: Spectral Mapping}
\big\{\mr e^{-t\la}:\la\in\si(G)\big\}\subset\si(T_t)
\qquad\text{and}\qquad
\big\{\mr e^{-t\la}:\la\in\si_p(G)\big\}=\si_p(T_t)\setminus\{0\}.
\end{align}
In particular, $\si(G)=\si_p(G)$, concluding the proof of
Proposition \ref{Proposition: Operator Theory} (1)--(3).
\subsection{Step 2. Multiplicities in Finite Dimensions}
It now remains to prove \eqref{Equation: Multiplicity Identity}.
Before we prove this result, we first prove the corresponding statement
in finite dimensions, namely:
\begin{lemma}
\label{Lemma: Spectral Mapping in Finite Dimensions}
Let $T$ be a finite-dimensional linear operator on $\ell^2(\ms V)$
and $F:\mbb C\to\mbb C$ be an analytic function.
For every $\mu\in\si\big(F(T)\big)=F\big(\si(T)\big)$, one has
\[m_a\big(\mu,F(T)\big)=\sum_{\la\in\si(T):~F(\la)=\mu}m_a(\la,T).\]
\end{lemma}
Applying this to the exponential map and the operators $G_n$, we are led to
the fact that for every $n\in\mbb N$, $t>0$, and $\mu\in\si(G_n)$ one has
\begin{align}
\label{Equation: Algebraic Identity Prelimit}
m_a(\mu,\mr e^{-t G_n})=\sum_{\la\in\si(G_n):~\mr e^{-t\la}=\mu}m_a(\la,G_n).
\end{align}
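Let us point out why the sum over $\la$ in \eqref{Equation: Algebraic Identity Prelimit} is genuinely needed: the map $z\mapsto\mr e^{-tz}$ is not injective, since $\mr e^{-t\la_1}=\mr e^{-t\la_2}$ if and only if $\la_1-\la_2\in(2\pi\mr i/t)\mbb Z$. Thus several distinct eigenvalues of $G_n$ may contribute to the algebraic multiplicity of a single $\mu\in\si(\mr e^{-tG_n})$.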
\begin{proof}[Proof of Lemma \ref{Lemma: Spectral Mapping in Finite Dimensions}]
It suffices to prove the result with $T$ replaced by $M_T$
and $F(T)$ replaced by $F(M_T)$, where $M_T$ is the
matrix defined in \eqref{Equation: Matrix Representation}.
Let $M_T=PJP^{-1}$ be $M_T$'s Jordan canonical form. That is,
$J$ is the direct sum of $M_T$'s Jordan blocks, and in particular the number of times
any $\la\in\mbb C$ appears on $J$'s diagonal is equal to
$m_a(\la,M_T)$. By the standard analytic functional calculus for matrices,
we know that $F(M_T)=PF(J)P^{-1}$, where $F(J)$ is the direct sum
of $M_T$'s transformed Jordan blocks, wherein any $k\times k$ Jordan block
of the form
\[\left[\begin{array}{ccccc}\la&1\\
&\la&1\\
&&\ddots&\ddots\\
&&&\la&1\\
&&&&\la\end{array}\right]\]
is transformed into the upper triangular matrix
\[\left[\begin{array}{ccccc}F(\la)&F'(\la)&F''(\la)/2&\cdots&F^{(k-1)}(\la)/(k-1)!\\
&F(\la)&F'(\la)&\cdots&F^{(k-2)}(\la)/(k-2)!\\
&&\ddots&\ddots&\vdots\\
&&&\ddots&F'(\la)\\
&&&&F(\la)\end{array}\right].\]
Given that the characteristic polynomial of $F(M_T)$ is the same as that of $F(J)$,
this readily implies the result.
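Indeed, since $F(J)$ is upper triangular, its characteristic polynomial factors as $\prod_{\la\in\si(M_T)}\big(z-F(\la)\big)^{m_a(\la,M_T)}$, so the multiplicity of any root $\mu$ of this polynomial is exactly $\sum_{\la\in\si(M_T):~F(\la)=\mu}m_a(\la,M_T)$.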
\end{proof}
\subsection{Step 3. Passing to the Limit}
We now complete the proof of Proposition \ref{Proposition: Operator Theory}
by arguing that the identity \eqref{Equation: Algebraic Identity Prelimit} persists
in the large $n$ limit.
Thanks to \eqref{Equation: Resolvent Convergence Assumption}
and \eqref{Equation: Semigroup Convergence Assumption}, we know that we have the convergences
$G_n\to G$ and $\mr e^{-t G_n}\to T_t$ for every $t>0$ in the generalized sense of Kato
(see \cite[Chapter IV, (2.9), (2.20) and p. 206]{Kato} for a definition of convergence in the
generalized sense, and \cite[Chapter IV, Theorems 2.23 a) and 2.25]{Kato} for a proof
that norm-resolvent and norm convergence implies convergence in the generalized sense).
As shown in \cite[Chapter IV, Theorem 3.16]{Kato} (see also \cite[Chapter IV, Section 5]{Kato}
for a discussion specific to the context of isolated eigenvalues),
convergence in the generalized sense implies the following spectral continuity results:
\begin{notation}
In what follows, we use $B(z,r)$ to denote the closed ball in the complex plane centered at $z\in\mbb C$
and with radius $r>0$.
\end{notation}
\begin{corollary}
For every $\la\in\si(G)$, if $\eps>0$ is such that $\si(G)\cap B(\la,\eps)=\{\la\}$,
then there exists $N\in\mbb N$ large enough so that
\begin{align}
\label{Equation: Multiplicities Convergence 1}
\sum_{\tilde\la\in\si(G_n)\cap B(\la,\eps)}m_a(\tilde\la,G_n)=m_a(\la,G)
\end{align}
whenever $n\geq N$.
Conversely, for every $t>0$ and $\mu\in\si(T_t)\setminus\{0\}$, if $\eps>0$ is such that $\si(T_t)\cap B(\mu,\eps)=\{\mu\}$,
then there exists $N\in\mbb N$ large enough so that
\begin{align}
\label{Equation: Multiplicities Convergence 2}
\sum_{\tilde\mu\in\si(\mr e^{-t G_n})\cap B(\mu,\eps)}m_a(\tilde\mu,\mr e^{-t G_n})=m_a(\mu,T_t)
\end{align}
whenever $n\geq N$.
\end{corollary}
We are now ready to prove \eqref{Equation: Multiplicity Identity}.
We first show that for every $t>0$ and $\mu\in\si(T_t)\setminus\{0\}$, the set $\{\la\in\si(G):~\mr e^{-t\la}=\mu\}$
is finite. Suppose by contradiction that this is not the case. Then, for any integer $M>0$, we can
find at least $M$ distinct eigenvalues $\la_1,\ldots,\la_M\in\si(G)$ such that $\mr e^{-t\la_i}=\mu$.
By taking a small enough $\eps>0$ and large enough $N\in\mbb N$, a combination of
\eqref{Equation: Algebraic Identity Prelimit} and \eqref{Equation: Multiplicities Convergence 2} yields
\begin{align}
\label{Equation: Algebraic Identity Prelimit Finite 1}
m_a(\mu,T_t)=\sum_{\tilde\mu\in\si(\mr e^{-t G_N})\cap B(\mu,\eps)}m_a(\tilde\mu,\mr e^{-tG_N})
=\sum_{\tilde\la\in\si(G_N):~\mr e^{-t\tilde\la}\in B(\mu,\eps)}m_a(\tilde\la,G_N).
\end{align}
Since $z\mapsto\mr e^{-t z}$ is continuous, we can take $\de>0$ small enough so that
\begin{enumerate}
\item if $\tilde\la\in B(\la_i,\de)$ for some $1\leq i\leq M$, then $\mr e^{-t\tilde\la}\in B(\mu,\eps)$; and
\item $\si(G)\cap B(\la_i,\de)=\{\la_i\}$ for every $1\leq i\leq M$.
\end{enumerate}
Thus, up to increasing the value of $N$ if necessary, an application of \eqref{Equation: Multiplicities Convergence 1}
to the right-hand side of \eqref{Equation: Algebraic Identity Prelimit Finite 1} then gives
\begin{align}
\label{Equation: Algebraic Identity Prelimit Finite 2}
m_a(\mu,T_t)\geq\sum_{i=1}^{M}\sum_{\tilde\la\in\si(G_N)\cap B(\la_i,\de)}m_a(\tilde\la,G_N)
=\sum_{i=1}^Mm_a(\la_i,G)\geq M.
\end{align}
Since $M$ was arbitrary, this implies that $m_a(\mu,T_t)=\infty$. Since $T_t$ is trace class,
this cannot be the case; hence
we conclude that $\{\la\in\si(G):~\mr e^{-t\la}=\mu\}$ is finite.
By repeating the argument leading up to \eqref{Equation: Algebraic Identity Prelimit Finite 2},
but this time letting $M$ be equal to the number of eigenvalues in the set
$\{\la\in\si(G):~\mr e^{-t\la}=\mu\}$, we obtain that
\[m_a(\mu,T_t)\geq\sum_{\la\in\si(G):~\mr e^{-t\la}=\mu}m_a(\la,G).\]
We now proceed to prove the reverse inequality. Recall that $\{\la\in\si(G):~\mr e^{-t\la}=\mu\}$ contains finitely many elements. Denote them by $\lambda_1, \ldots , \lambda_M$ for some $M\in \mathbb{N}$.
Thanks to \eqref{Equation: Multiplicities Convergence 1}, we can find
a small enough $\eps>0$ and large enough $N\in\mbb N$ such that
\begin{align*}
\sum_{i=1}^{M} m_{a}(\lambda_i, G) &=\sum_{\tilde{\lambda} \in \cup^{M}_{i=1}\sigma(G_N)\cap B(\lambda_i, \eps)}m_{a}(\tilde{\lambda}, G_N) = \sum_{\tilde{\lambda} \in \sigma(G_N)\cap \big(\cup^{M}_{i=1} B(\lambda_i, \eps)\big)}m_{a}(\tilde{\lambda}, G_N).
\end{align*}
Then, by \eqref{Equation: Algebraic Identity Prelimit}, one has
\begin{align}
\sum_{\tilde{\lambda} \in \sigma(G_N)\cap \big(\cup^{M}_{i=1} B(\lambda_i, \eps)\big)}m_{a}(\tilde{\lambda}, G_N) = \sum_{\substack{\tilde{\mu}\in \sigma(\mr e^{-tG_N})\\\tilde{\mu}\in \mr e^{-t}(\cup_{i=1}^{M}B(\lambda_i,\eps))}}m_a(\tilde{\mu}, \mr e^{-tG_N}),\label{eq:RevIneq2}
\end{align}
where we use $\mr e^{-t}(B)$ to denote the image of a set $B\subset\mbb C$ through the exponential map $z\mapsto\mr e^{-tz}$.
Since the exponential map is open and $\mr e^{-t\la_i}=\mu$ for all $1\leq i\leq M$, we can find a small enough $\de>0$
such that $B(\mu,\delta)\subset \mr e^{-t}(\cup_{i=1}^{M}B(\lambda_i,\eps))$ and $\si(T_t)\cap B(\mu,\de)=\{\mu\}$.
As a result we get
\begin{align}
\sum_{i=1}^{M} m_{a}(\lambda_i, G)\geq\text{r.h.s. of \eqref{eq:RevIneq2}}\geq \sum_{\tilde{\mu}\in \sigma(\mr e^{-tG_N})\cap B(\mu,\delta)} m_a(\tilde{\mu}, \mr e^{-tG_N}).
\end{align}
At this point, up to increasing $N$ if necessary, an application of
\eqref{Equation: Multiplicities Convergence 2} then yields
\[ \sum_{i=1}^{M} m_{a}(\lambda_i, G)\geq\sum_{\tilde{\mu}\in \sigma(\mr e^{-tG_N})\cap B(\mu,\delta)} m_a(\tilde{\mu}, \mr e^{-tG_N})
=m_a(\mu,T_t),\]
thus concluding the proof of \eqref{Equation: Multiplicity Identity}
and Proposition \ref{Proposition: Operator Theory}.
\section{Proof of Theorem \ref{Theorem: Rigidity}}
\label{Section: Rigidity}
In this section, we prove Theorem~\ref{Theorem: Rigidity}.
We assume throughout that Assumptions~\ref{Assumption: Graph} and~\ref{Assumption: Potential and Noise} hold.
We begin with a notation:
\begin{notation}
Throughout this proof,
we denote $X$'s transition semigroup by
\[\Pi_t(u,v)=\mbf P^u[X(t)=v],\qquad t\geq0,~u,v\in\ms V.\]
\end{notation}
\subsection{Step 1. Boundedness}
\label{Section: Boundedness}
Our first step in the proof is to show that, almost surely, $K_t$ is a bounded linear operator
on $\ell^2(\ms V)$ with $\|K_t\|_{\mr{op}}\leq\mr e^{-\om t}$
for every $t>0$, for some random $\om<0$. As is typical in Schr\"odinger semigroup theory,
this relies on controlling the minimum of the random potential $V+\xi$. To this end, we have the following
result:
\begin{lemma}
\label{Lemma: Bounded Below V Plus xi}
Define the random variable
\begin{align}
\label{Equation: Omega Zero}
\om_0:=\inf_{v\in\ms V}\big(V(v)+\xi(v)\big).
\end{align}
Then $\om_0>-\infty$ almost surely.
\end{lemma}
\begin{proof}
Thanks to \eqref{Equation: Potential Growth}, it suffices to prove that
\begin{align}
\label{Equation: Log n Borel-Cantelli Bound}
\liminf_{n\to\infty}\left(\inf_{v\in\ms V:~\msf d(0,v)\leq n}\frac{\xi(v)}{\log n}\right)>-\infty\qquad\text{almost surely}.
\end{align}
By a union bound and Markov's inequality, for every $\theta,\la>0$,
\[\mbf P\Big(\inf_{v\in\ms V:~\msf d(0,v)\leq n}\xi(v)\leq-\la\Big)
\leq\sum_{v\in\ms V:~\msf d(0,v)\leq n}\mr e^{-\theta\la}\mbf E\big[\mr e^{-\theta\xi(v)}\big].\]
On the one hand, thanks to \eqref{Equation: Coordination Sequence}, we have that
\[|\{v\in\ms V:\msf d(0,v)\leq n\}|\leq\mf c\sum_{m=1}^n m^{d-1}\leq\mf c+\mf c\int_1^nx^{d-1}\d x\leq Cn^d\]
for some constant $C>0$. On the other hand, thanks to the moment bound \eqref{Equation: Exponential Moments},
there exists a $\theta>0$ small enough so that
\[\sup_{v\in\ms V}\mbf E\big[\mr e^{-\theta\xi(v)}\big]<\infty.\]
Combining these two observations, we conclude that there exists $\tilde C,\theta>0$ such that
\[\mbf P\Big(\inf_{v\in\ms V:~\msf d(0,v)\leq n}\xi(v)\leq-\la\Big)\leq \tilde C n^d\mr e^{-\theta\la},\qquad\la>0.\]
If we take $\la=\la(n)=c\log n$ with $c>(d+1)/\theta$,
then $\sum_{n\in\mbb N}\tilde C n^d\mr e^{-\theta\la(n)}=\tilde C\sum_{n\in\mbb N}n^{d-\theta c}<\infty$;
hence \eqref{Equation: Log n Borel-Cantelli Bound} holds by the
Borel-Cantelli lemma.
\end{proof}
As a direct application of Lemma \ref{Lemma: Bounded Below V Plus xi},
we have the inequality $K_t(u,v)\leq\mr e^{-\om_0t}\Pi_t(u,v)$ for every $u,v\in\ms V$,
where we take $\om_0$ as in \eqref{Equation: Omega Zero}.
In particular, $\|K_t\|_{\mr{op}}\leq\mr e^{-\om_0t}\|\Pi_t\|_{\mr{op}}$.
Given that $\om_0>-\infty$ almost surely by Lemma \ref{Lemma: Bounded Below V Plus xi},
it suffices to prove that $\Pi_t$ is bounded with $\|\Pi_t\|_{\mr{op}}\leq \mr e^{-t\om_1}$
for some constant $\om_1\leq 0$. We now prove this.
Note that for every $f\in\ell^2(\ms V)$, we have by Jensen's inequality that
\[\|\Pi_tf\|_{2}^2=\sum_{v\in\ms V}\mbf E^v\big[f\big(X(t)\big)\big]^2\leq\sum_{v\in\ms V}\mbf E^v\big[f\big(X(t)\big)^2\big]
=\sum_{u,v\in\ms V}\Pi_t(v,u)f(u)^2,\]
from which we conclude that
\[\|\Pi_t\|_{\mr{op}}\leq\sqrt{\sup_{u\in\ms V}\sum_{v\in\ms V}\Pi_t(v,u)}.\]
If we define the matrix
\[H_X(u,v):=\begin{cases}
-q(u)\Pi(u,v)&\text{if }u\neq v\\
q(u)&\text{if }u=v
\end{cases},\qquad u,v\in\ms V\]
(i.e., the Markov generator of $X$), then we can write
\[\sum_{v\in\ms V}\Pi_t(v,u)=\sum_{v\in\ms V}\sum_{n=0}^\infty\frac{(-t)^nH_X^n(v,u)}{n!}
\leq\sum_{n=0}^\infty\frac{t^n}{n!}\sum_{v\in\ms V}|H_X^n(v,u)|.\]
Noting that
\[\sup_{u,v\in\ms V}|H^n_X(u,v)|\leq\|H^n_X\|_{\mr{op}}\leq\|H_X\|^n_{\mr{op}},\]
for every $u,v\in\ms V$, we have the bound
\[|H_X^n(v,u)|\leq \|H_X\|^n_{\mr{op}}\mbf 1_{\{\msf d(u,v)\leq n\}}.\]
By \eqref{Equation: Coordination Sequence}, for any $u\in\ms V$,
the number of $v\in\ms V$ such that $(u,v)$ is an edge is bounded by $\mf c$.
Thus, the number of $v\in\ms V$ such that $\msf d(u,v)\leq n$ is crudely bounded by $\mf c^n$.
Consequently,
\[\|\Pi_t\|_{\mr{op}}^2\leq\sup_{u\in\ms V}\sum_{v\in\ms V}\Pi_t(v,u)\leq \sum_{n=0}^\infty\frac{(t\mf c\|H_X\|_{\mr{op}})^n}{n!}=\mr e^{\mf c\|H_X\|_{\mr{op}}t}.\]
Thus, it now suffices to prove that $\|H_X\|_{\mr{op}}<\infty$.
Recall that, by assumption, $\mf q:= \sup_{u\in \ms V} q(u)<\infty$. For every $f\in\ell^2(\ms V)$,
\[\|H_Xf\|_{2}^2\leq\mf q^2\sum_{u\in\ms V}\left(\sum_{v\in\ms V}\mbf 1_{\{(u,v)\in\ms E\}}f(v)\right)^2
\leq\mf q^22^{\mf c}\sum_{u,v\in\ms V}\mbf 1_{\{(u,v)\in\ms E\}}f(v)^2,\]
where the last inequality comes from the fact that
\[(x_1+\cdots+x_{\mf c})^2\leq2^{\mf c}(x_1^2+\cdots+x_{\mf c}^2),\qquad x_i\in\mbb R,\]
and that, by \eqref{Equation: Coordination Sequence},
for every $v\in\ms V$ there are at most $\mf c$ vertices $u$
such that $(u,v)\in\ms E$.
Using this last observation once again, we have that
\[\sum_{u,v\in\ms V}\mbf 1_{\{(u,v)\in\ms E\}}f(v)^2\leq\mf c\|f\|_2^2,\]
from which we conclude that
$\|H_X\|_{\mr{op}}^2\leq\mf q^22^{\mf c}\mf c,$
as desired.
\subsection{Step 2. Continuity of the Semigroup}
We now prove the almost-sure strong continuity and semigroup property.
Since $X$ is Markov and local time is additive, the semigroup property is trivial. We now prove strong continuity.
Let $C_0(\ms V)$ denote the set of functions $f:\ms V\to\mbb R$ that are finitely supported.
Since $C_0(\ms V)$ is dense in $\ell^2(\ms V)$ and a semigroup of bounded linear
operators is strongly continuous if and only if it is weakly continuous
(e.g., \cite[Chapter I, Theorem 5.8]{EngelNagel}), it suffices to prove that
$\langle f,K_tg-g\rangle\to0$ as $t\to0$ for every $f,g\in C_0(\ms V)$.
For every $g\in C_0(\ms V)$, we know that
\[\lim_{t\to0}g\big(X(t)\big)\mr e^{-\langle L_t,V+\xi\rangle}=g\big(X(0)\big)\qquad\text{almost surely}.\]
By the definition of $\om_0$, it follows that $\langle L_t,V+\xi\rangle\geq \om_0 t$, which implies that
\[\big|g\big(X(t)\big)\mr e^{-\langle L_t,V+\xi\rangle}\big|\leq\|g\|_{\ell^\infty}\mr e^{-\om_0t}.\] Since the right-hand side of this inequality is independent of $X$, it follows from dominated
convergence that
\[\lim_{t\to0}K_tg(v)=\lim_{t\to0}\mbf E^v\left[g\big(X(t)\big)\mr e^{-\langle L_t,V+\xi\rangle}\right]=g(v)\qquad\text{almost surely}\]
for every $v\in\ms V$. Finally, given that for every $v\in\ms V$, we have
\[\big|f(v)\big(K_tg(v)-g(v)\big)\big|\leq\|f\|_{\ell^\infty}\|g\|_{\ell^\infty}(\mr e^{-\om_0t}+1)\mbf 1_{\{f(v)\neq0\}},\]
which is summable in $v$ whenever $f\in C_0(\ms V)$,
we obtain $\langle f,K_tg-g\rangle\to0$ as $t\to0$ by dominated convergence.
\subsection{Step 3. Trace Class}
By the semigroup property, for every $t>0$, we can write
$K_t$ as the product $K_{t/2}K_{t/2}$. Thus, given that
the product of any two Hilbert-Schmidt operators is trace class
(e.g., \cite[Theorem 3.7.4]{Simon}),
it suffices to prove that, almost surely, $K_t$ is Hilbert-Schmidt
for all $t>0$, that is,
\[\sum_{u,v\in\ms V}K_t(u,v)^2<\infty.\]
By \eqref{Equation: Log n Borel-Cantelli Bound} and \eqref{Equation: Potential Growth},
there exist finite random variables $\ka,\mu>0$ that only depend on $\xi$ such that
\[V(v)+\xi(v)\geq\big(\ka\msf d(0,v)\big)^\al-\mu,\qquad v\in\ms V\]
almost surely. Therefore, it suffices to prove the result with $K_t$ replaced by the kernel
\[\tilde K_t(u,v):=\mr e^{\mu t}\mbf E^u\left[\mr e^{-\langle L_t,(\ka\msf d(0,\cdot))^\al\rangle}\mbf 1_{\{X(t)=v\}}\right],\qquad u,v\in\ms V.\]
By Jensen's inequality,
\begin{align*}
\sum_{u,v\in\ms V}\tilde K_t(u,v)^2
&\leq\mr e^{2\mu t}\sum_{u,v\in\ms V}\mbf E^u\left[\mr e^{-2\langle L_t,(\ka\msf d(0,\cdot))^\al\rangle}\mbf 1_{\{X(t)=v\}}\right]\\
&=\mr e^{2\mu t}\sum_{u\in\ms V}\mbf E^u\left[\mr e^{-2\langle L_t,(\ka\msf d(0,\cdot))^\al\rangle}\right].
\end{align*}
At this point, the same argument used in \eqref{Equation: Tail Bound}, \eqref{eq:VBound}, and \eqref{Equation: Holder in Exponential Moment}
implies that there exists some finite constant $C_{\ka,t}>0$ (which depends on $\ka$ and $t$) such that
\[\sum_{u,v\in\ms V}\tilde K_t(u,v)^2\leq C_{\ka,t}\mr e^{2\mu t}\sum_{u\in\ms V}\mr e^{-2t(\ka\msf d(0,u))^\al}.\]
Then, writing the above sum as
\[\sum_{u\in\ms V}\mr e^{-2t(\ka\msf d(0,u))^\al}=\sum_{n\in\mbb N\cup\{0\}}\msf c_n(0)\mr e^{-2t(\ka n)^\al},\]
this is easily seen to be finite for all $t>0$ by \eqref{Equation: Coordination Sequence}.
\subsection{Step 4. Infinitesimal Generator}
We now prove the properties of the generator $H$,
except for number rigidity of its spectrum, which is relegated to the next (and final) step of the proof.
That $K_t$'s generator
is of the form \eqref{Equation: Schrodinger Generator} follows
from the straightforward computation that for every $u,v\in\ms V$,
\[\lim_{t\to0}\frac{\mbf 1_{\{u=v\}}-K_t(u,v)}{t}=H(u,v)\qquad\text{almost surely}\]
(indeed, recall that by definition of the process $X$, $\Pi_t(u,v)=q(u)\Pi(u,v)t+o(t)$ as $t\to0$
whenever $u\neq v$, and that $K_t(u,v)=0$ if $u\in\ms Z$ or $v\in\ms Z$).
Almost surely, $(K_t)_{t>0}$ is a strongly continuous semigroup of trace class operators and $\|K_t\|_{\mr{op}}\leq\mr e^{-\om t}$.
Therefore, by Proposition \ref{Proposition: Operator Theory} (1)--(3), the following holds almost surely:
\begin{enumerate}
\item $H$ is closed and densely defined on $\ell^2(\ms V)$.
\item $\si(H)=\si_p(H)$, and $\Re(\la)\geq\om$ for all $\la\in\si(H)$.
\item For every $t>0$, $\si(K_t)\setminus\{0\}=\{\mr e^{-t\la}:\la\in\si(H)\}$.
\end{enumerate}
It now remains to establish the trace identity \eqref{Equation: Trace Identity}, which is crucial
in our proof of rigidity. The fact that $\mr{Tr}[K_t]$
is a positive real number follows from the fact that
\[\mr{Tr}[K_t]=\sum_{v\in\ms V}K_t(v,v)\]
and that $K_t(u,v)\in[0,\infty)$ for all $u,v\in\ms V$. To prove the remainder of \eqref{Equation: Trace Identity},
as per Proposition \ref{Proposition: Operator Theory}, we need to find a sequence of finite-dimensional
operators that converge to $H$ and $K_t$ in the sense of \eqref{Equation: Resolvent Convergence Assumption}
and \eqref{Equation: Semigroup Convergence Assumption}.
To this end, for every $n\in\mbb N$, let us denote the subset
\[\ms V_n:=\{v\in\ms V:\msf d(0,v)\leq n\}\subset\ms V.\]
Given that $\ms G$ has uniformly bounded degrees, this must be finite. Thus,
the operators
\[H_n(u,v):=H(u,v)\mbf 1_{\{u,v\in\ms V_n\}},\qquad u,v\in\ms V\]
are finite-dimensional in the sense of Definition \ref{Definition: Finite Dimensional}.
More specifically, $H_n$ is the restriction of $H$ to the set $\ms V_n$
with Dirichlet boundary on $\ms V\setminus\ms V_n$. In particular,
if for every $n\in\mbb N$ we denote the hitting time
\[\tau_n:=\inf\big\{t\geq 0: X(t)\not\in\ms V_n\big\},\]
then $\mr e^{-tH_n}$ is the integral operator on $\ell^2(\ms V)$ with kernel
\begin{align}
\label{Equation: Finite Kernel}
\mr e^{-tH_n}(u,v)=\mbf E^u\left[\mr e^{-\langle L_t,V+\xi\rangle}\mbf 1_{\{X(t)=v\}}\mbf 1_{\{\tau_n>t\}}\right].
\end{align}
The proof of \eqref{Equation: Trace Identity} is now a matter of establishing the following result:
\begin{lemma}
Almost surely, it holds that
\begin{align}
\label{Equation: Resolvent Convergence Assumption 2}
\lim_{n\to\infty}\|\mf R(z,H_n)-\mf R(z,H)\|_{\mr{op}}=0
\end{align}
for every $z\in\mbb C$ such that $\Re(z)<\om$ and
\begin{align}
\label{Equation: Semigroup Convergence Assumption 2}
\lim_{n\to\infty}\|\mr e^{-t H_n}-K_t\|_{\mr{op}}=0
\end{align}
for every $t>0$.
\end{lemma}
\begin{proof}
Given that $0\leq\mr e^{-tH_n}(u,v)\leq K_t(u,v)$ for all $u,v\in\ms V$,
it is easy to see that $\|\mr e^{-tH_n}\|_{\mr{op}}\leq\|K_t\|_{\mr{op}}\leq\mr e^{-\om t}$
for all $t>0$ almost surely. In particular, any $z\in\mbb C$ such that $\Re(z)<\om$
is in the resolvent set of $H_n$ and $H$ for all $n$. Consequently, it follows
from \cite[Chapter II, Theorem 1.10]{EngelNagel} that
\[\|\mf R(z,H_n)-\mf R(z,H)\|_{\mr{op}}=\left\|\int_0^\infty\mr e^{tz}(\mr e^{-t H_n}-K_t)\d t\right\|_{\mr{op}}
\leq\int_0^\infty\mr e^{t\Re(z)}\|\mr e^{-t H_n}-K_t\|_{\mr{op}}\d t,\]
where the last inequality follows from \cite[Chapter II, Theorem 4 (ii)]{DU77}.
Given that
\[\int_0^\infty\mr e^{t\Re(z)}\|\mr e^{-t H_n}-K_t\|_{\mr{op}}\d t\leq\int_0^\infty\mr e^{t\Re(z)}\big(\|\mr e^{-t H_n}\|_{\mr{op}}+\|K_t\|_{\mr{op}}\big)\d t
\leq2\int_0^\infty\mr e^{t(\Re(z)-\om)}\d t<\infty\]
whenever $\Re(z)<\om$, we get that \eqref{Equation: Resolvent Convergence Assumption 2} is
a consequence of \eqref{Equation: Semigroup Convergence Assumption 2} by an application of the dominated convergence theorem.
Let us then prove \eqref{Equation: Semigroup Convergence Assumption 2}.
Since the Hilbert-Schmidt norm dominates the operator norm, it suffices to prove that
\begin{align}
\label{Equation: H-S Convergence}
\sum_{u,v\in\ms V}\big(\mr e^{-tH_n}(u,v)-K_t(u,v)\big)^2=\sum_{u,v\in\ms V}\mbf E^u\left[\mr e^{-\langle L_t,V+\xi\rangle}\mbf 1_{\{X(t)=v\}}\mbf 1_{\{\tau_n\leq t\}}\right]^2
\end{align}
vanishes as $n\to\infty$ for all $t>0$ almost surely. By H\"older's inequality,
the right-hand side of \eqref{Equation: H-S Convergence} is bounded above by
\[\sum_{u,v\in\ms V}\mbf E^u\left[\mr e^{-2\langle L_t,V+\xi\rangle}\mbf 1_{\{X(t)=v\}}\right]\mbf P^u[\tau_n\leq t].\]
By mimicking our proof that $K_t$ is trace class, we know that
\[\sum_{u,v\in\ms V}\mbf E^u\left[\mr e^{-2\langle L_t,V+\xi\rangle}\mbf 1_{\{X(t)=v\}}\right]<\infty\]
for every $t>0$ almost surely. Thus, by dominated convergence, it suffices to prove that
\[\lim_{n\to\infty}\mbf P^u[\tau_n\leq t]=0\]
for every $u\in\ms V$ and $t>0$. Noting that
\[\mbf P^u\left[\max_{0\leq s\leq t} \msf d\big(0,X(s)\big)> n\right]
\leq\mbf P^u\left[\max_{0\leq s\leq t} \msf d\big(u,X(s)\big)> n-\msf d(0,u)\right]\]
for all $n\in\mbb N$ by the triangle inequality,
this follows directly from
the tail bound \eqref{Equation: Tail Bound}.
\end{proof}
\subsection{Step 5. Rigidity}
It now only remains to prove that the point process
\eqref{Equation: Eigenvalue Point Process}
is number rigid in the sense of Definition \ref{Definition: Rigidity}. The proof of
this amounts to a minor modification of the argument in \cite[Theorem 6.1]{GP17} (see also
\cite[Proposition 2.2]{GGL20}).
Let $B\subset\mbb C$ be a Borel set such that
$B\subset(-\infty,\de]+\mr i[-\tilde \de,\tilde \de]$
for some $\de,\tilde\de>0$.
Thanks to the trace identity \eqref{Equation: Trace Identity},
almost surely,
we can write
\[\mc X_H(B)=\sum_{\la\in\si(H)\cap B}m_a(\la,H)\]
as the sum of the following three terms:
\begin{align}
\label{Equation: Rigidity 1}
&\sum_{\la\in\si(H)}m_a(\la,H)\,\mr e^{-t\la}-\mbf E\left[\sum_{\la\in\si(H)}m_a(\la,H)\,\mr e^{-t\la}\right]=\mr{Tr}[K_t]-\mbf E\big[\mr{Tr}[K_t]\big],\\
\label{Equation: Rigidity 2}
&\sum_{\la\in\si(H)\cap B}m_a(\la,H)\left(1-\mr e^{-t\la}\right),\\
\label{Equation: Rigidity 3}
&\mbf E\left[\sum_{\la\in\si(H)}m_a(\la,H)\,\mr e^{-t\la}\right]
-\sum_{\la\in\si(H)\setminus B}m_a(\la,H)\,\mr e^{-t\la}.
\end{align}
Since we choose the exponent $\al$ in the same way as
Theorem \ref{Theorem: Upper}, \eqref{Equation: Rigidity 1} converges to zero as $t\to0$
almost surely along a subsequence.
Next, we have that \eqref{Equation: Rigidity 2} is bounded above
in absolute value by
\[\mc X_H(B)\sup_{\ze\in[\om,\de]+\mr i[-\tilde\de,\tilde\de]}|1-\mr e^{-t\ze}|,\]
where we recall that $\om$ is the random lower bound on the real part
of the points in $\mc X_H$.
Since $\mc X_H$ is real-bounded below and $B\subset(-\infty,\de]+\mr i[-\tilde \de,\tilde \de]$,
$\mc X_H(B)<\infty$ almost surely. Thus, \eqref{Equation: Rigidity 2}
converges to zero almost surely as $t\to0$.
Thus, $\mc X_H(B)$ is the almost sure limit of \eqref{Equation: Rigidity 3}
as $t\to0$, along a subsequence.
Given that \eqref{Equation: Rigidity 3}
is measurable with respect
to the configuration of points outside of $B$ for every $t$ and that
the limit of measurable functions is measurable,
we conclude that $\mc X_H(B)$ is measurable with respect
to the configuration outside of $B$. This then concludes the proof
of number rigidity, and thus of Theorem \ref{Theorem: Rigidity}.
\begin{remark}
\label{Remark: Mechanism}
Referring back to the point raised in Section \ref{Section: Mechanism},
we see that the function denoted $\mc N_B$ therein satisfies the relation
\begin{align}
\label{Equation: Understanding of NB}
\mc N_B\big(\si(H)\setminus B\big)=\lim_{n\to\infty}\left(\mbf E\left[\sum_{\la\in\si(H)}m_a(\la,H)\,\mr e^{-t_n\la}\right]
-\sum_{\la\in\si(H)\setminus B}m_a(\la,H)\,\mr e^{-t_n\la}\right)
\end{align}
with probability one, where $(t_n)_{n\in\mbb N}$ is a sparse enough
sequence that vanishes in the large $n$ limit. In particular, understanding the
precise form of $\mc N_B$ relies, among other things, on understanding
how the divergences of the two terms inside the limit on the right-hand side of
\eqref{Equation: Understanding of NB} somehow cancel out as $n\to\infty$.
\end{remark}
\section{Proof of Theorem \ref{Theorem: Lower}}
\label{sec: Proof of Lower}
\subsection{Step 1. General Lower Bound}
We begin by providing a lower bound for $\mbf{Var}\big[\mr{Tr}[K_t]\big]$
in the general setting of the statement of
Theorem \ref{Theorem: Lower}. This bound will then be shown to remain positive
as $t\to0$ in the cases labelled (1)--(3).
Recalling that $\ga$ is the positive definite covariance function of $\xi$,
if we denote the semi-inner-product
\[\langle f,g\rangle_\ga:=\sum_{u,v\in\mbb Z^d}f(u)\ga(u-v)g(v),\qquad f,g:\mbb Z^d\to\mbb R,\]
then our assumption that $\ga$ is nonnegative implies that $\langle f,g\rangle_\ga\geq0$
whenever $f$ and $g$ are nonnegative. In particular, we have that
\begin{align}
\label{Equation: Lower Bound - Covariance}
\mbf{Cov}_\xi\big[\mr e^{-\langle L^u_t,\xi\rangle},\mr e^{-\langle\tilde L^v_t,\xi\rangle}\big]
=\mr e^{\frac12\langle L_t^u,L_t^u\rangle_\ga+\frac12\langle\tilde L_t^v,\tilde L_t^v\rangle_\ga}\left(\mr e^{\langle L_t^u,\tilde L_t^v\rangle_\ga}-1\right)\geq0.
\end{align}
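Indeed, \eqref{Equation: Lower Bound - Covariance} follows from the Gaussian moment generating function formula $\mbf E_\xi\big[\mr e^{-\langle f,\xi\rangle}\big]=\mr e^{\frac12\langle f,f\rangle_\ga}$ (recall that $\xi$ is a centered Gaussian field with covariance $\ga$ in the setting of Theorem \ref{Theorem: Lower}), which gives
\[\mbf E_\xi\big[\mr e^{-\langle L^u_t+\tilde L^v_t,\xi\rangle}\big]=\mr e^{\frac12\langle L_t^u,L_t^u\rangle_\ga+\frac12\langle\tilde L_t^v,\tilde L_t^v\rangle_\ga+\langle L_t^u,\tilde L_t^v\rangle_\ga},\]
and by subtracting the product $\mbf E_\xi\big[\mr e^{-\langle L^u_t,\xi\rangle}\big]\mbf E_\xi\big[\mr e^{-\langle\tilde L^v_t,\xi\rangle}\big]=\mr e^{\frac12\langle L_t^u,L_t^u\rangle_\ga+\frac12\langle\tilde L_t^v,\tilde L_t^v\rangle_\ga}$ we obtain the claimed identity.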
For every $u,v\in\mbb Z^d$ and $t>0$, denote the event
$J_t(u,v):=\{L^u_t=t\mbf 1_u\text{ and }\tilde L^v_t=t\mbf 1_v\}$.
Clearly, $J_t(u,v)\subset\{X^u(t)=u,\tilde X^v(t)=v\}$, and
by independence of $X^u$ and $\tilde X^v$,
\begin{align}
\label{Equation: Lower Bound - J_t Event}
\inf_{u,v\in\mbb Z^d}\mbf P[J_t(u,v)]=\inf_{v\in\mbb Z^d}\mbf P^v[X(s)=v\text{ for every }s\leq t]^2
\geq\mr e^{-2t}.
\end{align}
We now combine \eqref{Equation: Lower Bound - Covariance}
and \eqref{Equation: Lower Bound - J_t Event} to lower bound the variance of $\mr{Tr}[K_t]$:
By Proposition \ref{Proposition: Variance Formula}, we may write
\begin{align}
\mbf{Var}\big[\mr{Tr}[K_t]\big]&\geq\sum_{u,v\in\mbb Z^d}\mbf E\Big[\mr e^{-\langle L^u_t+\tilde L^v_t,V\rangle}
\mr e^{\frac12\langle L_t^u,L_t^u\rangle_\ga+\frac12\langle\tilde L_t^v,\tilde L_t^v\rangle_\ga}\left(\mr e^{\langle L_t^u,\tilde L_t^v\rangle_\ga}-1\right)
\mbf 1_{J_t(u,v)}\Big]\nonumber\\
&=\sum_{u,v\in\mbb Z^d}\mr e^{-tV(u)-tV(v)}\mr e^{t^2\ga(0)}\left(\mr e^{t^2\ga(u-v)}-1\right)\mbf P[J_t(u,v)]\nonumber\\
&\geq\mr e^{-2t+t^2\ga(0)}\sum_{u,v\in\mbb Z^d}\mr e^{-tV(u)-tV(v)}\left(\mr e^{t^2\ga(u-v)}-1\right)\nonumber\\
&=\mr e^{-2t+t^2\ga(0)}\sum_{u,v\in\mbb Z^d}\mr e^{-t\msf d(0,u)^\de-t\msf d(0,v)^\de}\left(\mr e^{t^2\ga(u-v)}-1\right),\label{eq:VarLowLast}
\end{align}
where the first line comes from \eqref{Equation: Lower Bound - Covariance} and
the fact that $\mbf E[Y]\geq\mbf E[Y\mbf 1_E]$ for any nonnegative random variable $Y$
and event $E$, the second line comes from the definition of the event $J_t(u,v)$,
the third line comes from \eqref{Equation: Lower Bound - J_t Event},
and the last line comes from the assumption on $V$ stated in Theorem \ref{Theorem: Lower}.
As $\mr e^{-2t+t^2\ga(0)}\to1$ as $t\to0$, we obtain our general lower bound:
\begin{align}
\label{Equation: General Lower Bound}
\liminf_{t\to0}\mbf{Var}\big[\mr{Tr}[K_t]\big]\geq\liminf_{t\to0}\sum_{u,v\in\mbb Z^d}\mr e^{-t\msf d(0,u)^\de-t\msf d(0,v)^\de}\left(\mr e^{t^2\ga(u-v)}-1\right).
\end{align}
We now prove that the right-hand side of \eqref{Equation: General Lower Bound} is positive in cases (1)--(3).
\subsection{Step 2. Three Examples}
Suppose first that $\de\leq d/2$ and $\ga(v)=\mbf 1_{\{v=0\}}$.
On the integer lattice $\mbb Z^d$, it is easy to see that there exists a
constant $C>0$ such that $\msf c_n(0)\geq Cn^{d-1}$.
Therefore,
by an application of \eqref{Equation: General Lower Bound}, followed by
the inequality $\mr e^{x}-1\geq x$ for all $x\geq0$ and a Riemann sum, we have that
\begin{multline*}
\liminf_{t\to0}\mbf{Var}\big[\mr{Tr}[K_t]\big]\geq\liminf_{t\to0}\left(\mr e^{t^2}-1\right)\sum_{v\in\mbb Z^d}\mr e^{-2t\msf d(0,v)^\de}
\geq\liminf_{t\to0}t^2\sum_{n\in\mbb N\cup\{0\}}\msf c_n(0)\mr e^{-2tn^\de}\\
\geq C \liminf_{t\to0}t^{2-d/\de}t^{1/\de}\sum_{n\in t^{1/\de}\mbb N\cup\{0\}}n^{d-1}\mr e^{-2n^\de}
\geq C \int_0^\infty x^{d-1}\mr e^{-2x^\de}\d x>0.
\end{multline*}
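As a purely numerical illustration of this first case (our addition, not part of the proof), one may evaluate the right-hand side of \eqref{Equation: General Lower Bound} for $d=1$, $\de=1/2$ and $\ga=\mbf 1_{\{v=0\}}$, where it reduces to $(\mr e^{t^2}-1)\sum_{v\in\mbb Z}\mr e^{-2t|v|^{1/2}}$. The following Python sketch (the truncation threshold is an ad hoc choice of ours) shows this quantity staying of order one as $t\to0$, in agreement with the bound above:
\begin{verbatim}
import numpy as np

def case1_bound(t, delta=0.5, cutoff=40.0):
    # (e^{t^2} - 1) * sum_{v in Z} e^{-2 t |v|^delta},
    # truncated where the summand drops below e^{-cutoff}.
    N = int((cutoff / (2.0 * t)) ** (1.0 / delta)) + 1
    n = np.arange(1, N + 1, dtype=float)
    s = 1.0 + 2.0 * np.sum(np.exp(-2.0 * t * n ** delta))  # 1.0 = v = 0 term
    return np.expm1(t * t) * s

for t in [0.2, 0.1, 0.05, 0.02, 0.01]:
    print(t, case1_bound(t))
\end{verbatim}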
Next, suppose that $\de\leq d-\be/2$ and that
$\ga(v)\geq\mc L\big(\msf d(0,v)+1\big)^{-\be}$ for some $0<\be<d$ and $\mc L>0$. Then,
\eqref{Equation: General Lower Bound}, the triangle inequality,
and the same arguments as in the previous case yield
\begin{align*}
&\liminf_{t\to0}\mbf{Var}\big[\mr{Tr}[K_t]\big]\\
&\geq\liminf_{t\to0}\sum_{u,v\in\mbb Z^d}\mr e^{-t\msf d(0,u)^\de-t\msf d(0,v)^\de}
\left(\mr e^{\mc Lt^2(\msf d(u,v)+1)^{-\be}}-1\right)\\
&\geq \mc L \liminf_{t\to0}t^{2}\sum_{u,v\in\mbb Z^d}\mr e^{-t\msf d(0,u)^\de-t\msf d(0,v)^\de}
\big(\msf d(0,u)+\msf d(0,v)+1\big)^{-\be}\\
&=\mc L \liminf_{t\to0}t^{2}\sum_{m,n\in\mbb N\cup\{0\}}\msf c_m(0)\msf c_n(0)\,\mr e^{-tm^\de-tn^\de}(m+n+1)^{-\be}\\
&\geq\mc L C^2\liminf_{t\to0}t^{2-2(d-1)/\de+\be/\de}\sum_{m,n\in t^{1/\de}\mbb N\cup\{0\}}(mn)^{d-1}\mr e^{-m^\de-n^\de}(m+n+t^{1/\de})^{-\be}\\
&=\mc L C^2\liminf_{t\to0}t^{2-2(d-\be/2)/\de}\int_0^\infty\int_0^\infty\frac{(xy)^{d-1}}{(x+y)^\be}\mr e^{-x^\de-y^\de}\d x\,\d y>0.
\end{align*}
Finally, suppose that $\de\leq d$ and $\inf_{v\in\mbb Z^d}\ga(v)>\mc L>0$. In this case we obtain that
\begin{multline*}
\liminf_{t\to0}\mbf{Var}\big[\mr{Tr}[K_t]\big]\geq\liminf_{t\to0}\left(\mr e^{\mc Lt^2}-1\right)\sum_{u,v\in\mbb Z^d}\mr e^{-t\msf d(0,u)^\de-t\msf d(0,v)^\de}\\
\geq\mc L C^2\liminf_{t\to0}t^2\left(\sum_{n\in\mbb N}n^{d-1}\mr e^{-2tn^\de}\right)^2
=\mc L C^2\liminf_{t\to0}t^{2-2d/\de}\left(\int_0^\infty x^{d-1}\mr e^{-2x^\de}\d x\right)^2>0,
\end{multline*}
thus concluding the proof.
\bibliographystyle{plain}
\section{Introduction}\label{sec:intro}
The question of how many images an observer at an event $p$ sees of a
light source with worldline $\gamma$ is equivalent to the question of how
many past-pointing lightlike geodesics from $p$ to $\gamma$ exist. In
spacetimes with many symmetries this question can be addressed, in principle,
by directly integrating the geodesic equation. In the spacetime around a
non-rotating and uncharged black hole of mass $m$, e.g., which is described
by the Schwarzschild metric, all lightlike geodesics can be explicitly written
in terms of elliptic integrals; with the help of these explicit expressions,
it is easy to verify that in the region outside the horizon, i.e. in the
region where $r>2 \, m$, there are infinitely many past-pointing lightlike
geodesics from any event $p$ to any integral curve of the Killing vector
field $\partial _t$. This was demonstrated already in 1959 by Darwin
\cite{Darwin1959}. We may thus say that a Schwarzschild black hole acts
as a gravitational lens that produces infinitely many images of any static
light source. However, already in the Schwarzschild spacetime the problem
becomes more difficult if we want to consider light sources which are not
static, i.e., worldlines $\gamma$ which are not integral curves
of $\partial _t$.
In this paper we want to investigate this problem for the more general case of a
charged and rotating black hole, which is described by the Kerr-Newman metric.
More precisely, we want to demonstrate that in the domain of outer communication
around a Kerr-Newman black hole, i.e., in the domain outside of the outer horizon,
there are infinitely many past-pointing lightlike geodesics from an arbitrary
event $p$ to an arbitrary worldline $\gamma$, with as few restrictions on
$\gamma$ as possible. Although the geodesic equation in the Kerr-Newman spacetime
is completely integrable, the mathematical expressions are so involved that it
is very difficult to achieve this goal by explicitly integrating the geodesic
equation. Therefore it is advisable to use more indirect methods.
Such a method is provided by Morse theory. Quite generally, Morse theory relates
the number of solutions to a variational principle to the topology of the space
of trial maps. Here we refer to a special variant of Morse theory, developed by
Uhlenbeck \cite{Uhlenbeck1975}, which is based on a version of Fermat's principle
for a globally hyperbolic Lorentzian manifold $(M,g)$. The trial maps are the
lightlike curves joining a point $p$ and a timelike curve $\gamma$ in $M$,
and the solution curves of Fermat's principle are the lightlike geodesics.
If $(M,g)$ and $\gamma$ satisfy additional conditions, the topology of the
space of trial maps is determined by the topology of $M$. Uhlenbeck's work
gives criteria that guarantee the existence
of infinitely many past- or future-pointing lightlike geodesics from $p$ to
$\gamma$. In this paper we will apply her results to the domain of outer
communication around a Kerr-Newman black hole which is, indeed, a globally
hyperbolic Lorentzian manifold.
We will show that the criteria for having infinitely many past-pointing lightlike
geodesics from $p$ to $\gamma$ are satisfied for every event $p$ and every timelike
curve $\gamma$ in this region, provided that the following three conditions are
satisfied. First, $\gamma$ must not have a past end-point; it is obvious
that we need a condition of this kind because otherwise it would be possible
to choose for $\gamma$ an arbitrarily short section of a worldline such
that trivially the number of past-pointing lightlike geodesics from $p$ to
$\gamma$ is zero. Second, $\gamma$ must not intersect the caustic of the
past light-cone of $p$; this excludes all cases where $p$ sees an extended
image of $\gamma$, such as an Einstein ring. Third, in the past the worldline
$\gamma$ must not go to the horizon or to infinity. Under these (very mild)
restrictions on the motion of the light source we will see that the Kerr-Newman black
hole acts as a gravitational lens that produces infinitely many images. Moreover,
we will also show that all (past-pointing) lightlike geodesics from $p$ to $\gamma$ are confined
to a certain spherical shell. For the characterization of this shell we will have
to discuss a light-convexity property which turns out to be intimately related to
the phenomenon of centrifugal(-plus-Coriolis) force reversal. This phenomenon
has been discussed, first in spherically symmetric static and then in more
general spacetimes, in several papers by Marek Abramowicz with various coauthors;
material which is of interest to us can be found, in particular, in Abramowicz,
Carter and Lasota \cite{AbramowiczCarterLasota1988}, Abramowicz \cite{Abramowicz1990}
and Abramowicz, Nurowski and Wex \cite{AbramowiczNurowskiWex1993}.
The paper is organized as follows. In Section \ref{sec:morse} we
summarize the Morse-theoretical results we want to use. Section \ref{sec:centrifugal}
is devoted to the notions of centrifugal and Coriolis force in the Kerr-Newman
spacetime; in particular, we introduce a potential $\Psi _+$ (respectively $\Psi_-$)
that characterizes the sum of centrifugal and Coriolis force with respect to
co-rotating (respectively counter-rotating) observers whose velocity approaches
the velocity of light. In Section \ref{sec:imaging} we discuss multiple imaging
in the Kerr-Newman spacetime with the help of the Morse theoretical result quoted
in Section \ref{sec:morse} and with the help of the potential $\Psi _{\pm}$
introduced in Section \ref{sec:centrifugal}. Our results are summarized and
discussed in Section \ref{sec:conclusion}.
\section{A result from Morse theory}\label{sec:morse}
In this section we briefly review a Morse-theoretical result that relates the
number of lightlike geodesics between a point $p$ and a timelike curve $\gamma$
in a globally hyperbolic Lorentzian manifold to the topology of this manifold.
This result was found by Uhlenbeck \cite{Uhlenbeck1975} and its relevance in view of
gravitational lensing was discussed by McKenzie \cite{McKenzie1985}. Uhlenbeck's work
is based on a variational principle for lightlike geodesics (``Fermat principle'')
in a globally hyperbolic Lorentzian manifold, and her main method of proof is to
approximate trial paths by broken geodesics. With the help of infinite-dimensional
Hilbert manifold techniques Giannoni, Masiello, and Piccione were able to rederive
Uhlenbeck's result \cite{GiannoniMasiello1996} and to generalize it to certain
subsets-with-boundary of spacetimes that need not be globally hyperbolic
\cite{GiannoniMasielloPiccione1998}. In contrast to Uhlenbeck, they start out
from a variational principle for lightlike geodesics that is not restricted to
globally hyperbolic spacetimes. (Such a Fermat principle for arbitrary
general-relativistic spacetimes was first formulated by Kovner \cite{Kovner1990};
the proof that the solution curves of Kovner's variational principle are, indeed,
precisely the lightlike geodesics was given by Perlick \cite{Perlick1990b}).
Although for our purpose the original Uhlenbeck result is sufficient, readers
who are interested in technical details are encouraged to also consult the
papers by Giannoni, Masiello, and Piccione, in particular because in the
Uhlenbeck paper some of the proofs are not worked out in full detail.
Following Uhlenbeck \cite{Uhlenbeck1975}, we consider a 4-dimensional Lorentzian manifold
$(M,g)$ that admits a foliation into smooth Cauchy surfaces, i.e., a globally
hyperbolic spacetime. (For background material on globally hyperbolic spacetimes
the reader may consult, e.g., Hawking and Ellis \cite{HawkingEllis1973}. The fact that the
original definition of global hyperbolicity is equivalent to the existence of a
foliation into \emph{smooth} Cauchy surfaces was completely proven only recently by
Bernal and S{\'a}nchez \cite{BernalSanchez2005}.) Then $M$ can be written
as a product of a 3-dimensional manifold $S$, which serves as the prototype for
each Cauchy surface, and a time-axis,
\begin{equation}\label{eq:product}
M= S \times {\mathbb{R}} \, .
\end{equation}
Moreover, this product can be chosen such that the metric $g$ orthogonally splits
into a spatial and a temporal part,
\begin{equation}\label{eq:globhyp}
g = g_{ij}(x, t ) \, dx^i \, dx^j - f(x, t ) \, d t ^2 \, ,
\end{equation}
where $t$ is the time coordinate given by projecting from $M= S \times {\mathbb{R}}$
onto the second factor, $x = (x^1 , x^2 , x^3 )$ are coordinates on $S$, and the
summation convention is used for latin indices running from 1 to 3. (We write
(\ref{eq:globhyp}) in terms of coordinates for notational convenience only. We do
not want to presuppose that $S$ can be covered by a single coordinate system.)
We interpret the direction of increasing $t$ as the future-direction on $M$.
Again following Uhlenbeck \cite{Uhlenbeck1975}, we say that the splitting (\ref{eq:globhyp})
satisfies the {\em metric growth condition\/} if for every compact subset of $S$
there is a function $F$ with
\begin{equation}\label{eq:F}
\int_{- \infty} ^0 \frac{d t}{F(t)} = \infty
\end{equation}
such that for $t \le 0$ the inequality
\begin{equation}\label{eq:growth}
g_{ij} (x, t ) \, v^i \, v^j \le f(x, t ) \, F( t )^2 \, G_{ij} (x) \, v^i \, v^j
\end{equation}
holds for all $x$ in the compact subset and for all $(v^1, v^2, v^3)
\in {\mathbb{R}}^3$, with a time-independent Riemannian metric $G_{ij}$ on
$S$. It is easy to check that the metric growth condition assures
that for every (smooth) curve $\alpha : [a,b] \longrightarrow S$ there is a
function $T : [a,b] \longrightarrow {\mathbb{R}}$ with $T (a)=0$ such
that the curve $\lambda : [a,b] \longrightarrow M = S \times {\mathbb{R}}, s \longmapsto
\lambda (s) = \big( \alpha (s) , T (s) \big)$ is past-pointing and lightlike.
In particular, the metric growth condition assures that from each point $p$ in $M$
we can find a past-pointing lightlike curve to every timelike curve that is vertical
with respect to the orthogonal splitting chosen. In this sense, the metric growth
condition prohibits the existence of {\em particle horizons}, cf. Uhlenbeck \cite{Uhlenbeck1975}
and McKenzie \cite{McKenzie1985}. Please note that our formulation of the metric growth
condition is the same as McKenzie's, which differs from Uhlenbeck's by interchanging
future and past (i.e., $t \longmapsto - t$). The reason is that Uhlenbeck in
her paper characterizes {\em future-pointing\/} lightlike geodesics from a
point to a timelike curve whereas we, in view of gravitational lensing,
are interested in {\em past-pointing\/} ones.
For formulating Uhlenbeck's result we have to assume that the reader is familiar
with the notion of {\em conjugate points\/} and with the following facts (see,
e.g., Perlick \cite{Perlick2000}). The totality of all conjugate points, along any lightlike
geodesic issuing from a point $p$ into the past, makes up the {\em caustic\/}
of the past light-cone of $p$. A lightlike geodesic is said to have
(Morse) index $k$ if it has $k$ conjugate points in its interior; here and in the following every
conjugate point has to be counted with its multiplicity. For a lightlike geodesic
with two end-points, the index is always finite. It is our goal to estimate
the number of past-pointing lightlike geodesics of index $k$ from a point $p$
to a timelike curve $\gamma$ that does not meet the caustic of the past
light-cone of $p$. The latter condition is generically satisfied in the
sense that, for any $\gamma$, the set of all points $p$ for which it is
true is dense in $M$. This condition makes sure that the past-pointing
lightlike geodesics from $p$ to $\gamma$ are countable, i.e., it excludes
gravitational lensing situations where the observer sees a continuum of
images such as an Einstein ring.
As another preparation, we recall how the \emph{Betti numbers} $B_k$ of the
\emph{loop space} $L(M)$ of a connected topological space $M$ are defined.
As a realization of $L(M)$ one may take the space of all continuous curves
between any two fixed points in $M$. The $k$-th Betti number $B_k$ is
formally defined as the dimension of the $k$-th homology space of $L(M)$
with coefficients in a field $\mathbb{F}$. (For our purpose we may choose
$\mathbb{F} = \mathbb{R}$.)
Roughly speaking, $B_0$ counts the connected components of $L(M)$
and $B_k$, for $k>0$, counts those ``holes'' in $L(M)$ that prevent
a $k$-sphere from being a boundary. If the reader is not familiar with
Betti numbers he or she may consult e.g. \cite{Frankel1997}.
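As an example that will be relevant below: if $M$ is homotopy equivalent to the
$2$-sphere (as is the case for $M \simeq S^2 \times {\mathbb{R}}^2$), the Betti
numbers of the loop space are known explicitly; by the James construction the
loop space of $S^2$ has one-dimensional homology in every degree over any field,
i.e., $B_k = 1$ for all $k \in {\mathbb{N}}_0$.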
After these preparations Uhlenbeck's result that we want to use later in this
paper can now be phrased in the following way.
\begin{theorem}\label{theo:Uh}
{\em (Uhlenbeck \cite{Uhlenbeck1975})}
Consider a globally hyperbolic spacetime $(M,g)$ that admits an orthogonal
splitting $(\ref{eq:product}), (\ref{eq:globhyp})$ satisfying the metric growth
condition. Fix a point $p \in M$ and a smooth timelike curve $\gamma : {\mathbb{R}}
\longrightarrow M$ which, in terms of the above-mentioned orthogonal splitting,
takes the form $\gamma ( \tau ) = \big( \beta ( \tau ) , \tau \big)$, with a curve
$\beta : {\mathbb{R}} \longrightarrow S$. Moreover, assume that $\gamma$ does
not meet the caustic of the past light-cone of $p$ and that for some sequence
$(\tau _i )_{i \in {\mathbb{N}}}$ with $\tau _i \rightarrow
- \infty$ the sequence $\big( \beta (\tau _i ) \big) {}_{i \in {\mathbb{N}}}$
converges in $S$. Then the Morse inequalities
\begin{equation}\label{eq:Morseineq}
N_k \ge B_k \; \qquad {\text{for all}} \quad k \in {\mathbb{N}}_0
\end{equation}
and the Morse relation
\begin{equation}\label{eq:Morserel}
\sum_{k=0}^{\infty} (-1)^k N_k =
\sum_{k=0}^{\infty} (-1)^k B_k
\end{equation}
hold true, where $N_k$ denotes the number of past-pointing lightlike geodesics with
index $k$ from $p$ to $\gamma$, and $B_k$ denotes the $k$-th Betti number of the loop
space of $M$.
\end{theorem}
\begin{proof}
See Uhlenbeck \cite{Uhlenbeck1975}, \S 4 and Proposition 5.2.
\end{proof}
Please note that the convergence condition on $\big( \beta (\tau _i ) \big)
{}_{i \in {\mathbb{N}}}$ is certainly satisfied if $\beta$ is confined to a
compact subset of $S$, i.e., if $\gamma$ stays in a spatially compact set.
The sum on the right-hand side of (\ref{eq:Morserel}) is, by definition, the
Euler characteristic $\chi$ of the loop space of $M$. Hence, (\ref{eq:Morserel})
can also be written in the form
\begin{equation}\label{eq:N+-}
N_+ - N_- = \chi \, ,
\end{equation}
where $N_+$ (respectively $N_-$) denotes the number of past-pointing lightlike
geodesics with even (respectively odd) index from $p$ to $\gamma$.
The Betti numbers of the loop space of $M=S \times {\mathbb{R}}$ are, of
course, determined by the topology of $S$. Three cases are to be distinguished.
\noindent
{\bf Case A}: $M$ is not simply connected. Then the loop space
of $M$ has infinitely many connected components, so $B_0 = \infty$. In this
situation (\ref{eq:Morseineq}) says that $N_0 = \infty$, i.e., that there
are infinitely many past-pointing lightlike geodesics from $p$ to $\gamma$
that are free of conjugate points.
\noindent
{\bf Case B}: $M$ is simply connected but not contractible
to a point. Then for all but finitely many $k \in {\mathbb{N}}_0$ we have
$B_k > 0$. This was proven in a classical paper by Serre \cite{Serre1951},
cf. McKenzie \cite{McKenzie1985}. In this situation (\ref{eq:Morseineq}) implies
$N_k > 0$ for all but finitely many $k$. In other words, for almost
every positive integer $k$ we can find a past-pointing lightlike geodesic
from $p$ to $\gamma$ with $k$ conjugate points in its interior. Hence,
there must be infinitely many past-pointing lightlike geodesics from
$p$ to $\gamma$ and the caustic of the past light-cone of $p$ must be
complicated enough such that a past-pointing lightlike geodesic from $p$
can intersect it arbitrarily often.
\noindent
{\bf Case C}: $M$ is contractible to a point. Then the loop space of
$M$ is contractible to a point, i.e., $B_0 =1$ and $B_k = 0$ for $k > 0$.
In this case (\ref{eq:N+-}) takes the form $N_+ - N_-
= 1$ which implies that the total number $N_+ + N_- = 2 N_- + 1$
of past-pointing lightlike geodesics from $p$ to $\gamma$ is (infinite
or) odd.
The domain of outer communication of a Kerr-Newman black hole has topology
$S^2 \times {\mathbb{R}}^2$, which is simply connected but not contractible
to a point. So it is Case B we are interested in when applying Uhlenbeck's
result to the Kerr-Newman spacetime.
\section{Centrifugal and Coriolis force in the Kerr-Newman
spacetime}\label{sec:centrifugal}
The Kerr-Newman metric is given in Boyer-Lindquist coordinates (see, e.g.,
Misner, Thorne and Wheeler \cite{MisnerThorneWheeler1973}, p. 877) by
\begin{equation}\label{eq:kerr}
g = - \frac{\Delta}{\rho ^2} \, \big( \, dt \, - \,
a \, \mathrm{sin} ^2 \vartheta \, d \varphi \big) ^2 \, + \,
\frac{\mathrm{sin} ^2 \vartheta}{\rho ^2} \, \big(
(r^2 + a^2) \, d \varphi \, - \, a \, dt \, \big) ^2 \, + \,
\frac{\rho ^2}{\Delta} \, dr^2 \, + \, \rho ^2 \, d \vartheta ^2 \, ,
\end{equation}
where $\rho$ and $\Delta$ are defined by
\begin{equation}\label{eq:rhodelta}
\rho ^2 = r^2 + a^2 \, {\mathrm{cos}} ^2 \vartheta
\quad \text{and} \quad
\Delta = r^2 - 2mr + a^2 + q^2 \, ,
\end{equation}
and $m$, $q$ and $a$ are real constants. We shall assume throughout that
\begin{equation}\label{eq:ma}
0 \, < \, m \: , \quad 0 \, \le \, a \: , \quad \sqrt{a^2 + q ^2} \, \le \, m \, .
\end{equation}
In this case, the Kerr-Newman metric describes the spacetime around a rotating
black hole with mass $m$, charge $q$, and specific angular momentum $a$. The
Kerr-Newman metric (\ref{eq:kerr}) contains the Kerr metric ($q=0$), the
Reissner-Nordstr{\"om} metric ($a=0$) and the Schwarzschild metric ($q=0$ and $a=0$)
as special cases which are all discussed, in great detail, in Chandrasekhar \cite{Chandrasekhar1983};
for the Kerr metric we also refer to O'Neill \cite{ONeill1995}.
By (\ref{eq:ma}), the equation $\Delta = 0$ has two real roots,
\begin{equation}\label{eq:hor}
r_{\pm} = m \pm \sqrt{ m^2 - a^2 - q ^2} \, ,
\end{equation}
which determine the two horizons. We shall restrict to the region
\begin{equation}\label{eq:M+}
M_+ : \quad r_+ < r < \infty \, ,
\end{equation}
which is usually called the {\em domain of outer communication\/} of the Kerr-Newman
black hole. On $M_+$, the coordinates $\varphi$
and $\vartheta$ range over $S^2$, the coordinate $t$ ranges over ${\mathbb{R}}$,
and the coordinate $r$ ranges over an open interval which is diffeomorphic to
${\mathbb{R}}$; hence $M_+ \simeq S^2 \times {\mathbb{R}}^2$.
From now on we will consider the spacetime $(M_+,g)$, where
$g$ denotes the restriction of the Kerr-Newman metric (\ref{eq:kerr}) with (\ref{eq:ma})
to the domain $M_+$ given by (\ref{eq:M+}). For the sake of brevity, we will
refer to $(M_+,g)$ as to the {\em exterior Kerr-Newman spacetime}. As a matter of
fact, $(M_+,g)$ is a globally hyperbolic spacetime; the Boyer-Lindquist time
coordinate $t$ gives a foliation of $M_+$ into Cauchy surfaces $t = {\mathrm{constant}}$.
Together with the lines perpendicular to these surfaces, we get an orthogonal
splitting of the form (\ref{eq:globhyp}). Observers with worldlines perpendicular
to the surfaces $t = {\mathrm{constant}}$ are called {\em zero-angular-momentum
observers\/} or \emph{locally non-rotating observers}. In contrast to the
worldlines perpendicular to the surfaces $t = {\mathrm{constant}}$, the integral
curves of the Killing vector field $\partial _t$ are {\em not\/} timelike on all
of $M_+ \,$; they become spacelike inside the so-called {\em ergosphere\/}, which
is characterized by the inequality $\Delta < a^2 \mathrm{sin}^2 \vartheta$.
For $a \neq 0$ it is impossible to find a Killing vector field
which is timelike on all of $M_+$; in this sense, the exterior Kerr-Newman
spacetime is {\em not\/} a stationary spacetime.
In the rest of this section we discuss the notions of centrifugal
force and Coriolis force for observers on circular orbits around the
axis of rotational symmetry in the exterior Kerr-Newman spacetime $(M_+,g)$. For
background information on these notions we refer to the work of
Marek Abramowicz and his collaborators \cite{AbramowiczCarterLasota1988,
Abramowicz1990, AbramowiczNurowskiWex1993} which was
mentioned already in the introduction. For our discussion it will be convenient
to introduce on $M_+$ the orthonormal basis
\begin{gather}
E_0 = \frac{1}{\rho \, \sqrt{\Delta}} \Big( (r^2 + a^2) \partial _t
+ a \partial _{\varphi} \Big) \: ,
\nonumber
\\
E_1 = \frac{1}{\rho \, {\mathrm{sin}} \, \vartheta} \,
\big( \partial _{\varphi} + a \, {\mathrm{sin}} ^2 \vartheta \, \partial _t \big) \: ,
\label{eq:E}
\\
E_2 = \frac{1}{\rho} \, \partial _{\vartheta} \: ,
\qquad
E_3 = \frac{\sqrt{\Delta}}{\rho} \, \partial _r \: ,
\nonumber
\end{gather}
whose dual basis is given by the covector fields
\begin{gather}
-g(E_0 , \, \cdot \, ) = \frac{\sqrt{\Delta}}{\rho} \, \big( \, dt \, - \,
a \, {\mathrm{sin}}^2 \vartheta \, d \varphi \, \big) \: ,
\nonumber
\\
g(E_1 , \, \cdot \, ) = \frac{\mathrm{sin} \, \vartheta}{\rho} \,
\big( \, (r^2+a^2) \; d \varphi \, - \, a \, dt \, \big) \: ,
\label{eq:gE}
\\
g(E_2 , \, \cdot \, ) = \rho \, d \vartheta \: ,
\qquad
g(E_3 , \, \cdot \, ) = \frac{\rho}{\sqrt{\Delta}} \, dr \: .
\nonumber
\end{gather}
Henceforth we refer to the integral curves of the timelike basis field
$E_0$ as to the worldlines of the {\em standard observers\/} in $(M_+,g)\,$.
For later use we list all non-vanishing Lie brackets of the $E_i$:
\begin{gather}
[E_0,E_2] \, = \, - \, \frac{a^2}{\rho ^3} \,
\mathrm{cos} \, \vartheta \, \mathrm{sin} \, \vartheta \, E_0 \; ,
\nonumber
\\
[E_0,E_3] \, = \, \Big( \, \frac{r-m}{\rho \sqrt{\Delta}} \, - \,
\frac{r \sqrt{\Delta}}{\rho ^3} \, \Big) \, E_0 \, + \,
\frac{2 \, r \, a \, \mathrm{sin} \, \vartheta}{\rho ^3} \, E_1 \; ,
\nonumber
\\
[E_1,E_2] \, = \,
\frac{( \rho ^2 + a^2 \mathrm{sin} ^2 \vartheta ) \,
\mathrm{cos} \, \vartheta}{\rho ^3 \, \mathrm{sin} \, \vartheta} \, E_1 \, - \,
\frac{2 \, a \, \sqrt{\Delta} \, \mathrm{cos} \, \vartheta}{\rho ^3} \, E_0 \; ,
\label{eq:Lie}
\\
[E_1,E_3] \, = \, \frac{r \, \sqrt{\Delta}}{\rho ^3} \, E_1 \; ,
\nonumber
\\
[E_2,E_3] \, = \, \frac{r \, \sqrt{\Delta}}{\rho ^3} \, E_2 \, + \,
\frac{a^2 \mathrm{cos} \, \vartheta \, \mathrm{sin} \, \vartheta}{\rho ^3}
\, E_3 \; .
\nonumber
\end{gather}
For every $v \in [0,1\, [ \, $, the integral curves of the vector field
\begin{equation}\label{eq:U}
U = \frac{E_0 \pm v \, E_1}{\sqrt{1-v^2}}
\end{equation}
can be interpreted as the worldlines of observers who circle along the $\varphi$-lines
around the axis of rotational symmetry of the Kerr-Newman spacetime. The number $v$
gives the velocity (in units of the velocity of light) of these observers with respect
to the standard observers. For the upper sign in (\ref{eq:U}), the motion relative
to the standard observers is in the positive $\varphi$-direction and thus co-rotating
with the black hole (because of our assumption $a \ge 0$), for the negative sign it
is in the negative $\varphi$-direction and thus counter-rotating. Please
note that $g(U,U)=-1$, which demonstrates that the integral curves of $U$ are
parametrized by proper time.
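Indeed, since the basis (\ref{eq:E}) is orthonormal, the mixed term in $g(U,U)$
drops out and
\begin{equation*}
g(U,U) \: = \: \frac{g(E_0,E_0) \, + \, v^2 \, g(E_1,E_1)}{1-v^2}
\: = \: \frac{-1+v^2}{1-v^2} \: = \: -1 \: .
\end{equation*}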
In general, $U$ is non-geodesic, $\nabla _U U \neq 0$, i.e., one needs a thrust to
stay on an integral curve of $U$. Correspondingly, relative to a $U$-observer a
freely falling particle undergoes an ``inertial acceleration'' measured by
$- \nabla _U U$. To calculate this quantity, we write
\begin{equation}\label{eq:compacc}
-g(\nabla _U U,E_i) =
-U g(U,E_i) + g(U ,\nabla _U E_i) = -U g(U,E_i)+ g(U,[U,E_i]) \, .
\end{equation}
The first term on the right-hand side vanishes, and the second term can be
easily calculated with the help of (\ref{eq:U}) and (\ref{eq:Lie}),
for $i=0,1,2,3$. We find
\begin{equation}\label{eq:delUU}
-g(\nabla _U U , \, \cdot \, ) =
\, A_{\mathrm{grav}} \, + \, A_{\mathrm{Cor}} \, + \, A_{\mathrm{cent}}
\end{equation}
where the covector fields
\begin{gather}
\label{eq:Agrav}
A_{\mathrm{grav}} = \frac{\Delta \, r - \rho ^2 (r-m)}{\rho^2 \Delta} \, dr
+ \frac{a^2}{\rho^2} \; {\mathrm{sin}} \, \vartheta \;
{\mathrm{cos}}\, \vartheta \; d \vartheta \; ,
\\
\label{eq:ACor}
A_{\mathrm{Cor}}= \, \pm \, \frac{v}{(1-v^2)} \; \frac{2 \, a \, \sqrt{\Delta}}{\rho ^2} \;
\Big( \, \frac{r}{\Delta} \, {\mathrm{sin}} \, \vartheta \; dr \; +
\; {\mathrm{cos}}\, \vartheta \; d \vartheta \, \Big) \; ,
\\
\label{eq:Acent}
A_{\mathrm{cent}}= \frac{v^2}{(1-v^2)} \;
\Big( \, \frac{2 \, r \, \Delta-\rho^2 (r-m)}{\rho^2 \Delta} \, dr + \frac{( \,
\rho^2+ 2 \, a^2 {\mathrm{sin}}^2 \vartheta \, ){\mathrm{cos}}\, \vartheta}{
\rho ^2 \, {\mathrm{sin}}\, \vartheta}
\; d\vartheta \, \Big)
\end{gather}
give, respectively, the gravitational, the Coriolis, and the centrifugal acceleration
of a freely falling particle relative to the $U$-observers. (Multiplication by the
particle's mass gives the corresponding ``inertial force''.) Here the decomposition of
the total inertial acceleration into its three contributions is made according to
the same rule as in Newtonian mechanics: The gravitational acceleration is independent
of $v$, the Coriolis acceleration is odd with respect to $v$, and the centrifugal acceleration
is even with respect to $v$. In \cite{FoertschHassePerlick2003} it was shown that, according to this rule,
gravitational, Coriolis and centrifugal acceleration are unambiguous whenever a timelike
2-surface with a timelike vector field has been specified; here we apply this procedure
to each 2-surface $(r, \vartheta ) = \mathrm{constant}$ with the timelike vector field
$E_0$.
Up to the positive factor $v / (1-v^2)$, the sum of Coriolis and centrifugal
acceleration is equal to
\begin{equation}\label{eq:Z}
\begin{split}
Z_{\pm}(v) = \pm \, \frac{2 \, a \, \sqrt{\Delta}}{\rho ^2} \;
& \Big( \, \frac{r}{\Delta} \, {\mathrm{sin}} \, \vartheta \; dr \; +
\; {\mathrm{cos}}\, \vartheta \; d \vartheta \, \Big)
\\
+ \, v \,
\Big( \, \frac{2 \, r \, \Delta-\rho^2 (r-m)}{\rho^2 \, \Delta} \, d &r +
\frac{(\, \rho^2+ 2 \, a^2 \, {\mathrm{sin}}^2 \vartheta \, ) \,
{\mathrm{cos}}\, \vartheta}{\rho ^2 \, {\mathrm{sin}}\, \vartheta}
\; d\vartheta \, \Big) \; .
\end{split}
\end{equation}
If we exclude the Reissner-Nordstr{\"o}m case $a=0$, the Coriolis force
dominates the centrifugal force for small $v$. To investigate the behavior
for $v$ close to the velocity of light, we consider the limit $v \rightarrow 1$.
By a straightforward calculation we find that
\begin{equation}\label{eq:ZPsi}
Z_{\pm} (v) \: {\underset{v \to 1}{\longrightarrow}} \:
\frac{\mathrm{sin} \, \vartheta}{\rho ^2 \sqrt{\Delta}} \,
\big( r^2 + a^2 \pm a \, \sqrt{\Delta} \; {\mathrm{sin}}\, \vartheta \, \big)^2
\; d \Psi_{\pm} \; ,
\end{equation}
where
\begin{equation}\label{eq:dPsi}
\begin{split}
d \Psi _{\pm} \; & = \;
\frac{
2\, r \, \Delta -(r-m) \, \rho^2
\pm 2 \, a \, r \, \sqrt{\Delta} \; {\mathrm{sin}}\, \vartheta
}{
\sqrt{\Delta} \; {\mathrm{sin}} \, \vartheta \,
\big( r^2 +a^2 \pm \, a \, \sqrt{\Delta} \; {\mathrm{sin}} \, \vartheta \, \big)^2
} \; \; dr
\\
& + \;
\frac{
\big( \rho^2 + 2 \, a^2 {\mathrm{sin}}^2 \vartheta \pm
2 \, a \, \sqrt{\Delta} \; {\mathrm{sin}} \, \vartheta \, \big) \, \sqrt{\Delta} \;
{\mathrm{cos}}\, \vartheta
}{
{\mathrm{sin}}^2 \vartheta \,
\big( r^2 +a^2 \pm \, a \sqrt{\Delta} \; {\mathrm{sin}} \, \vartheta \, \big)^2
} \; \; d \vartheta
\end{split}
\end{equation}
is the differential of the function
\begin{equation}\label{eq:Psi}
\Psi_{\pm}
= \frac{
- \frac{1}{{\mathrm{sin}} \, \vartheta} \mp \frac{a}{\sqrt{\Delta}}
}{
\frac{r^2+a^2}{\sqrt{\Delta}} \pm a \, {\mathrm{sin}} \, \vartheta
} \: .
\end{equation}
Because of $\, {\mathrm{sin}} \, \vartheta \,$ in the denominator, both
$\Psi_-$ and $\Psi_+$ are singular along the axis. $\Psi_+$ is negative
on all of $M_+$ whereas $\Psi_-$ is negative outside and positive inside
the ergosphere.
From (\ref{eq:ZPsi}) we read off that, in the limit $v \rightarrow 1$, the sum of
Coriolis and centrifugal force is perpendicular to the surfaces
$\Psi _{\pm} = {\mathrm{constant}}$ and points in the direction of increasing
$\Psi_{\pm}$. In this limit, we may thus view the function $\Psi_+$ (or
$\Psi_-$, resp.) as a Coriolis-plus-centrifugal potential for co-rotating
(or counter-rotating, resp.) observers. The surfaces $\Psi _{\pm} =
{\mathrm{constant}}$ are shown in Figure \ref{fig:Psi}.
It is not difficult to see that $\Psi_{\pm}$ is independent of the
family of observers with respect to which the inertial accelerations
have been defined, as long as their 4-velocity is a linear combination of
$\partial _t$ and $\partial _{\varphi}$. We have chosen the standard
observers; a different choice would lead to different formulas for the
inertial accelerations (\ref{eq:Agrav}), (\ref{eq:ACor}) and (\ref{eq:Acent}),
but to the same $\Psi_{\pm}$. For the sake of comparison, the reader may
consult Nayak and Vishveshwara \cite{NayakVishveshwara1996} where the
inertial accelerations are calculated with respect to the zero angular momentum
observers. Also, it should be mentioned that the potentials
$\Psi_+$ and $\Psi_-$, or closely related functions, have been used already
by other authors. The quantities $\Omega _{c \pm}$, e.g., introduced by de
Felice and Usseglio-Tomasset \cite{deFeliceUsseglio1991} in their analysis
of physical effects related to centrifugal force reversal in the equatorial
plane of the Kerr metric, are related to our potentials by $\Omega _{c \pm} =
\mp \Psi_{\pm} |_{\vartheta = \pi /2}$.
In the Reissner-Nordstr{\"o}m case $a=0$, the Coriolis acceleration (\ref{eq:ACor})
vanishes identically and
\begin{equation}\label{eq:Psi0}
\Psi = \Psi_+ = \Psi_- =
- \frac{\sqrt{\, r^2 \, - \, 2 \, m \, r \, + \, q^2 \,}}{
r^2 \, {\mathrm{sin}} \, \vartheta}
\end{equation}
is a potential for the centrifugal acceleration in the sense that $A_{\mathrm{cent}}$
is a multiple of $d \Psi$. In this case, the surfaces $\Psi = {\mathrm{constant}}$
coincide with what Abramowicz \cite{Abramowicz1990} calls the \emph{von Zeipel cylinders}.
Abramowicz's Figure 1 in \cite{Abramowicz1990}, which shows the von Zeipel
cylinders in the Schwarzschild spacetime, coincides with the $a \to 0$ limit of our
Figure \ref{fig:Psi}, which shows the surfaces $\Psi_+ = \mathrm{constant}$ and
$\Psi_- = \mathrm{constant}$ in the Kerr spacetime. (The notion of von Zeipel cylinders
has also been defined in the Kerr metric, see \cite{KozlowskiJaroszynskiAbramowicz1978},
for observers of a specified angular velocity. However, these angular-velocity-dependent
von Zeipel cylinders are not related to the potentials $\Psi_+$ and $\Psi_-$ in the Kerr
spacetime.)
By construction, the function $\Psi _{\pm}$ has the following property. If
we send a lightlike geodesic tangential to a $\varphi$-line in the positive
(respectively negative) $\varphi$-direction, it will move away from this
$\varphi$-line in the direction of the negative gradient of $\Psi _+$ (respectively
$\Psi _ -$). Thus, each zero of the differential $d \Psi _+$ (respectively
$d \Psi _-$) indicates a co-rotating (respectively counter-rotating) circular
lightlike geodesic, i.e., a ``photon circle''. By (\ref{eq:dPsi}),
$d \Psi _{\pm}$ vanishes if
\begin{equation}\label{eq:crit}
{\mathrm{cos}}\, \vartheta \, =0 \quad \text{and} \quad
2 \, r \, \Delta - (r-m) \, \rho^2
\pm 2 \, a \, r \, \sqrt{\Delta} \; {\mathrm{sin}} \, \vartheta \, =0 \; .
\end{equation}
By writing $\Delta$ and $\rho^2$ explicitly, we see that (\ref{eq:crit}) is true
at $\vartheta = \pi /2$ and $r = r_{\pm}^{\mathrm{ph}}$, where
$r_{\pm}^{\mathrm{ph}}$ is defined by the equation
\begin{equation}\label{eq:rph}
\big( r_{\pm}^{\mathrm{ph}} \big)^2 - 3 \, m \, r_{\pm}^{\mathrm{ph}}
+ 2 \, a^2 \, + \, 2 \, q ^2 \, = \, \mp \;
2 \, a \, \sqrt{ \big( r_{\pm}^{\mathrm{ph}} \big)^2 -
2 \, m \, r_{\pm}^{\mathrm{ph}} \, + \, a^2 \, + \, q ^2 \,} \, .
\end{equation}
For $0 \, < \, \sqrt{a^2+q^2} \, < \, m \,$, (\ref{eq:rph}) has exactly one
solution for each sign, which satisfies
\begin{equation}\label{eq:order}
r_+ \, < \, r_+^{\mathrm{ph}} \, < \,
\frac{3 \, m}{2} \, + \, \sqrt{ \frac{9 m ^2}{4} \, - \, 2 \, q ^2 \, } \, < \,
r_-^{\mathrm{ph}} \, < \, \, 2 \, m \, + \, 2 \, \sqrt{m^2 -q^2} \, .
\end{equation}
So there is exactly one co-rotating
photon circle in $M_+$, corresponding to the critical point of
$\Psi_+$ at $r_+^{\mathrm{ph}}$, and exactly one counter-rotating photon
circle in $M_+$, corresponding to the critical point of $\Psi_-$
at $r_-^{\mathrm{ph}}$, see Figure \ref{fig:Psi}. (The relation of photon
circles to centrifugal-plus-Coriolis force in the limit $v \to 1$ is also
discussed by Stuchlik, Hledik and Jur{\'a}n \cite{StuchlikHledikJuran2000};
note, however, that their work is restricted to the equatorial plane of
the Kerr-Newman spacetime throughout.)
In the Reissner-Nordstr{\"o}m case, $a=0$, we have $r_+^{\mathrm{ph}} =
r_-^{\mathrm{ph}} = \frac{3m}{2} + \sqrt{\frac{9 m^2}{4} - 2 q^2}$
(cf., e.g., Chandrasekhar \cite{Chandrasekhar1983}, p. 218). If we keep $m$ and $q$ fixed
and vary $a$ from $0$ to the extreme value $\sqrt{m^2-q^2}$, $r_+^{\mathrm{ph}}$
decreases from $\frac{3}{2}m + \sqrt{\frac{9 \, m^2}{4} -2 q^2}$
to $m$ whereas $r_-^{\mathrm{ph}}$ increases from $\frac{3}{2}m +
\sqrt{\frac{9 \, m^2}{4} - 2 q^2}$ to $2m + 2 \sqrt{m^2 - q^2}$. As an aside, we
mention that, although $r_+^{\mathrm{ph}}$ and $r_+$ both go to $m$ in the
extreme case, the proper distance between the co-rotating photon circle at
$r_+^{\mathrm{ph}}$ and the horizon at $r_+$ does not go to zero; for the
case $q=0$ this surprising feature is discussed in Chandrasekhar \cite{Chandrasekhar1983},
p. 340.
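For concrete parameter values, equation (\ref{eq:rph}) is easily solved numerically.
The following Python sketch is a small illustration which we add here; the helper
name, the bisection bracket $[r_+ , 10\,m]$ and the sample parameters are our
choices, and the non-extreme case $a^2+q^2<m^2$ is assumed. It recovers the
Schwarzschild value $3m$ and shows how the two photon circles split for $a \neq 0$:
\begin{verbatim}
import math

def photon_circle(m, a, q, corotating=True):
    # Solve eq. (rph) by bisection; sign = +1 realizes the upper sign there.
    sign = 1.0 if corotating else -1.0
    def f(r):
        Delta = r * r - 2.0 * m * r + a * a + q * q
        return (r * r - 3.0 * m * r + 2.0 * (a * a + q * q)
                + sign * 2.0 * a * math.sqrt(Delta))
    lo = m + math.sqrt(m * m - a * a - q * q) + 1e-12  # just outside r_+
    hi = 10.0 * m                                      # f(lo) < 0 < f(hi)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

print(photon_circle(1.0, 0.0, 0.0))         # Schwarzschild photon sphere: 3.0
print(photon_circle(1.0, 0.9, 0.0))         # co-rotating Kerr circle, < 3m
print(photon_circle(1.0, 0.9, 0.0, False))  # counter-rotating circle, > 3m
\end{verbatim}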
From (\ref{eq:dPsi}) we can read off the sign of $\partial _r \Psi _{\pm}$ at
each point. We immediately find the following result.
\begin{proposition}\label{prop:centri}
Decompose the exterior Kerr-Newman spacetime into the sets
\begin{eqnarray}
M_{\mathrm{in}} \; : \qquad &
2 \, r \, \Delta - (r-m) \, \rho ^2 \: <
\: - \, 2 \, a \, r \, \sqrt{\Delta} \, {\mathrm{sin}} \, \vartheta
\label{eq:Min}
\\
K \; \; : \qquad &
\, - \, 2 \, a \, r \, \sqrt{\Delta} \, {\mathrm{sin}} \, \vartheta \: \le
\: 2 \, r \, \Delta - (r-m) \, \rho ^2 \: \le
\: 2 \, a \, r \, \sqrt{\Delta} \, {\mathrm{sin}} \, \vartheta
\label{eq:K}
\\
M_{\mathrm{out}} \; : \qquad &
2 \, a \, r \, \sqrt{\Delta} \, {\mathrm{sin}} \, \vartheta \: <
\: 2 \, r \, \Delta - (r-m) \, \rho ^2 \, ,
\label{eq:Mout}
\end{eqnarray}
so $M_+ = M_{\mathrm{in}} \cup K \cup M_{\mathrm{out}}$, see
Figure $\ref{fig:K}$. Then
\begin{eqnarray}
\partial _r \Psi _+ < 0 \quad \text{and} \quad \partial _r \Psi _- < 0
\quad & \text{on } \, M_{\mathrm{in}} \, ,
\label{eq:in}
\\
\partial _r \Psi _+ > 0 \quad \text{and} \quad \partial _r \Psi _- < 0
\quad & \text{on the interior of } \, K \, ,
\label{eq:trans}
\\
\partial _r \Psi _+ > 0 \quad \text{and} \quad \partial _r \Psi _- > 0
\quad & \text{on } \, M_{\mathrm{out}} \, .
\label{eq:out}
\end{eqnarray}
\end{proposition}
The inequality $\partial _r \Psi _{\pm} > 0$ is true for both signs
if and only if, for $v$ sufficiently large, the sum of Coriolis and
centrifugal force is pointing in the direction of increasing $r$ for
co-rotating and counter-rotating observers. An equivalent condition is that
the centrifugal force points in the direction of increasing $r$ and dominates
the Coriolis force for $v$ sufficiently large. This is the situation we are familiar
with from Newtonian physics. According to Proposition \ref{prop:centri}, however, in
the Kerr-Newman spacetime this is true only in the region $M_{\mathrm{out}}$.
In the interior of the intermediate region $K$ the direction of
centrifugal-plus-Coriolis force for large $v$ is reversed for
counter-rotating observers while still normal for co-rotating observers.
In the region $M_{\mathrm{in}}$, finally, it is reversed both for
co-rotating and for counter-rotating observers.
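The decomposition (\ref{eq:Min})--(\ref{eq:Mout}) is also convenient for numerical
work. The following Python snippet (a sketch we add for illustration; the function
name and the sample values are ours) classifies points of $M_+$; in the equatorial
plane the photon region shows up as an interval in $r$ between the two photon
circle radii:
\begin{verbatim}
import math

def classify(r, theta, m=1.0, a=0.5, q=0.0):
    # Decide between M_in, K and M_out, cf. eqs. (Min)-(Mout).
    Delta = r * r - 2.0 * m * r + a * a + q * q
    rho2 = r * r + (a * math.cos(theta)) ** 2
    lhs = 2.0 * r * Delta - (r - m) * rho2
    rhs = 2.0 * a * r * math.sqrt(Delta) * math.sin(theta)
    if lhs < -rhs:
        return "M_in"
    return "K" if lhs <= rhs else "M_out"

for r in [1.9, 2.3, 2.8, 3.5, 4.0]:  # horizon at r_+ ~ 1.87 for m=1, a=0.5
    print(r, classify(r, math.pi / 2))
# prints: M_in, M_in, K, K, M_out
\end{verbatim}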
The relevance of the sets $M_{\mathrm{out}}$, $M_{\mathrm{in}}$ and $K$
in view of lightlike geodesics is demonstrated in the following proposition.
\begin{proposition}\label{prop:convex}
\begin{itemize}
\item[\emph{(a)}]
In the region $M_{\mathrm{out}}$, the radius coordinate $r$ cannot have
other extrema than strict local minima along a lightlike geodesic.
\item[\emph{(b)}]
In the region $M_{\mathrm{in}}$, the radius coordinate $r$ cannot have
other extrema than strict local maxima along a lightlike geodesic.
\item[\emph{(c)}]
Through each point of $K$ there is a spherical lightlike
geodesic. $($Here ``spherical'' means that the geodesic is completely
contained in a sphere $r = \mathrm{constant}.)$
\end{itemize}
\end{proposition}
\begin{proof}
Let $X$ be a lightlike and geodesic vector field on $(M_+,g)$, i.e., $g(X,X)=0$ and
$\nabla_X X = 0$. To prove (a) and (b), we have to demonstrate that the implication
\begin{equation}\label{eq:Xpos}
X r = 0 \quad \Rightarrow \quad XXr > 0
\end{equation}
is true at all points of $M_{\mathrm{out}}$ and that the implication
\begin{equation}\label{eq:Xneg}
X r = 0 \quad \Rightarrow \quad XXr < 0
\end{equation}
is true at all points of $M_{\mathrm{in}}$. Here $X r$ is
to be read as ``the derivative operator $X$ applied to the function $r$''.
The condition $\nabla_X X=0$ implies
\begin{equation}\label{eq:XXr}
XXr = X dr(X)= X \, \Big( \, \frac{\sqrt{\Delta}}{\rho} \, g (E_3, X ) \, \Big)
\, = \, \frac{\sqrt{\Delta}}{\rho} \, g \big( \nabla _X E_3 , X \big) +
\Big( \, X \, \frac{\sqrt{\Delta}}{\rho} \, \Big) \, g(E_3,X) \; ,
\end{equation}
where we have used the basis vector field $E_3$ from (\ref{eq:E}) and (\ref{eq:gE}).
Using these orthonormal basis vector fields, we can write $X$ in the form
\begin{equation}\label{eq:XE}
X \, = \, E_0 \, + \, {\mathrm{cos}} \, \alpha \; E_1 \,
+ \, {\mathrm{sin}} \, \alpha \; E_2
\end{equation}
at all points where $Xr=0$. (A non-zero factor of $X$ is irrelevant because $X$
enters quadratically into the right-hand side of (\ref{eq:XXr}).)
Then (\ref{eq:XXr}) takes the form
\begin{gather}
\frac{\rho}{\sqrt{\Delta}} \, XXr \, = \,
g \big( \nabla _{E_0} E_3 , E_0 \big) \, + \,
\mathrm{sin} \, \alpha \, \Big( \,
g \big( \nabla _{E_2} E_3 , E_0 \big) \, + \,
g \big( \nabla _{E_0} E_3 , E_2 \big) \, \Big) \, + \,
\nonumber
\\
\mathrm{cos} \, \alpha \, \Big( \,
g \big( \nabla _{E_1} E_3 , E_0 \big) \, + \,
g \big( \nabla _{E_0} E_3 , E_1 \big) \, \Big) \, + \,
\mathrm{sin}^2 \alpha \,
g \big( \nabla _{E_2} E_3 , E_2 \big) \, + \,
\nonumber
\\
\mathrm{cos} ^2 \alpha \,
g \big( \nabla _{E_1} E_3 , E_1 \big) \, = \,
g \big( [E_0 , E_3 ], E_0 \big) \, + \,
\label{eq:XXrE}
\\
\mathrm{sin} \, \alpha \, \Big( \,
g \big( [ E_2 , E_3 ] , E_0 \big) \, + \,
g \big( [ E_0 , E_3 ] , E_2 \big) \, \Big) \, + \,
\mathrm{cos} \, \alpha \, \Big( \,
g \big( [ E_1 , E_3 ] , E_0 \big) \, + \,
g \big( [ E_0 , E_3 ] , E_1 \big) \, \Big) \, + \,
\nonumber
\\
\mathrm{sin}^2 \alpha \,
g \big( [ E_2 , E_3 ] , E_2 \big) \, + \,
\mathrm{cos} ^2 \alpha \,
g \big( [ E_1 , E_3 ] , E_1 \big) \; .
\nonumber
\end{gather}
If we insert the Lie brackets from (\ref{eq:Lie}) we find
\begin{equation}\label{eq:alpha}
\rho ^4 \, X X r \, = \,
2 \, r \, \Delta - (r-m) \, \rho^2 + 2 \, a \, r \, \sqrt{\Delta} \;
{\mathrm{sin}} \, \vartheta \; {\mathrm{cos}} \, \alpha \, .
\end{equation}
Now we compare this expression with (\ref{eq:Min}), (\ref{eq:K})
and (\ref{eq:Mout}).
If ${\mathrm{cos}} \, \alpha$ runs through all possible values from $-1$ to 1,
the right-hand side of (\ref{eq:alpha}) stays positive on $M_{\mathrm{out}}$
and negative on $M_{\mathrm{in}}$. This proves part (a) and part (b).
At each point of $K$ there is exactly one value of ${\mathrm{cos}} \,
\alpha$ such that the right-hand side of (\ref{eq:alpha}) vanishes. This
assigns to each point of $K$ a lightlike direction such that the integral
curves of the resulting direction field are spherical lightlike geodesics.
This proves part (c).
\end{proof}
In view of part (c) of Proposition \ref{prop:convex} we refer to the closed
region $K$ as to the \emph{photon region} of the exterior Kerr-Newman
spacetime. Along each spherical lightlike geodesic in $K$ the
$\vartheta$-coordinate oscillates between extremal values $\vartheta _0$
and $\pi - \vartheta _0$, corresponding to boundary points of $K$, see Figure
\ref{fig:K}; the $\varphi$-coordinate either increases or decreases monotonically.
In the Reissner-Nordstr{\"o}m case $a=0$, where (\ref{eq:Psi0}) is a
potential for the centrifugal force, the photon region $K$ shrinks
to the \emph{photon sphere} $r \, = \,
\frac{3}{2} m + \sqrt{ \frac{9 \, m^2}{4} \, - \, 2 \, q ^2 \, }$
and Proposition \ref{prop:centri} reduces to the known fact that centrifugal
force reversal takes place at the photon sphere.
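This value of $r$ can be read off directly from the second equation in
(\ref{eq:crit}): for $a=0$ we have $\rho^2 = r^2$, hence
\begin{equation*}
2 \, r \, \Delta \, - \, (r-m) \, r^2 \: = \:
r \, \big( \, r^2 \, - \, 3 \, m \, r \, + \, 2 \, q^2 \, \big) \: = \: 0 \: ,
\end{equation*}
and the only root of this equation outside the horizon is
$r = \frac{3}{2} m + \sqrt{ \frac{9 \, m^2}{4} - 2 \, q ^2 }$.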
We end this section with a word of caution as to terminology. In part (c) of
Proposition \ref{prop:convex} we have referred to the set $r=\mathrm{constant}$ as
to a ``sphere''. This is indeed justified in the sense that, for each fixed $t$,
fixing the radius coordinate $r$ gives a two-dimensional submanifold of $M_+$ that
is diffeomorphic to the 2-sphere. Moreover, in our Figures \ref{fig:Psi} and
\ref{fig:K} the sets $r=\mathrm{constant}$ are represented as (meridional
cross-sections of) spheres. Note, however, that the Kerr-Newman metric does
\emph{not} induce an isotropic metric on these spheres (unless $a=0$), so they
are not ``round spheres'' in the metrical sense.
\section{Multiple imaging in the Kerr-Newman spacetime}\label{sec:imaging}
It is now our goal to discuss multiple imaging in the exterior Kerr-Newman spacetime
$(M_+,g)$. To that end we fix a point $p$ and a timelike curve $\gamma$ in $M_+$
and we want to get some information about the past-pointing lightlike geodesics
from $p$ to $\gamma$. The following proposition is an immediate consequence of
Proposition \ref{prop:convex}.
\begin{proposition}\label{prop:shell}
Let $p$ be a point and $\gamma$ a timelike curve in the exterior Kerr-Newman
spacetime. Let
\begin{equation}\label{eq:shell}
\Lambda \, : \quad r_a < r < r_b
\end{equation}
denote the smallest spherical shell, with $r_+ \le r_a < r_b \le \infty$,
such that $p$, $\gamma$ and the region $K$ defined by {\em (\ref{eq:K})}
are completely contained in ${\overline{\Lambda}}\, ( \, = \, $closure of
$\Lambda$ in $M_+)$. Then all lightlike geodesics that join $p$ and
$\gamma$ are confined within ${\overline{\Lambda}}$.
\end{proposition}
\begin{proof}
Along a lightlike geodesic that leaves and re-enters $\overline{\Lambda}$ the
radius coordinate $r$ must have either a maximum in the region $M_{\mathrm{out}}$
or a minimum in the region $M_{\mathrm{in}}$. Proposition \ref{prop:convex} makes sure
that this cannot happen.
\end{proof}
By comparison with Proposition \ref{prop:centri} we see that, among all spherical
shells whose closures in $M_+$ contain $p$ and $\gamma$, the shell $\Lambda$ of
Proposition \ref{prop:shell} is the smallest shell such that at all points of
the boundary of $\Lambda$ in $M_+$ the gradient of $\Psi _+$ and
the gradient of $\Psi_-$ are pointing in the direction away from $\Lambda$.
Based on Proposition \ref{prop:shell}, we will later see that there is a close
relation between multiple imaging and centrifugal-plus-Coriolis force reversal
in the Kerr-Newman spacetime.
Proposition \ref{prop:shell} tells us to what region the lightlike geodesics between
$p$ and $\gamma$ are confined, but it does not tell us anything about the number of
these geodesics. To answer the latter question, we now apply Theorem \ref{theo:Uh}
to the exterior Kerr-Newman spacetime $(M_+,g)$.
\begin{proposition}\label{prop:infinite}
Consider, in the exterior Kerr-Newman spacetime $(M_+,g)$, a point $p$ and a smooth
future-pointing timelike curve $\gamma : \; ] - \infty , \, \tau _a \; [ \;
\longrightarrow M_+$, with $- \infty < \tau _a \le \infty$, which is parametrized by
the Boyer-Lindquist time coordinate $t$, i.e., the $t$-coordinate of the point
$\gamma (\tau)$ is equal to $\tau$. Assume {\em (i)\/} that $\gamma$
does not meet the caustic of the past light-cone of $p$, and {\em (ii)\/} that
for $\tau \rightarrow - \infty$ the radius coordinate $r$ of the point $\gamma (\tau)$
remains bounded and bounded away from $r_+$. $($The last condition means that
$\gamma (\tau)$ goes neither to infinity nor to the horizon for
$\tau \rightarrow - \infty$.$)$ Then there is an infinite sequence
$(\lambda _n) _{n \in {\mathbb{N}}}$ of mutually different past-pointing
lightlike geodesics from $p$ to $\gamma$. For $n \rightarrow \infty$, the
index of $\lambda _n$ goes to infinity. Moreover, if we denote the point where
$\lambda _n$ meets the curve $\gamma$ by $\gamma ( \tau _n )$, then $\tau _n
\rightarrow - \infty$ for $n \rightarrow \infty$.
\end{proposition}
\begin{proof}
We want to apply Theorem \ref{theo:Uh} to the exterior Kerr-Newman spacetime $(M_+,g)$.
To that end, the first thing we have to find is an orthogonal splitting of the
exterior Kerr-Newman spacetime that satisfies the metric growth condition. As in the
original Boyer-Lindquist coordinates the $t$-lines are not orthogonal to the
surfaces $t = {\mathrm{constant}}$, we change to new coordinates
\begin{equation}\label{eq:coord}
x^1 = r \, , \quad
x^2 = \vartheta \, , \quad
x^3 = \varphi - u(r, \vartheta ) \, t \, , \quad
t = t \, ,
\end{equation}
with
\begin{equation}\label{eq:u}
u(r,\vartheta ) \, = \,
\frac{a \, ( 2 \, m \, r \, - \, q^2 )}{\rho ^2 \Delta \, + \, ( 2 \, m \, r \, - \, q^2 ) \, (r^2+a^2)} \; .
\end{equation}
Then the Kerr-Newman metric (\ref{eq:kerr}) takes the orthogonal splitting form
(\ref{eq:globhyp}), with
\begin{equation}\label{eq:gij}
\begin{split}
g_{ij}(x, t ) \, dx^i dx^j =
\rho ^2 \, \Big( \, \frac{dr^2}{\Delta} + d \vartheta ^2 \, \Big) +
\qquad \qquad \qquad \qquad
\\
\frac{{\mathrm{sin}}^2 \vartheta}{\rho^2} \, \big( \, (r^2 +a^2)^2 -
\Delta \, a ^2 \, \mathrm{sin}^2 \vartheta \, \big) \,
\Big( \, t \, \big( \, \frac{\partial u (r, \vartheta)}{\partial r} \, dr +
\frac{\partial u (r, \vartheta)}{\partial \vartheta} \, d \vartheta \, \big) +
d x^3 \, \Big)^2
\end{split}
\end{equation}
and
\begin{equation}\label{eq:f}
f(x, t ) = \frac{\rho ^2 \, \Delta}{
(r^2+a^2)^2 - \Delta \, a ^2 \, \mathrm{sin} ^2 \vartheta} \; .
\end{equation}
Clearly, if we restrict the range of the coordinates $x = (x^1,x^2,x^3)$ to a
compact set, we can find positive constants $A$ and $B$ such that
\begin{equation}\label{eq:ineq}
\frac{g_{ij} (x, t ) v^i v^j}{f(x, t)} \le
(A+B \, |t|)^2 \delta _{ij} v^i v^j \, .
\end{equation}
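Indeed, with $F( t ) = A + B \, | t |$ the integral in (\ref{eq:F}) diverges
logarithmically,
\begin{equation*}
\int_{- \infty} ^0 \frac{d t}{A + B \, | t |} \: = \:
\lim_{T \to \infty} \, \frac{1}{B} \, {\mathrm{ln}} \Big( \frac{A + B \, T}{A} \Big)
\: = \: \infty \: .
\end{equation*}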
Hence this choice of $F$ satisfies the integral condition (\ref{eq:F}), which proves that
our orthogonal splitting satisfies the metric growth condition. -- Our assumptions on
$\gamma$ guarantee that we can find a curve $\gamma ' : {\mathbb{R}} \longrightarrow
M_+$ which, in terms of our orthogonal splitting, is of the form $\gamma '( \tau ) =
\big( \beta ' (\tau), \tau \big)$ such that $\gamma ' (\tau ) = \gamma (\tau )$ for
all $\tau \in \; ]-\infty, \, \tau _b \, ]\,$, with some $\tau _b \in {\mathbb{R}}$. (Introducing
$\gamma '$ is necessary because $\gamma$ need not be defined on all of $\mathbb{R}$.)
As $\gamma$ does not meet the caustic of the past light-cone of $p$, we may
arrange that $\gamma '$ does not meet the caustic of the past light-cone of $p$ either.
As $\gamma$ does not go to the horizon or to infinity for $\tau \rightarrow -
\infty$, the set $\{ \, \beta ' (\tau) \, | \, -\infty < \tau < \tau_b\}$ is
confined to a compact region. Hence, for every sequence $(\tau_i)_{i \in {\mathbb{N}}}$
with $\tau_i \rightarrow - \infty$ the sequence $\big( \beta ' (\tau_i) \big)
_{i \in {\mathbb{N}}}$ must have a convergent subsequence. This shows that all
the assumptions of Theorem \ref{theo:Uh} are satisfied if we replace $\gamma$ with
$\gamma '$. Hence, the theorem tells us that $N_k' \ge B_k$, where $N_k'$ is the
number of past-pointing lightlike geodesics with index $k$ from $p$ to $\gamma '$
and $B_k$ is the $k$-th Betti number of the loop space of $M_+ \simeq S^2 \times
{\mathbb{R}}^2$. As $M_+ \simeq S^2 \times {\mathbb{R}} ^2$ is simply connected but
not contractible to a point, the theorem of Serre \cite{Serre1951} guarantees that $B_k >0$
and, thus, $N_k' >0$ for all but finitely many $k \in {\mathbb{N}}$. Hence, for
almost all positive integers $k$ there is a past-pointing lightlike geodesic of
index $k$ from $p$ to $\gamma '$. This gives us an infinite sequence $(\lambda _n)
_{n \in {\mathbb{N}}}$ of mutually different past-pointing lightlike geodesics
from $p$ to $\gamma '$ such that the index of $\lambda _n$ goes to infinity if
$n \rightarrow \infty$. We denote the point where $\lambda _n$ meets the curve
$\gamma '$ by $\gamma '(\tau _n )$. What remains to be shown is that $\tau _n
\rightarrow - \infty$ for $n \rightarrow \infty \,$; as $\gamma$ coincides with
$\gamma '$ on $\;]-\infty, \, \tau _b \, ]$, this would make sure that all but
finitely many $\lambda _n$ indeed arrive at $\gamma$. So we have to prove
that it is impossible to select infinitely many $\tau _n$ that
are bounded below. By contradiction, assume that we can find a common lower bound
for infinitely many $\tau _n$. As the $\tau_n$ are obviously bounded above by the
value of the Boyer-Lindquist time coordinate at $p$, this implies that the $\tau _n$
have an accumulation point. Hence, for an infinite subsequence of our lightlike
geodesics $\lambda _n$ the end-points $\gamma ' (\tau _n )$ converge to some point
$q$ on $\gamma '$. As $\gamma '$ does not meet the caustic of the past light-cone
of $p$, the past light-cone of $p$ is an immersed 3-dimensional lightlike
submanifold near $q$. We have thus found an infinite sequence of points $\gamma '
(\tau _n )$ that lie in a 3-dimensional lightlike submanifold and, at the same
time, on a timelike curve. Such a sequence can converge to $q$ only if all but
finitely many $\gamma ' (\tau _n)$ are equal to $q$. So there are infinitely many
$\lambda _n$ that terminate at $q$. As there is only one lightlike direction
tangent to the past light-cone of $p$ at $q$, all these infinitely many lightlike
geodesics must have the same tangent direction at $q$. As there are no periodic
lightlike geodesics in the globally hyperbolic spacetime $(M_+,g)$, any two
lightlike geodesics from $p$ to $q$ with a common tangent direction at $q$ must
coincide. This contradicts the fact that the $\lambda _n$ are mutually
different, so our assumption that there is a common lower bound for infinitely
many $\tau _n$ cannot be true.
\end{proof}
The proof shows that in Proposition \ref{prop:infinite} the condition of $\gamma
(\tau )$ going neither to infinity nor to the horizon for $\tau \to - \infty$
can be relaxed a little. It suffices to require that there is a sequence
$(\tau _i)_{i \in \mathbb{N}}$ of time parameters with $\tau _i \to - \infty$
for $i \to \infty$ such that the spatial coordinates of $\gamma ( \tau _i )$
converge. This condition is mathematically weaker than the one given in the
proposition, but there are probably no physically interesting situations where
the former is satisfied and the latter is not.
Proposition \ref{prop:infinite} tells us that a Kerr-Newman black hole produces infinitely
many images for an arbitrary observer, provided that the worldline of the light source
satisfies some (mild) conditions. At the same time, this proposition demonstrates that
the past light-cone of every point $p$ in the exterior Kerr-Newman spacetime must have a non-empty
and, indeed, rather complicated caustic; otherwise it would not be possible to find
a sequence of past-pointing lightlike geodesics $\lambda _n$ from $p$ that intersect
this caustic arbitrarily often for $n$ sufficiently large. Please note that the
last sentence of Proposition \ref{prop:infinite} makes clear that for the existence
of infinitely many images it is essential to assume that the light source has existed
since arbitrarily early times.
In Proposition \ref{prop:shell} we have shown that all lightlike geodesics from $p$
to $\gamma$ are confined to a spherical shell that contains the photon region $K$.
We can now show that, under the assumptions of Proposition \ref{prop:infinite},
almost all past-pointing lightlike geodesics from $p$ to $\gamma$ actually come
arbitrarily close to $K$.
\begin{proposition}\label{prop:limit}
Let $U$ be any open subset of $M_+$ that contains the region $K$ defined by
{\em (\ref{eq:K})}.
Then, if the assumptions of Proposition $\ref{prop:infinite}$ are
satisfied, all but finitely many past-pointing lightlike geodesics from $p$
to $\gamma$ intersect $U$.
\end{proposition}
\begin{proof}
The sequence $( \lambda _n )_{n \in {\mathbb{N}}}$ of Proposition
\ref{prop:infinite} gives us a sequence $( w_n )_{n \in {\mathbb{N}}}$ of
mutually different lightlike vectors $w_n \in T_p M_+$ with $dt(w_n) = -1$
and a sequence $( s_n )_{n \in {\mathbb{N}}}$ of real numbers $s_n \ge 0$
such that ${\mathrm{exp}} _p (s_n w_n)$ is on $\gamma$ for all
$n \in {\mathbb{N}}$. Here ${\mathrm{exp}} _p$ denotes the
exponential map of the Levi-Civita derivative of the Kerr-Newman metric at the point $p$.
Since the 2-sphere consisting of the lightlike vectors $w \in T_p M_+$ with $dt(w) = -1$
(which may be regarded as the observer's celestial sphere) is compact, a subsequence
of $( w_n )_{n \in {\mathbb{N}}}$ must converge to some lightlike vector
$w _{\infty} \in T_p M_+$. By Proposition \ref{prop:infinite},
the sequence $\big( {\mathrm{exp}} _p (s_n w_n) \big)_{n \in {\mathbb{N}}}$ cannot
have an accumulation point, hence $s_n \to \infty$ for $n \to \infty$. Owing to
Proposition \ref{prop:shell}, the radius coordinate $r$ of all points
${\mathrm{exp}}_p (s w_n)$ with $s \in [0,s_n]$ is bounded, so the past-pointing
past-inextendible lightlike geodesic
\begin{equation}\label{eq:exp}
\begin{split}
\lambda _{\infty}: [0, \infty \, [ \; & \longrightarrow \, M_+
\\
s \, \longmapsto & \; \lambda _{\infty} (s) = {\mathrm{exp}}_p(sw_{\infty})
\end{split}
\end{equation}
cannot go to infinity. Let us assume that $\lambda _{\infty}$ goes to the horizon. By
Proposition \ref{prop:shell}, this is possible only in the extreme case $a^2 + q^2=m^2$.
Then along $\lambda _n$ the radius coordinate $r$ must have local minima
arbitrarily close to $r_+$ for $n$ sufficiently large. As, by Proposition \ref{prop:convex},
such minima cannot lie in $M_{\mathrm{in}}$, the geodesic $\lambda _n$ has to meet $K$,
and hence $U$, for $n$ sufficiently large and we are done. Therefore, we may assume for the
rest of the proof that $\lambda _{\infty}$ does not go to the horizon. So along
$\lambda _{\infty}$ the coordinate $r$ must either approach a limit value
$r_{\infty}$ or pass through a maximum and a minimum. In the first case,
both the first and the second derivative of $s \longmapsto r \big( \lambda _{\infty}
(s) \big)$ must go to zero for $s \to \infty$. This is possible only if
$\lambda _{\infty}$ comes arbitrarily close to $K$, because, as we know from
the proof of Proposition \ref{prop:convex}, the implication (\ref{eq:Xpos}) holds
on $M_{\mathrm{out}}$ and the implication (\ref{eq:Xneg}) holds on $M_{\mathrm{in}}$.
In the second case, again by Proposition \ref{prop:convex},
the maximum cannot lie in $M_{\mathrm{out}}$ and the minimum cannot lie in
$M_{\mathrm{in}}$; hence, both the maximum and the minimum must lie in $K$.
In both cases we have, thus, found that $\lambda _{\infty}$ and hence all but finitely
many $\lambda _n$ intersect $U$.
\end{proof}
\section{Discussion and concluding remarks}\label{sec:conclusion}
We have proven, with the help of Morse theory, in Proposition \ref{prop:infinite}
that a Kerr-Newman black hole acts as a gravitational lens that produces
infinitely many images. We emphasize that we made only very mild assumptions
on the motion of the light source and that we considered the whole domain of
outer communication, including the ergosphere. For the sake of comparison,
the reader may consult Section 7.2 of Masiello \cite{Masiello1994} where it is shown, with
the help of Morse theory, that a Kerr black hole produces infinitely many images.
However, Masiello's work is based on a special version of Morse theory which applies
to stationary spacetimes only; therefore he had to exclude the ergosphere from the
discussion, to require that the worldline of the light source is an
integral curve of the Killing vector field $\partial _t$, and to restrict
himself to the case of slowly rotating Kerr black holes, $0 \le a^2 < a_0 ^2$
with some $a_0$ that remained unspecified, instead of the whole
range $0 \le a^2 \le m^2$. On the basis of our Proposition \ref{prop:shell} one
can show that Masiello's $a_0$ is equal to $m/\sqrt{2}$; this is the value
of $a$ where the photon region $K$ reaches the ergosphere (see Figure
\ref{fig:K}), i.e. where $r_+^{\mathrm{ph}} = 2m$. For a Kerr spacetime with
$m \ge a \ge m/\sqrt{2}$ we can find an event $p$ and a $t$-line in
$M_+ \setminus \{\mathrm{ergosphere} \}$ that can be connected by only
finitely many lightlike geodesics in $M_+ \setminus \{\mathrm{ergosphere} \}$.
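The value $a_0 = m/\sqrt{2}$ can be checked by an elementary calculation, which we
sketch here for the reader's convenience. The radius of the co-rotating circular
photon orbit in the equatorial plane of the Kerr spacetime is given by the
well-known formula (see, e.g., the classical work of Bardeen, Press and Teukolsky)
\[
r_+^{\mathrm{ph}} \, = \, 2 \, m \, \Big( 1 + \cos \big[ \tfrac{2}{3} \,
\arccos ( - a/m ) \big] \Big) \, .
\]
Setting $r_+^{\mathrm{ph}} = 2m$, which is the radius of the boundary of the
ergosphere in the equatorial plane, forces the cosine term to vanish, i.e.,
$\arccos ( - a/m ) = 3 \pi /4$, hence $a = m/\sqrt{2}$.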
If an observer sees infinitely many images of a light source, these images must have
at least one accumulation point on the observer's celestial sphere. This follows
immediately from the compactness of the 2-sphere. This accumulation point
corresponds to a limit light ray $\lambda _{\infty}$. In the proof of
Proposition \ref{prop:limit} we have demonstrated that $\lambda _{\infty}$
comes arbitrarily close to the photon region $K$ and that either $\lambda _{\infty}$
approaches a sphere $r=\mathrm{constant}$ or the radius coordinate along
$\lambda _{\infty}$ has a minimum and
a maximum in $K$. (In the extreme case $a^2+q^2=m^2$ the ray $\lambda _{\infty}$
may go to the inner boundary of $M_+$.) This is all one can show with the help of
Morse theory and the qualitative methods based on the sign of the centrifugal-plus-Coriolis
force. Stronger results are possible if one uses the explicit first-order form of
the lightlike geodesic equation in the Kerr-Newman spacetime, making use of the
constants of motion which reflect complete integrability. Then one can show that
along a lightlike geodesic in $M_+$ the radius coordinate is either monotonic
or has precisely one turning point. (This result can be deduced, e.g., from
Calvani and Turolla \cite{CalvaniTurolla1981}).
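For the reader's convenience we recall this first-order form. In Boyer-Lindquist
coordinates, a lightlike geodesic with constants of motion $E$ and $L_z$,
associated with $\partial _t$ and $\partial _{\varphi}$, and with Carter
constant $\mathcal{Q}$ (these symbols are chosen here only for the purpose of
illustration) satisfies
\[
\rho ^4 \, \Big( \frac{dr}{ds} \Big) ^2 \, = \,
\big[ E \, ( r^2 + a^2 ) - a \, L_z \big] ^2 -
\Delta \, \big[ \mathcal{Q} + ( L_z - a E ) ^2 \big] \, = : \, R(r) \, ,
\]
where $\rho ^2 = r^2 + a^2 \cos ^2 \vartheta$ and
$\Delta = r^2 - 2 m r + a^2 + q^2$. Turning points of the radius coordinate are
zeros of the polynomial $R(r)$, which is (generically) of fourth degree, and the
assertion that there is at most one such turning point along a lightlike
geodesic in $M_+$ results from an analysis of the possible root configurations
of $R$, as carried out in \cite{CalvaniTurolla1981}.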
Thus, the case that there is a minimum and a maximum in $K$ is, actually, impossible.
As a consequence, the limit light ray $\lambda _{\infty}$ necessarily approaches a
sphere $r=\mathrm{constant}$. By complete integrability it must then approach a
lightlike geodesic with the same constants of motion. Of course, this must be one
of the spherical geodesics in $K$. (In the extreme case
$a^2+q^2=m^2$ the limit ray $\lambda _{\infty}$ may approach the
circular light ray at $r_+^{\mathrm{ph}}=m$ which is outside of $M_+$.)
Also, it follows from Proposition \ref{prop:infinite} that the limit curve
$\lambda _{\infty}$ meets the caustic of the past light-cone of $p$ infinitely
many times. This gives, implicitly, some information on the structure of the
caustic. For the Kerr case, $q=0$, it was shown numerically by Rauch and
Blandford \cite{RauchBlandford1994} that the caustic consists of infinitely
many tubes with astroid cross sections. This picture was recently supported by
analytical work of Bozza, de Luca, Scarpetta, and Sereno
\cite{BozzadeLucaScarpettaSereno2005}.
We have shown, in Proposition \ref{prop:shell}, that all
lightlike geodesics connecting an event $p$ to a timelike curve $\gamma$
in the exterior Kerr-Newman spacetime $M_+$ are confined to the smallest
spherical shell that contains $p$, $\gamma$ and the photon region $K$.
If $\gamma$ satisfies the assumptions of Proposition \ref{prop:infinite},
which guarantees infinitely many past-pointing lightlike geodesics from $p$ to
$\gamma$, Proposition \ref{prop:limit} tells us that all but finitely
many of them come arbitrarily close to the photon region $K$. Thus, our result
that a Kerr-Newman black hole produces infinitely many images is crucially
related to the existence of the photon region. If we restrict to some open subset
of $M_+$ whose closure is completely contained in either $M_{\mathrm{out}}$ or
$M_{\mathrm{in}}$, then we are left with finitely many images for any choice of
$p$ and $\gamma$. In Section \ref{sec:centrifugal} we have seen that the
decomposition of $M_+$ into $M_{\mathrm{in}}$, $M_{\mathrm{out}}$ and
the photon region $K$ plays an important role in view of
centrifugal-plus-Coriolis force reversal; if we restrict to an open subset
of $M_+$ that is contained in either $M_{\mathrm{out}}$ or $M_{\mathrm{in}}$,
then we are left with a spacetime on which $\partial _r \Psi_+ $ and
$\partial _r \Psi_- $ have the same sign, i.e., the centrifugal-plus-Coriolis
force for large velocities points either always outwards or always inwards. In
an earlier paper \cite{HassePerlick2002} we have shown that in a
spherically symmetric and static spacetime the occurrence of gravitational
lensing with infinitely many images is equivalent to the occurrence of
centrifugal force reversal. Our new results demonstrate that the same
equivalence is true for subsets of the exterior Kerr-Newman spacetime,
with the only difference that instead of the centrifugal force alone
now we have to consider the sum of centrifugal and Coriolis force in the limit
$v \to 1$. It is an interesting problem to inquire whether this observation carries
over to other spacetimes with two commuting Killing vector fields $\partial _t$
and $\partial _{\varphi}$ that span timelike 2-surfaces with cylindrical topology.
\section*{Acknowledgment}
V. P. wishes to thank Simonetta Frittelli and Arlie Petters for inviting him to the
workshop on ``Gravitational lensing in the Kerr spacetime geometry'' at the American Institute
of Mathematics, Palo Alto, July 2005, and all participants of this workshop for stimulating
and useful discussions.
\section{Introduction}\label{introduction}
The similarity between
spectral analysis problems in the spatial-angular domain and
in the time-frequency domain has attracted signal processing researchers since the
1970s.
Direction of arrival (DOA) estimation and frequency identification of sinusoids are examples of such similar
problems examined during that period~\cite{Stoica}.
The renewed interest in
spectral analysis problems, especially due to the emergence of compressive sampling, has spurred reinvestigations on this similarity because, when
time-domain or spatial-domain compression is introduced, this similarity
can be
exploited to tackle different problems using the same algorithmic approach.
This paper focuses on both the reconstruction of the angular-domain periodogram
from far-field signals received by an antenna array at different time indices (problem P1) and that of the
frequency-domain periodogram from the time-domain signals
received by different wireless sensors (problem P2). It further underlines the similarity between P1 and P2.
Unless
otherwise stated, the entire angular or frequency band is divided into uniform
bins, where the size of the bins is configured such that
the received spectra at two frequencies or angles, whose distance is equal to or larger than the size of a bin, are uncorrelated.
In this case,
the so-called coset correlation matrix will have a circulant structure, which allows the use of a periodic non-uniform linear array (non-ULA) in P1 and a multi-coset sampler in P2 in order to produce a strong compression.
Our work in P1 is motivated in part by~\cite{Kochman},
which attempts to reconstruct
the angular spectrum from spatial-domain samples received by a non-ULA. Comparable works to~\cite{Kochman} for P2 are~\cite{Venkataramani} and~\cite{Eldar}, which focus on the analog signal reconstruction from its sub-Nyquist rate samples. However, the aim of~\cite{Kochman}-\cite{Eldar} to reconstruct
the original spectrum or signal leads to an underdetermined problem, which has a unique solution only if we add
constraints on the spectrum such as a sparsity constraint. A less ambitious goal in the context of P2 is to
reconstruct the power spectrum instead of the actual signal from sub-Nyquist rate samples.
For wide-sense stationary (WSS) signals, this has been shown to be possible in~\cite{Lexa} and~\cite{TSP12} without applying a sparsity constraint on the power spectrum.
Meanwhile, the work of~\cite{XiaodongWang}
assumes the existence of a multiband signal where
different bands are uncorrelated. In this case, the diagonal structure of the correlation matrix of the entries at different bands can be exploited.
Note though that~\cite{XiaodongWang} does not focus on the strongest compression rate and
uses frequency smoothing to approximate the correlation matrix computation as it
relies on a single
realization of the received signal.
Comparable works to~\cite{TSP12} in P1 are~\cite{Nested}-\cite{Siavash}, which aim to estimate the DOA of uncorrelated point sources with fewer
antennas than sources.
This is possible because for uncorrelated point sources,
the spatial correlation matrix of the received signals also has a Toeplitz structure. Hence, for a given ULA,
we can deactivate some
antennas but still manage to estimate
the spatial correlation
at all lags.
For example,~\cite{Nested} and~\cite{Coprime} suggest placing
the active antennas according to a nested or coprime array, respectively,
which results in a longer virtual array called the difference co-array (which is uniform in this case). As the difference co-array generally has more antennas and a larger aperture than the actual array, the degrees of freedom are increased allowing~\cite{Nested} and~\cite{Coprime} to estimate the DOA of more uncorrelated sources than sensors.
In a more optimal way, a uniform difference co-array can also be obtained by the minimum redundancy array (MRA) of~\cite{Moffet},
but the nested and coprime arrays present many advantages
due to their algebraic construction.
MRAs have been used in~\cite{Siavash} to estimate the DOA of more uncorrelated sources than sensors, or more generally, to estimate the angular-domain power spectrum.
Unlike~\cite{Kochman}, our work for P1
focuses on the angular periodogram reconstruction (similar to~\cite{Siavash}). This allows us to have an overdetermined problem that is solvable even without a sparsity constraint on the angular domain. This is beneficial for applications that require only information about the angular periodogram and not the actual angular spectrum.
Our work is also different from~\cite{Nested}-\cite{Siavash} as we do not exploit the Toeplitz structure of the spatial correlation matrix.
As for P2,
we focus on frequency periodogram reconstruction (unlike~\cite{Venkataramani}-\cite{Eldar}) but we do not exploit the Toeplitz structure of the time-domain correlation matrix
(unlike~\cite{TSP12}). On the other hand, the problem handled by~\cite{XiaodongWang} can be considered as a special case of P2 but, unlike~\cite{XiaodongWang}, we aim for the strongest compression rate which is achieved by exploiting the circulant structure of the coset correlation matrix and
solving the minimal circular sparse ruler problem.
Moreover, unlike~\cite{XiaodongWang}, we also exploit the signals received by different sensors to estimate
the correlation matrix.
Also related to P2, a cooperative compressive wideband spectrum sensing scheme for cognitive radio (CR) networks is proposed
in~\cite{FZheng}.
While
\cite{FZheng} can reduce the required sampling rate per CR, its focus on reconstructing the spectrum or the spectrum support requires a sparsity constraint on the original spectrum.
Unlike~\cite{FZheng},~\cite{AsilomarCR}
focuses on compressively estimating the power spectrum instead of the spectrum
by extending~\cite{TSP12}
for a cooperative scenario.
However, while
the required sampling rate per sensor can be lowered without applying a sparsity constraint on the power spectrum, the exploitation of the cross-spectra between signals
at different sensors in~\cite{AsilomarCR}
requires the knowledge of the channel state information (CSI).
Our approach for P2 does not require a sparsity constraint on the original periodogram (unlike~\cite{FZheng}) and it does not require CSI since
we are not interested in the cross-spectra between samples at different sensors (unlike~\cite{AsilomarCR}).
In~\cite{FrugalSensing}, each wireless sensor applies a threshold on the measured average signal power after applying a random wideband filter.
The threshold output is then communicated as a few bits to a fusion centre, which uses them
to recover the power spectrum by generalizing the problem in the form of inequalities.
The achievable compression rate with such a system is not clear though, in contrast to what we will present in this paper.
In more advanced problems, such as cyclic spectrum reconstruction from sub-Nyquist rate samples of cyclostationary signals in~\cite{GerryCyclo}-\cite{GeertCyclo}
or angular power spectrum reconstruction from signals produced by correlated sources in~\cite{ElsevierDOA},
finding a special structure in the resulting correlation matrix that can be exploited to perform compression is
challenging.
A similar challenge is faced in Section~\ref{correlatedbins}, where we
consider the case when we reduce the
bin size
such that the received spectra at two frequencies or angles with a spacing
larger than the bin size can still be correlated.
As the resulting coset correlation matrix in this case is generally not circulant,
we further develop the concepts originally introduced in~\cite{GeertCyclo}
and~\cite{ElsevierDOA}
to solve our problem.
{\color{blue}We would now like to summarize the advantages of our approach and highlight our contribution.
\begin{itemize}
\item We propose a compressive periodogram reconstruction approach, which does not rely on
any sparsity constraint on the original signal or the periodogram. Moreover, it is based on a simple least-squares (LS) algorithm leading to a low complexity.
\item In our approach, we also focus on the strongest possible compression that maintains the identifiability of the periodogram, which is shown
to be related to a minimal circular sparse ruler.
\item
Our approach does not require any knowledge of the CSI. \item The statistical performance analysis of the compressively reconstructed periodogram is also provided.
\item Our approach can also be modified to handle cases where the spectra in different bins are correlated.
\end{itemize}
This paper is organized as follows. The system model description (including the definition of the so-called coset correlation matrix) and the problem statement are provided in Section~\ref{uncorr_bins_system_model}. Section~\ref{compression_reconstruction} discusses the spatial (for P1) or temporal (for P2) compression as well as
periodogram reconstruction
using LS. Here, the condition for the system matrix to have full column rank and its connection to the minimal circular sparse ruler problem are
provided.
Section~\ref{corr_mat_approximate} shows how to approximate the expectation operation in the correlation matrix computation and summarizes the procedure to compressively estimate the periodogram. In Section~\ref{performance}, we provide an analysis on the statistical performance of the compressively reconstructed periodogram
including a bias and variance analysis.
Sections~\ref{uncorr_bins_system_model}-\ref{performance} assume that the received signals at different time instants (for P1) or at different sensors (for P2) have the same statistics.
To handle more general cases, we propose a multi-cluster model in Section~\ref{CaseC2}, which considers clusters of time indices in P1 or clusters of sensors in P2 and assumes that the signal
statistics are only constant within a cluster. Another
case is discussed in Section~\ref{correlatedbins}, where
the received spectra at two frequencies or angles located at different predefined bins can still be correlated. Some numerical studies are elaborated in Section~\ref{numerical} and Section~\ref{sec:conclusion} provides conclusions.
{\it Notation:} Upper (lower) boldface letters are used to denote matrices (column vectors). Given an $N \times N$ matrix ${\bf X}$, diag$({\bf X})$ is an $N\times 1$ vector containing the main diagonal entries of ${\bf X}$. Given an $N \times 1$ vector ${\bf x}$, diag$({\bf x})$ is an $N\times N$ diagonal matrix whose diagonal entries are given by the entries of ${\bf x}$.}
\section{System Model}\label{uncorr_bins_system_model}
\subsection{Model Description and Problem Statement}\label{model_problem_statement}
We aim at estimating the following spectral representation of the power
of a
process $x[\tilde{n}]$:
\vspace{-0.5mm}
\begin{eqnarray}
P_x(\vartheta)&=&\lim_{\tilde{N}\rightarrow\infty}E\left\{\frac{1}{\tilde{N}}\left|\sum_{\tilde{n}=0}^{\tilde{N}-1}x[\tilde{n}]e^{-j\vartheta\tilde{n}}\right|^2\right\}\nonumber\\
&=&\lim_{\tilde{N}\rightarrow\infty}E\left\{\frac{1}{\tilde{N}}\left|X_{(\tilde{N})}(\vartheta)\right|^2\right\}.
\label{eq:PowerSpectrum}
\vspace{-0.5mm}
\end{eqnarray}
Here, $x[\tilde{n}]$ represents either the spatial-domain process at the output of a ULA for P1 or the time-domain process sensed by a wireless sensor for P2. In addition, $X_{(\tilde{N})}(\vartheta)$ represents either the value of the angular spectrum at angle $\text{sin}^{-1}(2\vartheta)$ for P1 or that of the frequency spectrum at frequency $\vartheta$ for P2, with $\vartheta \in [-0.5,0.5)$.
Note from~\cite{Stoica} that, for a {\it WSS process} $x[\tilde{n}]$, $P_x(\vartheta)$ represents the {\it power spectrum}.
To estimate $P_x(\vartheta)$
in~\eqref{eq:PowerSpectrum}, consider the $\tilde{N} \times 1$ complex-valued observation vectors ${\bf x}_t=[x_t[0],x_t[1],\dots,x_t[{\tilde{N}-1}]]^T$, $t=1,2\dots,\tau$, where $x_t[\tilde{n}]$ represents the output of the $(\tilde{n}+1)$-th antenna in the ULA of $\tilde{N}$ half-wavelength spaced antennas at time index $t$ for P1
or the $(\tilde{n}+1)$-th sample out of $\tilde{N}$ successive
samples produced by the
Nyquist-rate sampler at the $t$-th sensor for P2.
To acquire an accurate
Fourier interpretation, we assume a relatively large $\tilde{N}$, which is affordable for P2 and also realistic for P1,
if we consider millimeter wave imaging applications where the antenna spacing is very small and thus the required aperture has to be covered by a large number of antennas~\cite{Kochman}.
Denote the discrete-time Fourier transform (DTFT) of $x_t[\tilde{n}]$ by $X_t(\vartheta)$.
As $X_t(\vartheta)$ at $\vartheta \in [-0.5,0)$ is a replica of $X_t(\vartheta)$ at $\vartheta \in [0.5,1)$, we can focus on $X_t(\vartheta)$ in $\vartheta \in [0,1)$.
Next, we divide the $\tilde{N}$ uniform grid points (that is,
the antennas of the ULA for P1 or the indices of the Nyquist-rate samples for P2) into $L$ non-overlapping blocks of $N$ uniform grid points.
We collect all the $(n+1)$-th grid points from each of the $L$ blocks and label this collection of grid points, i.e., $\{\tilde{n} \in \{0,1,\dots,\tilde{N}-1\}|\tilde{n}\text{ mod }N=n\}$, as the {\it $(n+1)$-th coset}, with $\tilde{n} \text{ mod }N$ the remainder of the integer division $\tilde{n}/N$. In this paper, the {\it coset index} of the $(n+1)$-th coset is $n$. This procedure allows us to view the above uniform sampling as a multi-coset sampling~\cite{Venkataramani} with $N$ cosets. Consequently, the ULA of $\tilde{N}$ antennas in P1 can be regarded as $N$ interleaved uniform linear subarrays (ULSs)~\cite{Kochman} (which are the cosets) of $L$ $(N\lambda/2)$-spaced antennas
with $\lambda$ the wavelength, whereas the
$\tilde{N}$ time-domain samples in P2 can be considered as the output of a time-domain multi-coset sampler with $L$ samples per coset. If we activate only the $(n+1)$-th coset, the spatial- or time-domain samples at index $\tilde{n}$ are given by
\vspace{-0.5mm}
\begin{equation}
\bar{x}_{t,n}[\tilde{n}]=x_t[\tilde{n}]\sum_{l=0}^{L-1}\delta[\tilde{n}-(lN+n)], \: n=0,1,\dots,N-1,
\label{eq:x_{t,n}}
\vspace{-0.5mm}
\end{equation}
which can be collected into the $\tilde{N} \times 1$ vector $\bar{\bf x}_{t,n}=[\bar{x}_{t,n}[0],$ $\bar{x}_{t,n}[1],\dots,\bar{x}_{t,n}[{\tilde{N}-1}]]^T$.
Observe that ${\bf x}_{t}=\sum_{n=0}^{N-1}\bar{\bf x}_{t,n}$. To show the relationship between the DTFT of $\bar{x}_{t,n}[\tilde{n}]$ and that of $x_t[\tilde{n}]$,
we split $\vartheta \in [0,1)$ into $N$ equal-width bins and express the spectrum at the $(i+1)$-th bin ($i=0,1,\dots,N-1$) as
$X_{t,i}(\vartheta)=X_t\left(\vartheta+\frac{i}{N}\right)$ with $\vartheta$ now limited to $\vartheta \in [0,1/N)$.
As either the spatial or temporal sampling rate becomes $1/N$ times the Nyquist-rate when only the $(n+1)$-th coset is activated, the DTFT of $\bar{x}_{t,n}[\tilde{n}]$, denoted by $\bar{X}_{t,n}(\vartheta)$, is the sum of $N$ aliased versions of $X_t(\vartheta)$ at $N$ different bins.
This is shown
for $n=0,1,\dots,N-1$ as~\cite{Eldar}
\vspace{-1mm}
\begin{equation}
\bar{X}_{t,n}(\vartheta)=\frac{1}{N}\sum_{i=0}^{N-1}
X_{t,i}(\vartheta)e^{\frac{j2\pi n i}{N}},\quad \vartheta \in [0,1/N).
\label{eq:X_{t,n}}
\vspace{-1mm}
\end{equation}
Collecting $\bar{X}_{t,n}(\vartheta)$,
for $n=0,1,\dots,N-1$, into the $N\times 1$ vector $\bar{\bf x}_{t}(\vartheta)=[\bar{X}_{t,0}(\vartheta),\bar{X}_{t,1}(\vartheta),\dots,\bar{X}_{t,N-1}(\vartheta)]^T$ and introducing the $N\times 1$ vector ${\bf x}_{t}(\vartheta)=[X_{t,0}(\vartheta),X_{t,1}(\vartheta),$ $\dots,X_{t,N-1}(\vartheta)]^T$ allow us to write
\vspace{-0.5mm}
\begin{equation}
\bar{\bf x}_{t}(\vartheta)={\bf B}{\bf x}_{t}(\vartheta),\quad \vartheta \in [0,1/N),
\label{eq:x_{t}_bar}
\vspace{-0.5mm}
\end{equation}
with the element of the $N\times N$ matrix ${\bf B}$ at the $(n+1)$-th row and the $(i+1)$-th column given by $[{\bf B}]_{n+1,i+1}=\frac{1}{N}e^{\frac{j2\pi n i}{N}}$.
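As a numerical sanity check of~\eqref{eq:X_{t,n}} and~\eqref{eq:x_{t}_bar}, the
following short Python sketch (our own illustration; all parameter values are
arbitrary) evaluates both sides of~\eqref{eq:X_{t,n}} on the DFT grid
$\vartheta_k=k/\tilde{N}$, where the DTFT values $X_{t,i}(\vartheta_k)$ are
simply DFT bins of the Nyquist-rate sequence:
\begin{verbatim}
import numpy as np

N, L = 8, 64                 # number of cosets, samples per coset
Ntil = N * L                 # Nyquist-rate grid length
rng = np.random.default_rng(0)
x = rng.standard_normal(Ntil) + 1j * rng.standard_normal(Ntil)

X = np.fft.fft(x)            # X[m] = DTFT of x at theta = m / Ntil
n, k = 3, 5                  # coset index, grid point theta_k = k / Ntil
l = np.arange(L)
# DTFT of the coset signal (only the samples at l * N + n are kept)
lhs = np.sum(x[l * N + n] * np.exp(-2j * np.pi * k * (l * N + n) / Ntil))
# aliased sum over the N bins, with X_{t,i}(theta_k) = X[k + i * L]
i = np.arange(N)
rhs = np.sum(X[k + i * L] * np.exp(2j * np.pi * n * i / N)) / N
assert np.isclose(lhs, rhs)  # both sides agree
\end{verbatim}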
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{SystemModelDaniel.eps}
\caption{The system model for problems P1 and P2.}
\label{fig:SystemModel}
\end{figure}
We now assume the presence of $K$ active users,
consider the model in Fig.~\ref{fig:SystemModel}, and introduce the following definition.
\vspace{0.5mm}
\newline
\hspace*{1mm}{\it Definition 1: We define the complex-valued zero-mean random processes
$U_t^{(k)}(\vartheta)$ and $H_t^{(k)}(\vartheta)$ as
\begin{itemize}
\item For P1, $U_t^{(k)}(\vartheta)$ is the source signal related to the $k$-th user received at time index $t$, which can depend on the DOA $\text{sin}^{-1}(2\vartheta)$
due to scattering. For P2, it is the source signal related to the $k$-th user received at sensor $t$, which can vary with frequency $\vartheta$
due to power loading,
\item $H_t^{(k)}(\vartheta)$ is the related channel response for the $k$-th user at time index $t$ and DOA $\text{sin}^{-1}(2\vartheta)$ (for P1) or at sensor $t$ and frequency $\vartheta$ (for P2).
\end{itemize}}
\noindent Note from Fig.~\ref{fig:SystemModel} that, theoretically, $U_t^{(k)}(\vartheta)$ is the only component observed by the ULA in P1 or by the sensors in P2 if no fading channel exists.
Define $N_t(\vartheta)$ as the zero-mean additive white (both in $\vartheta$ and $t$)
noise at DOA $\text{sin}^{-1}(2\vartheta)$ and time index $t$ (for P1) or at frequency $\vartheta$ and sensor $t$ (for P2).
By introducing $N_{t,i}(\vartheta)=N_t\left(\vartheta+\frac{i}{N}\right)$ and similarly also $H_{t,i}^{(k)}(\vartheta)$
as well as $U_{t,i}^{(k)}(\vartheta)$,
we can then use Definition~1
to write $X_{t,i}(\vartheta)$ in~\eqref{eq:X_{t,n}} as
\vspace{-0.5mm}
\begin{equation}
X_{t,i}(\vartheta)=\sum_{k=1}^{K}H_{t,i}^{(k)}(\vartheta) U_{t,i}^{(k)}(\vartheta)+N_{t,i}(\vartheta),\: \vartheta \in [0,1/N).
\label{eq:Xti_as_H_tki_and_Uki}
\vspace{-0.5mm}
\end{equation}
Next, let us consider the following assumption.
\vspace{0.5mm}
\newline
\hspace*{1mm}{\it Assumption~1:
$X_{t,i}(\vartheta)$ in~\eqref{eq:Xti_as_H_tki_and_Uki} is an ergodic stochastic process along
$t$.
}
\vspace{0.5mm}
\newline
This ergodicity assumption requires that the statistics of ${\bf x}_{t}(\vartheta)$ in~\eqref{eq:x_{t}_bar} do not change with $t$ (a more general case is discussed in Section~\ref{CaseC2}). Hence, we can define the $N\times N$ correlation matrix of ${\bf x}_{t}(\vartheta)$ as ${\bf R}_x(\vartheta)=E[{\bf x}_{t}(\vartheta){\bf x}_{t}^H(\vartheta)]$, for all $t$ and $\vartheta \in [0,1/N)$.
The assumption that the statistics of ${\bf x}_{t}(\vartheta)$
do not vary with $t$ is motivated for P1 when the signal received by the array is stationary in the time-domain. For P2, it implies that the statistics of the signal ${\bf x}_t$ received by different sensors $t$ are the same. Observe from~\eqref{eq:Xti_as_H_tki_and_Uki} that the element of ${\bf R}_x(\vartheta)$ at the $(i+1)$-th row and the $(i'+1)$-th column is given by
\vspace{-0.5mm}
\begin{align}
&E[X_{t,i}(\vartheta){X_{t,i'}^*}(\vartheta)]=
E[|N_{t,i}(\vartheta)|^2]\delta[i-i']+\nonumber \\
&\sum_{k=1}^{K}\sum_{k'=1}^{K}E[U_{t,i}^{(k)}(\vartheta){U_{t,i'}^{(k')*}}(\vartheta)]E[H_{t,i}^{(k)}(\vartheta){H_{t,{i'}}^{(k')*}}(\vartheta)],
\label{eq:E[XtXt']}
\vspace{-0.5mm}
\end{align}
where we assume that the source signal $U_t^{(k)}(\vartheta)$, the noise $N_t(\vartheta)$, and the channel response $H_t^{(k)}(\vartheta)$ are mutually uncorrelated. We now consider the following remark.
\vspace{0.5mm}
\newline
\hspace*{1mm}{\it Remark~1:
The diagonal of ${\bf R}_x(\vartheta)$, which is given by $\{E[|X_{t,i}(\vartheta)|^2]\}_{i=0}^{N-1}$
and which is independent of $t$,
can be related to
$P_x(\vartheta)$ in~\eqref{eq:PowerSpectrum}. In practice, this expected value has to be estimated and
Assumption~1 allows us to estimate $E[|X_{t,i}(\vartheta)|^2]$ using $\frac{1}{\tau}\sum_{t=1}^{\tau}|X_{t,i}(\vartheta)|^2$. We can then consider $\frac{1}{\tilde{N}\tau}\sum_{t=1}^{\tau}|X_{t,i}(\vartheta)|^2$ as a reasonable estimate for
$P_x(\vartheta+\frac{i}{N})$ in~\eqref{eq:PowerSpectrum}, for $\vartheta \in [0,1/N)$. Here, $\frac{1}{\tilde{N}\tau}\sum_{t=1}^{\tau}|X_t(\vartheta)|^2$, for $\vartheta \in [0,1)$, can be considered as the averaged periodogram (AP) of $x_t[\tilde{n}]$ over different time indices $t$ in P1 or different sensors $t$ in P2.}
\vspace{0.5mm}
\newline
Note that, even for the noiseless case, we can expect $X_{t,i}(\vartheta)$ in~\eqref{eq:Xti_as_H_tki_and_Uki}
to vary with $t$ if either one (or both) of the following situations occurs.
\begin{list}{\labelitemi}{\leftmargin=0mm \itemindent=0.5em}
\item For P1, $U_t^{(k)}(\vartheta)$ varies with the time index $t$ if the information that is being transmitted changes with time.
For P2, it varies with the sensor index $t$ where the signal is received if the sensors are not synchronized.
\item For P1, $H_t^{(k)}(\vartheta)$ varies with the time index $t$ if Doppler fading effects exist.
For P2, it varies with the sensor index $t$ where the signal is received, due to path loss, shadowing, and small-scale spatial fading effects.
\end{list}
We then consider the following remark.
\vspace{0.5mm}
\newline
\hspace*{1mm}{\it Remark~2: Recall that the size of the predefined bins in $\vartheta \in [0, 1)$ is a design parameter given by $\frac{1}{N}$, i.e., the inverse of the number of cosets.
Using~\eqref{eq:E[XtXt']}, it is easy to find that ${\bf R}_{x}(\vartheta)$ is a diagonal matrix if either
$E[U_t^{(k)}(\vartheta)U_t^{(k')*}(\vartheta')]=0$ and/or $E[H_t^{(k)}(\vartheta)H_t^{(k')*}(\vartheta')]=0$ for $|\vartheta'-\vartheta|\geq \frac{1}{N}$, with $\vartheta,\vartheta' \in [0,1)$, and for all $t,k,k'$.}
\vspace{0.5mm}
\newline
One example
for both P1 and P2 is when we have $K$ non-overlapping active bands corresponding to $K$ different users leading to a multiband structure in the $\vartheta$-domain with either the $K$ different users transmitting mutually uncorrelated source signals and/or the signals from the $K$ different users passing through mutually uncorrelated wireless channels on their way to the receiver. If we denote the support of the $k$-th active band by ${\mathcal{B}_k}$ and its
bandwidth by $\Lambda({\mathcal{B}_k})=\text{sup}\{{\mathcal{B}_k}\}-\text{inf}\{{\mathcal{B}_k}\}$, the condition in Remark~2 is then satisfied by setting $N$ such that $\frac{1}{N}\geq \max_k\Lambda({\mathcal{B}}_k)$. Note that such a choice is possible,
especially for P2, as the channelization parameter for a communication network is usually known.
We focus on the case where ${\bf R}_{x}(\vartheta)$ is a diagonal matrix and
define the so-called $N\times N$ {\it coset correlation matrix} as
\begin{equation}
{\bf R}_{\bar{x}}(\vartheta)=E[\bar{\bf x}_t(\vartheta)\bar{\bf x}_t^H(\vartheta)]
={\bf B}{\bf R}_{x}(\vartheta){\bf B}^H,\:\: \vartheta \in [0,1/N).
\label{eq:Rxt_bar}
\end{equation}
Observe that ${\bf R}_{\bar{x}}(\vartheta)$ is a circulant matrix when ${\bf R}_{x}(\vartheta)$ is a diagonal matrix since ${\bf B}$ is an inverse discrete Fourier transform (IDFT) matrix, as can be concluded from~\eqref{eq:x_{t}_bar}.
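This circulant structure is easy to verify numerically; the following sketch
(our own illustration, with an arbitrary diagonal ${\bf R}_{x}(\vartheta)$)
checks that every row of ${\bf B}{\bf R}_{x}(\vartheta){\bf B}^H$ is a cyclic
shift of its first row:
\begin{verbatim}
import numpy as np

N = 8
n, i = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')
B = np.exp(2j * np.pi * n * i / N) / N        # IDFT-like matrix B
Rx = np.diag(np.random.default_rng(1).uniform(0.1, 2.0, size=N))
Rxbar = B @ Rx @ B.conj().T                   # coset correlation matrix
for p in range(N):                            # circulant: cyclically shifted rows
    assert np.allclose(Rxbar[p], np.roll(Rxbar[0], p))
\end{verbatim}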
Based on the aforementioned system model, we finally formulate our problem statement as follows:
\vspace{0.5mm}
\newline
\hspace*{1mm}{\it Problem Statement: As an estimate of the
spectral representation of the power $P_x(\vartheta)$ in~\eqref{eq:PowerSpectrum} (which is also the power spectrum when $x[\tilde{n}]$ in~\eqref{eq:PowerSpectrum} is a WSS process), we aim to compressively reconstruct the AP of $x_t[\tilde{n}]$ in~\eqref{eq:x_{t,n}} over the index $t$, where we assume that $x_t[\tilde{n}]$ is ergodic along the index t and that its coset correlation matrix ${\bf R}_{\bar{x}}(\vartheta)$ has a circulant structure. We discuss the compression and the reconstruction in Section~\ref{compression_reconstruction} and the estimation of the correlation matrix in Section~\ref{corr_mat_approximate}.}
\subsection{Interpretation of AP in Remark~1}
How the AP in Remark~1 is interpreted with respect to $U_t^{(k)}(\vartheta)$ and $H_t^{(k)}(\vartheta)$ depends on which of the functions varies in $t$.
For example,
consider problem~P2 and assume that only one user $k$ can occupy a given frequency $\vartheta$ at a given time
and that only $H_t^{(k)}(\vartheta)$ varies in $t$, i.e., $U_t^{(k)}(\vartheta)=U^{(k)}(\vartheta)$.
For this example,
we have from~\eqref{eq:Xti_as_H_tki_and_Uki}
\begin{align}
&\frac{1}{\tilde{N}\tau}\sum_{t=1}^{\tau}|X_t(\vartheta)|^2
=\frac{|U^{(k)}(\vartheta)|^2}{\tilde{N}}\sum_{t=1}^{\tau}\frac{|H_t^{(k)}(\vartheta)|^2}{\tau}\nonumber \\
&+\sum_{t=1}^{\tau}\frac{|N_t(\vartheta)|^2}{\tilde{N}\tau}
+\sum_{t=1}^{\tau}\frac{2\text{Re}(H_t^{(k)}(\vartheta)U^{(k)}(\vartheta)N^*_t(\vartheta))}
{\tilde{N}\tau},
\label{ExS3P2}
\end{align}
where $\text{Re}(x)$ gives the real component of $x$,
the first term is the
classical periodogram of the user signal $\frac{|U^{(k)}(\vartheta)|^2}{\tilde{N}}$ scaled by the channel power gain averaged over the different channels, $\frac{1}{\tau}\sum_{t=1}^{\tau}|H_t^{(k)}(\vartheta)|^2$, the second term is the AP of the noise at the different sensors $t$, and the
last term
converges to zero as $\tau$ becomes
larger
due to the uncorrelatedness between the noise $N_t(\vartheta)$ and the channel response $H_t^{(k)}(\vartheta)$. The
assumption that the statistics of $X_t(\vartheta)$ do not vary with
$t$ (as required by Assumption~1)
implies that the statistics of the fading experienced by different sensors $t$ are the same
(e.g., they experience small-scale
fading on top of the same path loss and shadowing).
As another example, consider problem P1
and assume that only one user $k$ can occupy a given DOA $\text{sin}^{-1}(2\vartheta)$ at a given time
and that only $U_t^{(k)}(\vartheta)$ varies in $t$, i.e., $H_t^{(k)}(\vartheta)=H^{(k)}(\vartheta)$.
For this example, we have from~\eqref{eq:Xti_as_H_tki_and_Uki}
\begin{align}
&\frac{1}{\tilde{N}\tau}\sum_{t=1}^{\tau}|X_t(\vartheta)|^2
=|H^{(k)}(\vartheta)|^2\sum_{t=1}^{\tau}\frac{|U_t^{(k)}(\vartheta)|^2}{\tilde{N}\tau}\nonumber\\
&+\sum_{t=1}^{\tau}\frac{|N_t(\vartheta)|^2}{\tilde{N}\tau}
+\sum_{t=1}^{\tau}\frac{2\text{Re}(U_t^{(k)}(\vartheta)H^{(k)}(\vartheta)N^*_t(\vartheta))}
{\tilde{N}\tau},
\label{ExS1P1}
\end{align}
where the first term is the angular-domain
AP of the user signals $\frac{1}{\tilde{N}\tau}\sum_{t=1}^{\tau}|U_t^{(k)}(\vartheta)|^2$ scaled by the magnitude of the time-invariant channel angular response $|H^{(k)}(\vartheta)|^2$, the second term is the angular-domain AP
of the noise,
and the last term again
converges to zero as $\tau$ becomes
larger due to the uncorrelatedness between $N_t(\vartheta)$ and $U_t^{(k)}(\vartheta)$.
\section{Compression and Reconstruction}\label{compression_reconstruction}
\subsection{Spatial or Temporal Compression}\label{uncorr_bins_compression}
As ${\bf R}_{\bar{x}}(\vartheta)$ in~\eqref{eq:Rxt_bar} is a circulant matrix,
it is possible to
condense its entries
into an $N\times 1$ vector ${\bf r}_{\bar{x}}(\vartheta)=[{r}_{\bar{x}}(\vartheta,0),$ ${r}_{\bar{x}}(\vartheta,1),\dots,{r}_{\bar{x}}(\vartheta,N-1)]^T$
with ${r}_{\bar{x}}(\vartheta,(n-n')\text{ mod }N)=E\left[\bar{X}_{t,n}(\vartheta)\bar{X}^*_{t,n'}(\vartheta)\right]$.
We can then relate ${\bf r}_{\bar{x}}(\vartheta)$ to ${\bf R}_{\bar{x}}(\vartheta)$ as
\begin{equation}
\text{vec}({\bf R}_{\bar{x}}(\vartheta))={\bf T}{\bf r}_{\bar{x}}(\vartheta),\quad \vartheta \in [0,1/N),
\label{eq:Rbarx_as_rbar_x}
\end{equation}
where ${\bf T}$ is an $N^2\times N$ repetition matrix whose $(q+1)$-th row is given by the $\left(\left(q-\left\lfloor\frac{q}{N}\right\rfloor\right)\text{ mod }N+1\right)$-th row of the $N\times N$ identity matrix ${\bf I}_N$ and vec$(.)$ is the operator that stacks all columns of a matrix into one column vector.
The possibility to condense the $N^2$ entries of ${\bf R}_{\bar{x}}(\vartheta)$ into the $N$ entries of ${\bf r}_{\bar{x}}(\vartheta)$ facilitates compression
by performing a spatial- or time-domain non-uniform periodic sampling (similar to~\cite{Eldar}), in which only
$M<N$ cosets are activated.
Here, we use the set $\mathcal{M}=\{n_0, n_1,\dots, n_{M-1}\}$, with $0\leq n_0 < n_1 < \dots < n_{M-1}\leq N-1$, to indicate the indices of the $M$ active cosets.
All values of $\bar{x}_{t,n}[\tilde{n}]$ in~\eqref{eq:x_{t,n}} are then collected and their corresponding DTFT $\bar{X}_{t,n}(\vartheta)$ in~\eqref{eq:X_{t,n}} is computed for all $n \in \mathcal{M}$.
Stacking $\left\{\bar{X}_{t,n}(\vartheta)\right\}_{n\in\mathcal{M}}$ into the $M\times 1$ vector $\bar{\bf y}_{t}(\vartheta)=[\bar{X}_{t,n_0}(\vartheta),\bar{X}_{t,n_1}(\vartheta),\dots,\bar{X}_{t,n_{M-1}}(\vartheta)]^T$ allows us to relate
$\bar{\bf y}_{t}(\vartheta)$ to $\bar{\bf x}_{t}(\vartheta)$ in~\eqref{eq:x_{t}_bar} as
\begin{equation}
\bar{\bf y}_{t}(\vartheta)={\bf C}\bar{\bf x}_{t}(\vartheta),\quad \vartheta \in [0,1/N),
\label{eq:y_{t}_bar}
\end{equation}
where ${\bf C}$ is an $M \times N$ selection matrix whose rows are selected from the rows of ${\bf I}_N$ based on ${\mathcal{M}}$.
Since ${\bf C}$ is real, the $M\times M$ correlation matrix of $\bar{\bf y}_{t}(\vartheta)$, for $\vartheta \in [0,1/N)$,
can be written as
\begin{equation}
{\bf R}_{\bar{y}}(\vartheta)=E[\bar{\bf y}_{t}(\vartheta)\bar{\bf y}^H_{t}(\vartheta)]={\bf C}{\bf R}_{\bar{x}}(\vartheta){\bf C}^T.
\label{eq:Ry_bar}
\end{equation}
We then take $\eqref{eq:Rbarx_as_rbar_x}$ into account, cascade all columns of ${\bf R}_{\bar{y}}(\vartheta)$ into a column vector $\text{vec}({\bf R}_{\bar{y}}(\vartheta))$, and write
\begin{equation}
\text{vec}({\bf R}_{\bar{y}}(\vartheta))
={\bf R}_c{\bf r}_{\bar{x}}(\vartheta),\quad \vartheta \in [0,1/N),
\label{eq:Rybar_as_rbar_x}
\end{equation}
where ${\bf R}_c=({\bf C}\otimes{\bf C}){\bf T}$ is a real $M^2\times N$ matrix and $\otimes$ denotes the Kronecker product operation.
\subsection{Reconstruction}\label{uncorrbinsreconstruct}
If ${\bf R}_c$ in~\eqref{eq:Rybar_as_rbar_x} is a tall matrix ($M^2 \geq N$), which is possible despite $M <N$, and if it has full column rank,
${\bf r}_{\bar{x}}(\vartheta)$ in~\eqref{eq:Rybar_as_rbar_x} can be reconstructed from $\text{vec}({\bf R}_{\bar{y}}(\vartheta))$ using
LS for all $\vartheta \in [0,1/N)$. In addition, as long as the identifiability of ${\bf r}_{\bar{x}}(\vartheta)$ in~\eqref{eq:Rybar_as_rbar_x} is preserved, we can also consider estimators other than LS (such as in~\cite{Daniel}). To formulate a necessary and sufficient condition for the identifiability of ${\bf r}_{\bar{x}}(\vartheta)$ in~\eqref{eq:Rybar_as_rbar_x} from $\text{vec}({\bf R}_{\bar{y}}(\vartheta))$, let us review the concept of a circular sparse ruler defined in~\cite{Romero}.
\vspace{0.5mm}
\newline
\hspace*{1mm}{\it Definition 2: A circular sparse ruler of length $N-1$ is a set $\mathcal{K} \subset \{0,1,\dots,N-1\}$ for which $\Omega(\mathcal{K})=\{(\kappa-\kappa')\text{ mod }N|\forall \kappa,\kappa' \in \mathcal{K}\}=\{0,1,\dots,N-1\}$. We call it minimal if there is
no other circular sparse ruler of length $N-1$ with fewer elements.}
\newline
Detailed information about circular
sparse rulers can be found in~\cite{Romero}. We can then use this concept
to formulate the following theorem whose proof is available in~\cite{CAMSAP13}.
\vspace{0.5mm}
{\color{blue}\newline
\hspace*{1mm}{\it Theorem 1: ${\bf r}_{\bar{x}}(\vartheta)$ in~\eqref{eq:Rybar_as_rbar_x} is identifiable from $\text{vec}({\bf R}_{\bar{y}}(\vartheta))$, i.e., ${\bf R}_c$ has full column rank, if and only if ${\mathcal M}$ is a circular sparse ruler, i.e., $\Omega(\mathcal{M}) = \{ 0,1,\dots,N-1\}$. When this is satisfied,
${\bf R}_c$ contains all rows of ${\bf I}_N$.}
\newline
\hspace*{1mm}Our goal is to obtain the strongest possible compression rate $M/N$ preserving the identifiability.}
This is achieved by minimizing the cardinality of the set $\mathcal{M}$, $|\mathcal{M}|=M$,
under the condition that $\Omega({\mathcal M})=\{0,1,\dots, N-1\}$.
This leads to a length-$(N-1)$ minimal circular sparse ruler problem, which can be written as
\vspace{-0.9mm}
\begin{equation}
\min_{\mathcal{M}}\left|\mathcal{M}\right| \: \text{s.t.} \: \Omega(\mathcal{M})=\left\{0,1,\dots,N-1\right\}.
\label{eq:cardinality_comp}
\vspace{-0.9mm}
\end{equation}
Solving~\eqref{eq:cardinality_comp} minimizes the compression rate $M/N$ while maintaining the identifiability of ${\bf r}_{\bar{x}}(\vartheta)$ in~\eqref{eq:Rybar_as_rbar_x}.
Recall that, for P1, $\mathcal{M}$ indicates the indices of the $M<N$ active ULSs in our ULA, which will be referred to
as the ${\it underlying}$ array. Therefore, we have a periodic non-ULA of active antennas and $\mathcal{M}$ governs the location of the active antennas in each spatial period. When $\mathcal{M}$ is a solution of the minimal length-$(N-1)$ circular sparse ruler problem in~\eqref{eq:cardinality_comp}, we can label the resulting non-ULA of active antennas as a {\it periodic circular MRA} and each of its spatial periods as a {\it circular MRA}. Similarly for P2, we can label the non-uniform sampling in each temporal period as {\it minimal circular sparse ruler sampling} and the entire periodic non-uniform sampling as {\it periodic minimal circular sparse ruler sampling} if the indices of the $M<N$ active cosets
are given by the solution of~\eqref{eq:cardinality_comp}.
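For small $N$, a minimal circular sparse ruler can be found by the brute-force
search sketched below (our own illustration; the cost grows combinatorially with
$N$, so~\cite{Romero} should be consulted for larger designs). The sketch also
verifies numerically that the resulting ${\bf R}_c$ has full column rank, in
agreement with Theorem~1:
\begin{verbatim}
from itertools import combinations
import numpy as np

def is_circular_sparse_ruler(S, N):
    # Definition 2: the modular differences must cover {0,...,N-1}
    return {(a - b) % N for a in S for b in S} == set(range(N))

def minimal_circular_sparse_ruler(N):
    for M in range(1, N + 1):              # smallest cardinality first
        for tail in combinations(range(1, N), M - 1):
            S = (0,) + tail                # w.l.o.g. 0 is in the set, since
            if is_circular_sparse_ruler(S, N):   # circular shifts preserve rulers
                return S

N = 11
Mset = minimal_circular_sparse_ruler(N)    # -> (0, 1, 2, 5), i.e. M = 4
C = np.eye(N)[list(Mset)]                  # selection matrix
T = np.zeros((N * N, N))                   # repetition matrix
for q in range(N * N):
    T[q, (q - q // N) % N] = 1
Rc = np.kron(C, C) @ T
assert np.linalg.matrix_rank(Rc) == N      # full column rank (Theorem 1)
\end{verbatim}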
Once ${\bf r}_{\bar{x}}(\vartheta)$ is reconstructed from $\text{vec}({\bf R}_{\bar{y}}(\vartheta))$ in~\eqref{eq:Rybar_as_rbar_x} using LS for $\vartheta \in [0,1/N)$,
we can use~\eqref{eq:Rbarx_as_rbar_x} to compute ${\bf R}_{\bar{x}}(\vartheta)$ from ${\bf r}_{\bar{x}}(\vartheta)$ and~\eqref{eq:Rxt_bar} to compute ${\bf R}_{x}(\vartheta)$ from ${\bf R}_{\bar{x}}(\vartheta)$ as ${\bf R}_{x}(\vartheta)=N^2{\bf B}^{H}{\bf R}_{\bar{x}}(\vartheta){\bf B}$. As
we have $\text{diag}({\bf R}_{x}(\vartheta))=[E[|X_{t,0}(\vartheta)|^2],E[|X_{t,1}(\vartheta)|^2],\dots,E[|X_{t,N-1}(\vartheta)|^2]]^T$ with
$\vartheta \in [0,1/N)$, reconstructing $\text{diag}({\bf R}_{x}(\vartheta))$ for all $\vartheta \in [0,1/N)$ gives $E[|X_t(\vartheta)|^2]$ for all $\vartheta \in [0,1)$.
\section{Correlation Matrix Estimation}\label{corr_mat_approximate}
In practice, the expectation in~\eqref{eq:Ry_bar} must be approximated.
Here, we propose to approximate it with the sample average
over different time indices $t$ for P1 or sensor indices $t$ for P2, i.e.,
\vspace{-1mm}
\begin{equation}
\hat{\bf R}_{\bar{y}}(\vartheta)=\frac{1}{\tau}\sum_{t=1}^{\tau}\bar{\bf y}_{t}(\vartheta)\bar{\bf y}^H_{t}(\vartheta),\:\:\vartheta \in [0,1/N),
\label{eq:Rybar_hat}
\vspace{-1mm}
\end{equation}
where we recall that $\tau$ is either the total number of time indices or sensors from which the observations are collected.
Observe that the $M\times M$ matrix $\hat{\bf R}_{\bar{y}}(\vartheta)$
is an unbiased estimate of ${\bf R}_{\bar{y}}(\vartheta)$ in~\eqref{eq:Rybar_as_rbar_x}.
It is also a consistent estimate if
Assumption~1 holds.
We can then apply
LS reconstruction on $\hat{\bf R}_{\bar{y}}(\vartheta)$ in~\eqref{eq:Rybar_hat} instead of ${\bf R}_{\bar{y}}(\vartheta)$ in~\eqref{eq:Rybar_as_rbar_x}.
As a result, the procedure to compressively reconstruct the AP of $x_t[\tilde{n}]$ in~\eqref{eq:x_{t,n}} over the index $t$ can be listed as
\begin{enumerate}
\item For $t=1,2,\dots,\tau$, collect all values of $\bar{x}_{t,n}[\tilde{n}]$ in~\eqref{eq:x_{t,n}} and compute their corresponding DTFT $\bar{X}_{t,n}(\vartheta)$ in~\eqref{eq:X_{t,n}} for all $n \in \mathcal{M}$. We use them to form $\bar{\bf y}_{t}(\vartheta)$ in~\eqref{eq:y_{t}_bar}.
\item Compute $\hat{\bf R}_{\bar{y}}(\vartheta)$, for $\vartheta \in [0,1/N)$, using~\eqref{eq:Rybar_hat}.
\item Based on~\eqref{eq:Rybar_as_rbar_x} and for $\vartheta \in [0,1/N)$, we apply
LS reconstruction on $\hat{\bf R}_{\bar{y}}(\vartheta)$ leading to
\vspace{-0.5mm}
\begin{equation}
\hat{\bf r}_{\bar{x},LS}(\vartheta)=({\bf R}_c^T{\bf R}_c)^{-1}{\bf R}_c^T\text{vec}(\hat{\bf R}_{\bar{y}}(\vartheta)).
\label{rxhatbarLS}
\vspace{-0.5mm}
\end{equation}
\item Based on~\eqref{eq:Rbarx_as_rbar_x} and~\eqref{eq:Rxt_bar}, for $\vartheta \in [0,1/N)$, we compute $\text{vec}(\hat{\bf R}_{\bar{x},LS}(\vartheta))={\bf T}\hat{\bf r}_{\bar{x},LS}(\vartheta)$ and
\vspace{-0.5mm}
\begin{equation}
\hat{\bf R}_{x,LS}(\vartheta)=N^2{\bf B}^{H}\hat{\bf R}_{\bar{x},LS}(\vartheta){\bf B}.
\label{eq:RxLS}
\vspace{-0.5mm}
\end{equation}
\item Note that the $(i+1)$-th diagonal element of $\hat{\bf R}_{x,LS}(\vartheta)$, i.e., $[\text{diag}(\hat{\bf R}_{x,LS}(\vartheta))]_{i+1}$ is the LS estimate of the $(i+1)$-th diagonal element of ${\bf R}_{x}(\vartheta)$, which according to Remark~1 is given by $E[|X_{t,i}(\vartheta)|^2]$. Based on the definition of AP in Remark~1 and considering~\eqref{eq:Rybar_hat}, we can then formulate the compressive
AP (CAP) of $x_t[\tilde{n}]$ in~\eqref{eq:x_{t,n}} over the index $t$ as
\begin{equation}
\hat{P}_{x,LS}(\vartheta+\frac{i}{N})=\frac{1}{\tilde{N}}[\text{diag}(\hat{\bf R}_{x,LS}(\vartheta))]_{i+1}
\label{CRAP}
\end{equation}
for $\vartheta \in [0,1/N)$ and $i=0,1,\dots,N-1$.
\end{enumerate}
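To make the five steps above concrete, the following Python sketch (our own
illustration; the white Gaussian input, the ruler $\mathcal{M}=\{0,1,2,4\}$ for
$N=8$, and all other parameter values are merely example assumptions) runs the
complete CAP pipeline at a single grid point $\vartheta_k=k/\tilde{N}$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, L, tau = 8, 64, 500            # cosets, samples per coset, observations
Ntil = N * L
Mset = [0, 1, 2, 4]               # a minimal circular sparse ruler for N = 8
M = len(Mset)

C = np.eye(N)[Mset]               # selection matrix
T = np.zeros((N * N, N))          # repetition matrix
for q in range(N * N):
    T[q, (q - q // N) % N] = 1
Rc = np.kron(C, C) @ T            # system matrix, full column rank here
B = np.exp(2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / N

k = 3                             # grid point theta_k = k / Ntil
Ry_hat = np.zeros((M, M), dtype=complex)
for t in range(tau):              # steps 1 and 2: form ybar_t and average
    x = (rng.standard_normal(Ntil) + 1j * rng.standard_normal(Ntil)) / np.sqrt(2)
    l = np.arange(L)
    ybar = np.array([np.sum(x[l * N + n]
                            * np.exp(-2j * np.pi * k * (l * N + n) / Ntil))
                     for n in Mset])
    Ry_hat += np.outer(ybar, ybar.conj()) / tau

# step 3: LS estimate of r_xbar from the column-major vec of Ry_hat
r_hat = np.linalg.lstsq(Rc.astype(complex),
                        Ry_hat.flatten(order='F'), rcond=None)[0]
# step 4: rebuild R_xbar and map it back to R_x
Rxbar_hat = (T @ r_hat).reshape(N, N, order='F')
Rx_hat = N**2 * B.conj().T @ Rxbar_hat @ B
# step 5: CAP values at theta_k + i/N; they scatter around sigma^2 = 1
P_hat = np.real(np.diag(Rx_hat)) / Ntil
print(P_hat)
\end{verbatim}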
Note that, when reconstructing the CAP $\hat{P}_{x,LS}(\vartheta)$ in~\eqref{CRAP}, we introduce additional errors with respect to the AP $\frac{1}{\tilde{N}\tau}\sum_{t=1}^{\tau}|X_t(\vartheta)|^2$ in Remark~1 (including the ones in~\eqref{ExS3P2} and~\eqref{ExS1P1}).
This error emerges during the compression and the LS operation in~\eqref{rxhatbarLS}.
This issue will be discussed to some extent in the next section.
\section{Performance Analysis}\label{performance}
\subsection{Bias Analysis}\label{uncorrbins_bias}
The
bias analysis of the CAP $\hat{P}_{{x},{LS}}(\vartheta)$ in~\eqref{CRAP} with respect to $P_{x}(\vartheta)$ in~\eqref{eq:PowerSpectrum} is given by the following theorem whose proof is available in
Appendix~\ref{ProofTheorem2}.
\vspace{0.7mm}
\newline
\hspace*{1mm}{\it Theorem 2: For
$\vartheta \in [0,1)$, the CAP $\hat{P}_{{x},{LS}}(\vartheta)$ in~\eqref{CRAP} is an asymptotically (with respect to $\tilde{N}$) unbiased estimate of
$P_{x}(\vartheta)$ in~\eqref{eq:PowerSpectrum}.}
\vspace{-1mm}
\subsection{Variance Analysis}\label{uncorrbins_variance}
We start by recalling that the $(m+1)$-th element of $\bar{\bf y}_{t}(\vartheta)$ in~\eqref{eq:y_{t}_bar} is given by $\bar{X}_{t,n_m}(\vartheta)$. By using~\eqref{eq:X_{t,n}}, we can write the element of $\hat{\bf R}_{\bar{y}}(\vartheta)$ in~\eqref{eq:Rybar_hat} at the $(m+1)$-th row and the $(m'+1)$-th column, for $m,m'=0,1,\dots,M-1$, as
\begin{align}
&[\hat{\bf R}_{\bar{y}}(\vartheta)]_{m+1,m'+1}
=\frac{1}{N^2\tau}\sum_{t=1}^{\tau}\sum_{i=0}^{N-1}\sum_{i'=0}^{N-1}\nonumber\\
&X_{t,i}(\vartheta)X_{t,i'}^*(\vartheta)e^{\frac{j2\pi (n_m i-n_{m'}i')}{N}}.
\label{elementRybarhat}
\end{align}
We continue to evaluate the covariance between the elements of $\hat{\bf R}_{\bar{y}}(\vartheta)$ in~\eqref{elementRybarhat}, which
is not trivial for a general signal $x_t[\tilde{n}]$ in~\eqref{eq:x_{t,n}}, as it involves the computation of fourth-order moments.
To get a useful insight, let us consider the case when the distribution of
$x_t[\tilde{n}]$ in~\eqref{eq:x_{t,n}} (and thus also ${X}_{t,i}(\vartheta)$ in~\eqref{elementRybarhat}) is jointly Gaussian.
In this case,
the fourth-order moment computation is simplified
by using the results in~\cite{Bar}: If $x_1$, $x_2$, $x_3$, and $x_4$ are jointly (real or complex)
Gaussian random variables, we have $E[x_1 x_2 x_3 x_4]=E[x_1 x_2]E[x_3 x_4]+E[x_1 x_3]E[x_2 x_4]+E[x_1 x_4]E[x_2 x_3]-2E[x_1]E[x_2]E[x_3]E[x_4]$.
Using this result,
the covariance between the elements of $\hat{\bf R}_{{\bar{y}}}(\vartheta)$ in~\eqref{elementRybarhat},
when $x_t[\tilde{n}]$ in~\eqref{eq:x_{t,n}} is jointly Gaussian, can be shown to be
\begin{align}
&\text{Cov}[[\hat{\bf R}_{{\bar{y}}}(\vartheta)]_{m+1,m'+1},[\hat{\bf R}_{{\bar{y}}}(\vartheta)]_{a+1,a'+1}]=\frac{1}{N^4\tau^2}\sum_{t=1}^{\tau}\sum_{t'=1}^{\tau}\nonumber \\
&\sum_{i=0}^{N-1}\sum_{i'=0}^{N-1}\sum_{b=0}^{N-1}\sum_{b'=0}^{N-1}e^{\frac{j2\pi (n_m i-n_{m'} i'-n_ab+n_{a'}b')}{N}}\nonumber \\
&\left\{E[{X}_{t,i}(\vartheta){X}_{t',b}^*(\vartheta)]E[{X}_{t,i'}^*(\vartheta){X}_{t',b'}(\vartheta)]
+\right.\nonumber \\
&\left.E[{X}_{t,i}(\vartheta){X}_{t',b'}(\vartheta)]E[{X}_{t,i'}^*(\vartheta){X}_{t',b}^*(\vartheta)]\right\}
\label{eq:CovRycheckbarelementGaussian}
\end{align}
for $\vartheta \in [0,1/N)$ and $m,m',a,a'=0,1,\dots,M-1$, where we also
assume that $x_t[\tilde{n}]$ in~\eqref{eq:x_{t,n}} has zero mean (see Definition~1).
Under the above assumptions, we introduce the $M^2 \times M^2$ covariance matrix
${\boldsymbol \Sigma}_{\hat{R}_{{\bar{y}}}}(\vartheta)=E[\text{vec}(\hat{\bf R}_{{\bar{y}}}(\vartheta))\text{vec}(\hat{\bf R}_{{\bar{y}}}(\vartheta))^H]-E[\text{vec}(\hat{\bf R}_{{\bar{y}}}(\vartheta))]E[\text{vec}(\hat{\bf R}_{{\bar{y}}}(\vartheta))^H]$, whose
entry at the $(Mm'+m+1)$-th row and the $(Ma'+a+1)$-th column
is given by $\text{Cov}[[\hat{\bf R}_{{\bar{y}}}(\vartheta)]_{m+1,m'+1},[\hat{\bf R}_{{\bar{y}}}(\vartheta)]_{a+1,a'+1}]$ in~\eqref{eq:CovRycheckbarelementGaussian}.
By recalling that ${\bf R}_{c}$ and ${\bf T}$ are real matrices, we can then compute the $N\times N$ covariance matrix of $\hat{\bf r}_{{\bar{x}},{LS}}(\vartheta)$ in~\eqref{rxhatbarLS} as
\begin{equation}
{\boldsymbol \Sigma}_{\hat{r}_{{\bar{x}},LS}}(\vartheta)
=({\bf R}_{c}^T{\bf R}_{c})^{-1}{\bf R}_{c}^T{\boldsymbol \Sigma}_{\hat{R}_{{\bar{y}}}}(\vartheta){\bf R}_{c}({\bf R}^T_{c}{\bf R}_{c})^{-1},
\label{eq:Covar_scheckhatbarXLS}
\end{equation}
and
use~\eqref{eq:RxLS} to introduce ${\boldsymbol \Sigma}_{\hat{R}_{{x},LS}}(\vartheta)$ as
the $N^2\times N^2$ covariance matrix of $\text{vec}(\hat{\bf R}_{{{{x}}},{LS}}(\vartheta))$, which can be written as
\begin{equation}
{\boldsymbol \Sigma}_{\hat{R}_{{x},LS}}(\vartheta)
=N^4({\bf B}^T\otimes{\bf B}^{H}){\bf T}{\boldsymbol \Sigma}_{\hat{r}_{{\bar{x}},LS}}(\vartheta){\bf T}^T({\bf B}^*\otimes{\bf B})
\label{eq:Covar_ScheckhatXLS}
\end{equation}
for $\vartheta \in [0,1/N)$. Recall from~\eqref{CRAP} that the
CAP $\hat{P}_{{x},{LS}}(\vartheta+\frac{i}{N})$, for $\vartheta \in [0,1/N)$ and $i=0,1,\dots,N-1$, is given by $\frac{1}{\tilde{N}}[\hat{\bf R}_{{{{x}}},{LS}}(\vartheta)]_{i+1,i+1}$.
It is then trivial to show that the variance of $\hat{P}_{{x},{LS}}(\vartheta+\frac{i}{N})$ is given by
\begin{equation}
\text{Var}[\hat{P}_{{x},{LS}}(\vartheta+\frac{i}{N})]=\frac{1}{\tilde{N}^2}[{\boldsymbol \Sigma}_{\hat{R}_{{x},LS}}(\vartheta)]_{Ni+i+1,Ni+i+1}
\label{eq:Var_LS_periodogoram}
\end{equation}
for $\vartheta \in [0,1/N)$ and $i=0,1,\dots,N-1$.
To get even more insight into this result, we consider a specific case in the next proposition whose proof is provided in Appendix~\ref{ProofPropos1}.
\vspace{1mm}
\newline
\hspace*{1mm}{\it Proposition 1: When $x_t[\tilde{n}]$ in~\eqref{eq:x_{t,n}} contains only circular complex zero-mean Gaussian i.i.d. noise with variance $\sigma^2$,
the covariance between the elements of $\hat{\bf R}_{{\bar{y}}}(\vartheta)$ in~\eqref{elementRybarhat}, for $\vartheta \in [0,1/N)$, is given by
\vspace{-0.5mm}
\begin{align}
&\text{Cov}[[\hat{\bf R}_{\bar{y}}(\vartheta)]_{m+1,m'+1},[\hat{\bf R}_{\bar{y}}(\vartheta)]_{a+1,a'+1}]=\frac{L^2\sigma^4}{\tau}\times\nonumber\\
&\delta[m -{a}]\delta[{m'}-{a'}],\:\:m,m',a,a'=0,1,\dots,M-1.
\label{eq:CovRycheckbarelementGaussian_noise_propos}
\vspace{-0.75mm}
\end{align}}
It is clear from~\eqref{eq:CovRycheckbarelementGaussian_noise_propos} that ${\boldsymbol \Sigma}_{\hat{R}_{{\bar{y}}}}(\vartheta)$ in~\eqref{eq:Covar_scheckhatbarXLS} is then a diagonal matrix and we can
find from~\eqref{eq:Covar_scheckhatbarXLS}
-\eqref{eq:Var_LS_periodogoram}
that $\text{Var}[\hat{P}_{{x},{LS}}(\vartheta)] \propto \sigma^4$ or $\text{Var}[\hat{P}_{{x},{LS}}(\vartheta)] \propto {P}^2_{{x}}(\vartheta)$. This observation can be related to a similar result found for the conventional periodogram estimate of white Gaussian noise sampled at Nyquist rate in~\cite{Hayes}.
\vspace{-0.5mm}\subsection{Effect of the Compression Rate on the Variance}
\label{comprate_variance}
In this section, we focus on the impact of the compression rate $M/N$ on the variance analysis by first {\color{blue}defining an $N\times 1$ vector ${\bf w}=[w[0],w[1],\dots,w[N-1]]^T$ containing
binary entries, with $w[n]=1$ if $n \in \mathcal{M}$ (i.e., the coset with index $n$ is one of the $M$ activated cosets) and $w[n]=0$ if $n \notin \mathcal{M}$. In other words, the entries of ${\bf w}$ indicate which $M$ out of the $N$ cosets are activated.
Let us then focus on~\eqref{rxhatbarLS}
and} consider the following remark.
\vspace{1mm}
\newline{\color{blue}
\hspace*{1mm}{\it Remark 3: The same argument that leads to Theorem~1
(see Lemma~1 in~\cite{CAMSAP13})
shows that the rows of ${\bf R}_{c}$ are given by the $((g-f)\text{ mod }N+1)$-th rows of ${\bf I}_N$, for all $f,g \in \mathcal{M}$. As a result, ${\bf R}_{c}^T{\bf R}_{c}$
is an $N\times N$ diagonal matrix. Denote the value of the $\kappa$-th diagonal element of ${\bf R}_{c}^T{\bf R}_{c}$ as ${\gamma_\kappa}$. We can then show that ${\gamma_\kappa}$ is given by
\begin{equation}
\gamma_\kappa=\sum_{n=0}^{N-1}w[(n+\kappa-1)\text{ mod }N]w[n],\:\:\kappa=1,2,\dots,N.
\label{eq:gamma_kappa}
\end{equation}
The proof of~\eqref{eq:gamma_kappa} is available in Appendix~\ref{ProofOfGammakappa}.
Using~\eqref{eq:gamma_kappa}, we can also show that $\gamma_\kappa$ gives the number of times the $\kappa$-th row of ${\bf I}_N$ appears in ${\bf R}_{c}$, i.e., the number of pairs $(g,f)$ that lead to $(g-f)\text{ mod }N+1=\kappa$. As we have $|\mathcal{M}|=M$, we can find that
$\sum_{\kappa=1}^N\gamma_\kappa=M^2$ and $\gamma_1=M$.}}
\vspace{0.5mm}
\newline
Using Remark~3, we then
formulate the following theorem whose proof is available in Appendix~\ref{ProofTheorem3}.
\newline
\hspace*{1mm}{\it Theorem 3: When $x_t[\tilde{n}]$ in~\eqref{eq:x_{t,n}} contains only circular complex zero-mean Gaussian i.i.d. noise with variance $\sigma^2$, the variance of the
CAP $\hat{P}_{{x},{LS}}(\vartheta+\frac{i}{N})$ in~\eqref{eq:Var_LS_periodogoram}, for $\vartheta \in [0,1/N)$ and $i=0,1,\dots,N-1$, is given by
\begin{equation}
\text{Var}[\hat{P}_{{x},{LS}}(\vartheta+\frac{i}{N})]=\frac{\sigma^4}{M\tau}+\frac{\sigma^4}{\tau}\sum_{n=1}^{N-1}\frac{1}{\gamma_{n+1}}.
\label{eq:VarPxLSwhitenoisetheo}
\end{equation}}
Note how~\eqref{eq:VarPxLSwhitenoisetheo}
relates
$M$ and $N$ to $\text{Var}[\hat{P}_{{x},{LS}}(\vartheta)]$ for circular complex zero-mean Gaussian i.i.d. noise and $\vartheta \in [0,1)$.
Recalling from Remark~3 that $\sum_{n=1}^{N-1}\gamma_{n+1}=M^2-M$,
we can find that, for a given $N$,
a stronger compression rate (smaller $M/N$)
tends to lead to a larger $\text{Var}[\hat{P}_{{x},{LS}}(\vartheta)]$.
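As a concrete illustration of~\eqref{eq:gamma_kappa}
and~\eqref{eq:VarPxLSwhitenoisetheo}, the short sketch below (our own
illustration; the ruler $\mathcal{M}=\{0,1,2,5\}$ for $N=11$ found in the
earlier sketch, as well as the values of $\tau$ and $\sigma^2$, are arbitrary)
tabulates $\gamma_\kappa$ and the resulting variance:
\begin{verbatim}
import numpy as np

N, tau, sigma2 = 11, 100, 1.0
w = np.zeros(N)
w[[0, 1, 2, 5]] = 1               # a minimal circular sparse ruler, M = 4
M = int(w.sum())

# gamma_kappa: number of active-coset pairs at circular lag kappa - 1
gamma = np.array([np.sum(np.roll(w, -(kappa - 1)) * w)
                  for kappa in range(1, N + 1)])
assert gamma[0] == M and gamma.sum() == M**2   # properties from Remark 3

var = sigma2**2 / (M * tau) + sigma2**2 / tau * np.sum(1.0 / gamma[1:])
print(gamma)   # [4. 2. 1. 1. 1. 1. 1. 1. 1. 1. 2.]
print(var)     # 0.0925; all gamma > 0 is exactly identifiability (Theorem 1)
\end{verbatim}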
{\color{blue}Based on~\eqref{eq:gamma_kappa} and~\eqref{eq:VarPxLSwhitenoisetheo},
it is of interest to find the binary values of $\{w[n]\}_{n=0}^{N-1}$
(or equivalently the cosets $n_m\in\mathcal{M}$) that minimize $\text{Var}[\hat{P}_{{x},{LS}}(\vartheta)]$ for a given $M$. This will generally lead to a non-convex optimization problem, which is difficult to solve, although it is clear that the solution will force the values of $\{\gamma_{n+1}\}_{n=1}^{N-1}$ to be as equal as possible.
Alternatively, we can also put a constraint on $\text{Var}[\hat{P}_{{x},{LS}}(\vartheta+\frac{i}{N})]$ in~\eqref{eq:VarPxLSwhitenoisetheo} and find the binary values $\{w[n]\}_{n=0}^{N-1}$ that minimize the compression rate $M/N$. This, however, will again lead to a non-convex optimization problem that is difficult to solve.
Note that, although finding ${\bf w}$ that minimizes $M/N$ for a given $\text{Var}[\hat{P}_{{x},{LS}}(\vartheta)]$ in~\eqref{eq:VarPxLSwhitenoisetheo} or the one that minimizes $\text{Var}[\hat{P}_{{x},{LS}}(\vartheta)]$ for a given $M/N$ is not trivial,
the solution will always have to satisfy the identifiability condition in Theorem~1. This is because we can show that if the identifiability condition is not satisfied, some
$\gamma_{n}$ in~\eqref{eq:VarPxLSwhitenoisetheo} will be zero and thus $\text{Var}[\hat{P}_{{x},{LS}}(\vartheta)]$ in~\eqref{eq:VarPxLSwhitenoisetheo} will have an infinite value.}
The analysis of the effect of $M/N$ on $\text{Var}[\hat{P}_{{x},{LS}}(\vartheta)]$ for a general Gaussian signal $x_t[\tilde{n}]$, however, is difficult since it is clear from~\eqref{eq:CovRycheckbarelementGaussian}
that $\text{Var}[\hat{P}_{{x},{LS}}(\vartheta)]$ for this case depends on the unknown statistics of $x_t[\tilde{n}]$. This is also true for a more general signal.
\vspace{-0.5mm}\subsection{Asymptotic Performance Analysis}\label{asymp_variance}
We now discuss the asymptotic behaviour of the performance of the
CAP $\hat{P}_{{x},{LS}}(\vartheta)$. We start by noting that
Assumption~1 ensures that $\hat{\bf R}_{{\bar{y}}}(\vartheta)$ in~\eqref{eq:Rybar_hat} is a consistent estimate of ${\bf R}_{{\bar{y}}}(\vartheta)$ in~\eqref{eq:Rybar_as_rbar_x}, i.e., $\hat{\bf R}_{{\bar{y}}}(\vartheta)$ converges to ${\bf R}_{{\bar{y}}}(\vartheta)$
as $\tau$ approaches $\infty$. As it is clear from~\eqref{rxhatbarLS} and~\eqref{eq:RxLS} that $\hat{\bf R}_{x,LS}(\vartheta)$ is linearly related to $\hat{\bf R}_{{\bar{y}}}(\vartheta)$, it is easy to show that $\hat{\bf R}_{x,LS}(\vartheta)$ converges to ${\bf R}_{x}(\vartheta)$ in~\eqref{eq:Rxt_bar}
as $\tau$ approaches $\infty$. This implies that the CAP $\hat{P}_{x,LS}(\vartheta+\frac{i}{N})$ in~\eqref{CRAP} also converges to $\frac{1}{\tilde{N}}[\text{diag}({\bf R}_{x}(\vartheta))]_{i+1}=\frac{1}{\tilde{N}}E[|X_t(\vartheta+\frac{i}{N})|^2]$, for $\vartheta \in [0,1/N)$ and $i=0,1,\dots,N-1$,
as $\tau$ approaches $\infty$. Since $x_t[\tilde{n}]$ in~\eqref{eq:x_{t,n}} is an observation of the true process $x[\tilde{n}]$ in~\eqref{eq:PowerSpectrum}, $\hat{P}_{x,LS}(\vartheta)$ will converge to ${P}_{x}(\vartheta)$ in~\eqref{eq:PowerSpectrum}
if both $\tau$ and $\tilde{N}$ (or $L$ for a fixed $N$) approach $\infty$.
{\color{blue}\subsection{Complexity Analysis}\label{complexity}
Let us now compare the complexity of our CAP approach with an existing state-of-the-art approach to tackle similar problems.
We compare our CAP approach with a method that
reconstructs
$X_t(\vartheta)$ (instead of the periodogram), for $\vartheta \in [0,1)$ and all $t=1,2,\dots,\tau$, from compressive measurements. The reconstruction of $\{X_t(\vartheta)\}_{t=1}^{\tau}$, for $\vartheta \in [0,1)$, is performed by reconstructing $\{{\bf x}_{t}(\vartheta)\}_{t=1}^{\tau}$ in~\eqref{eq:x_{t}_bar} from $\{\bar{\bf y}_{t}(\vartheta)\}_{t=1}^{\tau}$ in~\eqref{eq:y_{t}_bar}, for $\vartheta \in [0,1/N)$, using the Regularized M-FOCUSS (RM-FOCUSS) approach of~\cite{BhaskarRao}.
We then use the reconstructed $\{{\bf x}_{t}(\vartheta)\}_{t=1}^{\tau}$, for $\vartheta \in [0,1/N)$, either to compute the periodogram or to compute the energy at
$\vartheta \in [0,1)$ and to detect the existence of
active user signals.
Note that RM-FOCUSS is designed to treat $\{\bar{\bf y}_{t}(\vartheta)\}_{t=1}^{\tau}$, for each
$\vartheta$, as multiple measurement vectors (MMVs) and exploit the assumed joint sparsity structure in $\{{\bf x}_{t}(\vartheta)\}_{t=1}^{\tau}$.}
{\color{blue}Table~\ref{tab:compute_complex} summarizes the computational complexity of
CAP and
RM-FOCUSS
(see~\cite{BhaskarRao} for more details).
Note that Table~\ref{tab:compute_complex} only describes the computational complexity of RM-FOCUSS for a single iteration. The number of RM-FOCUSS iterations depends on the
convergence criterion parameter (labeled as $\delta$ in~\cite{BhaskarRao}). Hence, we can argue that our CAP approach is simpler than RM-FOCUSS. Moreover, in RM-FOCUSS, we also need to determine a proper
regularization parameter (labeled as $\lambda$ in~\cite{BhaskarRao}), which is generally not a trivial task. We also compare the detection performance of the two methods in the sixth experiment of Section~\ref{simulation_uncorrbins}. Note that
the reconstruction of ${\bf x}_{t}(\vartheta)$ from $\bar{\bf y}_{t}(\vartheta)$ is also considered in~\cite{Eldar}, but only for the single-sensor case.
\begin{table}[ht]
\caption{Computational complexity of the CAP approach and the RM-FOCUSS of~\cite{BhaskarRao} for a given frequency point $\vartheta \in [0,1/N)$.}
\centering
\vspace{-1mm}
\begin{tabular}{| c | c |}
\hline
\multicolumn{2}{|c|}{CAP approach}\\
\hline
Computation steps & Computational complexity\\ \hline
Computation of $\hat{\bf R}_{\bar{y}}(\vartheta)$ in~\eqref{eq:Rybar_hat}& $\mathcal{O}(M^2\tau)$\\\hline
Computation of ${\bf R}_c^T{\bf R}_c$ in~\eqref{rxhatbarLS} & $\mathcal{O}(N^2M^2)$\\\hline
Inversion of ${\bf R}_c^T{\bf R}_c$ in~\eqref{rxhatbarLS} & $\mathcal{O}(N^3)$\\\hline
Multiplication between $({\bf R}_c^T{\bf R}_c)^{-1}$ & $\mathcal{O}(N^2M^2)+$\\
and ${\bf R}_c^T\text{vec}(\hat{\bf R}_{\bar{y}}(\vartheta))$ in~\eqref{rxhatbarLS} &$\mathcal{O}(NM^2)$\\\hline
Computation of~\eqref{eq:RxLS} (recall that & $\mathcal{O}(N\text{ log }N)$\\
${\bf B}$ in~\eqref{eq:RxLS} is an IDFT matrix) & \\\hline
Total & $\mathcal{O}(N^3)+\mathcal{O}(N^2M^2)$\\
& $+\mathcal{O}(M^2\tau)$ \\\hline
\multicolumn{2}{|c|}{RM-FOCUSS of~\cite{BhaskarRao} (per iteration)}\\\hline
Computation steps & Computational complexity\\ \hline
Computation of $\ell_2$-norm of & $\mathcal{O}(N\tau)$\\
each row of an $N\times\tau$ matrix & \\\hline
Multiplication between an $M\times N$&$\mathcal{O}(N^2M)$\\
matrix and an $N\times N$ matrix & \\\hline
Multiplication between an $M\times N$ &$\mathcal{O}(NM^2)$\\
matrix and an $N\times M$ matrix & \\\hline
Inversion of an $M\times M$ matrix &$\mathcal{O}(M^3)$\\\hline
Multiplication between an $N\times M$ &$\mathcal{O}(NM^2)$\\
matrix and an $M\times M$ matrix & \\\hline
Multiplication between an $N\times M$ &$\mathcal{O}(NM\tau)$\\
matrix and an $M\times \tau$ matrix &\\\hline
Multiplication between an $N\times N$ &$\mathcal{O}(N^2\tau)$\\
matrix and an $N\times \tau$ matrix &\\\hline
Total & $\mathcal{O}(N^2M)+\mathcal{O}(M^3)+$\\
& $\mathcal{O}(N^2\tau)+\mathcal{O}(NM^2)$\\
& $+\mathcal{O}(NM\tau)$\\\hline
\end{tabular}
\label{tab:compute_complex}
\end{table}}
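As a rough sanity check on Table~\ref{tab:compute_complex}, the following minimal sketch (the values $N=18$, $M=5$, and $\tau=100$ are assumed examples) plugs numbers into the total complexity orders of both methods; recall that the RM-FOCUSS count is incurred anew in every iteration:
\begin{verbatim}
# Assumed example values; orders of magnitude only
N, M, tau = 18, 5, 100
cap_total = N**3 + N**2 * M**2 + M**2 * tau
rmf_per_iter = N**2 * M + M**3 + N**2 * tau + N * M**2 + N * M * tau
print(cap_total)      # one-off cost of the CAP approach
print(rmf_per_iter)   # cost of a single RM-FOCUSS iteration
\end{verbatim}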
\section{Multi-cluster Scenario}\label{CaseC2}
Recall that the ergodicity assumption on ${\bf x}_t(\vartheta)$
in Assumption~1
requires the statistics of ${\bf x}_t(\vartheta)$ to be the same along index $t$.
Let us now consider the case where we have $D$ clusters of $\tau$ time indices in P1 or of $\tau$ sensors in P2 such that ${\bf x}_t(\vartheta)$ is ergodic and its statistics remain constant along index $t$ only within a cluster.
We can then consider
Assumption~1
and the resulting case considered in Sections~\ref{uncorr_bins_system_model}-\ref{performance} as a special case of this multi-cluster scenario with $D=1$. We introduce the correlation matrix of ${\bf x}_t(\vartheta)$ and $\bar{\bf y}_t(\vartheta)$ for all indices $t$ belonging to cluster $d$ as ${\bf R}_{x,d}(\vartheta)$ and ${\bf R}_{\bar{y},d}(\vartheta)$, respectively, with $d=0,1,\dots,D-1$. We can then repeat all the steps of Sections~\ref{uncorr_bins_system_model}-\ref{performance}
for each cluster. More precisely, we can follow~\eqref{eq:Rybar_hat} and define the estimate of ${\bf R}_{\bar{y},d}(\vartheta)$ as $\hat{\bf R}_{\bar{y},d}(\vartheta)$, which is computed by averaging the outer-product of $\bar{\bf y}_t(\vartheta)$ over indices $t$ belonging to cluster $d$.
Then, we apply~\eqref{rxhatbarLS}-\eqref{CRAP} on $\hat{\bf R}_{\bar{y},d}(\vartheta)$ to obtain $\hat{\bf R}_{x,LS,d}(\vartheta)$ and the CAP for cluster $d$, i.e., $\hat{P}_{x,LS,d}(\vartheta)$. Also note that the bias and variance analysis in Section~\ref{performance} is also valid for each cluster in this section.
We might then be interested in the averaged statistics over the clusters, i.e., $\frac{1}{D}\sum_{d=0}^{D-1}{\bf R}_{x,d}(\vartheta)$. Since $\frac{1}{D}\sum_{d=0}^{D-1}\hat{\bf R}_{\bar{y},d}(\vartheta)$ is a consistent estimate of $\frac{1}{D}\sum_{d=0}^{D-1}{\bf R}_{\bar{y},d}(\vartheta)$, we can then consider the resulting $\frac{1}{D}\sum_{d=0}^{D-1}\hat{\bf R}_{{x},{LS},d}(\vartheta)$ as a valid LS estimate of $\frac{1}{D}\sum_{d=0}^{D-1}{\bf R}_{x,d}(\vartheta)$. Defining the theoretical spectral representation of the power at cluster $d$ as ${P}_{x,d}(\vartheta)$, we can then apply Theorem~2 for each cluster to conclude that $\frac{1}{D}\sum_{d=0}^{D-1}\hat{P}_{{x},{LS},d}(\vartheta)$ is an asymptotically (with respect to $\tilde{N}$) unbiased estimate of $\frac{1}{D}\sum_{d=0}^{D-1} P_{x,d}(\vartheta)$. This multi-cluster scenario is of interest for P2 when we have clusters of wireless sensors sensing user signals where the signal from each user experiences the same fading statistics (the same path loss and shadowing) on its way towards the sensors belonging to the same cluster. However, the fading statistics experienced by the signal between the user location and different clusters are not the same. For P1, the multi-cluster scenario implies that the array sensing time can be grouped into
multiple clusters of time indices where the
signal statistics do not vary
along the time within the cluster but
they vary across different clusters.
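A minimal Python sketch of the per-cluster averaging described above (with hypothetical random data standing in for the compressive measurements $\bar{\bf y}_t(\vartheta)$) is:
\begin{verbatim}
import numpy as np

# Hypothetical data: D clusters of tau compressive snapshots of length M
rng = np.random.default_rng(0)
D, tau, M = 2, 100, 5
ybar = (rng.standard_normal((D, tau, M))
        + 1j * rng.standard_normal((D, tau, M)))

# R_hat[d] estimates R_{ybar,d}(theta) by averaging outer products
R_hat = np.einsum('dtm,dtn->dmn', ybar, ybar.conj()) / tau

# averaged statistics over the clusters; the LS reconstruction is then
# applied to this average
R_avg = R_hat.mean(axis=0)
print(R_avg.shape)    # (M, M)
\end{verbatim}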
\vspace{-1mm}
\section{Correlated Bins}\label{correlatedbins}
When the bin size is reduced by increasing $N$ in~\eqref{eq:X_{t,n}}, the received spectra at two frequencies or angles that are separated by more
than the size of the bin might still be correlated.
In this case, ${\bf R}_{x}(\vartheta)$ and ${\bf R}_{\bar{x}}(\vartheta)$ in~\eqref{eq:Rxt_bar} are no longer diagonal and circulant, respectively, and
the temporal and spatial compression of Section~\ref{uncorr_bins_compression} cannot be performed without violating the identifiability of ${\bf r}_{\bar{x}}(\vartheta)$ in~\eqref{eq:Rybar_as_rbar_x}.
This section proposes a solution for when this situation occurs under
Assumption~1 and the single-cluster scenario (it does not apply to the multi-cluster scenario of Section~\ref{CaseC2}).
Let us organize $\tau$ indices $t$ into several groups
and write $t$ as $t=pZ+z+1$ with $p=0,1,\dots,P-1$ and $z=0,1,\dots,Z-1$, where $Z$ and $P$ represent the total number of groups
and the number of indices belonging to a group, respectively.
Writing $\bar{\bf y}_t(\vartheta)$ and $\bar{\bf x}_t(\vartheta)$ at $t=pZ+z+1$ as $\bar{\bf y}_{p,z}(\vartheta)$ and $\bar{\bf x}_{p,z}(\vartheta)$, we can
introduce for each $z$ a compression similar to~\eqref{eq:y_{t}_bar} as
\vspace{-1mm}
\begin{equation}
\bar{\bf y}_{p,z}(\vartheta)={\bf C}_{z}\bar{\bf x}_{p,z}(\vartheta),\quad \vartheta \in [0,1/N),
\label{eq:y_pz_bar}
\vspace{-1mm}
\end{equation}
where ${\bf C}_{z}$ is the $M\times N$ selection matrix for the $z$-th group of indices whose rows are also selected from the rows of ${\bf I}_N$.
Next,
we compute the correlation matrix of $\bar{\bf y}_{p,z}(\vartheta)$ in~\eqref{eq:y_pz_bar}, i.e., ${\bf R}_{\bar{y}_{z}}(\vartheta)=E[\bar{\bf y}_{p,z}(\vartheta)\bar{\bf y}^H_{p,z}(\vartheta)]$, for $z=0,1,\dots,Z-1$, as
\vspace{-2mm}
\begin{equation}
{\bf R}_{\bar{y}_{z}}(\vartheta)
={\bf C}_{z}E[\bar{\bf x}_{p,z}(\vartheta)\bar{\bf x}^H_{p,z}(\vartheta)]{\bf C}^T_{z}={\bf C}_{z}{\bf R}_{\bar{x}}(\vartheta){\bf C}^T_{z}
\label{eq:Ry_z_bar}
\vspace{-1mm}
\end{equation}
with ${\bf R}_{\bar{x}}(\vartheta)=E[\bar{\bf x}_{p,z}(\vartheta)\bar{\bf x}^H_{p,z}(\vartheta)]$, for all $p,z$,
as Assumption~1 requires that the statistics of $\bar{\bf x}_{t}(\vartheta)$ do not vary with $t$.
Let us interpret the above model
for problems P1 and P2. For P1,~\eqref{eq:y_pz_bar} implies that we split the
array scanning time $\tau$ into $P$ scanning periods, each of which consists of $Z$ time slots.
It is
clear from~\eqref{eq:y_pz_bar} that, in different time slots per scanning period, different sets of $M$ ULSs out of the $N$ available ULSs in the underlying ULA are activated, leading to a dynamic linear array (DLA). This DLA model was introduced in~\cite{ElsevierDOA}, although it was originally designed to estimate the DOA of more sources than active antennas, where the sources can be highly correlated.
Here, the indices of the selected rows of ${\bf I}_N$ used to form ${\bf C}_z$ correspond to the indices of the active ULSs at time slot $z$,
the set of $M$ active ULSs in a given time slot $z$ is the same across different scanning periods,
and
the number of received time samples per antenna in a time slot is one. Fig.~\ref{fig:SystemModelCorrP1} shows an example of this DLA
model.
For P2,~\eqref{eq:y_pz_bar} implies that
$\tau$
sensors are organized into $Z$ groups of $P$ sensors, where the same
sampling pattern is adopted by all sensors within the same group and where different groups employ different sampling patterns.
The indices of the active cosets used by group $z$ then correspond to the indices of the selected rows of ${\bf I}_N$ used to construct ${\bf C}_z$. Fig.~\ref{fig:SystemModelCorrP2} shows an example of the model for problem P2.
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{SystemModelCorrP1.eps}\vspace{-2mm}
\caption{The
DLA model used in problem P1 when the bins are correlated with $M=3$, $N=5$, $P=2$, and $Z=4$. Solid lines and dashed-dotted lines indicate active and inactive antennas, respectively.}
\label{fig:SystemModelCorrP1}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{SystemModelCorrP2.eps}\vspace{-0.5mm}
\caption{The model for problem P2 when the bins are correlated with $M=3$, $N=5$, $P=2$, and $Z=4$. For simplicity, we illustrate the multi-coset sampling as a Nyquist-rate sampling followed by a multiplexer and a switch that performs sample selection based on ${\bf C}_z$. Sensors in the same group have the same colour. For example, sensors in group $z=0$ collect the samples at the cosets with coset indices $0$,$1$, and $2$.}
\label{fig:SystemModelCorrP2}\vspace{-5mm}
\end{figure}
Since it turns out that the mathematical model in~\cite{ElsevierDOA} is applicable for both P1 and P2, we can then follow~\cite{ElsevierDOA}, rewrite~\eqref{eq:Ry_z_bar} for $z=0,1,\dots,Z-1$ as
\vspace{-0.75mm}
\begin{equation*}
{\bf r}_{\bar{y}_z}(\vartheta)=\text{vec}({\bf R}_{\bar{y}_z}(\vartheta))=({\bf C}_{z}\otimes{\bf C}_{z})\text{vec}({\bf R}_{\bar{x}}(\vartheta))
\label{eq:vec_Ry_z_bar}
\vspace{-0.72mm}
\end{equation*}
combine ${\bf r}_{\bar{y}_z}(\vartheta)$ for all $z$
into ${\bf r}_{\bar{y}}(\vartheta)=[{\bf r}^T_{\bar{y}_0}(\vartheta),{\bf r}^T_{\bar{y}_1}(\vartheta),\dots,$ ${\bf r}^T_{\bar{y}_{Z-1}}(\vartheta)]^T$, and write ${\bf r}_{\bar{y}}(\vartheta)$ as
\vspace{-0.5mm}
\begin{equation}
{\bf r}_{\bar{y}}(\vartheta)={\boldsymbol \Psi}\text{vec}({\bf R}_{\bar{x}}(\vartheta)),
\label{eq:ry_bar_vartheta}
\vspace{-0.5mm}
\end{equation}
with ${\boldsymbol \Psi}$ an $M^2Z\times N^2$ matrix given by
\vspace{-0.5mm}
\begin{equation}
{\boldsymbol \Psi}=[({\bf C}_{0}\otimes{\bf C}_{0})^T,
\dots,({\bf C}_{Z-1}\otimes{\bf C}_{Z-1})^T]^T.
\label{eq:Psi}
\vspace{-0.5mm}
\end{equation}
We can solve for $\text{vec}({\bf R}_{\bar{x}}(\vartheta))$ from ${\bf r}_{\bar{y}}(\vartheta)$ in~\eqref{eq:ry_bar_vartheta} using LS if ${\boldsymbol \Psi}$ in~\eqref{eq:Psi} has full column rank.
It has been shown in~\cite{ElsevierDOA} that ${\boldsymbol \Psi}$ has full column rank if and only if {\it each possible pair of two different rows} of ${\bf I}_N$ is simultaneously used in {\it at least one} of the matrices
$\{{\bf C}_z\}_{z=0}^{Z-1}$.
In P1, this implies that each possible combination of two ULSs in the underlying ULA should be active in at least one time slot per scanning period.
In P2, this implies that each possible pair of two cosets (out of $N$ possible cosets) should be simultaneously used by at least one group of sensors.
Observe how the
DLA model in Fig.~\ref{fig:SystemModelCorrP1} and the model in Fig.~\ref{fig:SystemModelCorrP2} satisfy this requirement.
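This rank condition can also be checked numerically. The following minimal Python sketch does so for hypothetical coset sets with $M=3$, $N=5$, and $Z=4$ (the four patterns below are illustrative assumptions, not the ones used in the figures):
\begin{verbatim}
import numpy as np
from itertools import combinations

N = 5
patterns = [(0, 1, 2), (0, 3, 4), (1, 3, 4), (2, 3, 4)]  # assumed C_z rows

def C_of(rows):
    return np.eye(N)[list(rows)]       # M x N selection matrix C_z

Psi = np.vstack([np.kron(C_of(p), C_of(p)) for p in patterns])
print(np.linalg.matrix_rank(Psi) == N**2)   # True: full column rank

# equivalent combinatorial check: every pair of rows of I_N is shared
# by at least one C_z
pairs = {frozenset(q) for p in patterns for q in combinations(p, 2)}
print(len(pairs) == N * (N - 1) // 2)       # True
\end{verbatim}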
Once $\text{vec}({\bf R}_{\bar{x}}(\vartheta))$ is reconstructed, we follow the procedure in Section~\ref{uncorrbinsreconstruct} to reconstruct ${\bf R}_{{x}}(\vartheta)=E[{\bf x}_{p,z}(\vartheta){\bf x}^H_{p,z}(\vartheta)]$ from ${\bf R}_{\bar{x}}(\vartheta)$.
In practice, to approximate the expectation operation in computing ${\bf R}_{\bar{y}_z}(\vartheta)$ in~\eqref{eq:Ry_z_bar},
we propose to take an average over $\bar{\bf y}_{p,z}(\vartheta)$ at different scanning periods $p$ for P1 or at $P$ sensors in
group $z$ for P2, i.e., $\hat{\bf R}_{\bar{y}_z}(\vartheta)=\frac{1}{P}\sum_{p=0}^{P-1}\bar{\bf y}_{p,z}(\vartheta)\bar{\bf y}^H_{p,z}(\vartheta)$. Introducing $\hat{\bf r}_{\bar{y}_z}(\vartheta)=\text{vec}(\hat{\bf R}_{\bar{y}_z}(\vartheta))$, the LS reconstruction is then applied to $\hat{\bf r}_{\bar{y}}(\vartheta)=[\hat{\bf r}^T_{\bar{y}_0}(\vartheta),\hat{\bf r}^T_{\bar{y}_1}(\vartheta),\dots,\hat{\bf r}^T_{\bar{y}_{Z-1}}(\vartheta)]^T$.
\section{Numerical Study}\label{numerical}
\subsection{Uncorrelated Bins}\label{simulation_uncorrbins}
In this section, we simulate
the estimation and detection performance of the
CAP approach for the uncorrelated bins case discussed in Sections~\ref{uncorr_bins_system_model}-\ref{CaseC2}.
To keep the study general, we simulate the multi-cluster scenario of Section~\ref{CaseC2}.
In our first experiment, we consider problem P2 and have $\tilde{N}=3060$, $L=170$, and $N=18$. Each sensor collects $M=5$ samples out of every $N=18$ possible samples based on a periodic length-$17$ minimal circular sparse ruler with $\mathcal{M}=\{0,1,4,7,9\}$.
This is identical to forming a $5 \times 18$ matrix ${\bf C}$ in~\eqref{eq:y_{t}_bar}
by selecting
the rows of ${\bf I}_{18}$ based on $\mathcal{M}$.
The resulting ${\bf R}_c$ in~\eqref{eq:Rybar_as_rbar_x} has full column rank and we have a compression rate of $M/N=0.28$. We consider $K=6$ user signals whose frequency bands are given in Table~\ref{tab:experiment1} together with the power at each band normalized by frequency. We generate these signals by passing six circular complex zero-mean Gaussian i.i.d. noise signals through different digital filters having $200$ taps where the location of the unit-gain passband of the filter for each signal corresponds to the six different active bands. We set the variances of these noise signals based on the desired user signal powers in Table~\ref{tab:experiment1}. We assume $D=2$ clusters of $\tau=100$ unsynchronized sensors, which means that, at a given point in time, different sensors observe different parts of the user signals.
To simplify the experiment, the correlation between the different parts of the user signals observed by different sensors is assumed to be negligible such that they can be viewed as independent realizations of the user signals.
The spatially and temporally white noise
has a variance of $\sigma^2=7$ dBm. The signal of each user received by different sensors is assumed to pass through different and uncorrelated fading channels $H_t^{(k)}(\vartheta)$. Note however that the signal from a user received by sensors within
the same cluster is assumed to suffer from the same path loss and shadowing. The amount of path loss experienced between each user and each cluster listed in Table~\ref{tab:experiment1}
includes the shadowing to simplify the simulation. We simulate small-scale Rayleigh fading on top of the path loss
by generating the channel frequency response based on a zero-mean complex Gaussian distribution with variance given by the
path loss in Table~\ref{tab:experiment1}. We assume flat fading in each band.
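For concreteness, a simplified Python sketch of how one such user signal can be generated (using \texttt{scipy} as an assumed tool; since the prototype filter below has real coefficients, the mirrored negative band is passed as well, which suffices for illustration) is:
\begin{verbatim}
import numpy as np
from scipy import signal

rng = np.random.default_rng(2)
N_tilde = 3060

# circular complex zero-mean Gaussian i.i.d. driving noise
noise = (rng.standard_normal(N_tilde)
         + 1j * rng.standard_normal(N_tilde)) / np.sqrt(2)

# 200-tap bandpass filter; band edges in units of pi rad/sample,
# here the band [0.11*pi, 0.19*pi] of the third user
taps = signal.firwin(200, [0.11, 0.19], pass_zero=False)
user = signal.lfilter(taps, 1.0, noise)
\end{verbatim}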
Fig.~\ref{fig:DisplayExperiment1} shows the CAP of the faded user signals received at the
sensors. As a benchmark, we provide the Nyquist-rate based AP (NAP),
which is obtained when all sensors collect all the $\tilde{N}$ samples.
With respect to the
NAP, the degradation in the quality of the
CAP is acceptable despite a strong compression, although more leakage is introduced in the unoccupied band. Next, we perform 1000 Monte Carlo
runs and vary the number of sensors per cluster $\tau$, the noise variance at each sensor $\sigma^2$, and
$M/N$ (see Fig.~\ref{fig:NMSEExperiment1}).
In Fig.~\ref{fig:NMSEExperiment1}, the compression rate of $M/N=0.44$ is implemented by
activating three extra cosets, i.e., $\{2,12,14\}$ (which we picked randomly).
Fig.~\ref{fig:NMSEExperiment1} shows the normalized mean square error (NMSE) of the
CAP with respect to the
NAP and indicates that increasing $M/N$
by a factor of less than two significantly improves the estimation quality. Having more sensors $\tau$ also improves the estimation quality.
Also observe that the compression introduces a larger NMSE for a larger noise power.
\begin{table}[t]
\caption{The frequency band and the power of the user signals and the experienced path loss in the first, second, and third experiments.}
\centering
\vspace{-1mm}
\begin{tabular}{| c | c | c | c |}
\hline
User band & Power/freq. &Path loss at&Path loss at\\
(rad/sample) &(per rad/sample)&cluster 1&cluster 2\\ \hline
$[-0.69\pi,-0.61\pi]$ & $38$ dBm& $-17$ dB& $-19$ dB\\ \hline
$[-0.49\pi,-0.41\pi]$ & $40$ dBm& $-20$ dB& $-18$ dB\\ \hline
$[0.11\pi,0.19\pi]$ & $34$ dBm& $-12$ dB& $-10$ dB\\ \hline
$[0.31\pi,0.39\pi]$ & $34$ dBm& $-16$ dB& $-18$ dB\\ \hline
$[0.41\pi,0.49\pi]$ & $32$ dBm& $-14$ dB& $-12$ dB\\ \hline
$[0.71\pi,0.79\pi]$ & $35$ dBm& $-18$ dB& $-20$ dB\\ \hline
\end{tabular}
\label{tab:experiment1}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=0.49\textwidth]{RevDispUNSYNCSc1VerA.eps}\vspace{-2.5mm}
\caption{The
CAP and the NAP of the faded user signals for the first experiment (unsynchronized sensors) as a function of frequency in a linear scale (top) and logarithmic scale (bottom).}
\label{fig:DisplayExperiment1}
\centering
\includegraphics[width=0.50\textwidth]{dBNMSEUnsyncFreqScen1VerC.eps}\vspace{-2mm}
\caption{The NMSE between the
CAP and the
NAP for the first experiment (unsynchronized sensors).}
\label{fig:NMSEExperiment1}\vspace{-5mm}
\end{figure}
We can also re-interpret the first experiment
for problem P1.
In P1, the first experiment implies that $M=5$ ULSs (whose indices are indicated by $\mathcal{M}$) out of $N=18$ ULSs are activated leading to a periodic circular MRA. Table~\ref{tab:experiment1} then gives the angular bands of the $K=6$ user signals
and the power for each band normalized by the angle. For P1, the first experiment also implies that each user transmits temporally independent signals and that the signals from different users $k$ pass through statistically different and uncorrelated time-varying fading channels $H_t^{(k)}(\vartheta)$ on their way towards the receiving array. For each user $k$, the fading statistics
remain constant within each cluster of time indices but the fading realization is temporally independent.
\begin{figure}[t]
\centering
\includegraphics[width=0.49\textwidth]{RevDispSyncVerASc1.eps}\vspace{-2mm}
\caption{The
CAP and the NAP of the faded user signals for the second experiment (synchronized sensors) as a function of frequency in a linear scale (top) and logarithmic scale (bottom).}
\label{fig:DisplayExperiment2}
\centering
\includegraphics[width=0.49\textwidth]{dBNMSESyncVerC.eps}\vspace{-1.5mm}
\caption{The NMSE between the
CAP and the NAP for the second experiment (synchronized sensors).}
\label{fig:NMSEExperiment2}
\end{figure}
The second experiment uses the same settings as the first experiment (including Table~\ref{tab:experiment1}). The only difference is that the
sensors are now assumed to be synchronized. Fig.~\ref{fig:DisplayExperiment2} depicts the CAP and the NAP of the faded user signals received at the
sensors.
{\color{blue}Unlike in the unsynchronized sensors case (see Fig.~\ref{fig:DisplayExperiment1}), we now observe a significant variation in both the CAP and the NAP. This is because, when the sensors are synchronized, they observe the same part of the user signals.
This means that, while the fading realization components in the received signals at different sensors are independent, the user signal components in the received signals at different sensors are fully correlated.}
Fig.~\ref{fig:NMSEExperiment2} shows the NMSE of the CAP with respect to the NAP for the synchronized sensors case. In general, some trends found in the unsynchronized sensors case also appear here. Notice that the NMSE
for the synchronized sensors case is smaller than the one for the unsynchronized sensors case since the quality of the NAP in the synchronized sensors case is also significantly worse than the one in the unsynchronized sensors case. Note that we can also re-interpret this second experiment for problem P1. This re-interpretation, however, will make more sense if we reverse the roles of $H_t^{(k)}(\vartheta)$ and $U_t^{(k)}(\vartheta)$. When this is the case, for P1, the second experiment implies that each user transmits temporally independent signals and that the signals from different users $k$ pass through statistically different and uncorrelated {\it time-invariant} fading channels
on their way towards the receiving array. Here, the statistics of the user signal are constant only within a cluster of time indices.
\begin{table}[th]
\caption{The two sets of coset patterns used in the third experiment (comparison of different bin size).}
\centering
\vspace{-1mm}
\begin{tabular}{| c | c | c |}
\hline
\multicolumn{3}{|c|}{First set of coset patterns}\\
\hline
$N$ & Minimal circular & The order of the additional coset indices\\
& sparse ruler indices & for implementing a larger compression rate\\ \hline
18 & 0, 1, 4, 7, 9 & 17, 2, 13, 12, 15, 6 \\ \hline
14 & 0, 1, 2, 4, 7 & 10, 6, 12, 5 \\ \hline
10 & 0, 1, 3, 5 & 8, 4 \\ \hline
\multicolumn{3}{|c|}{Second set of coset patterns}\\
\hline
$N$ & Minimal circular & The order of the additional coset indices\\
& sparse ruler indices & for implementing a larger compression rate\\ \hline
18 & 0, 1, 4, 7, 9 & 5, 2, 6, 17, 15, 14 \\ \hline
14 & 0, 1, 2, 4, 7 & 12, 10, 13, 11 \\ \hline
10 & 0, 1, 3, 5 & 4, 6 \\ \hline
\end{tabular}
\label{tab:coset2sets}
\end{table}
{\color{blue}In the third experiment, we investigate the impact of varying the bin size (which is equivalent to varying $N$) and $L$ for a given $\tilde{N}$ on the performance of the CAP approach. Let us consider
the settings in the first experiment (i.e., we consider Table~\ref{tab:experiment1})
except for the following.
We now examine three different values of $N$, i.e., $N=10$, $N=14$, and $N=18$ for a given $\tilde{N}=3150$. For each value of $N$, we vary the compression rate $M/N$ and examine the two sets of coset patterns available in Table~\ref{tab:coset2sets}. We start from the minimal
$M/N$ offered by the minimal circular sparse ruler. Larger compression rates are implemented by selecting additional coset indices where the order of the selection is provided by the third column of Table~\ref{tab:coset2sets}.
We fix the number of sensors per cluster to $\tau=76$ and perform 1000 Monte Carlo simulation runs for different noise variances (see Fig.~\ref{fig:NMSEDiffBinSize}).
Fig.~\ref{fig:NMSEDiffBinSize} illustrates the NMSE of the CAP with respect to the NAP for the two sets of coset patterns. Observe that varying $N$ and $L$ for a given $\tilde{N}$ does not result in a clear trend in the estimation performance. While the performance of the CAP for $N=10$ is worse than the one for the larger values of $N$, the performance of the CAP for $N=14$ is better than the one for $N=18$ for some values of $M/N$. Note that the NMSE also depends on the coset pattern that we select to implement a particular compression.
At this point, we would like to mention that, as long as the bin size constraint in Remark~2 is satisfied, having a larger $N$ is generally more advantageous as it leads to a lower minimum $M/N$. This is because, as $N$ increases, the number of marks in the corresponding length-$(N-1)$ minimal circular sparse ruler (which is the minimum $M$) tends to stay constant or to increase very slowly. As a result,
the minimum compression rate $M/N$ also generally (even though not monotonically) decreases with $N$.}
\begin{figure}[h]
\begin{minipage}[b]{1\linewidth}
\centering
\includegraphics[width=1\textwidth]{DBTSPREVNMSE_NL.eps}
\centerline{\small(a)}
\end{minipage}
\begin{minipage}[b]{1\linewidth}
\centering
\includegraphics[width=1\textwidth]{DBPART2TSPREV_NMSE_NL.eps}
\centerline{\small(b)}
\end{minipage}
\caption{The NMSE between the CAP and the NAP for the third experiment (comparison of different bin size); (a) using the first set of coset patterns (see Table~\ref{tab:coset2sets}); (b) using the second set of coset patterns.}
\label{fig:NMSEDiffBinSize}
\end{figure}
In the next {\color{blue}three} experiments, we use the
CAP to detect the existence of active user signals that suffer from fading channels and evaluate the detection performance. We start with the fourth experiment, where we again consider problem P2, $\tilde{N}=3060$, $L=170$, $N=18$, and $M/N=0.28$ (again by adopting
$\mathcal{M}=\{0,1,4,7,9\}$). We now consider $D=3$ clusters of $\tau$ unsynchronized sensors and $K=3$ user signals (see their settings in Table~\ref{tab:experimentdetect}), which
are generated using the same procedure used in the first experiment.
The amount of path loss (which includes shadowing) experienced between each user and each cluster is listed in Table~\ref{tab:experimentdetect}. We then simulate a small-scale Rayleigh fading channel on top of it. We perform 5000 Monte Carlo runs and vary
$\tau$ and
$\sigma^2$
(see Fig.~\ref{fig:DetectSc1Op3}).
We vary the detection threshold manually and, out of the $\tilde{N}=3060$ frequency points at which the CAP is reconstructed, we evaluate the resulting detection events at $363$ frequency points in the active bands and the false alarm events at $363$ frequency points in the bands that are far from the active bands, i.e., $[-0.77\pi,-0.53\pi]$.
Here, we average the estimated power over every eleven subsequent frequency points $\vartheta$ and apply the threshold to these average values. The resulting receiver operating characteristic (ROC) is depicted in Fig.~\ref{fig:DetectSc1Op3}. Observe the acceptable detection performance of the CAP
for the examined $\tau$ and $\sigma^2$, although the performance degrades slightly for $\tau=17$ and $\sigma^2=14$ dBm.
This detection performance demonstrates that the proposed CAP
can be used in a spectrum sensing application such as in a CR network.
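A minimal Python sketch of this detection rule (with a hypothetical CAP standing in for the reconstructed one, and an arbitrary example threshold) is:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
cap = rng.exponential(1.0, 3060)  # hypothetical CAP over 3060 points
win = 11                          # average over eleven consecutive points

avg = cap[: len(cap) // win * win].reshape(-1, win).mean(axis=1)
threshold = 1.5                   # swept manually to trace out the ROC
print(np.flatnonzero(avg > threshold))  # windows declared active
\end{verbatim}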
\begin{table}[ht]
\caption{The frequency band and the power of the user signals and the experienced path loss in the fourth and the fifth experiments.}
\centering
\vspace{-1mm}
\begin{tabular}{| c | c | c | c | c |}
\hline
User band & Power/freq. & \multicolumn{3}{c|}{Path loss (in dB) at cluster}\\ \cline{3-5}
(rad/sample) &(per rad/sample)& 1 & 2 & 3 \\ \hline
$[0.41\pi,0.49\pi]$ & $25$ dBm& $-12$ & $-13$ & $-14$ \\ \hline
$[0.31\pi,0.39\pi]$ & $25$ dBm& $-14.5$ & $-13$ & $-11.5$ \\ \hline
$[0.21\pi,0.29\pi]$ & $25$ dBm& $-13.5$ & $-13$ & $-12.5$ \\ \hline
\end{tabular}
\label{tab:experimentdetect}\vspace{-2mm}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=0.49\textwidth]{RevPDFASc1UnSyncBackupVA.eps}\vspace{-2mm}
\caption{The resulting ROC when the
CAP is used to detect the existence of the active user signals suffering from fading channels in the fourth experiment (unsynchronized sensors).}
\label{fig:DetectSc1Op3}
\centering
\includegraphics[width=0.49\textwidth]{ResultPDFARevScen1Op2.eps}\vspace{-2mm}
\caption{The resulting ROC when the
CAP is used to detect the existence of the active user signals suffering from fading channels in the fifth experiment (synchronized sensors).}
\label{fig:DetectSc1OpTwo}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.49\textwidth]{PlotCompareMMV.eps}\vspace{-2mm}
\caption{The resulting ROC of the detector when the CAP is used compared with the one when the compressive signal reconstruction using RM-FOCUSS of~\cite{BhaskarRao} is used (the sixth experiment).}
\label{fig:CompareMMV}
\end{figure}
The fifth experiment repeats the fourth experiment but for synchronized sensors.
The ROC in Fig.~\ref{fig:DetectSc1OpTwo} shows that the detection performance for the synchronized sensors case is
worse than the one for the unsynchronized sensors case in Fig.~\ref{fig:DetectSc1Op3}
due to the significant variation in the
CAP as shown in Fig.~\ref{fig:DisplayExperiment2}.
{\color{blue}In the sixth experiment, we consider problem P2 and compare the detection performance of the spectrum sensing approach based on CAP with that of the one based on the RM-FOCUSS discussed in Section~\ref{complexity}.
To simulate the existence of a joint sparsity structure in $\{{\bf x}_{t}(\vartheta)\}_{t=1}^{\tau}$, we again use the settings
in Table~\ref{tab:experimentdetect}. However, we now only assume one cluster of $\tau=50$ sensors where the amount of path loss experienced between each user and each sensor is set to $-13$ dB.
The ROC for 5000 Monte Carlo runs and different $M/N$ as well as $\sigma^2$ is illustrated in
Fig.~\ref{fig:CompareMMV}. Here, the compression rate of $M/N = 9/18$ is implemented by activating four extra cosets, i.e., $\{16, 8, 12, 13\}$ (which we picked randomly), on top of the length-$17$ minimal circular sparse ruler. The M-FOCUSS convergence criterion parameter
and the M-FOCUSS diversity measure parameter (labeled as $p$ in~\cite{BhaskarRao}) are set to $0.001$ and $0.8$, respectively. Note that the latter setting follows the suggestion of~\cite{BhaskarRao}. To determine the M-FOCUSS regularization parameter,
we first perform some experiments and examine ten different values of regularization parameters between $10^{-4}$ and $10$.
We then select the regularization parameter that leads to the smallest NMSE between the resulting compressive estimate of $\{|{X}_{t}(\vartheta)|^2\}_{t=1}^{\tau}$, for all the considered $\vartheta\in[0,1)$, and the Nyquist-rate version.
We finally decide to set the regularization parameter to $10$ for the case of $M/N=5/18$, to $0.01668$ for the case of $M/N=9/18$ and $\sigma^2=14$ dBm, and to $0.21544$ for the case of $M/N=9/18$ and $\sigma^2=11$ dBm (see Fig.~\ref{fig:CompareMMV}). Observe from Fig.~\ref{fig:CompareMMV} that the spectrum sensing approach based on CAP has a better detection performance than the one based on signal/spectrum reconstruction using RM-FOCUSS.
Recall that the approach of~\cite{BhaskarRao} requires the sparsity constraint on the vectors to be reconstructed (which are $\{{\bf x}_{t}(\vartheta)\}_{t=1}^{\tau}$). This implies that, if we have additional active users on top of the scenario used in the sixth experiment, the actual $\{{\bf x}_{t}(\vartheta)\}_{t=1}^{\tau}$ will have a smaller sparsity level. In this case, if we use the same compression rate $M/N$ as the one used in the sixth experiment, the performance of RM-FOCUSS will be even worse.}
\subsection{Correlated Bins}\label{simulation_corrbins}
In this section, we conduct the seventh experiment to evaluate the estimation performance of the
CAP approach for the correlated bins case discussed in Section~\ref{correlatedbins}.
Here, we consider problem P2, $\tilde{N}=3080$, $L=77$, $N=40$, and $M=14$ ($M/N=0.35$). Recall from Section~\ref{correlatedbins} that the mathematical model for the correlated bins case is similar to the one in~\cite{ElsevierDOA}. Hence, to design the sampling matrices of all sensors (which are assumed to be synchronized) such that ${\boldsymbol \Psi}$ in~\eqref{eq:Psi} has full column rank,
we use the algorithm of~\cite{ElsevierDOA}, which was originally designed to solve the antenna selection problem for estimating the DOA of highly correlated sources. This algorithm, which only offers a suboptimal solution for
$Z$, suggests
$Z=12$ groups of $P=25$ sensors where each group has a unique set of $M=14$ active cosets.
We consider $K=2$ user signals whose setting is given in Table~\ref{tab:experiment5corr}.
To simulate the full correlation between all the frequency components within the band of the $k$-th user, we assume that the $k$-th user transmits exactly the same symbol at all these frequency components at each time instant.
On its way toward the different sensors, the signal of the $k$-th user is assumed to pass through different and uncorrelated Rayleigh fading channels $H_t^{(k)}(\vartheta)$ but it suffers from the same path loss and shadowing, whose value is
listed in Table~\ref{tab:experiment5corr}.
Again, we
assume flat fading in each user band and have $\sigma^2=7$ dBm.
Fig.~\ref{fig:DisplayCorrelated} shows the
CAP of the faded user signals using the correlated bins (CB) assumption. As a benchmark, we also provide
the NAP and the
CAP based on the uncorrelated bins (UB) assumption discussed in Sections~\ref{uncorr_bins_system_model}-\ref{performance}, which
is obtained by activating the same set of $M=14$ cosets, i.e., $\mathcal{M}=\{0,1,2,3,4,9,10,15,16,18,20,30,33,37\}$, in all sensors
(leading to a full column rank matrix ${\bf R}_c$ in~\eqref{eq:Rybar_as_rbar_x}). Observe that the quality of the
CAP based on the UB assumption is extremely poor.
On the other hand, with respect to the NAP, the degradation in the quality of the
CAP based on the CB assumption is acceptable despite a significant variation
in the unoccupied band.
Next, we perform 1000 Monte Carlo runs and vary the number of sensors per group $P$,
$\sigma^2$,
and
$M/N$ (see Fig.~\ref{fig:NMSECorrelated}).
In Fig.~\ref{fig:NMSECorrelated}, the compression rate of $M/N=0.45$ is implemented by randomly activating four additional cosets on top of the already selected $14$ cosets and the resulting sampling pattern is kept fixed throughout the entire Monte Carlo runs.
Fig.~\ref{fig:NMSECorrelated} shows the NMSE of the
CAP based on the CB assumption with respect to the NAP, which indicates that either increasing $M/N$
or having more sensors per group $P$ can significantly improve the estimation quality. Again,
a larger NMSE is introduced for a larger noise power.
\begin{table}[t]
\caption{The frequency bands occupied by the users, their power, and the experienced path loss in the seventh experiment.}
\centering
\vspace{-1mm}
\begin{tabular}{| c | c | c |}
\hline
User band & Power/freq. &Path loss\\
(rad/sample) &(per rad/sample)&\\ \hline
$[-0.88\pi,-0.2\pi]$ & $22$ dBm& $-6$ dB\\ \hline
$[0.15\pi,0.92\pi]$ & $25$ dBm& $-7$ dB\\ \hline
\end{tabular}
\label{tab:experiment5corr}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=0.49\textwidth]{DisplayCorrSyncPinvVerA.eps}\vspace{-2.5mm}
\caption{The
CAP and the NAP of the faded user signals for the seventh experiment in Section~\ref{simulation_corrbins} as a function of frequency in a linear scale (top) and logarithmic scale (bottom).}
\label{fig:DisplayCorrelated}\vspace{0.5mm}
\centering
\includegraphics[width=0.49\textwidth]{dBNMSECorrSync1000PinvVerB.eps}\vspace{-2mm}
\caption{The NMSE between the
CAP based on the correlated bins assumption and the
NAP for the seventh experiment in Section~\ref{simulation_corrbins}.}
\label{fig:NMSECorrelated}
\end{figure}
The interpretation of this seventh experiment for P1 is similar to the problem discussed in~\cite{ElsevierDOA}. For P1, this experiment is equivalent to having a ULA consisting of $N=40$ ULSs, where the
array scanning time is split into $P=25$ scanning periods, each of which consists of $Z=12$ time slots. In different time slots per scanning period, we activate different sets of $M=14$ (out of $N=40$) ULSs
leading to a DLA.
The interpretation will again make more sense if we reverse the roles of $H_t^{(k)}(\vartheta)$ and $U_t^{(k)}(\vartheta)$. When this is the case, for P1, the experiment implies that all users transmit temporally independent signals and that the signals from different users $k$ pass through statistically different and uncorrelated time-invariant fading channels
on their way towards the receiving array.
As the signal received from the $k$-th user at different angles within its angular band is fully correlated, this can be related to a situation
where the same symbol of the $k$-th user hits different scatterers (which play the role of the channel) before reaching the observing array. From the point of view of the array, the scattered versions of the symbol will be received from different angles within a particular angular band.
\begin{figure}[h]
\centering
\includegraphics[width=0.49\textwidth]{BackRevdBNoiseNMSEVersion1.eps}\vspace{-3mm}
\caption{The simulated and analytical NMSE between the
CAP and the true power spectrum
when $x_t[\tilde{n}]$ only contains circular complex Gaussian i.i.d. noise.
Unless mentioned otherwise, the cases of $M/N>0.28$ are implemented by activating extra cosets based on Pattern~1.}
\label{fig:NMSEGaussianNoise}
\end{figure}
\begin{table}[th]
\caption{Three patterns of additional coset indices to be activated on top of the already selected minimal circular sparse ruler cosets for implementing $M/N>0.28$
in Section~\ref{simulation_noise}.}
\centering
\vspace{-1mm}
\begin{tabular}{| c | c |}
\hline
Coset pattern & The order of the additional coset indices\\ \hline
Pattern 1 & 17, 11, 2, 6 \\ \hline
Pattern 2 & 3, 5, 6, 8 \\ \hline
Pattern 3 & 2, 3, 5, 6 \\ \hline
\end{tabular}
\label{tab:cosetpattern}
\end{table}
\subsection{Circular Complex Gaussian Noise}\label{simulation_noise}
The last experiment examines the performance of the
CAP based on the UB assumption when the received signal $x_t[\tilde{n}]$ only contains circular complex zero-mean Gaussian spatially and temporally i.i.d. noise. Here, we have $\tilde{N}=3060$, $L=170$, $N=18$, and $\sigma^2=7$ dBm. We perform 1000 Monte Carlo
runs and vary $\tau$ (see Fig.~\ref{fig:NMSEGaussianNoise}).
We compute the NMSE of the
CAP with respect to the true power spectrum (since $x_t[\tilde{n}]$ in this case is clearly a WSS signal) and compare this NMSE obtained from the simulation with the analytical NMSE. Since it can be
shown that, for circular complex Gaussian i.i.d. noise $x_t[\tilde{n}]$, $\hat{P}_{{x},{LS}}(\vartheta)$ is an unbiased estimate of ${P}_x(\vartheta)$ even for finite $\tilde{N}$, the analytical NMSE only depends on the variance of $\hat{P}_{{x},{LS}}(\vartheta)$ and it can be shown to be equal to $\frac{1}{\tau}(\frac{1}{M}+\sum_{n=1}^{N-1}\frac{1}{\gamma_{n+1}})$ by using~\eqref{eq:VarPxLSwhitenoisetheo}. We start with $M/N=0.28$ by using the cosets indexed
by the length-$17$ minimal circular sparse ruler, i.e., $\mathcal{M}=\{0,1,4,7,9\}$, and then vary $M/N$.
First, the cases of $M/N>0.28$ are implemented by activating additional cosets based on Pattern~1 in Table~\ref{tab:cosetpattern}. Then, we also test Pattern~2 and Pattern~3 as additional coset patterns to implement the case of $M/N=0.5$.
Observe in Fig.~\ref{fig:NMSEGaussianNoise} how the analytical NMSE lies on top of the simulated NMSE for all the evaluated $M/N$ values.
Also observe that, for $M/N=0.5$, the three different coset patterns have led to different values of the NMSE depending on the resulting value of $\{\gamma_{n+1}\}_{n=1}^{N-1}$ in~\eqref{eq:VarPxLSwhitenoisetheo}.
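The analytical NMSE above is straightforward to evaluate; a minimal Python sketch for the three coset patterns of Table~\ref{tab:cosetpattern} at $M/N=0.5$ (with an assumed example value for $\tau$) is:
\begin{verbatim}
import numpy as np

N, tau = 18, 100                 # tau is an assumed example value
base = [0, 1, 4, 7, 9]           # minimal circular sparse ruler cosets
extras = {'Pattern 1': [17, 11, 2, 6],
          'Pattern 2': [3, 5, 6, 8],
          'Pattern 3': [2, 3, 5, 6]}

def analytical_nmse(cosets):
    w = np.zeros(N)
    w[list(cosets)] = 1
    g = np.array([np.dot(np.roll(w, -k), w) for k in range(N)])
    return (1.0 / len(cosets) + np.sum(1.0 / g[1:])) / tau

for name, extra in extras.items():
    print(name, analytical_nmse(base + extra))
\end{verbatim}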
\section{Conclusion and Future Work}\label{sec:conclusion}
This paper proposed a compressive periodogram reconstruction approach and considered both
time-frequency and spatio-angular domains. In our model, the entire band is split into uniform bins
such that the received spectra at two frequencies or angles, whose distance is equal to or larger than the size of a bin, are uncorrelated.
In both considered domains, this model leads to a
circulant coset correlation matrix, which allows us to perform a strong compression while still posing our reconstruction problem as an overdetermined system.
When the
coset patterns are designed based on a circular sparse ruler, the system matrix has full column rank and we can reconstruct the periodogram using LS.
In a practical situation, our estimate of the coset correlation matrix is only asymptotically circulant.
Hence, we also presented an asymptotic bias and variance analysis for the
CAP. We further included
a thorough variance analysis on
the case when the received signal only contains circular complex zero-mean white Gaussian noise, which
provides some useful insights into the performance of our
approach.
The variance analysis for a more general signal (i.e., a general Gaussian signal) has also been presented, but it is not easy to interpret due to its dependence on the unknown statistics of the user signals.
We also proposed a solution for the case when the bin size is decreased such that the received spectra at two frequencies or angles, with a spacing
between them larger than the size of the bin, can still be correlated.
Finally, the simulation study showed that the estimation performance of the evaluated approach is acceptable
and that our CAP
performs well when detecting the existence of the user signals suffering from
fading channels.
{\color{blue}As a future work, we are interested in the case when both problems P1 and P2 emerge simultaneously. In that case, we would consider a compressive linear array of antennas
and a compressive digital receiver unit per antenna,
leading to a two-dimensional (2D) digital signal. Our interest would then be to investigate if it is possible to perform
compression in both the time and spatial domain and to jointly reconstruct the angular and frequency periodogram from the 2D compressive samples. To study that, we could follow an approach similar
to~\cite{EUSIPCO}, which assumes stationarity in both the time and spatial domain and exploits the existing Toeplitz structure in the correlation matrix.}
\appendices
\section{Proof of Theorem~2}
\label{ProofTheorem2}
Recall that $\hat{\bf R}_{{\bar{y}}}(\vartheta)$ in~\eqref{eq:Rybar_hat} is an unbiased estimate of ${\bf R}_{\bar{y}}(\vartheta)$ in~\eqref{eq:Ry_bar}, i.e., $E[\hat{\bf R}_{{\bar{y}}}(\vartheta)]={\bf R}_{\bar{y}}(\vartheta)$.
Applying the expectation operator on~\eqref{rxhatbarLS} and~\eqref{eq:RxLS}, it is then clear that $\hat{\bf r}_{\bar{x},LS}(\vartheta)$ in~\eqref{rxhatbarLS} and $\hat{\bf R}_{x,LS}(\vartheta)$ in~\eqref{eq:RxLS} are unbiased estimates of ${\bf r}_{\bar{x}}(\vartheta)$ in~\eqref{eq:Rybar_as_rbar_x} and ${\bf R}_{x}(\vartheta)$ in~\eqref{eq:Rxt_bar}, respectively, since ${\bf r}_{\bar{x}}(\vartheta)$ in~\eqref{eq:Rybar_as_rbar_x} can perfectly be reconstructed from ${\bf R}_{\bar{y}}(\vartheta)$ using LS. Recall from Remark~1 that the $(i+1)$-th diagonal element of ${\bf R}_{x}(\vartheta)$ is equal to $E[|X_{t,i}(\vartheta)|^2]$. From~\eqref{CRAP}, it is then obvious that the CAP $\hat{P}_{x,LS}(\vartheta+\frac{i}{N})$ is an unbiased estimate of $\frac{1}{\tilde{N}}E[|X_{t,i}(\vartheta)|^2]$. However, by taking~\eqref{eq:PowerSpectrum} into account, we can observe that
\vspace{-1mm}
\begin{equation}
\lim_{\tilde{N}\rightarrow\infty}\frac{1}{\tilde{N}}E[|X_{t,i}(\vartheta)|^2]=P_x(\vartheta+\frac{i}{N}),\:\:\vartheta\in[0,1/N),
\label{AsymptoticEXtivartheta}
\vspace{-1mm}
\end{equation}
for $i=0,1,\dots,N-1$, since $x_t[\tilde{n}]$ is a finite-length observation of the actual random process $x[\tilde{n}]$. Hence, by applying $\lim_{\tilde{N}\rightarrow\infty}E[\hat{P}_{x,LS}(\vartheta+\frac{i}{N})]$ and using~\eqref{AsymptoticEXtivartheta}, it is clear that $\hat{P}_{x,LS}(\vartheta+\frac{i}{N})$ is an asymptotically (with respect to $\tilde{N}$) unbiased estimate of $P_x(\vartheta+\frac{i}{N})$ in~\eqref{eq:PowerSpectrum}, for $\vartheta \in [0,1/N)$ and $i=0,1,\dots,N-1$. $\square$
\vspace{-2mm}
\section{Proof of Proposition~1}
\label{ProofPropos1}
Note that for the specific case assumed in this proposition, we can rewrite~\eqref{eq:CovRycheckbarelementGaussian} as
\vspace{-1mm}
\begin{align}
&\text{Cov}[[\hat{\bf R}_{{\bar{y}}}(\vartheta)]_{m+1,m'+1},[\hat{\bf R}_{{\bar{y}}}(\vartheta)]_{a+1,a'+1}]\nonumber\\
&=\frac{1}{N^4\tau^2}\sum_{t=1}^{\tau}\sum_{i=0}^{N-1}\sum_{i'=0}^{N-1}\sum_{b=0}^{N-1}\sum_{b'=0}^{N-1}e^{\frac{j2\pi (n_m i-n_{m'} i'-n_ab+n_{a'}b')}{N}}\times\nonumber\\
&E[{X}_{t,i}(\vartheta){X}_{t,b}^*(\vartheta)]E[{X}_{t,i'}^*(\vartheta){X}_{t,b'}(\vartheta)]
\label{eq:CovRycheckbarelementGaussian_noise}
\vspace{-1mm}
\end{align}
where we also take the circularity of $x_t[\tilde{n}]$
into account. By using $\tilde{N}=LN$, we can find that
$E[{X}_{t,i}(\vartheta){X}_{t,b}^*(\vartheta)]=\sigma^2\sum_{\tilde{n}=0}^{\tilde{N}-1}e^{j2\pi\tilde{n}(\frac{b-i}{N})}=\tilde{N}\sigma^2\delta[b-i]$,
as it is clear from~\eqref{eq:CovRycheckbarelementGaussian_noise} that $b,i\in \{0,1,\dots,N-1\}$. Hence, we can simplify~\eqref{eq:CovRycheckbarelementGaussian_noise} as
\vspace{-1mm}
\begin{align*}
&\text{Cov}[[\hat{\bf R}_{{\bar{y}}}(\vartheta)]_{m+1,m'+1},[\hat{\bf R}_{{\bar{y}}}(\vartheta)]_{a+1,a'+1}]=\sum_{t=1}^{\tau}\sum_{i=0}^{N-1}\sum_{i'=0}^{N-1} \nonumber \\
&\sum_{b=0}^{N-1}\sum_{b'=0}^{N-1}e^{\frac{j2\pi (n_m i-n_{m'} i'-n_ab+n_{a'}b')}{N}}\frac{L^2\sigma^4}{N^2\tau^2}\delta[b-i]\delta[i'-b']\nonumber \\
&=\frac{L^2\sigma^4}{N^2\tau}\sum_{i=0}^{N-1}e^{\frac{j2\pi i(n_m -n_{a})}{N}}\sum_{i'=0}^{N-1}e^{\frac{j2\pi i'(n_{a'}-n_{m'})}{N}}\nonumber\\
&=\frac{L^2\sigma^4}{\tau}\delta[m -{a}]\delta[{m'}-{a'}],\:\:\vartheta \in [0,1/N),
\vspace{-1mm}
\end{align*}
where the last equality is due to
$n_m \in \{0,1,\dots,N-1\}$,
for all $m$,
and the fact that $n_m =n_{a}$ implies $m ={a}$. $\square$
{\color{blue}\section{Proof of~\eqref{eq:gamma_kappa}}
\label{ProofOfGammakappa}
First, by recalling that ${\bf R}_c=({\bf C}\otimes{\bf C}){\bf T}$, we can write
\begin{align}
&\gamma_\kappa =[{\bf R}_c^T{\bf R}_c]_{\kappa,\kappa}=[{\bf T}^T(({\bf C}^T{\bf C})\otimes({\bf C}^T{\bf C})){\bf T}]_{\kappa,\kappa}\nonumber \\
&=[{\bf T}^T(\text{diag}({\bf w})\otimes\text{diag}({\bf w})){\bf T}]_{\kappa,\kappa}\nonumber\\
&=\sum_{n=0}^{N-1}\sum_{n'=0}^{N-1}[{\bf T}^T]_{\kappa,Nn+n'+1}\times\nonumber\\
&[\text{diag}({\bf w})\otimes\text{diag}({\bf w})]_{Nn+n'+1,Nn+n'+1}[{\bf T}]_{Nn+n'+1,\kappa}.
\label{eq:gammakappa1}
\end{align}
Let us then recall that the $(q+1)$-th row of ${\bf T}$ is given by the $\left(\left(q-\left\lfloor\frac{q}{N}\right\rfloor\right)\text{ mod }N+1\right)$-th row of ${\bf I}_N$. We can then find that the $(\iota+1)$-th row of ${\bf T}^T$ contains ones at the $\{Nn+(n+\iota)\text{ mod }N+1\}_{n=0}^{N-1}$-th entries and zeros elsewhere.
We can thus rewrite~\eqref{eq:gammakappa1} as
\begin{align}
&\gamma_\kappa=\sum_{n=0}^{N-1}[{\bf w}\otimes{\bf w}]_{Nn+((n+\kappa-1)\text{ mod }N)+1}\nonumber\\
&=\sum_{n=0}^{N-1}[{\bf w}{\bf w}^T]_{((n+\kappa-1)\text{ mod }N)+1,n+1}\nonumber\\
&=\sum_{n=0}^{N-1}w[(n+\kappa-1)\text{ mod }N]w[n],
\end{align}
where we use ${\bf w}{\bf w}^T=\text{vec}^{-1}({\bf w}\otimes{\bf w})$ in the second equation with vec$^{-1}(.)$ the inverse of the vec$(.)$ operation.}
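The identity ${\bf w}{\bf w}^T=\text{vec}^{-1}({\bf w}\otimes{\bf w})$ used in the last step can also be verified numerically; a minimal Python sketch (for an arbitrary example ${\bf w}$, with vec$(.)$ stacking columns, i.e., column-major order) is:
\begin{verbatim}
import numpy as np

w = np.array([1, 0, 1, 1, 0])    # arbitrary binary example
lhs = np.outer(w, w)             # w w^T
rhs = np.kron(w, w).reshape((5, 5), order='F')  # inverse of vec
print(np.array_equal(lhs, rhs))  # True
\end{verbatim}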
\section{Proof of Theorem~3}
\label{ProofTheorem3}
To simplify the discussion, we introduce the $N^2 \times 1$ vector
\vspace{-1mm}
\begin{equation}
\hat{\boldsymbol \rho}_{{\bar{x}}}(\vartheta)=({\bf C}\otimes{\bf C})^T\text{vec}(\hat{\bf R}_{\bar{y}}(\vartheta)).
\label{eq:rhobarx}
\vspace{-1mm}
\end{equation}
From the definition of ${\bf C}$ in Section~\ref{uncorr_bins_compression}, it is clear that the $(Nf+g+1)$-th row of $({\bf C}\otimes{\bf C})^T$ contains a single one at a certain entry and zeros elsewhere only if $f,g \in \mathcal{M}$, otherwise it contains zeros at all entries. Hence, we can write
\begin{equation}
[\hat{\boldsymbol \rho}_{\bar{x}}(\vartheta)]_{Nf+g+1}=0, \:\:\: \text{if $ f \notin \mathcal{M}$ or $ g \notin \mathcal{M}$}.
\label{eq:rhobarxentry}
\end{equation}
When $f,g \in \mathcal{M}$, the $(Nf+g+1)$-th entry of $\hat{\boldsymbol \rho}_{\bar{x}}(\vartheta)$ is given by one of the entries of $\text{vec}(\hat{\bf R}_{\bar{y}}(\vartheta))$.
Recall from Appendix~\ref{ProofOfGammakappa} that the $(\iota+1)$-th row of ${\bf T}^T$ contains ones at the $\{Nn+(n+\iota)\text{ mod }N+1\}_{n=0}^{N-1}$-th entries and zeros elsewhere. As a result, we can use~\eqref{rxhatbarLS},~\eqref{eq:rhobarx}, and Remark~3
to write the $(\iota+1)$-th entry of $\hat{\bf r}_{{\bar{x}},{LS}}(\vartheta)$ in~\eqref{rxhatbarLS} as
\vspace{-1mm}
\begin{align}
&[\hat{\bf r}_{{\bar{x}},{LS}}(\vartheta)]_{\iota+1}=\frac{1}{\gamma_{\iota+1}}[{\bf T}^T\hat{\boldsymbol \rho}_{{\bar{x}}}(\vartheta)]_{\iota+1}\nonumber\\
&\:\:=\frac{1}{\gamma_{\iota+1}}\sum_{n=0}^{N-1}[\text{vec}^{-1}(\hat{\boldsymbol \rho}_{\bar{x}}(\vartheta))]_{(n+\iota)\text{ mod }N+1,n+1}
\label{eq:rbarxLSasrho}
\vspace{-1mm}
\end{align}
with $\iota=0,1,\dots,N-1$
and $\text{vec}^{-1}(\hat{\boldsymbol \rho}_{{\bar{x}}}(\vartheta))$ an $N\times N$ matrix.
At this stage, let us introduce the following definition.
\vspace{0.5mm}
\newline
\hspace*{1mm}{\it Definition 3:
Define the collection of $[\text{vec}^{-1}(\hat{\boldsymbol \rho}_{{\bar{x}}}(\vartheta))]_{g'+1,f'+1}$ for $f',g'\in\{0,1,\dots,N-1\}$ and all $((g'-f')\text{ mod }N+1)=\kappa$ as the $\kappa$-th modular diagonal of $\text{vec}^{-1}(\hat{\boldsymbol \rho}_{{\bar{x}}}(\vartheta))$. Note that the first modular diagonal of $\text{vec}^{-1}(\hat{\boldsymbol \rho}_{{\bar{x}}}(\vartheta))$ is its main diagonal.}
\vspace{0.5mm}
\newline
We use Definition~3 to formulate the following lemma.
\vspace{0.5mm}
\newline
\hspace*{1mm}{\it Lemma~1: The $\kappa$-th modular diagonal of $\text{vec}^{-1}(\hat{\boldsymbol \rho}_{{\bar{x}}}(\vartheta))$ in~\eqref{eq:rbarxLSasrho}
contains only $\gamma_\kappa$ entries of $\text{vec}(\hat{\bf R}_{\bar{y}}(\vartheta))$ in~\eqref{eq:rhobarx}. The remaining
$N-\gamma_\kappa$ entries of the $\kappa$-th modular diagonal of $\text{vec}^{-1}(\hat{\boldsymbol \rho}_{{\bar{x}}}(\vartheta))$ are equal to zero.
The summation in~\eqref{eq:rbarxLSasrho} then involves $N-\gamma_{\iota+1}$ zeros and only $\gamma_{\iota+1}$ out of the $M^2$ entries of $\text{vec}(\hat{\bf R}_{\bar{y}}(\vartheta))$.}
\newline\hspace*{1mm}{\it Proof:}
Recall that, when $f,g \in \mathcal{M}$, the $(Nf+g+1)$-th entry of $\hat{\boldsymbol \rho}_{\bar{x}}(\vartheta)$ in~\eqref{eq:rbarxLSasrho} is given by one of the entries of $\text{vec}(\hat{\bf R}_{\bar{y}}(\vartheta))$. Since Remark~3 indicates that the number of pairs $g,f \in \mathcal{M}$ that lead to $(g-f)\text{ mod }N+1=\kappa$ is equal to $\gamma_{\kappa}$, it is clear from Definition~3 that the $\kappa$-th modular diagonal of $\text{vec}^{-1}(\hat{\boldsymbol \rho}_{\bar{x}}(\vartheta))$ only contains $\gamma_{\kappa}$ entries of $\text{vec}(\hat{\bf R}_{\bar{y}}(\vartheta))$. Equation~\eqref{eq:rhobarxentry} then confirms that the remaining $N-\gamma_{\kappa}$ entries of the $\kappa$-th modular diagonal of $\text{vec}^{-1}(\hat{\boldsymbol \rho}_{\bar{x}}(\vartheta))$
are equal to zero.
Next, observe that the summation in~\eqref{eq:rbarxLSasrho} is the sum of all terms in the $(\iota+1)$-th modular diagonal of $\text{vec}^{-1}(\hat{\boldsymbol \rho}_{\bar{x}}(\vartheta))$. This can be found by applying Definition~3 on the column and row indices of $\text{vec}^{-1}(\hat{\boldsymbol \rho}_{\bar{x}}(\vartheta))$ in~\eqref{eq:rbarxLSasrho}, i.e.,
\vspace{-1.5mm}
\begin{eqnarray}
&((n+\iota)\text{ mod }N-n)\text{ mod }N+1\nonumber\\%&=(((n+\iota)\text{ mod }N)+N+1-(n+1))\text{ mod }N+1\nonumber \\
&=(n+\iota-n)\text{ mod }N+1=\iota+1,\nonumber
\vspace{-1.5mm}
\end{eqnarray}
which exploits the property that
$(\kappa \text{ mod }N+\kappa')\text{ mod }N=(\kappa+\kappa')\text{ mod }N$.
This concludes the proof. $\square$
Let us now define ${\boldsymbol \Sigma}_{\hat{\rho}_{{\bar{x}}}}(\vartheta)$ as the $N^2\times N^2$ covariance matrix of $\hat{\boldsymbol \rho}_{{\bar{x}}}(\vartheta)$ in~\eqref{eq:rhobarx}, which can be written as ${\boldsymbol \Sigma}_{\hat{\rho}_{{\bar{x}}}}(\vartheta)=({\bf C}\otimes{\bf C})^T
{\boldsymbol \Sigma}_{\hat{R}_{\bar{y}}}(\vartheta)({\bf C}\otimes{\bf C})$. First, recall~\eqref{eq:rhobarxentry} and that when $f,g \in \mathcal{M}$, the $(Nf+g+1)$-th entry of $\hat{\boldsymbol \rho}_{\bar{x}}(\vartheta)$ in~\eqref{eq:rbarxLSasrho} is given by one of the entries of $\text{vec}(\hat{\bf R}_{\bar{y}}(\vartheta))$. By also recalling that, for circular complex Gaussian i.i.d. noise $x_t[\tilde{n}]$, ${\boldsymbol \Sigma}_{\hat{R}_{\bar{y}}}(\vartheta)$ is a diagonal matrix whose elements are given by~\eqref{eq:CovRycheckbarelementGaussian_noise_propos}, we can find that ${\boldsymbol \Sigma}_{\hat{\rho}_{{\bar{x}}}}(\vartheta)$ is also a diagonal matrix with its diagonal elements given by
\begin{equation}
[\text{diag}({\boldsymbol \Sigma}_{\hat{\rho}_{{\bar{x}}}}(\vartheta))]_{Nf+g+1}=
\left\{ \begin{array}{ll}
\frac{L^2\sigma^4}{\tau},\quad\text{if $f,g \in \mathcal{M}$.} \\
0, \: \text{if $ f \notin \mathcal{M}$ or $ g \notin \mathcal{M}$.}
\end{array} \right.
\label{eq:diag_Cov_rho}
\end{equation}
By taking~\eqref{eq:rbarxLSasrho},~\eqref{eq:diag_Cov_rho}, and the diagonal structure of ${\boldsymbol \Sigma}_{\hat{\rho}_{\bar{x}}}(\vartheta)$ into account, we can then write the entry of ${\boldsymbol \Sigma}_{\hat{r}_{{\bar{x}},LS}}(\vartheta)$ in~\eqref{eq:Covar_scheckhatbarXLS}
at the $(\iota+1)$-th row and the $(\iota'+1)$-th column as
\vspace{-1mm}
\begin{align}
&\text{Cov}[[\hat{\bf r}_{{\bar{x}},{LS}}(\vartheta)]_{\iota+1},[\hat{\bf r}_{{\bar{x}},{LS}}(\vartheta)]_{\iota'+1}]
=\frac{1}{\gamma_{\iota+1}\gamma_{\iota'+1}}\times\nonumber\\
&\sum_{n=0}^{N-1}\sum_{n'=0}^{N-1}\left\{[{\bf T}^T]_{\iota+1,Nn+n'+1}[{\boldsymbol \Sigma}_{\hat{\rho}_{{\bar{x}}}}(\vartheta)]_{Nn+n'+1,Nn+n'+1}\times\right.\nonumber\\
&\quad\quad\quad\quad\left.[{\bf T}]_{Nn+n'+1,\iota'+1}\right\
=\frac{\delta[\iota-\iota']}{\gamma^2_{\iota+1}}\times\nonumber\\
&\sum_{n=0}^{N-1}[{\boldsymbol \Sigma}_{\hat{\rho}_{{\bar{x}}}}(\vartheta)]_{Nn+
((n+\iota)\text{ mod }N)+1,Nn+
((n+\iota)\text{ mod }N)+1}
\label{eq:Entryofscheckbar_x_LS_gaussiannoise}
\vspace{-1mm}
\end{align}
for $\iota,\iota'=0,1,\dots,N-1$, which implies that ${\boldsymbol \Sigma}_{\hat{r}_{{\bar{x}},LS}}(\vartheta)$ is also a diagonal matrix for circular complex Gaussian i.i.d. noise $x_t[\tilde{n}]$.
Recall from the proof of Lemma~1
that the summation in
\eqref{eq:rbarxLSasrho} is the sum of all terms in the $(\iota+1)$-th modular diagonal of $\text{vec}^{-1}(\hat{\boldsymbol \rho}_{{\bar{x}}}(\vartheta))$. We can then observe that the summation in~\eqref{eq:Entryofscheckbar_x_LS_gaussiannoise} is the sum of the variance of each term in the $(\iota+1)$-th modular diagonal of $\text{vec}^{-1}(\hat{\boldsymbol \rho}_{{\bar{x}}}(\vartheta))$.
Using Lemma~1
and~\eqref{eq:diag_Cov_rho}, we can rewrite~\eqref{eq:Entryofscheckbar_x_LS_gaussiannoise} as
\begin{equation}
\text{Cov}[[\hat{\bf r}_{{\bar{x}},{LS}}(\vartheta)]_{\iota+1},[\hat{\bf r}_{{\bar{x}},{LS}}(\vartheta)]_{\iota'+1}]
=\frac{L^2\sigma^4}{\gamma_{\iota+1}\tau}\delta[\iota-\iota']
\label{eq:Entryofscheckbar_x_LS_gaussiannoise2}
\end{equation}
for $\iota,\iota'=0,1,\dots,N-1$. By considering~\eqref{eq:Covar_ScheckhatXLS} and noticing that $[{\bf B}^T\otimes{\bf B}^{H}]_{Ni+i'+1,Nn+n'+1}$ $=\frac{1}{N^2}e^{-j\frac{2\pi}{N}(n'i'-ni)}$, let us rewrite $\text{Var}[\hat{P}_{{x},{LS}}(\vartheta+\frac{i}{N})]$ in~\eqref{eq:Var_LS_periodogoram}, for $\vartheta \in [0,1/N)$ and $i=0,1,\dots,N-1$, as
\begin{align}
&\text{Var}[\hat{P}_{{x},{LS}}(\vartheta +\frac{i}{N})]
=\frac{N^4}{\tilde{N}^2}\sum_{n=0}^{N-1}\sum_{n'=0}^{N-1}\sum_{\nu=0}^{N-1}\sum_{\nu'=0}^{N-1}\nonumber \\
&\left\{[{\bf B}^T\otimes{\bf B}^{H}]_{Ni+i+1,Nn+n'+1}\times\right.\nonumber\\
&\left.[{\bf T}{\boldsymbol \Sigma}_{\hat{r}_{{\bar{x}},LS}}(\vartheta){\bf T}^T]_{Nn+n'+1,N\nu+\nu'+1}[{\bf B}^*\otimes{\bf B}]_{N\nu+\nu'+1,Ni+i+1}\right\}\nonumber \\
&=\frac{1}{L^2N^2}\sum_{n=0}^{N-1}\sum_{n'=0}^{N-1}\sum_{\nu=0}^{N-1}\sum_{\nu'=0}^{N-1}\left\{
e^{-j\frac{2\pi}{N}i(n'-n+\nu-\nu')}\times\right.\nonumber\\
&\left.[{\bf T}{\boldsymbol \Sigma}_{\hat{r}_{{\bar{x}},LS}}(\vartheta){\bf T}^T]_{Nn+n'+1,N\nu+\nu'+1}\right\}.
\label{eq:VarPxLSwhitenoise2}
\end{align}
We now recall that the $(q+1)$-th row of ${\bf T}$ is given by the $\left(\left(q-\left\lfloor\frac{q}{N}\right\rfloor\right)\text{ mod }N+1\right)$-th row of ${\bf I}_N$, exploit the diagonal structure of ${\boldsymbol \Sigma}_{\hat{r}_{{\bar{x}},LS}}(\vartheta)$ for circular complex Gaussian i.i.d. noise $x_t[\tilde{n}]$, and use~\eqref{eq:Entryofscheckbar_x_LS_gaussiannoise2} to write
\begin{align}
&[{\bf T}{\boldsymbol \Sigma}_{\hat{r}_{{\bar{x}},LS}}(\vartheta){\bf T}^T]_{Nn+n'+1,N\nu+\nu'+1}\nonumber\\%=\nonumber\\
&=\frac{L^2\sigma^4}{\tau}\sum_{\iota=0}^{N-1}\frac{1}{\gamma_{\iota+1}}[{\bf T}]_{Nn+n'+1,\iota+1}[{\bf T}^T]_{\iota+1,N\nu+\nu'+1} \nonumber \\
&=\frac{L^2\sigma^4}{\tau}\frac{\delta[(n'-n)\text{ mod }N-(\nu'-\nu)\text{ mod }N]}{\gamma_{(n'-n)\text{ mod }N+1}}
\label{eq:TSigma_sT}
\end{align}
for $n,n',\nu,\nu'=0,1,\dots,N-1$. By inserting~\eqref{eq:TSigma_sT} into~\eqref{eq:VarPxLSwhitenoise2},
the variance of
$\hat{P}_{{x},{LS}}(\vartheta+\frac{i}{N})$, for circular complex Gaussian i.i.d. noise $x_t[\tilde{n}]$
and $i=0,1,\dots,N-1$, is given by
\begin{align*}
&\text{Var}[\hat{P}_{{x},{LS}}(\vartheta+\frac{i}{N})]
=\frac{1}{L^2N^2}\sum_{n=0}^{N-1}\sum_{n'=0}^{N-1}\frac{L^2\sigma^4N}{\tau\gamma_{(n'-n)\text{ mod }N+1}}\nonumber\\
&=\frac{\sigma^4}{\tau}\sum_{n=0}^{N-1}\frac{1}{\gamma_{n+1}}
=\frac{\sigma^4}{M\tau}+\frac{\sigma^4}{\tau}\sum_{n=1}^{N-1}\frac{1}{\gamma_{n+1}},\:\vartheta \in [0,1/N),
\end{align*}
where we use the last part of Remark~3 in the last equality. $\square$
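As a concrete sanity check of this expression, the following sketch computes the redundancies $\gamma_{\kappa}$ by brute force for a hypothetical period $N$ and sampling set $\mathcal{M}$ (all values below are illustrative and are not taken from any simulation in this paper), and verifies that $\gamma_{1}=M$ indeed splits the sum into the two terms of the last equality.
\begin{verbatim}
import numpy as np

N = 10                       # hypothetical period
Mset = [0, 1, 2, 5]          # example sampling set (a circular sparse ruler)
M = len(Mset)

# gamma[k] counts the pairs (f, g) in Mset with (g - f) mod N == k,
# i.e. gamma[k] plays the role of gamma_{k+1} in the text.
gamma = np.zeros(N, dtype=int)
for f in Mset:
    for g in Mset:
        gamma[(g - f) % N] += 1

assert gamma[0] == M         # pairs with f = g, i.e. gamma_1 = M
assert np.all(gamma > 0)     # every modular diagonal is sampled

sigma2, tau = 1.0, 100       # hypothetical noise power and number of blocks
var = sigma2**2 / tau * np.sum(1.0 / gamma)
var_split = sigma2**2 / (M * tau) + sigma2**2 / tau * np.sum(1.0 / gamma[1:])
assert np.isclose(var, var_split)
\end{verbatim}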
|
1,116,691,499,735 | arxiv | \section{Introduction}
Mathematical modeling of the hydrodynamic processes in
hydrographic basins is of great interest. The subject is
very rich in practical applications, and there is not yet a
satisfactory model capturing the entire complexity of these
processes. However, there are many well-performing models
dedicated to specific aspects of the hydrodynamic
processes. Reviewing the existing mathematical models
is beyond the purpose of this paper, but we can group them
into two large classes: physically based models and regression
models. The best known regression models are the unit
hydrograph \cite{dooge} and universal soil loss \cite{rusle,
wisch}. From the first class, we mention here a few well
known models: SWAT \cite{swat}, SWAP \cite{swap} and
KINEROS \cite{kineros}. Due to the complexity and
heterogeneity of the processes (see \cite{mcdonnel}), models
in this class are not purely physical because they need
additional empirical relations. The main difference between
models here is given by the nature of the empirical
relations. For example, in order to model the surface of
the water flow, SWAP and KINEROS use a mass balance equation
and a closure relation, while SWAT combines the mass balance
equation with the momentum balance equation. A very special
class of models are cellular automata which combine
microscale physical laws with empirical closure relations in
a specific way to build up a macroscale model, e.g. CAESAR
\cite{caesar, sds-ose}.
In this paper, we introduce a physical model described by
shallow water type equations. This model is obtained from
general principles of fluid mechanics using a space average
method and takes into consideration topography, water-soil
and water-plant interactions. To numerically integrate the
equations, we first apply a finite volume method to
approximate the spatial derivatives and then use a type of
fractional time-step method to obtain the evolution of the
water depth and velocity field.
After introducing the PDE model in Section
\ref{sect_ShalowWaterEquations}, we perform the Finite
Volume Method approximation in Section
\ref{sect_FVMapproximationof2Dmodel} and obtain an ODE
version of the Shallow Water Equations. In Section
\ref{sect_PropOfSemidiscreteScheme}, we investigate some
physical relevant qualitative properties of this ODE system:
monotonicity of the energy, positivity of the water depth
function $h$, well balanced properties of the scheme. In
Section \ref{sect_FractionalSteptimeSchemes}, we obtain the
full discrete version of our continuous model; we tackle on
the validation method and give some numerical results in the
last section.
\section{Shallow Water Equations}
\label{sect_ShalowWaterEquations}
The model we discuss here is a simplified version of a more
general model of water flow on a hillslope introduced in
\cite{imc-rap}. Assume that the soil surface is
represented by
\begin{equation*}
x^3=z(x^1,x^2), \quad (x^1,x^2)\in\Omega,
\end{equation*}
and the first derivatives of the function $z(\cdot,\cdot)$
are small quantities. The unknown variables of the model
are the water depth $h(t,x)$ and the two components
$v_a(t,x)$ of the water velocity $\boldsymbol{v}$. The
density of the plant cover is quantified by a porosity
function $\theta(x)$. The model reads as
\begin{equation}
\label{swe_vegm_rm.02}
\begin{array}{rl}
\partial_t\theta h+\partial_b(\theta h v^b)= & \mathfrak{M},\\
\partial_t(\theta hv_a)+\partial_b(hv_av^b)+\theta h\partial_aw= &
-{\cal K}(h,\theta)|\boldsymbol{v}|v_a, \quad a=1,2.
\end{array}
\end{equation}
The term ${\cal K}(h,\theta)|\boldsymbol{v}|v_a$ quantifies
the water-soil and water-plant interactions \cite{baptist,
nepf}. The function ${\cal K}(h,\theta)$ is given by
\begin{equation}
\label{swe_vegm_rm.03}
{\cal K}(h,\theta) = \alpha_p h \left(1-\theta\right) + \theta \alpha_s,
\end{equation}
where $\alpha_p$ and $\alpha_s$ are two characteristic
parameters of the strength of the water-plant and water-soil
interactions, respectively. The contribution of rain and
infiltration to the water mass balance is taken into account
by $\mathfrak{M}$. In (\ref{swe_vegm_rm.02}),
$w=g\left[z(x^1,x^2)+h\right]$ stands for the free surface
level, and $g$ for the gravitational acceleration.
It is important to note that there is an energy function
${\cal E}$ given by
\begin{equation}
\label{swe_vegm_rm.02-0}
{\cal E} := \frac{1}{2}|\boldsymbol{v}|^2+g\left(x^3+\frac{h}{2}\right)
\end{equation}
that satisfies a conservative equation
\begin{equation}
\label{swe_veg_numerics.08}
\partial_t (\theta h{\cal E}) +
\partial_b \left(\theta h v^b \left({\cal E}+g\frac{h}{2}\right)\right) =
\mathfrak{M}\left(-\frac{1}{2}|\boldsymbol{v}|^2+w\right) -{\cal K}|\boldsymbol{v}|^3.
\end{equation}
In the absence of the mass source, the system preserves the
steady state of a lake
\begin{equation}
\label{swe_veg_numerics.08-01}
\partial_a(x^3+h)=0, \quad v_a=0, \quad a=1,2.
\end{equation}
The model (\ref{swe_vegm_rm.02}) is a hyperbolic system of
equations with source term, see \cite{imc-act}.
Among the features we require from our approximation
scheme, we want the numerical solutions to preserve the
lake and the scheme to be well balanced and energetically
stable. These last two properties of a numerical
algorithm for the shallow water equation are very important,
especially for the case of hydrographic basin applications,
because they allow the lake formation and prevent the
numerical solution from oscillating in the neighborhood of a
lake. In the absence of vegetation, one can find many such
schemes, see \cite{seguin, noelle, nordic}, for example.
\section{Finite Volume Method Approximation of 2D Model}
\label{sect_FVMapproximationof2Dmodel}
Let $\Omega$ be the domain of the space variables $x^1$,
$x^2$ and $\Omega=\cup_i \omega_i, i=\overline{1,N}$ an
admissible polygonal partition, \cite{veque}. To build a
spatial discrete approximation of the model
(\ref{swe_vegm_rm.02}), one integrates the continuous
equations on each finite volume $\omega _i$ and then defines
an approximation of the integrals.
Let $\omega_i$ be an arbitrary element of the partition.
Relatively to it, the integral form of
(\ref{swe_vegm_rm.02}) reads as \def\msr#1{{\rm m(#1)}}
\begin{equation}
\label{fvm_2D_eq.01}
\begin{array}{rl}
\displaystyle\partial_t\int\limits_{\omega_i}\theta h{\rm d}x+
\int\limits_{\partial\omega_i}\theta h \boldsymbol{v}\cdot\boldsymbol{n}{\rm d}s=
&\displaystyle\int\limits_{\omega_i}\mathfrak{M}{\rm d}x,\\
\displaystyle\partial_t\int\limits_{\omega_i}\theta h v_a{\rm d}x+
\int\limits_{\partial\omega_i}\theta h v_a\boldsymbol{v}\cdot\boldsymbol{n}{\rm d}s+
\int\limits_{\omega_i}\theta h\partial_a w{\rm d}x=
&\displaystyle-\int\limits_{\omega_i}{\cal K}|\boldsymbol{v}|v_a{\rm d}x, \quad a=1,2.
\end{array}
\end{equation}
Now, we build a discrete version of the integral form by
introducing some quadrature formulas. With $\psi_i$
standing for some approximation of $\psi$ on $\omega_i$, we
introduce the approximations
\begin{equation}
\label{fvm_2D_eq.01-01}
\int\limits_{\omega_i}\theta h{\rm d}x\approx\sigma_i\theta_ih_i,\quad
\int\limits_{\omega_i}\theta h v_a{\rm d}x\approx\sigma_i\theta_ih_iv_{a\,i},\quad
\int\limits_{\omega_i}{\cal K}|\boldsymbol{v}|v_a{\rm d}x\approx\sigma_i {\cal K}_i|\boldsymbol{v}|_iv_{a\,i},
\end{equation}
where $\sigma_i$ denotes the area of the polygon $\omega_i$.
For the integrals of the gradient of the free surface, we
start from the identity
\begin{equation}
\label{fvm_2D_eq.01-01identity}
\int\limits_{\omega_i}\theta h\partial_a w{\rm d}x
=-\int\limits_{\omega_i} w \partial_a(\theta h){\rm d}x+
\int\limits_{\partial\omega_i} w \theta hn_a{\rm d}s.
\end{equation}
Assume that $w$ is constant and equal to $w_i$ on $\omega_i$
to approximate the first integral on the r.h.s. of
(\ref{fvm_2D_eq.01-01identity}). Then, we obtain:
\begin{equation}
\label{fvm_2D_eq.01-02}
\int\limits_{\omega_i}\theta h\partial_a w{\rm d}x\approx\int\limits_{\partial\omega_i} (w-w_i) \theta hn_a{\rm d}s.
\end{equation}
Note that if $\omega_i$ is a regular polygon and $w_i$ is
the cell-centered value of $w$, then the approximation is of
second order accuracy for smooth fields and it preserves the
null value in the case of constant fields $w$.
We introduce the notation
\begin{equation}
\label{fvm_2D_eq.psi}
\widetilde{\psi}|_{\partial \omega(i,j)}:=\int\limits_{\partial \omega(i,j)}\psi{\rm d}s.
\end{equation}
Using the approximations (\ref{fvm_2D_eq.01-01}) and
(\ref{fvm_2D_eq.01-02}) and keeping the boundary integrals,
one can write
\begin{equation}
\label{fvm_2D_eq.02}
\begin{array}{rl}
\sigma_i\partial_t\theta_i h_i+
\sum\limits_{j\in{\cal N}(i)}\widetilde{\theta h v_n}|_{\partial \omega(i,j)}=&\sigma_i\mathfrak{M}_i,\\
\sigma_i\partial_t\theta_i h_i v_{a\,i}+
\sum\limits_{j\in{\cal N}(i)}\widetilde{\theta h v_a v_n}|_{\partial \omega(i,j)}+
\sum\limits_{j\in{\cal N}(i)}\widetilde{(w-w_i)\theta h}n_a|_{\partial \omega(i,j)}
=&-\sigma_i{\cal K}_i|\boldsymbol{v}|_iv_{a\,i},
\end{array}
\end{equation}
where ${\cal N}(i)$ denotes the set of all the neighbors of
$\omega_i$ and $\partial\omega(i,j)$ is the common boundary
of $\omega_i$ and $\omega_j$.
The next step is to define the approximations of the
boundary integrals in (\ref{fvm_2D_eq.02}). We approximate
an integral $\widetilde{\psi}|_{\partial \omega(i,j)}$ of
the form (\ref{fvm_2D_eq.psi}) by considering the integrand
$\psi$ to be a constant function
$\psi_{(i,j)}(\psi_i,\psi_j)$, where $\psi_i$ and $\psi_j$
are some fixed values of $\psi$ on the adjacent cells
$\omega_i$ and $\omega_j$, respectively. Thus,
\begin{equation}
\label{fvm_2D_eq.03}
\begin{array}{l}
\widetilde{\theta h v_n}|_{\partial \omega(i,j)}\approx l_{(i,j)}\theta h_{(i,j)} (v_n)_{(i,j)},\\
\widetilde{\theta h v_a v_n}|_{\partial \omega(i,j)}\approx l_{(i,j)}\theta h_{(i,j)} (v_a)_{(i,j)}(v_n)_{(i,j)},\\
\widetilde{(w-w_i)\theta h}n_a|_{\partial \omega(i,j)} \approx l_{(i,j)}(w_{(i,j)}-w_i) \theta h^s_{(i,j)}(n_a)_{(i,j)},
\end{array}
\end{equation}
where $\boldsymbol{n}_{(i,j)}$ denotes the unitary normal to
the common side of $\omega_i$ and $\omega_j$ pointing
towards $\omega_j$, and $l_{(i,j)}$ is the length of this
common side.
The issue is to define the interface value functions
$\psi_{(i,j)}(\psi_i,\psi_j)$ so that the resulting scheme
is well balanced and energetically stable.
\medskip\noindent
{\bf Well balanced and energetically stable scheme.} For any
internal interface $(i,j)$, we define the following
quantities:
\begin{equation}
\label{fvm_2D_eq.04}
\begin{array}{l}
(v_a)_{(i,j)}=\displaystyle\frac{v_{a\,i}+v_{a\,j}}{2}, \quad a=1,2,\\
(v_n)_{(i,j)}=\boldsymbol{v}_{(i,j)}\cdot\boldsymbol{n}_{(i,j)},\\
w_{(i,j)}=\displaystyle\frac{w_{i}+w_{j}}{2},
\end{array}
\end{equation}
and
\begin{equation}
\label{fvm_2D_eq.05}
\theta h^s_{(i,j)}=
\left\{
\begin{array}{ll}
\theta h_{(i,j)}, & {\rm if}\; (v_n)_{(i,j)}\neq 0,\\
\theta_i h_i, & {\rm if}\; (v_n)_{(i,j)}=0 \;{\rm and}\; w_i>w_j,\\
\theta_j h_j, & {\rm if}\; (v_n)_{(i,j)}=0 \;{\rm and}\; w_i\leq w_j.
\end{array}
\right.
\end{equation}
\medskip\noindent
{\bf $h$-positivity.} In order to preserve the positivity
of $h$, we define $\theta h_{(i,j)}$ as
\begin{equation}
\label{fvm_2D_eq.06}
\theta h_{(i,j)}=
\left\{
\begin{array}{ll}
\theta_i h_i, & {\rm if}\; (v_n)_{(i,j)}>0,\\
\theta_j h_j, & {\rm if}\; (v_n)_{(i,j)}<0.
\end{array}
\right.
\end{equation}
The semidiscrete scheme takes now the form of the following
system of ODEs
\begin{equation}
\label{fvm_2D_eq.07}
\begin{array}{rl}
\sigma_i\displaystyle\frac{\rm d}{{\rm d}t}\theta_i h_i+\sum\limits_{j\in{\cal N}(i)}l_{(i,j)}\theta h_{(i,j)} (v_n)_{(i,j)}=&\sigma_i\mathfrak{M}_i,\\
\sigma_i\displaystyle\frac{\rm d}{{\rm d}t}\theta_i h_i v_{a\,i}+\sum\limits_{j\in{\cal N}(i)}l_{(i,j)}\theta h_{(i,j)} (v_a)_{(i,j)} (v_n)_{(i,j)}+&\\
+\displaystyle\frac{1}{2}\sum\limits_{j\in{\cal N}(i)}l_{(i,j)}(w_j-w_i)(\theta h)^s_{(i,j)} n_a|_{(i,j)}&=-\sigma_i{\cal K}_i|\boldsymbol{v}|_iv_{a\,i}.
\end{array}
\end{equation}
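For concreteness, the following sketch shows how the right-hand side of (\ref{fvm_2D_eq.07}) could be assembled on an edge-based mesh data structure. The layout (an \texttt{edges} list of interfaces with lengths and unit normals) is our own illustration and not the implementation used for the computations reported below; the friction term is omitted here since it is treated by the fractional-step scheme of Section \ref{sect_FractionalSteptimeSchemes}, and the boundary (ghost) interfaces are left out.
\begin{verbatim}
import numpy as np

def semidiscrete_rhs(h, v, theta, z, sigma, edges, g=9.81):
    """Right-hand side of the semidiscrete scheme (internal interfaces).

    h, theta, z, sigma: per-cell arrays; v: array of shape (ncells, 2);
    edges: iterable of (i, j, l_ij, n_ij), with n_ij the unit normal
    pointing from cell i towards cell j."""
    w = g * (z + h)                              # free-surface level w_i
    dth = np.zeros_like(h)                       # d/dt of theta*h
    dtm = np.zeros_like(v)                       # d/dt of theta*h*v
    for i, j, l, n in edges:
        vij = 0.5 * (v[i] + v[j])                # centered interface velocity
        vn = float(vij @ n)
        # upwind interface value of theta*h (preserves h-positivity)
        thij = theta[i] * h[i] if vn > 0 else theta[j] * h[j]
        # interface value entering the free-surface term
        if vn != 0.0:
            ths = thij
        else:
            ths = theta[i] * h[i] if w[i] > w[j] else theta[j] * h[j]
        fm = l * thij * vn                       # mass flux
        fp = l * thij * vn * vij                 # momentum flux
        fs = 0.5 * l * (w[j] - w[i]) * ths * n   # free-surface term
        dth[i] -= fm
        dth[j] += fm                             # n_(j,i) = -n_(i,j)
        dtm[i] -= fp + fs
        dtm[j] += fp - fs
    return dth / sigma, dtm / sigma[:, None]
\end{verbatim}
With this splitting, the friction and mass-source terms act cell-wise and can be handled separately, which is what makes the fractional-step approach of the next sections attractive.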
\medskip\noindent
{\bf Boundary conditions. Free discharge.} We need to
define the values of $h$ and $\boldsymbol{v}$ on the
external sides of $\Omega$. For each side in
$\Gamma=\partial\Omega$ we introduce a new cell (``ghost''
element) adjacent to the polygon $\omega_i$ corresponding to
that side. For each ``ghost'' element, one must somehow
define its altitude; we then set its water depth to zero.
We can now define $h$ and $\boldsymbol{v}$ on the
external sides of $\Omega$ by
\begin{equation}
\label{fvm_2D_eq.070}
\begin{array}{l}
\boldsymbol{v}_{\partial \omega_i\cap \Gamma}=\boldsymbol{v}_i,\\
h_{\partial \omega_i\cap \Gamma}=
\left\{
\begin{array}{ll}
h_i, & {\rm if}\; \boldsymbol{v}_i\cdot \boldsymbol{n}|_{\partial \omega_i\cap \Gamma }>0,\\
0, & {\rm if}\; \boldsymbol{v}_i\cdot \boldsymbol{n}|_{\partial \omega_i\cap \Gamma }<0.
\end{array}
\right.
\end{array}
\end{equation}
Now, the solution is sought inside the positive cone
$h_i>0, \; i=\overline{1,N}$.
\section{Properties of the semidiscrete scheme}
\label{sect_PropOfSemidiscreteScheme}
The ODE model (\ref{fvm_2D_eq.07}) can have discontinuities
in the r.h.s. and therefore it is possible that the solution
in the classical sense of this system might not exist for some
initial data. However, the solution in the Filippov sense
\cite{filipov} exists for any initial data.
There are initial data for which the solution in the
classical sense exists only locally in time. Since the
numerical scheme is a time approximation of the semidiscrete
form (\ref{fvm_2D_eq.07}), it is worthwhile to analyze the
properties of these classical solutions. Numerical schemes
preserving properties of some particular solutions of the
continuum model were and are intensively investigated in the
literature \cite{bouchut-book, seguin, nordic,
well-balanced}. In the present section we investigate
such properties for the semidiscrete scheme, while the next
section is dedicated to the properties of the fully
discretized scheme.
\subsection{Energy balance}
Definition (\ref{fvm_2D_eq.04}) yields a dissipative
balance equation for the cell energy ${\cal E}_i$,
\begin{equation}
\label{fvm_2D_eq.060}
{\cal E}_i(h_i,\boldsymbol{v}_i)=\theta_i\left(\frac{1}{2}{|\boldsymbol{v}|^2_i}{h_i}+\frac{1}{2}gh^2_i+gx^3_ih_i\right).
\end{equation}
The time derivative of ${\cal E}_i$ can be written as
\begin{equation}
\label{fvm_2D_eq.0700}
\sigma_i \displaystyle\frac{{\rm d }{{\cal E}_i} }{ {\rm d} t} = \sigma_i
\left(
\left(w_i-\frac{1}{2}|\boldsymbol{v}|^2_i\right) \displaystyle\frac{{\rm d }{\theta_i h_i}}{ {\rm d} t}
+\left< \boldsymbol{v}_i, \displaystyle\frac{{\rm d }{\theta_ih_i\boldsymbol{v}_i}}{ {\rm d} t} \right>
\right),
\end{equation}
where $\left<\cdot,\cdot\right>$ denotes the euclidean
scalar product in $\mathbb{R}^2$.
\begin{proposition}[Cell energy equation]
\label{cell_energy}
In the absence of mass source, one has
\begin{equation}
\label{fvm_2D_eq.071}
\sigma_i\displaystyle\frac{\rm d}{{\rm d}t}{\cal E}_i+\sum\limits_{j\in{\cal N}(i)}l_{(i,j)}\left<{\cal H}_{(i,j)},\boldsymbol{n}_{(i,j)}\right>=-\sigma_i{\cal K}_i|\boldsymbol{v}|^3_i,
\end{equation}
where
\begin{equation*}
{\cal H}_{(i,j)}=\frac{1}{2}\theta h_{(i,j)}
\left(
w_i\boldsymbol{v}_i+w_j\boldsymbol{v}_j+\left<\boldsymbol{v}_i,\boldsymbol{v}_j\right>\boldsymbol{v}_{(i,j)}
\right).
\end{equation*}
\end{proposition}
\begin{remark}
If $(\theta h,v,w)_j=(\theta h,v,w)_i$ for any
$j\in{\cal N}(i)$, then
\begin{equation*}
{\cal H}=\theta h \boldsymbol{v}\left(\frac{1}{2} |\boldsymbol{v}|^2+w\right)
\end{equation*}
is the continuous flux energy in {\rm (\ref{swe_veg_numerics.08})}.
\end{remark}
\begin{proof}
Using the equality
(\ref{fvm_2D_eq.0700}), we can write
\begin{equation*}
\begin{array}{rcl}
\sigma_i\displaystyle\frac{\rm d}{{\rm d}t}{\cal E}_i&=&-(w_i-\displaystyle\frac{1}{2}|\boldsymbol{v}|_i^2)\sum\limits_{j\in{\cal N}(i)}l_{(i,j)}\theta h_{(i,j)} (v_n)_{(i,j)}-\\
&&-\left<
\boldsymbol{v}_i,\sum\limits_{j\in{\cal N}(i)}l_{(i,j)}\theta h_{(i,j)} \boldsymbol{v}_{(i,j)} (v_n)_{(i,j)}
\right>-\\
&&-\displaystyle\frac{1}{2}\left<\boldsymbol{v}_i,\sum\limits_{j\in{\cal N}(i)}l_{(i,j)}(w_j-w_i)(\theta h)^s_{(i,j)} \boldsymbol{n}_{(i,j)}\right>-\\
&&-\sigma_i{\cal K}_i|\boldsymbol{v}|_i^3.
\end{array}
\end{equation*}
Now, one has the identities
\begin{equation*}
\begin{array}{rcl}
w_i\displaystyle\sum\limits_{j\in{\cal N}(i)}l_{(i,j)}\theta h_{(i,j)} (v_n)_{(i,j)}
&=&\displaystyle\sum\limits_{j\in{\cal N}(i)}l_{(i,j)}\theta h_{(i,j)} (v_n)_{(i,j)}\displaystyle\frac{w_i+w_j}{2}+\\
&&+\displaystyle\sum\limits_{j\in{\cal N}(i)}l_{(i,j)}\theta h_{(i,j)} (v_n)_{(i,j)}\displaystyle\frac{w_i-w_j}{2}
\end{array}
\end{equation*}
and
\begin{equation*}
\begin{array}{l}
\left<\boldsymbol{v}_i,\displaystyle\sum\limits_{j\in{\cal N}(i)}l_{(i,j)}(w_j-w_i)(\theta h)^s_{(i,j)} \boldsymbol{n}_{(i,j)}\right>=\\
=\displaystyle\sum\limits_{j\in{\cal N}(i)}l_{(i,j)}(w_j-w_i)(\theta h)^s_{(i,j)}\left<\displaystyle\frac{\boldsymbol{v}_i+\boldsymbol{v}_j}{2}+\displaystyle\frac{\boldsymbol{v}_i-\boldsymbol{v}_j}{2}, \boldsymbol{n}_{(i,j)}\right>.
\end{array}
\end{equation*}
Therefore
\begin{equation*}
\begin{array}{r}
w_i\displaystyle\sum\limits_{j\in{\cal N}(i)}l_{(i,j)}\theta h_{(i,j)} (v_n)_{(i,j)}+
\displaystyle\frac{1}{2}\left<\boldsymbol{v}_i,\displaystyle\sum\limits_{j\in{\cal N}(i)}l_{(i,j)}(w_j-w_i)(\theta h)^s_{(i,j)} \boldsymbol{n}_{(i,j)}\right>=\\
=\displaystyle\sum\limits_{j\in{\cal N}(i)}l_{(i,j)}\theta h_{(i,j)}\left<w_i\boldsymbol{v}_i+w_j\boldsymbol{v}_j,\boldsymbol{n}_{(i,j)}\right>.
\end{array}
\end{equation*}
Similarly, one obtains the identity
\begin{equation*}
\begin{array}{r}
-\displaystyle\frac{1}{2}|\boldsymbol{v}|_i^2\sum\limits_{j\in{\cal N}(i)}l_{(i,j)}\theta h_{(i,j)} (v_n)_{(i,j)}+
\left<\boldsymbol{v}_i,\sum\limits_{j\in{\cal N}(i)}l_{(i,j)}\theta h_{(i,j)} \boldsymbol{v}_{(i,j)} (v_n)_{(i,j)}\right>=\\
=\displaystyle\frac{1}{2}\sum\limits_{j\in{\cal N}(i)}l_{(i,j)}\theta h_{(i,j)}\left<\boldsymbol{v}_i,\boldsymbol{v}_j\right>\left<\displaystyle\frac{\boldsymbol{v}_i+\boldsymbol{v}_j}{2}, \boldsymbol{n}_{(i,j)}\right>.
\end{array}
\end{equation*}
\end{proof}
Apart from the mass exchange through the boundary, the
definitions of the interface values ensure that the
energy is non-increasing in time.
\subsection{h-positivity and critical points}
\begin{proposition}[h-positivity]
The ODE system {\rm (\ref{fvm_2D_eq.07})} with {\rm
(\ref{fvm_2D_eq.04})}, {\rm (\ref{fvm_2D_eq.05})}, {\rm
(\ref{fvm_2D_eq.06})} and {\rm (\ref{fvm_2D_eq.070})}
preserves the positivity of the water depth function $h$.
\end{proposition}
\begin{proof}
One can rewrite the mass balance equations as
\begin{equation*}
\sigma_i\displaystyle\frac{\rm d}{{\rm d}t}\theta_i h_i=
-(\theta h)_i\sum\limits_{j\in{\cal N}(i)}l_{(i,j)} (v_n)^{+}_{(i,j)}+
\sum\limits_{j\in{\cal N}(i)}l_{(i,j)}(\theta h)_{j} (v_n)^{-}_{(i,j)}.
\end{equation*}
Observe that if $h_i=0$ for some $i$, then
$\sigma_i\displaystyle\frac{\rm d}{{\rm d}t}\theta_i h_i\geq
0$.
\end{proof}
There are two kinds of stationary points for the ODE model:
the lake and the uniform flow on an infinitely extended plane
with constant vegetation density.
\begin{proposition}[Stationary point. Uniform flow.]
\label{river}
Consider $\{\omega_i\}_{i=\overline{1,N}}$ to be a regular
partition of $\Omega$ with $\omega_i$ regular polygons.
Let $z-z_0=\xi_b x^b$ be a representation of the soil
plane surface. Assume that the discretization of the soil
surface is given by
\begin{equation}
\label{fvm_2D_eq.08}
z_i-z_0=\xi_b \overline{x}^b_i,
\end{equation}
where $\overline{x}^b_i$ is the mass center of the
$\omega_i$ and $\theta_i=\theta$. Then, given a value
$h$, there is $\boldsymbol{v}$ so that the state
$(h_i,\boldsymbol{v}_i)=(h,\boldsymbol{v})$,
$i=\overline{1,N}$ is a stationary point of the ODE {\rm
(\ref{fvm_2D_eq.07})}.
\end{proposition}
\begin{proof}
For any constant state $h_i=h$ and $(v_a)_i=v_a$, the ODE
(\ref{fvm_2D_eq.07}) reduces to
\begin{equation*}
\displaystyle\frac{1}{2}\theta h g\sum\limits_{j\in{\cal
N}(i)}l_{(i,j)}(z_j-z_i) n_a|_{(i,j)} =-\sigma{\cal
K}|\boldsymbol{v}|v_{a}.
\end{equation*}
Introducing the representation (\ref{fvm_2D_eq.08}), one
writes
\begin{equation*}
\displaystyle\frac{1}{2}\theta h g\sum\limits_{j\in{\cal N}(i)}l_{(i,j)}\xi_b(\overline{x}^b_j-\overline{x}^b_i) n_a|_{(i,j)}
=-\sigma{\cal K}|\boldsymbol{v}|v_{a}.
\end{equation*}
Note that for a regular partition one has the identity
\begin{equation*}
\overline{x}^b_j-\overline{x}^b_i=2(y_{(i,j)}-\overline{x}^b_i),
\end{equation*}
where $y_{(i,j)}$ is the midpoint of the common side
$\overline{\omega}_i\cap\overline{\omega}_j$. Taking into
account that
\begin{equation*}
\begin{array}{ll}
\displaystyle\frac{1}{2}\theta h g
\sum\limits_{j\in{\cal N}(i)}l_{(i,j)}(z_j-z_i)
n_a|_{(i,j)} & =\displaystyle\theta h g
\sum\limits_{j\in{\cal N}(i)}l_{(i,j)}\xi_b y^b_{(i,j)} n_a|_{(i,j)}\\
&=\displaystyle\theta h g
\int\limits_{\partial \omega_i}\xi_b x^b(s) n_a(s){\rm d}s\\
&=\displaystyle\theta h g
\int\limits_{\omega_i}\xi_b \partial_a x^b{\rm d}x\\
&=\sigma \theta h g \xi_a,
\end{array}
\end{equation*}
we obtain that the velocity is a constant field
\begin{equation}
\label{fvm_2D_eq.09}
v_a=-\xi_a\left(\frac{\theta h g}{{\cal K} |\xi|}\right)^{1/2}.
\end{equation}
\end{proof}
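As a quick illustration of the steady uniform-flow state, one can evaluate (\ref{fvm_2D_eq.09}) directly; all parameter values in the sketch below are hypothetical and serve only to exemplify the formula.
\begin{verbatim}
import numpy as np

# Illustrative values only (not calibrated to any real hillslope)
g, theta, h = 9.81, 0.9, 0.05          # gravity, porosity, water depth
alpha_p, alpha_s = 10.0, 0.1           # plant/soil interaction parameters
xi = np.array([0.02, 0.01])            # uphill gradient xi_a of the surface

K = alpha_p * h * (1.0 - theta) + theta * alpha_s   # K(h, theta)
v = -xi * np.sqrt(theta * h * g / (K * np.linalg.norm(xi)))
print(v)   # steady uniform-flow velocity, pointing downslope
\end{verbatim}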
A lake is a stationary point characterized by a constant
value of the free surface and a null velocity field over
connected regions. A lake for which $h_i>0$ for any
$i\in\{1,2,\ldots, N\}$ will be named {\it regular
stationary point} and a lake that occupies only a part of
the flow domain will be named {\it singular stationary point}.
\begin{proposition}[Stationary point. Lake.]
\label{lake}
In the absence of mass source, the following properties hold:
{\rm (a)} Regular stationary point: the state
\begin{equation*}
w_i=w \;\; \& \;\; \boldsymbol{v}_i=0, \; \forall i=\overline{1,N}
\end{equation*}
is a stationary point of ODE {\rm (\ref{fvm_2D_eq.07})}.
{\rm (b)} Singular stationary point: the state
\begin{equation*}
\boldsymbol{v}_i=0, \; \forall i=\overline{1,N} \;\; \& \;\;
w_i=w, \; \forall i\in {\cal I} \;\; \& \;\; h_i=0, \;
z_i>w, \; \forall i\in \complement{\cal I},
\end{equation*}
for some ${\cal I}\subset \{1,2,\ldots,N\}$ is a
stationary point. ($\complement{\cal I}$ is the complement
of ${\cal I}$.)
\end{proposition}
\begin{proof}
For the sake of simplicity, in the case of the singular
stationary point, we consider that
$\displaystyle\Omega_{\cal I}=\cup_{i\in{\cal I}}\omega_i$
is a connected domain. Since the velocity field is zero,
it only remains to verify that
\begin{equation*}
\sum\limits_{j\in{\cal N}(i)}l_{(i,j)}(w_j-w_i)(\theta
h)^s_{(i,j)} n_a|_{(i,j)}=0,
\end{equation*}
for any cell $\omega_i$. If $i \in \complement{\cal I}$,
then the above sum equals zero since $h^s_{(i,j)}=0$, for
all $j\in{\cal N}(i)$. If $i \in {\cal I}$, then the sum
is again zero because either $h^s_{(i,j)}=0$, for
$j\in\complement{\cal I}$ or $w_j=w_i$, for
$j\in{\cal I}$.
\end{proof}
\section{Fractional Step-time Schemes}
\label{sect_FractionalSteptimeSchemes}
In what follows we discuss different explicit or
semi-implicit schemes in order to integrate the ODE
(\ref{fvm_2D_eq.07}).
We introduce some notations
\begin{equation}
\label{fvm_2D_eq_frac.01}
\begin{array}{ll}
{\cal J}_{a\,i}(h,\boldsymbol{v}):=&-\displaystyle\sum\limits_{j\in{\cal N}(i)}l_{(i,j)}\theta h_{(i,j)} (v_a)_{(i,j)} (v_n)_{(i,j)},\\
{\cal S}_{a\,i}(h,w):=&-\displaystyle\frac{1}{2}\sum\limits_{j\in{\cal N}(i)}l_{(i,j)}(w_j-w_i)(\theta h)^{s}_{(i,j)} n_a|_{(i,j)},\\
{\cal L}_i((h,\boldsymbol{v})):=&-\displaystyle\sum\limits_{j\in{\cal N}(i)}l_{(i,j)}\theta h_{(i,j)} (v_n)_{(i,j)}.
\end{array}
\end{equation}
Now, (\ref{fvm_2D_eq.07}) becomes
\begin{equation}
\label{fvm_2D_eq_frac.02}
\begin{array}{rl}
\sigma_i\displaystyle\frac{\rm d}{{\rm d}t}\theta_i h_i&={\cal L}_i(h,\boldsymbol{v})+\sigma_i\mathfrak{M}(t,h),\\
\sigma_i\displaystyle\frac{\rm d}{{\rm d}t}\theta_i h_i v_{a\,i}&={\cal J}_{a\,i}(h,\boldsymbol{v})+{\cal S}_{a\,i}(h,w)-\sigma_i{\cal K}(h)|\boldsymbol{v}_i| v_{a\,i}.
\end{array}
\end{equation}
\noindent {\bf Mass source.} We assume that the mass source
$\mathfrak{M}$ is of the form
\begin{equation}
\label{fvm_2D_eq.14}
\mathfrak{M}(x,t,h)=r(t)-\theta(x)\iota(t,h),
\end{equation}
where $r(t)$ quantifies the rate of the rain and
$\iota(t,h)$ quantifies the infiltration rate. The
infiltration rate is a continuous function and satisfies the
following condition
\begin{equation}
\label{fvm_2D_eq.15}
\iota(t,h)<\iota_{m},\quad {\rm if}\; h\geq 0.
\end{equation}
The basic idea of a fractional time method is to split the
initial ODE into two sub-models, integrate them separately,
and then combine the two solutions \cite{veque-phd, strang}.
We split the ODE (\ref{fvm_2D_eq.07}) into
\begin{equation}
\label{fvm_2D_eq_frac.03}
\begin{array}{rl}
\sigma_i\displaystyle\frac{\rm d}{{\rm d}t}\theta_i h_i&={\cal L}_i(h,\boldsymbol{v}),\\
\sigma_i\displaystyle\frac{\rm d}{{\rm d}t}\theta_i h_i v_{a\,i}&={\cal J}_{a\,i}(h,\boldsymbol{v}) +{\cal S}_{a\,i}(h,w),
\end{array}
\end{equation}
and
\begin{equation}
\label{fvm_2D_eq_frac.04}
\begin{array}{rl}
\sigma_i\displaystyle\frac{\rm d}{{\rm d}t}\theta_i h_i&=\sigma_i\mathfrak{M}_i(t,h),\\
\sigma_i\displaystyle\frac{\rm d}{{\rm d}t}\theta_i h_i v_{a\,i}&=-\sigma_i{\cal K}(h)|\boldsymbol{v}_i| v_{a\,i}.
\end{array}
\end{equation}
A first-order accurate fractional-step time scheme reads as
\begin{equation}
\label{fvm_2D_eq_frac.05}
\begin{array}{rl}
\sigma(\theta h)^{*}&=\sigma(\theta h)^n+\triangle t_n{\cal L}((h,\boldsymbol{v})^{n}),\\
\sigma(\theta hv_a)^{*}&=\sigma(\theta hv_a)^{n}+\triangle t_n \left({\cal J}_a((h,\boldsymbol{v})^{n})+{\cal S}_a((h,w)^{n})\right),\\
\end{array}
\end{equation}
\begin{equation}
\label{fvm_2D_eq_frac.06}
\begin{array}{rl}
\sigma(\theta h)^{n+1}&=\sigma(\theta h)^{*}+\sigma\triangle t_n\mathfrak{M}(t^{n+1},h^{n+1}),\\
\sigma(\theta hv_a)^{n+1}&=\sigma(\theta hv_a)^{*}-\triangle t_n\sigma{\cal K}(h)|\boldsymbol{v}^{n+1}| v^{n+1}_{a}.\\
\end{array}
\end{equation}
The steps (\ref{fvm_2D_eq_frac.05}) and
(\ref{fvm_2D_eq_frac.06}) lead to
\begin{equation}
\label{fvm_2D_eq_frac.07}
\begin{array}{rl}
\sigma(\theta h)^{n+1}=&\sigma(\theta h)^n+\triangle t_n{\cal L}((h,\boldsymbol{v})^{n})+\sigma\triangle t_n\mathfrak{M}(t^{n+1},h^{n+1}),\\
\sigma(\theta hv_a)^{n+1}=&\sigma(\theta hv_a)^{n}+\triangle t_n\left({\cal J}_a((h,\boldsymbol{v})^{n})+{\cal S}_a((h,w)^{n})\right)-\\
&-\triangle t_n\sigma{\cal K}(h)|\boldsymbol{v}^{n+1}| v^{n+1}_{a}.\\
\end{array}
\end{equation}
To advance a time step, one needs to solve a scalar
nonlinear equation for $h$ and a 2D nonlinear system of
equations for velocity $\boldsymbol{v}$.
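The velocity part of this update actually admits a closed-form solution: dividing (\ref{fvm_2D_eq_frac.07}) by $\sigma_i$ and writing $\boldsymbol{v}^{n+1}=s\,\boldsymbol{p}^{*}/|\boldsymbol{p}^{*}|$, with $\boldsymbol{p}^{*}=(\theta h\boldsymbol{v})^{n}+\frac{\triangle t_n}{\sigma_i}({\cal J}+{\cal S})$, the 2D system reduces to a scalar quadratic equation for $s=|\boldsymbol{v}^{n+1}|$. A minimal sketch follows (our own array layout; the scalar equation for $h^{n+1}$, which involves the infiltration law $\iota$, would be solved cell-wise, e.g. by bisection, and is not shown):
\begin{verbatim}
import numpy as np

def friction_substep(m, pstar, K, dt):
    """Solve  m*v + dt*K*|v|*v = pstar  cell-wise in closed form.

    m     : (theta*h)^{n+1} per cell,
    pstar : per-cell right-hand side, shape (ncells, 2),
    K     : K(h, theta) per cell (evaluated by the caller).
    Since v is parallel to pstar, s = |v| solves the quadratic
    dt*K*s**2 + m*s - |pstar| = 0 (positive root)."""
    pn = np.linalg.norm(pstar, axis=1)
    a = dt * K
    with np.errstate(divide="ignore", invalid="ignore"):
        s = np.where(a > 0.0,
                     (-m + np.sqrt(m**2 + 4.0 * a * pn)) / (2.0 * a),
                     pn / m)
    s = np.nan_to_num(s)                 # dry cells: v = 0
    v = np.zeros_like(pstar)
    wet = pn > 0.0
    v[wet] = (s[wet] / pn[wet])[:, None] * pstar[wet]
    return v
\end{verbatim}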
In what follows, we investigate some important physical
properties of the numerical solution given by
(\ref{fvm_2D_eq_frac.07}): $h$-positivity, well balanced
property and monotonicity of the energy.
\subsection{h-positivity. Stationary points}
\begin{proposition}[$h$-positivity]
There exists an upper bound $\tau_n$ for the time step
$\triangle t_n$ such that if $\triangle t_n<\tau_n$ and
$h^n>0$, then $h^{n+1}\geq 0$.
\end{proposition}
\noindent
\begin{proof} For any cell $i$ one has
\begin{equation*}
\begin{array}{ll}
\sigma_i\theta_ih^{n+1}_i+\sigma_i\theta_i\triangle t_n\iota(t^{n+1},h^{n+1}_i)=&\sigma_i\theta_ih^{n}_i\left(1-\displaystyle\frac{\triangle t_n}{\sigma_i}\sum\limits_{j\in{\cal N}(i)}l_{(i,j)} (v_n)^{n,+}_{(i,j)}
\right)+\\
&\displaystyle +\triangle t_n \sum\limits_{j\in{\cal N}(i)}l_{(i,j)}(\theta h^n)_{j} (v_n)^{n,-}_{(i,j)}+\sigma_i\triangle t_nr(t^{n+1}).
\end{array}
\end{equation*}
A choice for the upper bound $\tau_n$ is given by
\begin{equation}
\label{fvm_2D_eq_frac.07-1}
\tau_n=\displaystyle\frac{1}{v^n_{\rm max}}\min_i\left\{\displaystyle\frac{\sigma_i}{\sum\limits_{j\in{\cal N}(i)}l_{(i,j)}}\right\}.
\end{equation}
where $v^n_{\rm max}=\max\limits_{i,j}(v_n)^{n,+}_{(i,j)}$.
\end{proof}
\begin{proposition}[Well balanced]
The lake and the uniform flow are stationary points of the
scheme {\rm (\ref{fvm_2D_eq_frac.07})}.
\end{proposition}
\noindent
\begin{proof}
One can prove this result similarly as in propositions
\ref{river} and \ref{lake}.
\end{proof}
Unfortunately, the semi-implicit scheme
(\ref{fvm_2D_eq_frac.07}) does not preserve the monotonicity
of the energy.
\subsection{Discrete energy}
The variation of the energy between two consecutive time
steps can be written as
\begin{equation}
\label{fvm_2D_eq_frac.08}
\begin{array}{rl}
{\cal E}^{n+1}-{\cal E}^n&=\displaystyle\sum\limits_i\theta_i\sigma_i(h_i^{n+1}-h^n_i)(w_i^n-\displaystyle\frac{|\boldsymbol{v}^n_i|^2}{2})+\\
&+\displaystyle\sum\limits_i\theta_i\sigma_i\left<(h\boldsymbol{v})^{n+1}_i-(h\boldsymbol{v})^{n}_i,\boldsymbol{v}^n_i\right>+\\
&+g\displaystyle\sum\limits_i\theta_i\sigma_i\displaystyle\frac{(h_i^{n+1}-h^n_i)^2}{2}+\sum\limits_i\theta_i\sigma_i \displaystyle\frac{h_i^{n+1}}{2}\left|\boldsymbol{v}^{n+1}_i-\boldsymbol{v}^{n}_i\right|^2.
\end{array}
\end{equation}
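Since (\ref{fvm_2D_eq_frac.08}) is a purely algebraic identity in the two states $(h,\boldsymbol{v})^{n}$ and $(h,\boldsymbol{v})^{n+1}$, it can be verified on random data; the sketch below (arbitrary cell data, our own notation) does exactly that.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, g = 50, 9.81
theta, sig, z = rng.random(n) + 0.5, rng.random(n) + 0.5, rng.random(n)
h0, h1 = rng.random(n) + 0.1, rng.random(n) + 0.1
v0, v1 = rng.normal(size=(n, 2)), rng.normal(size=(n, 2))

def E(h, v):   # total discrete energy, built from the cell energy above
    return np.sum(theta * sig * (0.5 * (v**2).sum(1) * h
                                 + 0.5 * g * h**2 + g * z * h))

w0 = g * (z + h0)
rhs = (np.sum(theta * sig * (h1 - h0) * (w0 - 0.5 * (v0**2).sum(1)))
       + np.sum(theta * sig * ((h1[:, None]*v1 - h0[:, None]*v0) * v0).sum(1))
       + 0.5 * g * np.sum(theta * sig * (h1 - h0)**2)
       + 0.5 * np.sum(theta * sig * h1 * ((v1 - v0)**2).sum(1)))
assert np.isclose(E(h1, v1) - E(h0, v0), rhs)
\end{verbatim}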
If the sequence $(h,\boldsymbol{v})^n$ is given by the
scheme (\ref{fvm_2D_eq_frac.07}), we obtain
\begin{equation}
\label{fvm_2D_eq_frac.09}
\begin{array}{rl}
{\cal E}^{n+1}-{\cal E}^n&=-\displaystyle\triangle t_n\sum\limits_i\sigma_i{\cal K}(h^{n+1})|\boldsymbol{v}^{n+1}_i|\left<\boldsymbol{v}_i^{n+1},\boldsymbol{v}^{n}_i\right>+\\
&+\displaystyle g\sum\limits_i\theta_i\sigma_i\displaystyle\frac{(h_i^{n+1}-h^n_i)^2}{2}+\sum\limits_i\theta_i\sigma_i\displaystyle\frac{h_i^{n+1}}{2}\left|\boldsymbol{v}^{n+1}_i-\boldsymbol{v}^{n}_i\right|^2+\\
&+TS+TB,
\end{array}
\end{equation}
where $TB$ and $TS$ stand for the contribution of boundary
and mass source to the energy production.
Note that, in the absence of $TB$ and $TS$ we cannot
conclude from (\ref{fvm_2D_eq_frac.09}) that the energy is
decreasing in time. Our numerical computations emphasize
that the scheme introduces spurious oscillations in the
neighborhood of the lake points. In order to counteract a
possible increase of energy introduced by the semi-implicit
scheme (\ref{fvm_2D_eq_frac.07}) and to eliminate these
oscillations, we introduce an artificial viscosity in the
scheme \cite{veque, kurganov}. Adding a ``viscous''
contribution to the term ${\cal J}$,
\begin{equation}
\label{fvm_2D_eq_frac.10}
{\cal J}^{\boldsymbol{v}}_{a\,i}={\cal J}_{a\,i}(h,\boldsymbol{v})+
\displaystyle\sum\limits_{j\in{\cal N}(i)}l_{(i,j)}\mu_{(i,j)} ((v_a)_j- (v_a)_i),
\end{equation}
the variation of energy is now given by
\begin{equation}
\label{fvm_2D_eq_frac.11}
{\cal E}^{n+1}_{\boldsymbol{v}}-{\cal E}_{\boldsymbol{v}}^n={\cal E}^{n+1}-{\cal E}^n-\triangle t_n\displaystyle\sum\limits_{s(i,j)}l_{(i,j)}\mu_{(i,j)}\left|\boldsymbol{v}_i-\boldsymbol{v}_j\right|^2,
\end{equation}
where $\mu_{(i,j)}>0$ is the artificial viscosity.
\subsection{Stability}
The stability of any numerical scheme ensures that errors in
data at a time step are not further amplified along the next
steps. To acquire the stability of our scheme, we have
investigated several time-bounds $\tau_n$ and different
formulas for the viscosity $\mu$. The best results were
obtained with
\begin{equation}
\label{fvm_2D_eq_frac.12}
\tau_n=\displaystyle\frac{\phi_{\rm min}}{c^n_{\rm max}}, \quad
\mu_{(i,j)}=(\theta h)_{(i,j)}c_{(i,j)},
\end{equation}
where
\begin{equation}
\label{fvm_2D_eq_frac.13}
\begin{array}{l}
c_i=|\boldsymbol{v}|_i+\sqrt{gh_i},\\
c_{\rm max}=\max\limits_i \{c_i\},\\
c_{(i,j)}=\max\{c_i,c_j\},\\
\phi_{\rm min}=\min\limits_i
\left\{
\displaystyle\frac{\sigma_i}{\sum\limits_{j\in{\cal N}(i)}l_{(i,j)}}
\right\}.
\end{array}
\end{equation}
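A sketch of how (\ref{fvm_2D_eq_frac.12})--(\ref{fvm_2D_eq_frac.13}) could be evaluated on the same hypothetical edge-based layout used above (boundary interfaces are ignored; at interfaces with $(v_n)_{(i,j)}=0$, where (\ref{fvm_2D_eq.06}) leaves $\theta h_{(i,j)}$ undefined, we arbitrarily take the value from cell $i$):
\begin{verbatim}
import numpy as np

def cfl_bound_and_viscosity(h, v, theta, sigma, edges, g=9.81):
    """Time-step bound tau_n and interface viscosities mu_(i,j)."""
    c = np.linalg.norm(v, axis=1) + np.sqrt(g * h)  # c_i = |v|_i + sqrt(g h_i)
    perim = np.zeros_like(h)
    mu = {}
    for i, j, l, n in edges:
        perim[i] += l
        perim[j] += l
        vn = float((0.5 * (v[i] + v[j])) @ n)
        thij = theta[i] * h[i] if vn >= 0.0 else theta[j] * h[j]
        mu[(i, j)] = thij * max(c[i], c[j])         # (theta h)_(i,j) c_(i,j)
    tau = np.min(sigma / perim) / np.max(c)         # tau_n = phi_min / c_max
    return tau, mu
\end{verbatim}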
\begin{remark}
An upper bound for the time-step, as in {\rm
(\ref{fvm_2D_eq_frac.12})}, is well known in the theory of
hyperbolic systems as the CFL condition {\rm \cite{bouchut-book, veque}}.
\end{remark}
\section{Validation}
A rough classification of validation methods splits them
into two classes: internal and external. For the internal
validation, one analyses the numerical results within a
theoretical framework: comparison with analytical results,
sensitivity to the variation of the parameters, robustness,
stability with respect to errors in the input data, etc.
These methods validate the numerical results with respect to
the mathematical model and not with respect to the physical processes;
this type of validation is absolutely necessary to ensure
the mathematical consistency of the method.
The external validation methods assume a comparison of the
numerical data with measured real data. The main advantage
of these methods is that a good consistency of data
validates both the numerical data and the mathematical
model. In the absence of measured data, one can perform a
qualitative analysis: checking that the evolution given by the
numerical model is similar to the observed one, without
claiming quantitative accuracy.
\subsection{Internal validation}
We compare numerical results given by a 1-D version of our
model with the analytical solution for a Riemann Problem
\footnote{Ion S, Marinescu D, Cruceanu SG. 2015. Riemann
Problem for Shallow Water Equations with Porosity. {\\ \tt
http://www.ima.ro/PNII\_programme/ASPABIR/pub/slides-CaiusIacob2015.pdf}}.
Figure \ref{fig_1Dcomparison} shows a very good agreement
when the porosity is constant and a good one when the
porosity (cover plant density) varies.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.6\textwidth]{sds_h_case1_01_farav.eps}
\hspace{-3cm}
\includegraphics[width=0.6\textwidth]{sds_h_case1_01.eps}
\caption{Comparison of the numerical and analytical
solutions for the Riemann Problem. The surface is
described by $z=1$; at the initial moment we have the
velocity field $\boldsymbol{v}=\boldsymbol{0}$ and a
discontinuity in the water-depth $h$:
$\left\{ h=9,\; {\rm for }\; x<1 \right\}$,
$\left\{ h=1,\; {\rm for }\; x>1 \right\}$. Left picture
- constant porosity: $\theta=1$. Right picture -
variable porosity:
$\left\{ \theta=0.8,\; {\rm for }\; x<1 \right\}$,
$\left\{ \theta=1,\; {\rm for }\; x>1 \right\}$.}
\label{fig_1Dcomparison}
\end{figure}
Also, in Figure \ref{fig_2Dcomparison} we analyze the
response of our model to the variation of the parameters.
\subsection{External validation}
Unfortunately, we do not have data for the water
distribution, plant cover density and measured velocity
field in a hydrographic basin to compare our numerical
results with. However, to be closer to reality, we have
used GIS data for the soil surface of Paul's Valley and
performed a theoretical experiment: starting with a
uniform water depth on the entire basin and using different
cover plant densities, we ran our model ASTERIX, based on a
hexagonal cellular automaton \cite{sds-ADataPortingTool}.
Figure \ref{fig_2Dcomparison} shows that the numerical
results are consistent with direct observations concerning
the water time residence in the hydrographic basin.
\begin{landscape}
\begin{figure}[ht]
\centering
\includegraphics[width=0.6\textwidth]{sds_paul_veg3proc_ape_20.eps}
\hspace{1cm}
\includegraphics[width=0.6\textwidth]{sds_paul_veg35proc_ape_20.eps}
\caption{Snapshot of water distribution in Paul's Valley
hydrographic basin. Direct observations indicate that
the water time residence depends on the density of the
cover plant. Our numerical data are consistent with
terrain observations: the water drainage time is
bigger for the case of higher cover plant density.
$\theta=3\%$ and $\theta=35\%$ for the left and right
picture, respectively.}
\label{fig_2Dcomparison}
\end{figure}
\end{landscape}
Figure \ref{fig_asterix_vs_caesar} shows the results for the
water content in Paul's Valley basin obtained with our
models ASTERIX and CAESAR-Lisflood-OSE
\cite{sds-ADataPortingTool, sds-ose}. This variable $q$ is
in fact the relative amount of water in the basin at the
moment of time $t$:
\begin{equation*}
q(t) = \displaystyle\frac{\displaystyle\int_{\Omega}h(t,x){\rm d}x}{\displaystyle\int_{\Omega}h(0,x){\rm d}x}.
\end{equation*}
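On the polygonal mesh, $q(t)$ is evaluated with the cell areas as quadrature weights; a minimal sketch (same hypothetical array layout as before):
\begin{verbatim}
import numpy as np

def water_content(h, h0, sigma):
    """Discrete counterpart of q(t) on the polygonal mesh."""
    return float(np.sum(sigma * h) / np.sum(sigma * h0))
\end{verbatim}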
\begin{figure}[htbp]
\centering
\includegraphics[width=0.49\textwidth]{sds_asterix_qout_3h.eps}
\includegraphics[width=0.49\textwidth]{sds_caesar_qout_3h.eps}
\caption{Time evolution of the water content in Paul's
Valley hydrographic basin with ASTERIX (left picture)
and CAESAR (right picture).}
\label{fig_asterix_vs_caesar}
\end{figure}
This variable is also a measure of the amount of water
leaving the basin. A general issue is whether or not
higher cover plant densities can prevent soil erosion and
flooding. Both pictures show that if the cover plant
density increases, then the decrease rate $\dot{q}$ of
$q$ is smaller in absolute value. One can think of a
``characteristic velocity'' of the water movement in the
basin, and this velocity is directly related to
$\dot{q}$. We can now speculate that smaller values of
$\dot{q}$ imply milder erosion processes.
This valley belongs to Ampoi's catchment basin. Flooding
generally appears when the discharge capacity of a river is
exceeded by the water coming from the river catchment area.
Our pictures show that higher cover plant densities imply
smaller values of $\dot{q}$, which in turn gives the Ampoi
River the time to evacuate the water flowing from the
valley.
\section*{Acknowledgement}
Partially supported by the Grant 50/2012 ASPABIR funded by
Executive Agency for Higher Education, Research, Development
and Innovation Funding, Romania (UEFISCDI).
\bibliographystyle{plain}
|
1,116,691,499,736 | arxiv | \section{Supersymmetry of Pleba\'nski-Hacyan geometries}
\label{sec-susy}
We are going to consider configurations whose metric is the direct product of
two 2-dimensional subspaces of constant curvature, the first one parametrized
by the first two (timelike and spacelike) coordinates and the second one
parametrized by the last two (spacelike) coordinates. This generic class of
solutions to EM-$\Lambda$ was first obtained by Pleba\'nski {\&} Hacyan in
Ref.~\cite{art:PlebHacyan1979}, and includes as special cases the
Bertotti-Robinson solution ($aDS_{2}\times S^{2}$) and the Nariai universe
($DS_{2}\times S^{2}$) \cite{art:nariai}, whose discovery predates
the work \cite{art:PlebHacyan1979}.
The geometry of the purely spacelike 2-dimensional subspace is expected to
correspond to that of the constant-time sections of a black-hole horizon. The
Maxwell field will have non-vanishing components $F_{01}=\alpha$ and
$F_{23}=\beta$, where $\alpha$ and $\beta$ are real constants (that is: the
components of the Maxwell field are proportional to the volume 2-forms of the
two subspaces). We will make this Ansatz more precise later on.
\subsection{$N=1,d=4$ Supergravity with constant superpotential}
\label{sec-susy1}
As was mentioned before, the minimal version of this theory was constructed by Townsend in
Ref.~\cite{Townsend:1977qa} and when coupled to a vector multiplet corresponds to a supersymmetric version of the
EM-$\Lambda$ theory with the cosmological constant $\Lambda = -8g^{2}$ being
of the anti-De Sitter kind. The supersymmetry transformations of the fermions
for vanishing fermions are\footnote{
For clarity's sake we mention that we are using a normalized version of the slash,
{\em i.e.\/} for the 2-form $F$ we have $2\slashed{F} \equiv F_{ab}\gamma^{ab}$.
}
\begin{eqnarray}
\label{eq:Town1}
0\; =\; \delta_{\epsilon}\psi_{\mu} & = &
\nabla_{\mu}\epsilon
+\tfrac{i}{2}g\gamma_{\mu}\epsilon^{*}\, ,\\
& & \nonumber \\
\label{eq:Town2}
0\; =\; 2\delta_{\epsilon}\lambda & = &
\not\! F^{+}\epsilon\, ,
\end{eqnarray}
\noindent
where $\nabla$ is the general and Lorentz-covariant derivative.
That this theory does not admit supersymmetric solutions of the type we are
after is easily deduced by calculating the integrability condition for
Eq.~(\ref{eq:Town1}):
\begin{equation}
\left[\ \slashed{R}_{\mu\nu} \ +\ g^{2}\gamma_{\mu\nu}\ \right]\epsilon \; =\; 0\, .
\end{equation}
\noindent
The split into 2-dimensional spaces of constant curvature implies that {\em
e.g.\/} $\slashed{R}_{02}=0$, which immediately implies that $\epsilon =0$,
whence no supersymmetric PH solutions exist.
\subsection{Minimal gauged $N=1,d=4$ supergravity}
\label{sec-susy2}
This $N=1$ $d=4$ theory was constructed by Freedman in
Ref.~\cite{Freedman:1976uk} and has the curiosity that it corresponds to a
supergravity theory with a De Sitter-like cosmological constant ($\Lambda =
g^{2}/2$). The relevant supersymmetry transformations for vanishing fermions
are
\begin{eqnarray}
\label{eq:Freed1}
0\; =\; \delta_{\epsilon}\psi_{\mu}
& = &
\left[\nabla_{\mu} +\tfrac{i}{2}gA_{\mu}\right]\epsilon \, ,
\\
& & \nonumber \\
\label{eq:Freed2}
0\; =\; \delta_{\epsilon}\lambda
& = &
\left[ \slashed{F}^{+}\, -\, \textstyle{i\over 2}\ g\ \right]\epsilon\, .
\end{eqnarray}
De Sitter spacetime is a solution of the theory but breaks all supersymmetries.
The Killing spinor equation (\ref{eq:Freed2}) only admits solutions for our
Ansatz if $\alpha=0$ and
\begin{equation}
\label{eq:Freed3}
\beta \ =\ \pm g/2
\hspace{.5cm}\mbox{and}\hspace{.3cm}
\left[\ 1\ \pm\ i\gamma^{23}\ \right]\epsilon \ =\ 0\, ,
\end{equation}
\noindent
so that we are dealing with a purely magnetic configuration.
\par
The integrability condition of the Killing spinor equation (\ref{eq:Freed1}) reads
\begin{equation}
\left[\ \slashed{R}_{\mu\nu} \, -\, ig\ F_{\mu\nu}\ \right]\epsilon \; =\; 0\, .
\end{equation}
\noindent
The product structure of the metric that we have assumed indicates that the
first factor must be flat 2-dimensional Minkowski spacetime and the second a
2-sphere whose curvature is related to $\beta$ and, therefore, to $g$.
At this point a more precise form for the Ansatz becomes necessary: using
standard spherical coordinates for the 2-sphere we write
\begin{equation}
\begin{array}{rcl}
ds^{2} & = & dt^{2} - dx^{2} - R^{2}(d\theta^{2} +\sin^{2}\theta d\phi^{2})\,
,
\\
& & \\
A_{\phi} & = & -\beta\ R^{2}\ \cos\theta \, .
\end{array}
\end{equation}
\noindent
The non-vanishing components of the Ricci and Maxwell field strength tensors
are, in the obvious tetrad basis
\begin{equation}
R_{22}\ =\ R_{33}\ =\ -\frac{1}{R^{2}}\, ,
\hspace{1cm}
F_{23} \ =\ \beta\, .
\end{equation}
The Maxwell equations are automatically solved as the field strength is an
invariant 2-form on a symmetric space; the Einstein equations are solved if
\begin{equation}
\frac{1}{R^{2}} \; =\; \frac{g^{2}}{4} \ +\ \beta^{2}\, ,
\end{equation}
\noindent
which due to Eq.~(\ref{eq:Freed3}) implies:
\begin{equation}
\label{eq:Freed10}
R \; =\; \sqrt{2}/g \; .
\end{equation}
In order to finish the analysis we need to solve the Killing spinor equations
(\ref{eq:Freed1}); the $0$, $1$, $2$ components are trivial and are solved by
any $t$-, $x$- and $\theta$-independent spinor. The last component is also
trivially satisfied once we take into account the following relation
between the spin and the gauge connections $A_{3}= \pm\ g^{-1}\ \omega_{323}$ and
use the projection in Eq.~(\ref{eq:Freed3}).
In conclusion we found a half-BPS solution to Freedman's gauged $N=1$ $d=4$
supergravity that is purely magnetic and whose geometry is
$\mathbb{R}^{1,1}\times S^{2}$. The obvious question then is: can this geometry
be the NH limit of a black hole? A first naive worrisome point is about the
occurrence of the $\mathbb{R}^{1,1}$ factor in the NH geometry, as the usual
one of supersymmetric black holes would not give rise to $\mathbb{R}^{1,1}$
but rather to $aDS_{2}$. But, as said, this is a naive preoccupation as,
following Gutowski {\&} Papadopoulos, we are asking for the NH-geometry to be
supersymmetric and not the complete solution. If we then couple this to the
fact that the NH geometry of black holes with non-vanishing temperature, such
a Schwarzschild's, leads to a 2-dimensional Rindler space which is locally
isometric to $\mathbb{R}^{1,1}$, the preoccupation should cease to exist. So
in order to find the candidate black hole whose NH-limit gives rise to the
supersymmetric solution, we should analyze the NH-limits of
magnetically-charged black holes with spherical topology in De Sitter spaces.
\section{Reissner-Nordstr\"om-De Sitter black holes}
\label{sec:RNDSbhs}
\begin{figure}
\centering
\includegraphics[height=4cm]{DSbhSpectrum}
\caption{
A plot of the values of $M$ and $Z$ for which the RNDS
black holes exist. The straight line corresponds to the extreme black
holes, {\em i.e.\/} the ones for which $M^{2}=Z^{2}$.
}
\label{fig:DSbhSpectrum}
\end{figure}
The Reissner-Nordstr\"om-De Sitter (RNDS) black holes can be written in
standard coordinates as
\begin{eqnarray}
\label{eq:31}
ds^{2} & =& fdt^{2} \ -\ f^{-1}dr^{2} \ -\ r^{2}dS^{2}_{[\theta ,\varphi ]}
\; ,\\
& & \nonumber \\
A & =& \frac{Q}{r}\ dt \; -\; P\cos\theta d\varphi \; ,
\end{eqnarray}
\noindent
where $dS^{2}_{[\theta ,\varphi ]} $ stands for the round metric on $S^{2}$
with coordinates $\theta$ and $\varphi$, and the function $f=f(r)$ is given by
\begin{equation}
\label{eq:1}
f \; =\; -\frac{\Lambda}{6}r^{2} \ +1\ -\frac{2M}{r} \ +\
\frac{Z^{2}}{r^{2}}\, ,
\hspace{.4cm}\mbox{with}\;
Z^{2} \ \equiv\ Q^{2}+P^{2} \; .
\end{equation}
As is well known, De Sitter black holes need not exist for all values of the
mass, $M$, and the electromagnetic charge, $Z$; the pairs $(M,Z)$
that can give rise to black holes are indicated in
Fig.~(\ref{fig:DSbhSpectrum}) by the grey area and its boundary. As is
apparent from the figure, $M$ and $|Z|$ are bounded by maximal values that in
our normalization of $\Lambda$ are given by
\begin{equation}
\label{eq:2}
M_{crit}\; =\; \frac{2}{3\sqrt{\Lambda}} \; =\; \frac{2\sqrt{2}}{3g}
\hspace{.5cm}\mbox{and}\hspace{.5cm}
Z^{2}_{crit} \; =\; \frac{1}{2\Lambda} \; =\; \frac{1}{g^{2}} \; .
\end{equation}
\noindent
A point in the grey area corresponds to a black hole with three horizons,
namely an inner one at $r=r_{i}$, an outer one at $r=r_{o}$ and a cosmological
horizon at $r=r_{c}$, the nomenclature deriving from the fact that
$0<r_{i}<r_{o}<r_{c}$. Furthermore, all these horizons are {\em warm} in the
sense that they correspond to single zeroes of $f$, whence one can associate a
temperature to at least the outer and the cosmological horizon.\footnote{ As
is well-known by expanding $f$ in Eq.~(\ref{eq:1}) around the horizon
location $r=r_{H}$ as $f=(r-r_{H})\ h(r)$ with $h$ being regular at $r_{H}$,
one finds that the NH geometry is that of a Rindler space of temperature $T=
h(r_{H})/(4\pi )$ times a 2-sphere of radius $r_{H}$. }
The left boundary corresponds to those black holes for which the inner and the
outer horizon coincide $0<r_{i}=r_{o}<r_{c}$, implying that this coincident
horizon, but not the cosmological horizon, has zero temperature: these black
holes are called {\em cold black holes}. The right boundary corresponds to
the situation where the outer and the cosmological horizons coincide
$0<r_{i}<r_{o}=r_{c}$ and are also cold black holes; they receive the name
{\em Nariai} black holes. The intersection of these two boundaries,
corresponding to the pair $(M_{crit},Z_{crit})$, for which all three horizons
coincide, goes by the name {\em ultracold black hole} \cite{Romans:1991nq}.
This small discussion then brings us to the question: How are we to identify
the RNDS black-hole solution whose NH limit gives us the supersymmetric
Pleba\'nski-Hacyan solution? The answer is simple: by looking at the NH limit
of the gauge field! First of all, a non-zero $Q$ would lead to a non-zero
$F_{01}$ so we will take $Q=0$. The NH limit of the vector field strength for
a horizon located at $r=r_{H}$ is
\begin{equation}
\label{eq:3}
F \ =\ d( -P\cos\theta \ d\varphi ) \ =\ P\ d\theta \wedge\ \sin\, \theta d\varphi
\ \longrightarrow\ \frac{P}{r_{H}^{2}}\ e^{2}\wedge e^{3} \; ,
\end{equation}
\noindent
and leads to the identification that $P=\beta \ r_{H}^{2}$. Seeing that the
value of $\beta$ for the supersymmetric solution is given in
Eq.~(\ref{eq:Freed3}) and that $r_{H}$ is effectively the radius of the
2-sphere in the NH limit, Eq.~(\ref{eq:Freed10}), we can deduce that our
candidate black hole must have
\begin{equation}
\label{eq:6}
P\; =\; \beta\ r_{H}^{2} \; =\; \pm \frac{g}{2}\
\left( \frac{\sqrt{2}}{g}\right)^{2} \; =\; \pm 1/g \; ,
\end{equation}
\noindent
implying that our candidate black hole is none other than the ultracold black hole.
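This identification can be checked explicitly: with $\Lambda=g^{2}/2$, the critical values of Eq.~(\ref{eq:2}) and $P=\pm 1/g$, the function $f$ of Eq.~(\ref{eq:1}) has a triple zero precisely at $r_{H}=\sqrt{2}/g$, the radius of the supersymmetric Pleba\'nski-Hacyan solution. A minimal symbolic sketch:
\begin{verbatim}
import sympy as sp

g, r = sp.symbols("g r", positive=True)
Lam = g**2 / 2                       # Freedman's cosmological constant
M = 2 * sp.sqrt(2) / (3 * g)         # M_crit
Z2 = 1 / g**2                        # Z_crit^2 (Q = 0, P = 1/g)
f = -Lam * r**2 / 6 + 1 - 2 * M / r + Z2 / r**2

rH = sp.sqrt(2) / g                  # NH radius of the BPS PH solution
assert all(sp.simplify(sp.diff(f, r, k).subs(r, rH)) == 0
           for k in (0, 1, 2))       # f = f' = f'' = 0: a triple zero
\end{verbatim}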
This poses, however, an immediate problem, one already pointed out by Romans
\cite{Romans:1991nq}: as the horizon of the ultracold black hole corresponds
to a triple zero of the function $f$ in Eq.~(\ref{eq:1}), the naive NH limit
does not give as NH geometry Rindler space times $S^{2}$ but a different one,
one that is not even a solution to the equations of motion: the reason for this is that in
this case the usual procedure of zooming in does not conform to Geroch's
criteria of limiting spaces \cite{Geroch:1969ca}.
There is an alternative limiting procedure that does give rise to the desired
result \cite{Ginsparg:1982rs,Cardoso:2004uz} which basically consists in going
first to the cold limit in which $f(r)$ has a double zero and then taking
the NH limit simultaneously with the ultracold limit in a particular way. The
result is the supersymmetric Pleba\'nski-Hacyan solution\footnote{Notice that
we can arrive at the same result in a more pedestrian way by taking the NH
limit of a warm or a cold horizon in a first step and then taking the
ultracold limit in a second step. In the first case, we arrive at the NH
geometry Rindler$_{2}\times S^{2}$ in the first step and then adjust the
physical parameters to those of the supersymmetric PH solution in the
second. In the second case, we arrive to the NH geometry $aDS_{2}\times
S^{2}$ in the first step while the second step flattens out the $aDS_{2}$
factor because the ultracold limit is the limit of infinite $aDS$ radius. We
get the same result in all cases.} which can, therefore, be identified as
the NH limit of the ultracold, purely magnetic, RNDS black hole.
\section{Conclusions}
\label{sec-conclusions}
In this letter we have tried to find simple examples of supersymmetric
horizons in $N=1,d=4$ supergravity theories motivated by the prediction made
in Ref.~\cite{Gutowski:2010gv} that, if any, their spatial sections would
always be topologically equivalent to tori. We have focused on two $N=1,d=4$
theories (Freedman's and Townsend's) whose bosonic sector is the cosmological
Einstein-Maxwell theory with positive and negative cosmological constant,
respectively, and on candidate near-horizon geometries which are the direct
product of two 2-dimensional spaces of constant curvature. We have shown that
none of our candidates is supersymmetric in Townsend's theory ($\Lambda <0$)
but we have also shown that one of them, with the geometry
Minkowski$_{2}\times S^{2}$ is actually supersymmetric in Freedman's ($\Lambda
>0$). Then we have shown that this supersymmetric solution is the NH limit of
the ultracold RNDS black-hole solution when the NH limit is correctly
computed, which means that, even though no RNDS black-hole solution is
supersymmetric, the horizon of the ultracold one, which has the topology of
$S^{2}$, is. We can also say that the non-supersymmetric ultracold RNDS black
hole solution interpolates between non-supersymmetric DS spacetime at infinity
and a half-supersymmetric Pleba\'nski-Hacyan solution at the horizon.
This result is a clear counterexample for the generic prediction of
Ref.~\cite{Gutowski:2010gv}. The reason why our spherically-symmetric NH
geometry was missed is, as far as we can see, that the analysis made in that reference is based on a
gravitino Killing spinor equation that is not general enough, and in particular does not
include Freedman's theory.
Of course, our results do not imply that these are the only possible
supersymmetric NH geometries nor that Freedman's theory and its generalizations
are the only possible $N=1,d=4$ supergravities in which supersymmetric NH
geometries can be found.
At this moment we do not have a clear physical interpretation of this
result. We can only stress the fact that the supersymmetric solution has mass
and magnetic charge which are extremized for a given value of the
cosmological/coupling constant. Furthermore, we would like to point out that,
while Townsend's theory is sometimes called $N=1,d=4,aDS$ supergravity,
Freedman's (studied, for instance, in
Refs.~\cite{Chamseddine:1995gb,Castano:1995ci}) is very different from a naive
(and inconsistent) $N=1,d=4,DS$ supergravity and can be embedded in string
theory \cite{Cvetic:2004km}.
As a final comment let us point out that a fake version of Freedman's gauged
supergravity can be constructed and the existence of fake-supersymmetric
NH-geometries can be studied, which shows that indeed there is a
fake-supersymmetric $aDS_{2}\times \Sigma_{g>1}^{2}$ solution. One can then
also show that there is no $aDS$-black hole which has this NH-geometry.
\section*{Acknowledgments}
This work has been supported in part by the Spanish Ministry of Science and
Education grants FPA2006-00783 and FPA2009-07692, a Ram\'on y Cajal fellowship
RYC-2009-05014, the Comunidad de Madrid grant HEPHACOS S2009ESP-1473, the
Princip\'au d'Asturies grant IB09-069 and the Spanish Consolider-Ingenio 2010
program CPAN CSD2007-00042. TO wishes to thank M.M.~Fern\'andez for her
permanent support.
\section{Introduction}
Starburst galaxies (SBGs) are unique sources showing a very intense star formation activity, at a level that can be as high as $\dot{M} \sim 10 \div 100$ $M_{\odot} yr^{-1}$, as discussed by \citet{2004ApJ...606..271G}. Their star forming regions, called starburst nuclei (SBNi), typically extend over a few hundred parsec and are often observed in the cores of SBGs. The rapid star forming activity, which is reflected in an enhanced far infrared (FIR) luminosity \cite[]{2003A&A...401..519M}, leads to a correspondingly higher supernova rate, $\mathcal{R}_{SN} \sim 0.1 \div 1 \; yr^{-1}$, thereby suggesting that SBNi may be efficient sites of cosmic ray (CR) production.
The density of interstellar medium (ISM) in SBNi is estimated to be of the order of $n_{ISM} \sim 10^2$ $cm^{-3}$, with a mass in the form of molecular clouds $M_{mol} \sim 10^8 M_{\odot}$. The mass in the form of ionized gas is typically a few percent of that of the neutral gas \citep[a detailed discussion for the case of M82 was presented by][]{2001ApJ...552..544F}. The FIR radiation can easily reach an energy density of $U_{RAD} \sim 10^3$ $ eV/cm^3$ while the strength of the inferred magnetic field is of order $B \sim 10^2 \div 10^3$ $\mu G$ \citep[e.g.][]{2006ApJ...645..186T}. Moreover, the high supernova rate, together with a possible coexisting AGN activity, is expected to strongly perturb the global SBN environment. Strong winds are in fact observed in many starbursts at every wavelength, with estimated velocities of several hundred kilometers per second, as reported for the case of M82 by \citet{2009ApJ...697.2030S}, \citet{1538-4357-642-2-L127} and \citet{1991ApJ...369..320S}.
Winds and turbulence play a fundamental role in CR transport in SBNi. The former lead to advection of CRs, a phenomenon that typically acts in the same way for CRs of any energy. The latter is responsible for CR diffusion through resonant scattering off perturbations in the magnetic field. The combination of wind advection, diffusion and energy losses shapes the transport of CRs in SBNi and determines whether or not the bulk of CRs is confined inside the nucleus, namely if particles lose most of their energy before escaping the nucleus (through either advection or diffusion). The phenomenon of CR confinement is crucial to understand the production of non-thermal radiation and neutrinos in SBGs. At energies where losses act faster than escape, the production of secondary electrons and positrons is prominent and in fact secondary electrons can be shown to be dominant upon primary electrons, for typical values of environmental parameters. In turn this implies that secondary electrons shape the multifrequency emission of SBNi through their synchrotron (SYN) and inverse Compton (IC) emission, a situation quite unlike the one of our Milky Way. Here we study in detail under which conditions SBNi behave as calorimeters: we find that for the conditions expected in SBNi, transport is dominated by advection with the wind up to very high energies. At sufficiently high energies (depending upon the level of turbulence), diffusion starts being dominant and leads to a transition to a regime where CR protons can leave the SBN before appreciable losses occur. In passing, we notice that the wind itself has been proposed as possible site where particle acceleration to extremely high energies might take place \cite[]{Anchordoqui_1999_Wind_1,Romero_wind,2018PhRvD..97f3010A}.
Several models have been previously developed to describe the behaviour of CRs in starburst environments and infer their high energy emission \cite[]{1996ApJ...460..295P,2004ApJ...617..966T,2008A&A...486..143P,2010MNRAS.401..473R,0004-637X-762-1-29}. In all these works diffusion effects were typically accounted for by assuming a diffusive escape time defined by a power law energy dependence with slope $\delta = 0.5$ and a normalization of a few million years at GeV energies. On the other hand, \cite{Yoast-Hull:2013wwa} assumed that CR transport is dominated solely by wind advection and energy losses, while diffusion would be negligible. \cite{2018MNRAS.474.4073W} focused on hadronic gamma-ray emission in the framework in which SBNi are treated as calorimeters, whereas \citet{Sudoh:2018ana} modeled the proton transport accounting for wind advection and Kolmogorov-like diffusion. SBGs have also been discussed as possible neutrino factories, both as isolated sources \cite[]{2003ApJ...586L..33R,2009ApJ...698.1054D,2015PhDT........94T} and as possible relevant contributors to the global diffuse flux \cite[]{Loeb:2006tw,2011ApJ...734..107L,Tamborra:2014xia,Bechtol:2015uqb}.
In this article we improve with respect to previous studies in several respects: 1) the issue of calorimetric behaviour of SBNi is addressed in a quantitative way, by discussing how different assumptions about the turbulence in the ISM of SBNi change the escape of CRs from the confinement volume as compared with the role of an advecting wind. This means that we can now also describe the transition from calorimetric behaviour to the diffusion dominated regime. This transition is reflected in features in the spectrum of high energy gamma rays from the decay of neutral pions. 2) The spectrum of secondary electrons is self-consistently calculated taking into account advection, diffusion and energy losses, so as to have at our disposal a self-consistent calculation of the multifrequency spectrum of radiation produced by electrons (primary and secondary) through SYN and ICS. 3) The absorption of gamma rays due to electron-positron pair production inside the starburst region is taken into account. This allows us to determine the spectrum of gamma rays reaching us from an individual SBG and the contribution to the diffuse gamma ray background. 4) The secondary electrons resulting from the decay of charged pions and from absorption of gamma rays on the photon background inside a SBN both contribute to the production of diffuse X-ray radiation through SYN emission. The detection of such emission would represent an unambiguous signature of the calorimetric behaviour of SBNi.
The paper is organized as follows: in \S \ref{Sezione_2} we describe the theoretical approach for the calculation of the CR distribution function inside a generic SBN and the associated photon and neutrino spectra. In \S \ref{Sezione_3} we discuss how different assumptions on the diffusion coefficient affect the confinement of cosmic rays inside SBNi, and in \S \ref{Sezione_4} we apply our model to three SBGs, namely NGC253, M82 and Arp220, so as to calibrate our calculation against their observed multifrequency spectra. This allows us to develop a physical understanding of CR transport in a SBG that can be applied to determining the contribution of SBGs to the diffuse gamma-ray and neutrino emission, which will be discussed in detail in a forthcoming paper. We draw our conclusions in \S \ref{Conclusioni}.
\section{Cosmic ray transport in a SBN}
\label{Sezione_2}
Since the starburst nucleus of a SBG is rather compact and populated by both gas and sources, the simplest approach to CR transport in such a region is represented by a leaky-box-like model in which the injection of CR protons and electrons is balanced by energy losses, advection with a wind and diffusion:
\begin{equation}
\phantom{xxxxxxxxx} \frac{f(p)}{\tau_{\rm loss}(p)} + \frac{f(p)}{\tau_{\rm adv}(p)} + \frac{f(p)}{\tau_{\rm diff}(p)} = Q(p) ,
\label{eq:CR_Equation}
\end{equation}
where $f$ is the CR distribution function, $Q$ is the injection term due to supernovae explosions, while $\tau_{\rm loss}$, $\tau_{\rm adv}$ and $\tau_{\rm diff}$ are the timescales of energy losses, wind advection and diffusion, respectively. The characteristic time for energy losses is derived combining effects due to radiative emission and collisions, namely
\begin{equation}
\phantom{xxxxxxxxxxxxxxx} \frac{1}{\tau_{\rm loss}}= \sum_i \left( - \frac{1}{E} \frac{dE}{dt} \right)_i,
\label{CR_losses_timescale}
\end{equation}
where $i$ sums over ionization, proton-proton collisions and Coulomb interactions in the case of protons, whereas in the case of electrons it represents losses due to ionization, synchrotron, inverse Compton and bremsstrahlung. The detailed expressions adopted for each channel are reported for completeness in Appendix \ref{app:timescales}. The advection timescale $\tau_{\rm adv}$ is the ratio between the SBN size and the wind speed, i.e. $\tau_{\rm adv}=R/v_{\rm wind}$, and provides an estimate of the typical time in which particles are advected away from the SBN. Similarly, the diffusion timescale is taken as $\tau_{\rm diff}(p)=R^2/D(p)$, where $D(p)$ is the diffusion coefficient as a function of particle momentum. Here we adopt an expression for $D(p)$ that is inspired by the quasi-linear formalism
\begin{equation}
\phantom{xxxxxxxxxxxxxxxx} D(p)=\frac{r_L(p)v(p)}{3\mathcal{F}(k)},
\label{Kolmogorov}
\end{equation}
where $\mathcal{F}(k)=kW(k)$ is the normalized energy density per unit logarithmic wavenumber $k$, and $W(k)=W_0(k/k_0)^{-d}$, with $k_0^{-1}=L_0$ the characteristic length scale at which the turbulence is injected. We calculate $\mathcal{F}(k)$ by requiring the following normalization condition
\begin{equation}
\phantom{xxxxxxxxxxxxxxx} \int^{\infty}_{k_0}W(k)dk = \left( \frac{\delta B}{B} \right)^2 = \eta_{B}.
\end{equation}
In order to bracket plausible models of CR diffusion in SBNi we adopt three models of diffusion: 1) a benchmark model in which $d=5/3$, $L_{0}=1$ pc and $\eta_B=1$ (Model A), which leads to a Kolmogorov-like diffusion coefficient with asymptotic energy dependence $\sim E^{1/3}$; 2) a case in which $d=0$ and $\eta_B=1$, which leads to a Bohm diffusion coefficient (Model B); 3) a case in which $d=5/3$ and $\eta_B$ is normalized in such a way that the diffusion coefficient at 10 GeV is $\sim 3\times 10^{28}~\rm cm^{2}/s$, which is supposed to mimic the diffusion coefficient inferred for our Galaxy (Model C). The latter case is expected to lead to faster diffusion and weaker confinement of CR protons in the SBN. In Model A, the choice of $L_{0}\ll R$ was made to mimic an ISM with strong turbulence on pc scales. For the cases above we choose a magnetic field $B=200\mu G$ and a size of the SBN $R=200$ pc.
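To make the comparison among the three models concrete, the following minimal numerical sketch (an illustration only, assuming the reference values $B=200\,\mu$G, $R=200$ pc, $v_{\rm wind}=500$ km/s and $L_0=1$ pc quoted above) evaluates the quasi-linear diffusion coefficient of Eq.~\ref{Kolmogorov} for Models A, B and C and compares $\tau_{\rm diff}=R^2/D$ with $\tau_{\rm adv}=R/v_{\rm wind}$:
\begin{verbatim}
import numpy as np

# Reference SBN parameters (assumed here for illustration only)
B, c, e = 200e-6, 3.0e10, 4.803e-10   # G, cm/s, esu
R  = 200 * 3.086e18                   # SBN radius [cm]
vw = 5.0e7                            # wind speed [cm/s]
L0 = 3.086e18                         # turbulence scale [cm] (1 pc)

def r_L(E_GeV):                       # proton Larmor radius [cm]
    return E_GeV * 1.602e-3 / (e * B)

def D(E_GeV, model):                  # quasi-linear D = r_L c / (3 F)
    rl = r_L(E_GeV)
    if model == "B":                  # Bohm: F(k) = eta_B = 1
        return rl * c / 3.0
    # Kolmogorov (d = 5/3): F(k) = (2/3) eta_B (r_L / L0)^(2/3)
    eta_B = 1.0 if model == "A" else D(10.0, "A") / 3.0e28
    return rl * c / (2.0 * eta_B * (rl / L0) ** (2.0 / 3.0))

yr = 3.156e7
for E in (10.0, 1.0e6, 1.0e7):        # 10 GeV, 1 PeV, 10 PeV
    print(E, [R**2 / D(E, m) / yr for m in "ABC"], R / vw / yr)
\end{verbatim}
With these numbers $\tau_{\rm adv}\approx 4\times 10^{5}$ yr, while $\tau_{\rm diff}$ in Model A exceeds it by roughly two orders of magnitude at 10 GeV, so advection indeed controls the escape up to $\sim 10$ PeV.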
Given the starburst nature of the sources, it is expected that the main injection of CRs in SBNi occurs through supernova explosions. The injection term $Q$ in Eq.\ref{eq:CR_Equation} is assumed to be constant in the entire spherical volume and is computed as
\begin{equation}
\phantom{xxxxxxxxxxxxxxxx} Q(p)= \frac{\mathcal{R}_{\rm SN}\mathcal{N}_{p}(p)}{V},
\label{CR_Injection}
\end{equation}
where $\mathcal{N}_{p}(p)$ is the injection spectrum of protons from an individual SNR, and $\mathcal{R}_{\rm SN}$ is the rate of SN explosions in the SBN volume $V$. Assuming that the spectrum of accelerated CR protons has the shape of a power law in momentum with index $\alpha$ up to a maximal value $p_{p,\max}$, we can write
\begin{equation}
\phantom{xxxxxxxxxxx} \mathcal{N}_{p}(p) \propto \left( \frac{p}{m_p c} \right)^{- \alpha}e^{-p/p_{p,\max}},
\end{equation}
where the normalization constant is calculated by requiring that
\begin{equation}
\phantom{xxxxxxxxxxx} \int^{\infty}_0 4 \pi p^2 \mathcal{N}_{p}(p)T(p) dp = \xi_{\rm CR} E_{\rm SN},
\end{equation}
with $T(p)$ the kinetic energy of particles, $\xi_{\rm CR}$ the acceleration efficiency (of order $10\%$), and $E_{\rm SN}$ the explosion energy for which we adopt the typical value of $10^{51}$ erg.
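As a minimal numerical illustration (a sketch only; the grid boundaries are arbitrary, while the $10\%$ efficiency, $10^{51}$ erg and cutoff are the reference values quoted in the text), the normalization can be fixed by direct quadrature:
\begin{verbatim}
import numpy as np

mp_c2, alpha = 0.938, 4.25            # GeV; reference slope
p_max = 1.0e8                         # cutoff 10^5 TeV/c, in GeV/c
budget = 0.1 * 1.0e51 * 624.15        # xi_CR * E_SN in GeV

p = np.logspace(-2, 9, 4000)          # momentum grid [GeV/c]
shape = (p / mp_c2) ** (-alpha) * np.exp(-p / p_max)
T = np.sqrt(p**2 + mp_c2**2) - mp_c2  # kinetic energy [GeV]
N0 = budget / np.trapz(4 * np.pi * p**2 * shape * T, p)
\end{verbatim}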
For electrons, the slope of the injection spectrum is assumed to be the same as for protons, but the cutoff is assumed to be as found in calculations of diffusive shock acceleration in the presence of energy losses and Bohm diffusion \cite[]{2007A&A...465..695Z,Blasi:2009ix}:
\begin{equation}
\phantom{xxxxxxxxxxx} \mathcal{N}_{e}(p) \propto p^{- \alpha}e^{-(p/p_{e,\max})^{2}}.
\end{equation}
Throughout the paper we assume that $p_{p,\max}=10^{5}$ TeV/c and $p_{e,\max}=10$ TeV/c. We also assume that the spectrum of injected electrons has a lower normalization than protons by a factor $\sim 50$, as also assumed by \cite{2004ApJ...617..966T} and \citet{Yoast-Hull:2013wwa} and close to what is inferred for our Galaxy.
In order to quantify the confinement properties of SBNi, namely the situations in which CR protons and electrons lose energy before escaping the SBN, we adopt some reference values for the parameters, summarized in Table~\ref{tab:parameters_table} and adopted in the estimates of time scales for the different processes. We refer to this set of parameters as our ``\textit{reference case}''.
\begin{table}
\centering
\begin{tabular}{|l|r|}
\hline
List of parameters & Value \\ \hline \hline
$D_L$ $({\rm Mpc})$ $[\rm redshift]$ & $3.8 \; [8.8 \times 10^{-4}]$ \\ \hline
$\mathcal{R}_{\rm SN}$ $({\rm yr^{-1}})$ & $0.05$ \\ \hline
$R$ (pc) & $200$ \\ \hline
$\alpha$ & $4.25$ \\ \hline
$B$ $(\mu$G) & $200$ \\ \hline
$v_{\rm wind}$ (km/s) & $500$ \\ \hline
$M_{\rm mol}$ $(10^8 M_{\odot})$ & $1.0$ \\ \hline
$n_{\rm ISM}$ $\rm (cm^{-3})$ & $125$ \\ \hline
$n_{\rm ion}$ $\rm (cm^{-3})$ & $18.75$ \\ \hline
$T_{\rm plasma} \rm (K)$ & $6000$ \\ \hline
$U^{\rm FIR}_{\rm Rad}$ ($\rm eV \,cm^{-3}$) [kT (meV)] & $1101$ $[3.5]$ \\ \hline
$U^{\rm MIR}_{\rm Rad}$ ($\rm eV \,cm^{-3}$) [kT (meV)] & $330$ $[8.75]$ \\ \hline
$U^{\rm NIR}_{\rm Rad}$ ($\rm eV \,cm^{-3}$) [kT (meV)] & $330$ $[29.75]$ \\ \hline
$U^{\rm OPT}_{\rm Rad}$ ($\rm eV \,cm^{-3}$) [kT (meV)] & $1652$ $[332.5]$ \\ \hline
\end{tabular}
\caption{\label{tab:parameters_table} Table of parameters for the adopted reference case. $D_L$ is the luminosity distance of the source, $\mathcal{R}_{\rm SN}$ is the supernova rate, $R$ is the radius of the SBN, $\alpha$ is the injection index in momentum, $B$ is the mean magnetic field and $v_{\rm wind}$ is the outgoing wind velocity. The molecular cloud mass in the SBN is $M_{\rm mol}$, corresponding to an overall particle density $n_{\rm ISM}$. The ionized gas density is $n_{\rm ion}$, with temperature $T_{\rm plasma}$. The last four lines show the energy density $U$ and the temperature $kT$ of the three IR components due to dust and the optical one due to stars.}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Electrons_Timescale_MNRAS.pdf}\quad\includegraphics[width=0.45\textwidth]{Proton_Timescale_MNRAS.pdf}
\caption{\label{fig:TimescalesPrimo} Energy dependence of the characteristic timescales (expressed in years) of cosmic ray electrons (upper panel) and protons (lower panel) for the parameters of our reference case. Black thick lines represent energy losses, green dotted lines show the advection timescales. The timescales of diffusion are represented by blue dashed lines in the case of Kolmogorov, red dot-dashed in the case of Bohm and magenta dot-dot-dashed in the MW-like case.}
\end{figure}
The time scales for diffusion, advection and energy losses for CR electrons and protons are shown in the top and bottom panels of Figure \ref{fig:TimescalesPrimo}, respectively. The horizontal (dotted green) line refers to the advection time scale, which is clearly independent of energy and is the same for electrons and protons. For typical values of the radius $R \sim 10^2$ $pc$ \citep[see e.g.][]{1538-4357-576-1-L19} and wind velocity $v_{\rm wind} \sim 10^2 \div 10^3$ km s$^{-1}$ \citep[see e.g.][]{1998ApJ...493..129S}, the advection timescale is of the order of a few hundred thousand years.
The time scale for losses of electrons (solid black line) shows an increasing trend for low momenta, reflecting the dominant ionization and bremsstrahlung channels. At high energy synchrotron and inverse Compton scattering start being important and the loss time drops with energy approximately as $E^{-1}$. The time scales for diffusive escape from the SBN for Model A (dashed blue line), Model B (dash-dotted red line) and Model C (dash-dot-dotted magenta line) are also shown. For all these models it is clear that energy losses dominate the transport of electrons at all energies. For Models A and B, the escape of electrons occurs due to wind advection, while for Model C there is a transition from advection to diffusion at energies $\sim$GeV. In any case, SBNi behave as electron calorimeters.
For CR protons, energy losses are dominated by ionization at low energies and by inelastic pp collisions at high energy. For Models A and B of diffusive transport, the loss time scale is always shorter than the time for diffusive escape. However, transport is dominated by wind advection at all energies of interest. The time scales for advection and pp scattering remain comparable over many orders of magnitude in energy, due to the fact that both are roughly energy independent. In other words, SBNi behave as approximate, though not perfect, calorimeters. In Model C, CR transport is dominated by diffusion for energies above $\sim$ GeV, and only a small fraction of the energy is lost during propagation. This latter case does not appear to be well motivated and is shown here only as a rather extreme scenario. Moreover, as we discuss below, the multifrequency spectra of individual SBGs are not easy to explain in the context of Model C.
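An order-of-magnitude check makes the near-calorimetric condition transparent. Taking $\sigma_{pp}\approx 40$ mb and an inelasticity $\kappa\approx 0.5$ (typical values, quoted here only for this estimate), together with the reference parameters of Table~\ref{tab:parameters_table},
\begin{equation}
\tau_{pp} \simeq \frac{1}{\kappa\, \sigma_{pp}\, n_{\rm ISM}\, c} \approx 4\times 10^{5}\,{\rm yr},
\qquad
\tau_{\rm adv} = \frac{R}{v_{\rm wind}} \approx 4\times 10^{5}\,{\rm yr},
\end{equation}
so that roughly half of the proton energy is lost before advection removes the particles from the nucleus, essentially independent of energy.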
\subsection{Secondary and tertiary electrons and neutrinos}
Electron-positron pairs are copiously produced in SBNi because of the severe rate of energy losses of CR protons. Following the approach put forward by \citet{2006PhRvD..74c4018K}, we compute the pion injection rate as
\begin{equation}
\phantom{xxxx} q_{\pi}(E_{\pi}) = \frac{c n_{\rm ISM}}{K_{\pi}} \sigma_{pp}
\left(m_p c^2+\frac{E_{\pi}}{K_{\pi}} \right) n_p \left(m_p c^2+\frac{E_{\pi}}{K_{\pi}}\right),
\label{pion_injection}
\end{equation}
where $K_{\pi}\sim 0.17$ is the fraction of kinetic energy transferred from the parent proton to the single pion. $n_p(E)$ is the proton distribution function in energy, which is linked to the distribution in momentum by $n_p(E)dE= 4 \pi p^2 f_p(p) dp$. The secondary electron injection (here we refer to electrons as the sum of secondary electrons and positrons) is then computed as follows:
\begin{equation}
\phantom{xxxxxxxxxx} q_e(E_e)= 2 \int_{E_e}^{\infty} q_{\pi}(E_{\pi}) \tilde{f}_e \left(\frac{E_{e}}{E_{\pi}} \right) \frac{dE_{\pi}}{E_{\pi}},
\end{equation}
where $\tilde{f}_e$, defined in equations (36-39) of \citet{2006PhRvD..74c4018K}, is reported in Appendix \ref{app:secondaries}. As we discuss below, gamma rays are also produced as a result of the production and decay of neutral pions.
As illustrated in Table \ref{tab:parameters_table}, the density of FIR photons is large enough that the opacity for photons above threshold for pair production is $\tau_{\gamma\gamma}\gg 1$ (see discussion in Appendix \ref{Appendice_Stime_Analitiche}), so that photons with $E_{\gamma} \gtrsim 10$ TeV are absorbed inside the SBN, and give rise to $e^{\pm}$ pairs that we refer to as {\it tertiary electrons}.
The rate of injection of tertiary electrons is calculated using the leading particle approximation suggested by \citet{2013SAAS...40.....A}. The corresponding spectrum of injected pairs is
\begin{align}
\begin{split}
\phantom{xxxxxxx} q_{e}(E,r) & = \int d\epsilon \; n_{\rm bkg}(\epsilon) n_{\gamma}(E,r) \sigma_{\gamma \gamma}(E,\epsilon) c \\
& = n_{\gamma}(E,r)c \tau_{\gamma \gamma}(E)/R,
\end{split}
\label{pair_production_inj}
\end{align}
where $n_{\rm bkg}(\epsilon)$ is the target background photon density and $n_{\gamma}(E)$ is the gamma-ray photon density, related to the photon emissivity through the expression $\epsilon_{\gamma}(E,r) \approx n_{\gamma} (E,r) c/(4 \pi R) $, which accounts for $\pi_0$ decay, synchrotron, inverse Compton and bremsstrahlung emission of electrons. All these radiation mechanisms are discussed in the following subsection.
The $p \gamma$ interaction could also provide a contribution to secondary electrons, provided the maximum energy of CR protons is higher than $\sim 1.5 \times 10^{8}$ GeV, a case that we do not consider here, but could retain some interest in other contexts.
The equilibrium spectrum of secondary (and tertiary) electrons is calculated by solving Eq. \ref{eq:CR_Equation}. However, since for electrons energy losses are always dominant, the equilibrium spectrum is well approximated by $f_{{\rm sec},e}(p)= q_e(p) \tau_{\rm loss}(p)$. Such an approximation is also valid for tertiary electrons above the production threshold. Nevertheless, below this threshold the spectrum does not vanish but is populated by electrons that lose energy during the propagation. To account also for this component we calculate the spectrum of tertiary electrons as
\begin{equation}
f_{{\rm ter},e}(E)= \frac{\tau_{\rm loss}(E)}{E} \int_{E}^{\infty} E' q_e(E') dE'
\end{equation}
where $q_e$ is taken from Eq.~(\ref{pair_production_inj}).
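In practice both equilibrium spectra reduce to simple operations on tabulated arrays; a minimal sketch (assuming the injection rate and the loss time are available on a common, increasing energy grid):
\begin{verbatim}
import numpy as np

def f_secondary(q_e, tau_loss):
    # loss-dominated equilibrium: f = q * tau_loss
    return q_e * tau_loss

def f_tertiary(E, q_e, tau_loss):
    # f(E) = tau_loss(E)/E * int_E^Emax E' q_e(E') dE'
    y = E * q_e
    cum = np.concatenate(([0.0],
          np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(E))))
    return tau_loss / E * (cum[-1] - cum)
\end{verbatim}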
We also computed the production rate of neutrinos from $pp$ interactions, following the approach proposed by \cite{2006PhRvD..74c4018K}, where the muon neutrino injection was written as
\begin{equation}
\phantom{xxxxxxx} q_{\nu_{\mu}}(E)= 2 \int_{0}^{1} \left[ f_{\nu_{\mu}^{(1)}}(x)+f_{\nu_{\mu}^{(2)}}(x) \right]
q_{\pi} \left( \frac{E}{x} \right) \frac{dx}{x},
\label{neutrino_formula}
\end{equation}
with $x= E/E_{\pi}$ and the functions $f_{\nu_{\mu}^{(1)}}$ and $f_{\nu_{\mu}^{(2)}}$, as reported in Appendix \ref{app:secondaries}, describe muon neutrinos produced by the direct decay $\pi \longrightarrow \mu \nu_{\mu}$ and by the muon decay $\mu \longrightarrow \nu_{\mu} \nu_e e$, respectively. The latter process also produces electron neutrinos which are described by the same equation \ref{neutrino_formula} where the square bracket is replaced with the function $f_{\nu_{e}}$ (see Appendix \ref{app:secondaries}). During propagation over cosmological distances, neutrino oscillations lead to equal distribution of the flux among the three flavors.
\subsection{Non thermal radiation from SBNi}
Neutral pion decay is the leading process for the production of $\gamma$-rays in SBNi. Following the approach of \cite{2006PhRvD..74c4018K}, we calculate the photon emissivity in the following way
\begin{equation}
\phantom{xxxxxxxxx} 4 \pi \; \epsilon_{\gamma}(E)= 2 \int_{E_{\min}}^{\infty} \frac{q_{\pi}(E_{\pi})}{\sqrt{E_{\pi}^2 - m_{\pi}^2c^4}} dE_{\pi} ,
\end{equation}
where $E_{\min}=E+m_{\pi}^2c^4/(4E) $ and $q_{\pi}$ is defined in equation \ref{pion_injection}.
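A minimal numerical sketch of this integral (an illustration only, assuming $q_\pi$ tabulated on an increasing pion energy grid, with all energies in GeV):
\begin{verbatim}
import numpy as np

def eps_gamma(E, E_pi, q_pi, m_pi=0.13498):
    # 4 pi eps(E) = 2 int_{E_min}^inf q_pi / sqrt(E_pi^2 - m^2) dE_pi
    E_min = E + m_pi**2 / (4.0 * E)
    sel = E_pi >= E_min
    integrand = q_pi[sel] / np.sqrt(E_pi[sel]**2 - m_pi**2)
    return 2.0 * np.trapz(integrand, E_pi[sel]) / (4.0 * np.pi)
\end{verbatim}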
The emissivity due to bremsstrahlung is calculated here following \cite[]{1971NASSP.249.....S}:
\begin{equation}
\phantom{xxxxxxx} 4 \pi \; \epsilon_{\rm brem}(E)= \frac{n_{\rm ISM} \sigma_{\rm brem} c }{E} \int_{E}^{\infty} N_e(E_e,r) dE_e,
\end{equation}
where $\sigma_{\rm brem} \approx 3.4 \times 10^{-26} \rm cm^2$.
The synchrotron emissivity is calculated using the simplified approach proposed by \cite{2013LNP...873.....G}, namely assuming that all energy is radiated at the critical frequency, $\nu_{\rm syn}=\gamma^2 eB/2\pi m_e c$:
\begin{align}
\begin{split}
4 \pi \; \epsilon_{\rm syn}(\nu)\, d \nu &= P_{\rm syn}(\gamma) N_e(\gamma)\, d \gamma, \\
\gamma &= \sqrt{\frac{\nu}{\nu_{\rm syn}}}, \qquad \frac{d \gamma}{d \nu}= \frac{\nu^{-1/2}}{2 \nu_{\rm syn}^{1/2}},
\end{split}
\label{SY_IC}
\end{align}
where $P_{\rm syn}$ is the total power emitted by a single electron \cite[]{1986rpa..book.....R, 2011hea..book.....L} (see Appendix \ref{app:timescales}).
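Within this delta-function approximation the emissivity follows from a change of variables on the tabulated electron spectrum; a minimal sketch (cgs units, $N_e$ tabulated on an increasing grid of Lorentz factors):
\begin{verbatim}
import numpy as np

def eps_syn(nu, gamma, N_e, B):
    e, m_e, c = 4.803e-10, 9.109e-28, 3.0e10
    sigma_T, U_B = 6.652e-25, B**2 / (8 * np.pi)
    nu_c1 = e * B / (2 * np.pi * m_e * c)  # critical freq. at gamma=1
    g = np.sqrt(nu / nu_c1)                # emitting Lorentz factor
    P = (4.0 / 3.0) * sigma_T * c * g**2 * U_B
    dg_dnu = 0.5 / np.sqrt(nu * nu_c1)
    return P * np.interp(g, gamma, N_e) * dg_dnu / (4.0 * np.pi)
\end{verbatim}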
The low energy background thermal radiation plays a very important role both as a target for ICS and for $\gamma \gamma$ absorption and pair production. We model the dust thermal contribution in the FIR domain with a diluted blackbody (DBB) as proposed by \citet{0004-637X-568-1-88} and \citet{2008A&A...486..143P}, or possibly a combination of them in order to model different kinds of dust emitting at different temperatures. The single-temperature DBB has the following expression
\begin{equation}
\label{Diluted_BB}
\phantom{xxxxxxxxx} n_{\rm FIR}(E)= C_{\rm dil} \frac{8 \pi}{(hc)^3} \frac{E^2}{e^{E/kT}-1} \left( \frac{E}{E_0} \right)^{\sigma}.
\end{equation}
This functional shape allows the dust spectrum to be a pure black body above the energy $E_0$, whereas at lower energies it reduces to a grey body spectrum $\propto E^{2+\sigma}$, where the dust spectral index $\sigma$ generally assumes values between $0$ and $2$ \citep[see][]{0004-637X-568-1-88}. The normalization $C_{\rm dil}$ is obtained from a fit to the IR spectrum of SBGs, while the stellar contribution, treated as a standard blackbody, is obtained by fitting the optical spectrum. We notice that, for the cases considered in \S~\ref{Sezione_4}, we need three different IR (dust) components and one optical component. The presence of three separate populations of dust is probably unphysical, but they are used here to provide a good fit to the spectra.
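A minimal sketch of the diluted blackbody (interpreting the grey-body factor as acting only below $E_0$, as described above; energies in eV, with $hc$ in eV cm):
\begin{verbatim}
import numpy as np

def n_DBB(E, kT, E0, sigma, C_dil):
    hc = 1.2398e-4                     # eV cm
    bb = 8 * np.pi / hc**3 * E**2 / np.expm1(E / kT)
    return C_dil * bb * np.minimum(E / E0, 1.0) ** sigma
\end{verbatim}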
The emissivity of ICS \citep[see][]{PhysRev.167.1159} is computed under the assumption that the low energy background photon field is concentrated at the peak $\epsilon_{\rm peak}$ of the dust and starlight components. This approximation leads to a factor $\sim 2$ uncertainty in the predicted IC flux. We consider this uncertainty acceptable since IC is subdominant compared to other channels.
Within this approximation the IC emissivity is given by:
\begin{align}
\begin{split}
\phantom{xxxxxxxxxx} 4 \pi \; \epsilon_{\rm IC}(E, \epsilon_{\rm peak},r) =
\frac{3 c \sigma_{\rm T}}{4} \frac{U_{\rm rad}}{\epsilon_{\rm peak}^2} \; \times \\ \int^{\infty}_{p_{\min}} f_e(p,r)
\left[ \frac{m_e c^2}{E_e(p)} \right]^2 G\left(q,\Gamma \right) \,4 \pi p^2 dp,
\end{split}
\end{align}
where $U_{\rm rad}$ is the energy density of the thermal component, $f_e(p,r)$ is the electron distribution function (primary + secondaries), $p_{\min}$ is the momentum corresponding to the threshold energy $E_e$ such that $E_e= E/2 \left[1+\left(1+m_e^2 c^4/(E \epsilon_{\rm peak}) \right)^{1/2} \right]$. The function $G(q,\Gamma)$ and the variables $q$ and $\Gamma$ are reported in Appendix \ref{app:timescales}. The luminosity of each thermal component ``$i$'' is computed as $U_{\rm rad,i}= 9 L_{i}/(16 \pi R^2 c)$, namely assuming that the spherical SBN is not opaque at those wavelengths \citep[see also][\S~1.6]{2013LNP...873.....G}.
Gamma rays with energy above threshold for pair production may be absorbed inside the SBN, and in turn lead to the production of (tertiary) electrons (and positrons). In the same way, low frequency radiation may be absorbed due to free-free absorption whose emissivity is given by:
\begin{equation}
\phantom{xxxxxxx} \epsilon_{ff}(E)= 6.8 \times 10^{-38} T^{-1/2} Z^2 n_e n_i e^{-E/kT} \bar{g}_{ff},
\end{equation}
where $\bar{g}_{ff}$ is the mean Gaunt factor \cite[]{1986rpa..book.....R,1973blho.conf..343N} in a plasma with temperature $T$, $Z$ is the electric charge of the plasma elements, namely protons and electrons (with densities $n_i=n_e$).
In order to account for absorption, the flux of radiation escaping the SBN is calculated by solving the radiative transfer equation in the whole starburst nucleus \cite[see, e.g.][]{1986rpa..book.....R}:
\begin{equation}
\phantom{xxxxxxxxxxx} \frac{dI(E,s)}{ds}= \epsilon(E) - I(E,s) \eta(E) ,
\label{radiative_transfer}
\end{equation}
where $\eta$ is the absorption coefficient for photons of given energy $E$. In the high energy part of the spectrum it takes into account $\gamma \gamma$ absorption, $\eta= \eta_{\gamma \gamma}= \int \sigma_{\gamma \gamma}(E,E') n_{\rm bkg}(E') dE'$, whereas at low energies it describes free-free absorption $\eta=\eta_{ff} \approx 0.018 T^{-3/2} Z^2 n_e n_i \bar{g}_{ff} (h/E)^2$. The spatial coordinate $s$ runs through the SBN at a given distance from the center.
The intensity $I(E)$ for each line of sight across the SBN is calculated by solving Eq.~\ref{radiative_transfer} numerically; then, summing over all lines of sight, we obtain the total luminosity of the SBN.
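For a homogeneous nucleus the solution of Eq.~\ref{radiative_transfer} along a chord of length $s$, with $I(E,0)=0$, is simply
\begin{equation}
I(E,s)= \frac{\epsilon(E)}{\eta(E)} \left[ 1 - e^{-\eta(E)\, s} \right],
\end{equation}
which reduces to $\epsilon(E)\, s$ in the optically thin limit $\eta s \ll 1$ and saturates at the source function $\epsilon/\eta$ when $\eta s \gg 1$; for the homogeneous conditions assumed here this closed form is equivalent to the numerical integration.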
Although redshift effects for nearby SBNi are negligible, absorption of gamma rays at very high energies due to pair production off the diffuse background light remains important, and it is accounted for following the approach of \citet{2017A&A...603A..34F} (see Appendix \ref{app:EBL} for the detailed description).
\section{CR Diffusion and calorimetry}
\label{Sezione_3}
The modelling of the non thermal activity of SBGs relies upon the assessment of the assumption of calorimetry, which is often adopted without much discussion in most of the literature on the topic. In this section we address the issue of whether or not CRs lose most of their energy inside SBNi in terms of CR transport, and we discuss the observational signatures in terms of emission of non thermal radiation. In order to reach this goal, we compute the spectra of protons, (primary, secondary and tertiary) electrons and the radiation emitted by them in the three diffusion models discussed earlier.
Model A is our benchmark transport model: it predicts that protons lose an appreciable fraction of their energy inside the SBN, although the time scale for escape is comparable with that of the wind advection. The smallness of the diffusion coefficient for this model causes the advection to be the main channel of escape of CR protons from the nucleus, for energies as high as $\sim 10$ PeV. The time scale of energy losses of protons, dominated by pion production, becomes shorter than the advection time above $\sim 10$ TeV, because of the weak energy dependence of the cross section for this process.
All electrons (primary, secondary from pp collisions and tertiary) lose their energy inside the SBN, hence the assumption of calorimetry is certainly justified for the electrons.
The equilibrium spectra of protons and electrons for Model A are shown in Figure \ref{fig:Kolmogorov} (top panel), where we adopted the reference values of parameters as listed in Table \ref{tab:parameters_table}. The strong role of energy losses makes the spectrum of protons reflect the injection spectrum, with a small correction due to the energy dependence of losses. For electrons, energy losses are always faster than both advection and diffusion, hence their spectrum is steeper than the injection spectrum by approximately one power of energy, since the main channels of energy losses are synchrotron emission in the intense magnetic field of the SBN and IC off the IR photons. Small wiggles are present in the high energy spectrum of secondary electrons (not clearly visible in the Figure due to the large vertical scale) reflecting the fact that the cross section for ICS off photon backgrounds enters the Klein-Nishina regime when $E_{\gamma}\epsilon_{ph}\sim m_{e}^{2} c^4$. At energies $\lesssim$ GeV, the spectrum of primary electrons is dominated by ionization losses, while for secondary electrons the low energy part of the spectrum falls fast because of the threshold for pion production in pp collisions. Tertiary electrons, produced by pair production of high energy gamma rays in the SBN, start at energies $\sim$ TeV, where absorption off the NIR background becomes important. A second peak is present at energies $\sim 20$ TeV due to the peak in the FIR. On the contrary, the contribution due to optical photons is almost negligible. The spectrum of tertiary electrons at energies lower than their minimum injection energy is due to synchrotron and ICS ageing of tertiary electrons injected at higher energies.
The bottom panel of Figure \ref{fig:Kolmogorov} shows the photon spectra from a SBN with the values of the parameters listed in Table \ref{tab:parameters_table}. The number labels refer to the contribution of primary (1), secondary (2) and tertiary (3) electrons. Most gamma rays with energy $\gtrsim 100$ MeV are due to production and decay of neutral pions. The cutoff in the spectrum of gamma rays at energies $\sim 10$ TeV is due to absorption of gamma rays inside the SBN. For larger distances of the galaxies from the Earth the absorption on the extragalactic background light is also expected to become important. We will discuss this point further when dealing with individual sources.
It is interesting to notice that while the synchrotron emission of primary electrons quickly becomes unimportant at high photon energy, the synchrotron emission of secondary and tertiary electrons is dominant in the hard X-ray band. Hence the detection of such hard X-ray emission may be considered as a rather unique signature of strong CR interactions inside the SBN and corresponding copious production of secondary electrons and even efficient gamma ray absorption (tertiary electrons).
In the soft gamma ray band, most emission is due to a combination of ICS and bremsstrahlung of primary and secondary electrons and to synchrotron emission of secondary and tertiary electrons.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Kolmogorov_Particles_MNRAS.pdf}\quad\includegraphics[width=0.45\textwidth]{Kolmogorov_Spectrum_MNRAS.pdf}
\caption{\label{fig:Kolmogorov} Particle and photon spectra in Model A. The upper panel shows primary protons (green dashed), primary electrons (black thick line), secondary electrons (red dashed line) and tertiary electrons (blue dot-dashed line). The lower panel shows the high energy spectral components of $\pi_0$ decay (black dashed line), inverse Compton (red dotted line), synchrotron (blue thick line) and bremsstrahlung (green dot-dashed line). The relative contributions of the different electron populations are separated in primaries ($1$), secondaries ($2$) and tertiaries ($3$).}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Bohm_Particles_MNRAS.pdf}\quad\includegraphics[width=0.45\textwidth]{Bohm_Spectrum_MNRAS.pdf}
\caption{\label{fig:Bohm} Particle and photon spectra in Model B. The line style is the same as in Figure \ref{fig:Kolmogorov}.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{MWLIKE_Particles_MNRAS.pdf}\quad\includegraphics[width=0.45\textwidth]{MWLIKE_Spectrum_MNRAS.pdf}
\caption{\label{fig:MW-like} Particle and photon spectra in Model C. The line style is the same as in Figure \ref{fig:Kolmogorov}.}
\end{figure}
In Figure \ref{fig:Bohm} we show the particle (upper panel) and the photon spectra (lower panel) in the context of Model B, where Bohm diffusion was assumed. Although the time scale for diffusion in Model B is typically much shorter than for Model A, not much difference is observed in the predicted spectra, as a result of the fact that in both models the transport of CRs is mostly dominated by advection and energy losses. Electrons are well confined inside the SBN and lose all their energy inside the nucleus. These two conditions imply that calorimetry is a good approximation for both Models A and B, hence much of what has been said for Model A also applies to Model B.
Model C is qualitatively different from the previous diffusion models, in that the larger diffusion coefficient determines a transition from advection to diffusion dominated transport at $E\sim 1$ GeV for protons, while electrons remain loss dominated. The corresponding results are shown in Figure~\ref{fig:MW-like}. The spectrum of CR protons is steeper than the injection spectrum by an amount determined by the energy dependence of the diffusion coefficient (1/3) and, as a consequence, the injection spectrum of secondary electrons is correspondingly steeper, $\propto E^{-(2.25+1/3)}$. Moreover, the shorter diffusion time leads to a smaller density of secondary electrons when compared with the results of Models A and B, so that the electron spectrum is now dominated by primary electrons.
The main imprints on the spectrum of photons (lower panel) are the steeper spectrum of gamma rays from $\pi^{0}$ decays and the fact that the synchrotron emission in the hard X-ray band is considerably smaller than for Models A and B, as a result of the lack of calorimetry for CR protons.
The different emission in the hard X-ray band between Models A and B on the one hand and Model C on the other illustrates well the potential importance of the detection of hard X-rays from SBNi, in that such photons carry information about the calorimetric properties of the SBN.
Although hard X-rays from the cores of SBGs have been observed \cite[]{2007ApJ...658..258S,2017ApJ...841...44P,2014ApJ...797...79W}, an important contribution to such diffuse emission is typically attributed to unresolved X-ray binaries (XRBs), SNRs, $O$ or early-$B$ spectral type stars, diffuse thermal plasma and possible AGN activity \citep[for a detailed discussion of these components see][]{2002A&A...382..843P}. CR electrons are also expected to contribute to the diffuse hard X-ray emission mainly through ICS on the IR background \citep[see][]{2002A&A...382..843P}. The possibility that a contribution to the diffuse hard X-ray flux could come from synchrotron emission of CR electrons was first suggested by \citet{0004-637X-762-1-29}. Nevertheless, in their model the X-ray emission is dominated by IC from primary electrons and is roughly 10 times smaller than our prediction in the same energy band, which is, instead, dominated by synchrotron emission from secondary and tertiary electrons. In their case the contribution from secondary electrons is much smaller due to a faster diffusion of protons.
Indeed, as we discussed earlier, the contribution of secondary and tertiary electrons to the diffuse hard X-ray emission reflects the effectiveness of the confinement of CRs inside SBNi, which in turn can be expressed in terms of luminosity in some selected bands.
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
& Model A & Model B & Model C \\ \hline \hline
$L_{\gamma}$ & $162 $ & $163 $ & $94 $ \\ \hline
$L_{\rm IR}$ & $1.65 \times 10^{6}$ & $1.65 \times 10^{6}$ & $1.65 \times 10^{6}$ \\ \hline
$L_{X}$ & $13.6 \; [7.1,6.5] $ & $ 14.3 \; [7.8,6.5] $ & $ 5.6 \; [0.5,5.1] $ \\ \hline
$L_{X_1}$ & $4.8 \; [3.4,1.4]$ & $ 5.1 \; [3.7,1.4]$ & $ 1.5 \; [0.3,1.2]$ \\ \hline
$L_{X_2}$ & $5.4 \; [3.0,2.4]$ & $ 5.7 \; [3.2,2.5]$ & $ 2.1 \; [0.2,1.9]$ \\ \hline
$L_{X_3}$ & $5.3 \; [2.0,3.3]$ & $ 5.5 \; [2.2,3.3]$ & $ 2.6 \; [0.1,2.5]$ \\ \hline
\end{tabular}
\caption{\label{tab:outcomes_diffusion_table} Luminosity (expressed in units of $10^{38} erg/s$) in three selected energy bands in Models A, B and C. $L_{\gamma}$ is the gamma-ray luminosity computed in the energy range $0.1-10^2$ GeV, whereas $L_{\rm IR}$ is computed in the far infrared ($8 \; \mu m<\lambda<10^3 \mu m$). $L_X$ is computed in the X-ray band $1-10^2$ keV, whereas $L_{X_{1}}$, $L_{X_2}$ and $L_{X_3}$ are computed in the sub-bands $1-8$ keV, $4-25$ keV and $25-100$ keV, respectively. The square brackets show separately the contributions of SYN and IC to the total luminosity (value outside the brackets).}
\end{table}
In Table~\ref{tab:outcomes_diffusion_table} we show the luminosity in gamma-rays ($0.1-10^2$ GeV), X-rays ($1-10^2$ keV) and IR radiation ($8-10^3 \mu \rm m$). Models A and B basically return the same result. On the other hand, Model C shows a clear reduction in the X-ray and gamma-ray luminosity by about a factor $\sim 2\div 3$, while the IR luminosity remains unchanged since the thermal contribution dominates over synchrotron by $\sim 5$ orders of magnitude.
For completeness, in the same Table, we also report the X-ray luminosities in three sub-bands: $1-8$ keV (typical of Chandra), $4-25$ keV (typical of NuStar) and $25-10^2$ keV.
Clearly, the synchrotron emission of secondary and tertiary electrons can contribute (together with the XRB component) to provide a natural explanation of the hard X-ray extra-component in the band $0.5-10$ keV discussed in \citet{2002A&A...382..843P}.
\section{Application to known SBGs}
\label{Sezione_4}
In this section we specialize our calculation to the case of three nearby SBGs, namely NGC253 and M82 \citep[with $D_L \approx 3.8$ Mpc and $D_L \approx 3.9$ Mpc respectively, as found by][]{2005MNRAS.361..330R,1999ApJ...526..599S} and Arp220 \citep[located at $D_L \approx 77 \; Mpc$, as inferred by][]{1538-4357-492-2-L107}. The latter belongs to the ULIRG class, characterized by a very prominent IR luminosity, higher ISM density and magnetic field energy density and more intense star formation activity \cite[]{2015ApJ...800...70S}. Arp220 shows a rate of SN explosions which is more than one order of magnitude higher than in typical SBGs \cite[]{2003A&A...401..519M,2006ApJ...647..185L}.
For the modelling of the emission from these SBGs we start by fitting the thermal emission, in the $\sim 0.1$ meV - few eV range, assuming that the observed emission in this band is dominated by the SBN and then we tune the other parameters to fit the multiwavelength spectra, from radio to gamma rays. The parameters' values used for each source are listed in Table \ref{tab:input_fits}.
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
Parameters & NGC253 & M82 & Arp220 \\ \hline \hline
$D_L$ (Mpc) [z] & $3.8$ $[8.8 \; 10^{-4}]$ & $3.9$ $[9 \; 10^{-4}]$ & $77.0$ $[1.76 \; 10^{-2}]$ \\ \hline
$\mathcal{R}_{\rm SN}$ (yr$^{-1}$) & $ 0.027$ & $0.05$ & $ 2.25$ \\ \hline
$R$ (pc) & $150$ & $220$ & $250$ \\ \hline
$\alpha$ & $4.3$ & $4.25$ & $ 4.45$ \\ \hline
$B$ ($\mu$G) & $ 170$ & $ 210$ & $ 500$ \\ \hline
$M_{\rm mol}$ $(10^8 M_{\odot})$ & $0.88$ & $1.94$ & $ 57 $ \\ \hline
$n_{\rm ISM}$ (cm$^{-3}$) & $ 250$ & $175$ & $ 3500$ \\ \hline
$n_{\rm ion}$ (cm$^{-3}$) & $ 30$ & $22.75$ & $ 87.5$ \\ \hline
$v_{\rm wind}$ (km/s) & $300$ & $600$ & $500$ \\ \hline
$T_{\rm plasma}$ (K) & $8000$ & $7000$ & $3000$ \\ \hline
$U^{\rm FIR}_{\rm eV/cm^3}$ [$\frac{\rm kT}{\rm meV}$] & $ 1958$ $[3.5]$ & $ 910$ $[3.0]$ & $ 31321$ $[3.5]$ \\ \hline
$U^{\rm MIR}_{\rm eV/cm^3}$ [$\frac{\rm kT}{\rm meV}$] & $ 587$ $[8.75]$ & $ 637$ $[7.5]$ & $ 9396$ $[7.0]$ \\ \hline
$U^{\rm NIR}_{\rm eV/cm^3}$ [$\frac{\rm kT}{\rm meV}$] & $ 587$ $[29.75]$ & $ 455$ $[24.0]$ & $ 125$ $[29.75]$ \\ \hline
$U^{\rm OPT}_{\rm eV/cm^3}$ [$\frac{\rm kT}{\rm meV}$] & $ 2936$ $[332.5]$ & $ 546$ $[330.0]$ & $ 1566$ $[350.0]$ \\ \hline
\end{tabular}
\caption{\label{tab:input_fits} Input parameters for the galaxies examined in \S~\ref{Sezione_4}.}
\end{table}
We check {\it a posteriori} that the best fit values of the parameters are in agreement with values reported in the literature. In particular, the inferred radius of the SBN and the ISM conditions compare well with the values presented by \citet{0004-637X-735-1-19} and \citet{2008ApJ...689L.109H} for NGC253, and by \citet{2003ApJ...599..193F} and \citet{2001ApJ...552..544F} for M82.
For Arp220, we adopt a simplified spherical geometry embedding the two galactic nuclei that are observed. In fact we have adopted parameters that are a reasonable average between the highly compact SBNi and their surrounding environment (\citealp[detailed observations of Arp220 and its ISM condition are discussed in][]{2015ApJ...800...70S,1999ApJ...514...68S}). Parameters like the average magnetic field and the advection speed have been taken consistently with typical values expected from SBNi (\citealp[see for instance][]{2006ApJ...645..186T,2017arXiv170109062H} respectively).
For all three sources analyzed in this section, radio data in the frequency range $1-10$ GHz are taken from \cite[]{Radio_sources}, whereas data at higher energies, namely from $\sim 0.1$ meV to $\sim 10$ eV, have been retrieved from the NED\footnote{The NASA/IPAC Extragalactic Database (NED) is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.} catalog (in particular we use the SED-builder online tool https://tools.ssdc.asi.it/). In all cases we use three DBBs for the dust contribution, a normal BB for the stellar component and a free-free contribution from the thermal plasma. The parameters of these low energy components are listed in the last five rows of Table \ref{tab:input_fits}, while Table~\ref{tab:results} summarizes the main outcomes of our modelling for all three SBNi. Below we briefly describe our findings for the three chosen SBGs and we draw some general conclusions.
\paragraph*{NGC253:} The nuclear region of NGC253 is very compact and luminous at optical wavelengths. This causes a non-negligible $\gamma \gamma$ absorption at energies of a few hundred GeV, which in turn determines a softening of the gamma-ray spectrum already below $\sim 1$ TeV. The spectrum above $100$ MeV is totally dominated by the $\pi_0$ component. Below $\sim 100$ MeV the dominant emission mechanism is IC (mainly from secondary electrons). Only at keV energies does the IC emission become comparable with the SYN components from secondary and tertiary electrons. Relativistic bremsstrahlung is always subdominant but provides a non negligible contribution to the total gamma ray emission in the range $10 \div 100$ MeV.
The multifrequency spectrum of NGC253 is shown in Figure~\ref{fig:NGC253}. The top panel illustrates the good agreement between the results of our modelling of the low energy emission and observations. The bottom panel is more interesting in that it shows the gamma ray emission coming from both the decays of neutral pions and from interactions of electrons and gamma rays with magnetic fields and low energy photon background inside the SBN.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{NGC253_LE_MNRAS_5.pdf}\quad\includegraphics[width=0.45\textwidth]{NGC253_HE_MNRAS_5.pdf}
\caption{\label{fig:NGC253} Multiwavelength spectrum of NGC253. The upper panel shows the low energy spectrum with the relative components: thermal dust DBBs (red, orange and yellow dashed), optical star BB (green dot-dashed), thermal free-free (magenta dot-dot-dashed) and SYN (blue dotted). The lower panel shows the high energy spectral components: $\pi_0$ (red dashed), IC (magenta dotted), BREM (green dot-dashed) and SYN (blue dashed). Together with the photons we show the single flavor neutrino flux (thin gold dashed). The data (black points) were obtained by Fermi-LAT and HESS and presented in \citet{Abdalla:2018nlz} for the HE and VHE domain, whereas the hard X-ray upper limit is taken from \citet{2014ApJ...797...79W}.}
\end{figure}
Gamma-ray data collected by Fermi-LAT and HESS \citep[see][]{Abdalla:2018nlz} are well reproduced. Of particular interest is the shape of the spectrum below $\sim 1$ GeV, where data show a strong hint of the pion bump, a clear signature of the hadronic origin of gamma-rays. The computed hard X-ray flux, contributed by both synchrotron and IC, is at the level of $E^2F(E) \approx 10^{-10} \rm GeV \, cm^{-2} \, s^{-1}$ at $10$ keV, appreciably larger than previous estimates \citep[e.g.][]{0004-637X-762-1-29}, but consistent with detailed observations of the nuclear region of NGC253 performed by NuStar \cite[]{2014ApJ...797...79W}. In this case, our larger flux with respect to \cite{0004-637X-762-1-29} is mainly due to the IC emission from secondary electrons, copiously produced because of the larger confinement time of CRs, whereas SYN dominates only below $\sim 5$ keV.
\paragraph*{M82:} The multiwavelength spectrum of M82 is very similar to that of NGC253, but requires a slightly harder injection (see Table \ref{tab:input_fits}) to explain the harder observed gamma-ray spectrum. In this way, the gamma-ray observations from Fermi-LAT and Veritas \citep[see for instance][]{2012ApJ...755..164A,2009Natur.462..770V} are again well reproduced. The absorption of VHE gamma-rays is almost negligible below a few TeV because the optical background is almost a factor $5$ lower at the peak with respect to the case of NGC253.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{M82_LE_MNRAS_2.pdf}\quad\includegraphics[width=0.45\textwidth]{M82_HE_MNRAS_2.pdf}
\caption{\label{fig:M82} Multiwavelength spectrum of M82. The line style is the same as in Figure \ref{fig:NGC253}. The HE gamma-ray observations are taken from the Fermi-LAT data discussed in \citet{3FGL}, whereas VHE data come from Veritas and are published in \citet{2009Natur.462..770V}. The X-ray point is a Chandra measurement \citep[see][]{2007ApJ...658..258S} that we take as an upper limit because of possible contamination from undetected point-like sources (e.g. XRBs) and thermal plasma.}
\end{figure}
The computed diffuse hard X-ray flux is again very high ($E^2F(E)\approx 10^{-10} \rm GeV \, cm^{-2} \, s^{-1}$ at a few keV) and, differently from NGC253, it is dominated by synchrotron emission of secondary and tertiary electrons up to $\sim 20$ keV. Although no measurement of the truly diffuse hard X-ray flux from the nuclear region of M82 is available at present, recent observations carried out using Chandra \citep[see][]{2007ApJ...658..258S}, XMM-Newton \citep[see][]{2008MNRAS.386.1464R} and more recently NuStar \citep[][]{Bachetti:2014qsa} suggest that our computed hard X-ray diffuse flux is $\approx 5 \% \div 10 \%$ of the total observed flux in the energy band $3-8$ keV; hence we interpret the X-ray point in Figure \ref{fig:M82} as an upper limit to the diffuse emission, since point-like sources could contaminate such a measurement.
\paragraph*{Arp220:}
Our simple assumptions on the geometric properties of the SBN are particularly restrictive when applied to a source such as Arp220, with its complex morphology (two nuclei and possibly a low activity AGN). In this sense, it is noteworthy that, despite such limitations, a reasonable fit to the multifrequency emission can be obtained for this source, using the input parameters listed in the last column of Table \ref{tab:input_fits}. In particular, we have found that our best fit value for the magnetic field ($\sim 500~\mu$G) is about a factor 2 lower than the typical $\sim$mG field assumed in the literature for the two SBNi of Arp220 \citep[see for instance][]{Thompson2006-magneticfield,Barcos-munoz_Arp220_B,McBride_Arp220_B,Yoast-Hull_Arp220}. Our value for the magnetic field is not in tension with previous estimates because it represents an average between the magnetic field inside the two nuclei and the one in the surrounding region, estimated to be $\sim 10^2$ $\mu$G \citep[for similar discussions see also][]{2004ApJ...617..966T,Varenius_Arp220}.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Arp220_LE_MNRAS_4.pdf}\quad\includegraphics[width=0.45\textwidth]{Arp220_HE_MNRAS_4.pdf}
\caption{\label{fig:ARP220} Multiwavelength spectrum of Arp220. The line style is the same as in Figure \ref{fig:NGC253}. Gamma-ray data are taken from \citet{Peng:2016nsx}, whereas the X-ray point (which again we take as an upper limit on our diffuse flux, taking into account possible contamination from point-like sources and thermal plasma) has been taken from \citet{2017ApJ...841...44P}.}
\end{figure}
The multifrequency spectrum of Arp220 is shown in Figure \ref{fig:ARP220}. Gamma ray observations \citep[see][]{Peng:2016nsx} suggest that Arp220 requires a softer injection slope with respect to normal starbursts like NGC253 and M82. Alternatively, one could speculate that the level of turbulence in Arp220 is lower, so as to make CR transport dominated by diffusion. However, this possibility does not seem to sit well with the observed level of activity of this source. On the other hand, it is not easy to envision why one should expect a steeper injection spectrum. In the absence of better indications, here we just assume a steeper injection spectrum.
The dominant gamma-ray component above $\sim 100 \; MeV$ is again the $\pi_0$ decay, whereas at lower energies only ICS and bremsstrahlung emissions are expected to be relevant. Moreover, unlike in normal starbursts, the synchrotron component is completely negligible in the whole high energy part of the photon spectrum (see lower panel of Figure \ref{fig:ARP220}).
The diffuse hard X-ray flux from the central region of Arp220 has been investigated by \citet{2017ApJ...841...44P}. Taking into account that we are modelling the core of Arp220 as a unique region, we show their measured X-ray luminosity coming from the central $4.5''$, corresponding to a radius of $\sim 840$ pc that also accounts for the region between the two nuclei (see X-ray upper limit in the lower panel of Figure \ref{fig:ARP220}). As for the other two SBNi analyzed above, we take this measured luminosity as an upper limit on our non-thermal X-ray flux because of possible contamination from point-like sources. Indeed, after converting the measured luminosity into a differential flux assuming an energy slope of $-1.6$, we find that the measured flux is located above our computed spectrum, as expected.
The application of our calculations of CR transport to individual SBGs allows us to draw some general conclusions: 1) in all cases we considered, observations show that CR protons lose an appreciable fraction of their energy inside the SBN; 2) from the point of view of electrons, the SBN is an excellent calorimeter; 3) most of the emission at frequencies other than high energy gamma rays is dominated by secondary electrons, products of pp collisions; 4) electron-positron pairs are effectively generated because of the absorption of high energy gamma rays on the background light in the SBN. The absorption of gamma rays inside the nucleus inhibits the development of an electromagnetic cascade during propagation, which might have important implications for the sources of high energy neutrinos; 5) the synchrotron emission of secondary and tertiary electrons generates a diffuse hard X-ray emission that can be envisioned as a unique diagnostic to investigate the calorimetric properties of SBGs.
More detailed observations of gamma-ray emission from SBGs with upcoming telescopes, and in particular with the Cherenkov Telescope Array \citep[see][]{2017arXiv170907997C}, will certainly shed new light on the physical processes at work in SBGs.
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
Outcomes & NGC253 & M82 & Arp220 \\ \hline \hline
$E_{\rm SN} \xi_{\rm CR}$ (erg s$^{-1}$) & $ 8.56 \times 10^{40}$ & $ 1.59 \times 10^{41}$ & $ 7.14 \times 10^{42}$ \\ \hline
$L_{0.1-10^2 \rm GeV}$ (erg s$^{-1}$) & $ 1.31 \times 10^{40}$ & $ 1.82 \times 10^{40}$ & $ 1.36 \times 10^{42}$ \\ \hline
$L_{1-10^2 \rm keV}$ (erg s$^{-1}$) & $ 0.81 \times 10^{39} $ & $ 1.51 \times 10^{39} $ & $ 9.91 \times 10^{40} $ \\ \hline
$L_{8-10^3 \mu \rm m}$ (erg s$^{-1}$) & $ 1.65 \times 10^{44}$ & $ 2.27 \times 10^{44}$ & $ 6.51 \times 10^{45}$ \\ \hline
$U_{B}$ (eV cm$^{-3}$) & $ 717.71$ & $ 1095.19$ & $ 6208.54$ \\ \hline
$U_{p}$ (eV cm$^{-3}$) & $ 655.63$ & $ 413.29$ & $ 1323.91$ \\ \hline
$U_{e}$ (eV cm$^{-3}$) & $ 5.06$ & $ 3.41$ & $ 14.35$ \\ \hline
$U_{e,\rm sec}$ (eV cm$^{-3}$) & $ 6.15$ & $ 3.92$ & $ 15.78$ \\ \hline
$U_{e, \rm ter}$ (eV cm$^{-3}$) & $ 3.48 \times 10^{-3}$ & $ 1.49 \times 10^{-3}$ & $ 2.65 \times 10^{-3}$ \\ \hline
\end{tabular}
\caption{\label{tab:outcomes_fit_table} Inferred values for the luminosity at different energies and energy density of magnetic field and non thermal particles for the examined galaxies.}
\label{tab:results}
\end{table}
The single-flavor neutrino fluxes are well described by power laws in energy of index $\alpha - 2$. The flux normalization at $10^2$ TeV obtained for NGC253 and M82 is roughly $10^{-11}$ GeV cm$^{-2}$ s$^{-1}$, and it is about a factor $50$ lower for Arp220.
Considering that the pointlike source sensitivity for IceCube and KM3NeT allows for the detection of a neutrino flux two orders of magnitude higher than what we obtained for NGC253 and M82 \citep[see][]{Aartsen:2017kru,Aiello:2018usb}, the probability of detecting a nearby SBN as an isolated neutrino source is very small.
\section{Conclusions}
\label{Conclusioni}
We have modeled starburst nuclei as leaky box systems, assuming spherical symmetry and homogeneous properties of the medium. We have investigated how different diffusion coefficients change the high energy spectra, modifying the normalization and the slope in the energy range above GeV and determining an enhanced flux in the hard X-ray energy band. We have found that in the most likely diffusion scenario, described by a Kolmogorov diffusion coefficient with $\delta B/B \approx 1$ and a typical perturbation scale $L_0 \approx 1$ pc, the escape is entirely due to wind advection up to PeV energies. At higher energies the timescale on which particles diffuse away can become comparable with those of advection and energy losses.
Normal starbursts like NGC253 and M82 are consistent with an injection slope $\alpha \approx 4.2$--$4.3$, and the softening taking place in the high-energy part of their photon spectra can be explained by $\gamma \gamma$ absorption. On the other hand, the ULIRG Arp220 is compatible with a softer injection, $\alpha = 4.45$. Moreover, in agreement with the results obtained in \cite{Yoast-Hull_Equipartition}, the galaxies we have analysed are consistent with sub-equipartition between the CR-particle and magnetic-field energy densities, namely $U_p/U_B \approx 0.9$, $0.4$ and $0.2$ for NGC253, M82 and Arp220, respectively. The ratio between the gamma-ray luminosity and the total energy injected in CRs ($\ge 1/10$) suggests that proton calorimetry is at least partially achieved in NGC253 and M82, whereas Arp220 appears to confine particles more effectively (with a ratio $\sim 1/5$).
The neutrino flux from individual SBNi was found to be well below the point source sensitivity of current neutrino telescopes. On the other hand, as pointed out by \citet[]{Loeb:2006tw,2011ApJ...734..107L,Tamborra:2014xia,Bechtol:2015uqb}, the contribution of SBNi to the diffuse neutrino flux might be relevant. The implications of the CR confinement studied in the present paper for the diffuse neutrino flux will be discussed in an upcoming article.
\section*{Acknowledgements}
This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
\bibliographystyle{mnras}
\section{Introduction}
Encoding and decoding are central problems in communication \cite{mackay2003information}.
Compressed sensing (CS) provides a framework that separates encoding and decoding into independent measurement and reconstruction processes \cite{candes2006stable,donoho2006compressed}.
Unlike commonly used auto-encoding models \cite{bourlard1988auto,kingma2013auto,rezende2014stochastic}, which feature end-to-end trained encoder and decoder pairs, CS reconstructs signals from low-dimensional measurements via online optimisation.
This model architecture is highly flexible and sample efficient: high dimensional signals can be reconstructed from a few random measurements with little or no training at all.
CS has been successfully applied in scenarios where measurements are noisy and expensive to take, such as in MRI \cite{lustig2007sparse}.
Its sample efficiency enables the development of, for example, the ``single pixel camera'', which reconstructs a full resolution image from a single light sensor \cite{duarte2008single}.
However, the wide application of CS, especially in processing large scale data where modern deep learning approaches thrive, is hindered by its assumption of sparse signals and the slow optimisation process for reconstruction.
Recently, \citet{bora2017compressed} combined CS with separately trained neural network generators.
Although these pre-trained neural networks were not optimized for CS, they demonstrated reconstruction performance superior to existing methods such as the Lasso \cite{tibshirani1996regression}.
Here we propose the deep compressed sensing (DCS) framework in which neural networks can be trained from-scratch for both measuring and online reconstruction.
We show that this framework leads naturally to a family of models, including GANs~\cite{goodfellow2014generative}, which can be derived by training the measurement functions with different objectives.
In summary, this work contributes the following:
\begin{itemize}
\item We demonstrate how to train deep neural networks within the CS framework.
\item We show that a meta-learned reconstruction process leads to a more accurate and orders of magnitude faster method compared with previous models.
\item We develop a new GAN training algorithm based on latent optimisation, which improves GAN performance. The non-saturating generator loss $-\ln \left(D(G(\mathbf{z})) \right)$ emerges as a measurement error.
\item We extend our framework to training semi-supervised GANs, and show that latent optimisation results in semantically meaningful latent spaces.
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=0.25\textwidth]{figures/model.png}
\caption{Illustration of Deep Compressed Sensing. $\mathbf{F}$ is a measurement process that produces a measurement $\mathbf{m}$ of the signal, and $\mathbf{G}$ is a generator that reconstructs the signal from a latent representation $\hat{\vz}$, which is optimised to minimise the measurement error $E_\theta(\mathbf{m}, \hat{\mathbf{m}})$.}
\label{fig:model}
\end{figure}
\subsection*{Notations}
We use bold letters for vectors and matrices and normal letters for scalars. $\expt{p(\mathbf{x})}{f(\mathbf{x})}$ indicates taking the expectation of $f(\mathbf{x})$ over the distribution $p(\mathbf{x})$.
We use Greek-letter subscripts to indicate function parameters. For example, $G_\theta$ is a function parametrised by $\theta$.
\section{Background}
\subsection{Compressed Sensing}
Compressed sensing aims to recover signal $\mathbf{x}$ from a linear measurement $\mathbf{m}$:
\begin{equation}
\mathbf{m} = \mathbf{F} \, \mathbf{x} + \eta
\label{eq:cs}
\end{equation}
where $\mathbf{F}$ is the $C \times D$ \emph{measurement matrix}, and $\eta$ is the measurement noise, which is usually assumed to be Gaussian. $\mathbf{F}$ is typically a ``wide'' matrix with $C \ll D$. As a result, the measurement $\mathbf{m}$ has much lower dimensionality than the original signal, and solving for $\mathbf{x}$ is generally impossible in such under-determined problems.
The elegant CS theory shows that one can nearly perfectly recover $\mathbf{x}$ with high probability given a random matrix $\mathbf{F}$ and sparse $\mathbf{x}$ \cite{donoho2006compressed,candes2006stable}.
In practice, the requirement that $\mathbf{x}$ be sparse can be replaced by sparsity in a basis $\Phi$, such as a Fourier or wavelet basis, so that $\Phi \, \mathbf{x}$ can be a non-sparse signal such as a natural image.
Here we omit the basis $\Phi$ for brevity; the linear transform from $\Phi$ does not affect our following discussion.
At the centre of CS theory is the Restricted Isometry Property (RIP) \footnote{The theory can also be proved from the closely related and more general Restricted Eigenvalue condition \cite{bora2017compressed}. We focus on RIP in this form for its more straightforward connection with the training loss (see section \ref{sec:csml}).},
which is defined for $\mathbf{F}$ and the difference between signals $\mathbf{x}_1 - \mathbf{x}_2$ as
\begin{equation}
\begin{split}
(1 - \delta)\,\norm{\mathbf{x}_1 - \mathbf{x}_2}_2^2 &\leq \norm{\mathbf{F} \, (\mathbf{x}_1 - \mathbf{x}_2) }_2^2 \\
&\leq (1+\delta) \, \norm{\mathbf{x}_1 - \mathbf{x}_2}_2^2
\end{split}
\label{eq:rip}
\end{equation}
where $\delta \in (0, 1)$ is a small constant.
The RIP states that the projection from $\mathbf{F}$ preserves the distance between two signals bounded by factors of $1-\delta$ and $1 + \delta$.
This property holds with high probability for various random matrices $\mathbf{F}$ and sparse signals $\mathbf{x}$.
It guarantees minimising the measurement error
\begin{equation}
\hat{\vx} = \argmin_{\mathbf{x}}\norm{\mathbf{m} - \mathbf{F} \, \mathbf{x} }_2^2
\label{eq:tcs-opt}
\end{equation}
under the constraint that $\mathbf{x}$ is sparse, leads to accurate reconstruction $\hat{\vx} \approx \mathbf{x}$ with high probability \cite{donoho2006compressed,candes2006stable}.
This constrained optimisation problem is computationally intensive --- a price for the measuring process that only requires sparse random projections of the signals \cite{baraniuk2007compressive}.
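As a concrete illustration, the following self-contained Python sketch sets up the classical CS problem and approximately solves eq.~\ref{eq:tcs-opt} with ISTA (iterative soft-thresholding), a standard solver for the $\ell_1$-relaxed problem; the dimensions, noise level and penalty weight are our own illustrative choices, not part of the original formulation.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
D, C, k = 200, 50, 5                      # signal dim, measurements, sparsity

x = np.zeros(D)
x[rng.choice(D, size=k, replace=False)] = rng.normal(size=k)
F = rng.normal(size=(C, D)) / np.sqrt(C)  # random Gaussian measurement matrix
m = F @ x + 0.01 * rng.normal(size=C)     # noisy linear measurement

# ISTA: gradient step on ||m - F x||^2 followed by soft-thresholding,
# which enforces sparsity through an L1 penalty with weight lam.
lam = 0.01
step = 1.0 / np.linalg.norm(F, 2) ** 2    # 1 / Lipschitz constant of gradient
x_hat = np.zeros(D)
for _ in range(500):
    x_hat -= step * F.T @ (F @ x_hat - m)
    x_hat = np.sign(x_hat) * np.maximum(np.abs(x_hat) - step * lam, 0.0)

print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
\end{verbatim}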
\subsection{Compressed Sensing using Generative Models}
\label{sec:csgm}
The requirement of sparsity poses a strong restriction on CS.
Sparse bases, such as Fourier or wavelet bases, only partially relieve this constraint, since they apply only to domains known to be sparse in these bases and cannot adapt to data distributions.
Recently, \citet{bora2017compressed} proposed compressed sensing using generative models (CSGM) to relax this requirement.
This model uses a \emph{pre-trained} deep neural network $G_\theta$ (from a VAE or GAN) as the structural constraint in the place of sparsity.
This generator maps a latent representation $\mathbf{z}$ to the signal space:
\begin{equation}
\mathbf{x} = G_\theta(\mathbf{z})
\end{equation}
Instead of requiring sparse signals, $G_\theta$ implicitly constrains output $\mathbf{x}$ in a low-dimensional manifold via its architecture and the weights adapted from data.
This constraint is sufficient to provide a generalised Set-Restricted Eigenvalue Condition (S-REC) with random matrices, under which low reconstruction error can be achieved with high probability.
A minimisation process similar to that in CS is used for reconstruction:
\begin{align}
\hat{\vz} &= \argmin_{\mathbf{z}}E_\theta(\mathbf{m}, \mathbf{z}) \label{eq:argmin-csg} \\
E_\theta(\mathbf{m}, \mathbf{z}) &= \norm{\mathbf{m} - \mathbf{F} \, G_\theta(\mathbf{z})}^2_2
\label{eq:cs-err}
\end{align}
such that $\hat{\vx} = G_\theta(\mathbf{z})$ is the reconstructed signal.
In contrast to directly optimising the signal $\mathbf{x}$ in CS (eq.\ref{eq:tcs-opt}), here optimisation is in the space of latent representation $\mathbf{z}$.
The $\argmin$ operator in eq.~\ref{eq:argmin-csg} is intractable since $E_\theta$ is highly non-convex.
It is therefore approximated using gradient descent starting from a randomly sampled point $\hat{\vz} \sim p_\mathbf{z}(\mathbf{z})$:
\begin{equation}
\hat{\vz} \leftarrow \hat{\vz} - \alpha \, \frac{\partial E_\theta(\mathbf{m}, \mathbf{z})}{\partial \mathbf{z}} \bigg\rvert_{\mathbf{z}=\hat{\vz}}
\label{eq:z-gd}
\end{equation}
where $\alpha$ is a learning rate; one takes a specified number $T$ of such gradient descent steps.
Typically, hundreds or thousands of gradient descent steps and several re-starts from the initial step are needed to obtain a sufficiently good $\hat{\mathbf{z}}$ \cite{bora2017compressed,bojanowski18a}.
This process is illustrated in Figure~\ref{fig:model}.
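The loop of eqs.~\ref{eq:argmin-csg}--\ref{eq:z-gd} can be sketched in a few lines of PyTorch; the toy generator, the number of steps and the learning rate below are placeholders rather than the settings used in \citet{bora2017compressed}.
\begin{verbatim}
import torch

torch.manual_seed(0)
D, C, Z = 64, 16, 8                        # signal, measurement, latent dims

G = torch.nn.Sequential(                   # stand-in for a pre-trained generator
    torch.nn.Linear(Z, 32), torch.nn.ReLU(), torch.nn.Linear(32, D))
F = torch.randn(C, D) / C ** 0.5           # random measurement matrix

x = G(torch.randn(Z)).detach()             # a "true" signal on G's range
m = F @ x                                  # its measurement

z_hat = torch.randn(Z, requires_grad=True)
opt = torch.optim.SGD([z_hat], lr=0.1)
for _ in range(1000):                      # CSGM needs many such steps
    opt.zero_grad()
    err = ((m - F @ G(z_hat)) ** 2).sum()  # measurement error
    err.backward()
    opt.step()
print("final measurement error:", float(err))
\end{verbatim}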
This work established the connection between compressed sensing and deep neural networks, and demonstrated performance superior to the Lasso \cite{tibshirani1996regression}, especially when the number of measurements is small.
The theoretical properties of CSGM have been more closely examined by \citet{hand2017global}, who also proved stronger convergence guarantees.
More recently, \citet{dhar2018modeling} proposed additional constraints to allow \emph{sparse deviation} from the generative model's support set, thus improving generalisation.
However, CSGM still suffers from two restrictions:
\begin{enumerate}
\item The optimisation for reconstruction is still slow, as it requires thousands of gradient descent steps.
\item It relies on random measurement matrices, which are known to be sub-optimal for highly structured signals such as natural images. Learned measurements can perform significantly better \cite{weiss2007learning}.
\end{enumerate}
\subsection{Model-Agnostic Meta Learning}
\label{sec:maml}
Meta-learning, or learning to learn, allows a model to adapt to new tasks by self-improvement \cite{schmidhuber1987evolutionary}.
Model-Agnostic Meta learning (MAML) provides a general method to adapt parameters for a number of tasks \cite{finn2017model}.
Given a differentiable loss function $\mathcal{L}(\mathcal{T}_i; \theta)$ for task $\mathcal{T}_i$ sampled from the task distribution $\ptask{\mathcal{T}}$, the task-specific parameters are adapted by gradient descent from the initial parameters $\theta$:
\begin{equation}
\theta_i \leftarrow \theta - \alpha \nabla_\theta \mathcal{L}(\mathcal{T}_i; \theta)
\label{eq:inner-opt}
\end{equation}
The initial parameters $\theta$ are trained to minimise the loss across all tasks
\begin{equation}
\min_{\theta} \expt{\mathcal{T}_i \sim \ptask{\mathcal{T}}}{\mathcal{L}(\mathcal{T}_i; \theta_i)}
\label{eq:maml-opt}
\end{equation}
Multiple steps and more sophisticated optimisation algorithms can be used in the place of eq.~\ref{eq:inner-opt}.
Although $\mathcal{L}$ is usually a highly non-convex function, back-propagating through the gradient-descent process means that only a few gradient steps are sufficient to adapt to new tasks.
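A minimal sketch of MAML on toy sine-regression tasks is given below; it assumes PyTorch~$\geq$~2.0 for \texttt{functional\_call}, and the task distribution and network are illustrative only.
\begin{verbatim}
import torch
from torch.func import functional_call

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
meta_opt = torch.optim.Adam(net.parameters(), lr=1e-3)
alpha = 0.01                               # inner-loop step size

def sample_task():                         # a random sine-regression task
    a, p = 1 + 4 * torch.rand(1), 3.14 * torch.rand(1)
    x = 10 * torch.rand(32, 1) - 5
    return x, a * torch.sin(x + p)

for _ in range(1000):
    x, y = sample_task()
    params = dict(net.named_parameters())
    # inner update: one differentiable gradient step on the support half
    inner = ((functional_call(net, params, (x[:16],)) - y[:16]) ** 2).mean()
    grads = torch.autograd.grad(inner, tuple(params.values()),
                                create_graph=True)
    fast = {k: w - alpha * g for (k, w), g in zip(params.items(), grads)}
    # outer update: evaluate the adapted parameters on the query half
    outer = ((functional_call(net, fast, (x[16:],)) - y[16:]) ** 2).mean()
    meta_opt.zero_grad(); outer.backward(); meta_opt.step()
\end{verbatim}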
\subsection{Generative Adversarial Networks}
\label{sec:gan-review}
A Generative Adversarial Network (GAN) trains a parametrised generator $G_\theta$ to fool a discriminator $D_\phi$ that tries to distinguish real data from fake data sampled from the generator \cite{goodfellow2014generative}.
The generator $G_\theta$ is a deterministic function that transforms samples $\mathbf{z}$ from a source $\pz{\mathbf{z}}$ to the same space as the data $\mathbf{x}$, which has the distribution $\pd{\mathbf{x}}$.
This adversarial game can be summarised by the following min-max problem with the value function $V(G_\theta, D_\phi)$:
\begin{equation}
\begin{split}
\min_{G_\theta} \max_{D_\phi} & V(G_\theta, D_\phi) = \expt{\mathbf{x} \sim \pd{\mathbf{x}}} {\ln D_\phi(\mathbf{x})} \\
& \quad+ \expt{\mathbf{z} \sim p_{\mathbf{z}}(\mathbf{z})}{\ln (1 - D_\phi(G_\theta(\mathbf{z})))}
\end{split}
\label{eq:gan_obj}
\end{equation}
GANs are usually difficult to train due to this adversarial game \cite{balduzzi2018mechanics}. Training may either diverge or converge to a bad equilibrium with, for example, collapsed modes, unless extra care is taken in designing and training the model \cite{radford2015unsupervised,salimans2016improved}.
A widely adopted trick is to use $-\ln \left(D(G(\mathbf{z})) \right)$ as the objective for the generator \cite{goodfellow2014generative}.
Compared with eq.~\ref{eq:gan_obj}, this alternative objective avoids saturating the discriminator in the early stage of training when the generator is too weak.
However, this objective voids most theoretical analyses \cite{hu2018on}, since the new adversarial objective is no longer a zero-sum game (eq.~\ref{eq:gan_obj}).
In most GAN models, discriminators become useless after training. Recently, \citet{tao2018b} and \citet{azadi2018discriminator} proposed methods using the discriminator for importance sampling.
Our work provides an alternative: our model moves latent representations to areas more likely to generate realistic images as deemed by the discriminator.
\section{Deep Compressed Sensing}
We start by showing the benefit of combining meta-learning with the model in \citet{bora2017compressed}.
We then generalise measurement matrices to parametrised measurement functions, including deep neural networks.
While previous work relies on random projections as measurement functions, our approach learns measurement functions by imposing the RIP as a training objective.
We then derive two novel models by imposing properties other than the RIP on the measurements, including a GAN model with discriminator-guided latent optimisation, which leads to more stable training dynamics and better results.
\subsection{Compressed Sensing with Meta-Learning}
\label{sec:csml}
We hypothesise that the run-time efficiency and performance of CSGM (\citealt{bora2017compressed}, section \ref{sec:csgm}) can be improved by training the latent optimisation procedure using meta-learning, by back-propagating through the gradient descent steps \cite{finn2017model}.
The latent optimisation procedure for CS models can take hundreds or thousands of steps. By employing meta-learning to optimise this optimisation procedure we aim to achieve similar results with far fewer updates.
To this end, the model parameters, as well as the latent optimisation procedure, are trained to minimise the expected measurement error:
\begin{equation}
\min_{\theta} \, \mathcal{L}_G, \,\, \text{for} \,
\mathcal{L}_G = \expt{\mathbf{x}_i \sim \pd{\mathbf{x}}}{E_\theta(\mathbf{m}_i, \hat{\vz}_i)}
\label{eq:cs-opt}
\end{equation}
where $\hat{\vz}_i$ is obtained from gradient descent (eq.~\ref{eq:z-gd}).
The gradient descent in eq.~\ref{eq:z-gd} and the loss function in eq.~\ref{eq:cs-opt} mirror their counterparts in MAML (eq.~\ref{eq:inner-opt} and \ref{eq:maml-opt}), except that:
\begin{enumerate}
\item Instead of the stochastic gradient computed in the outer loop, here each measurement error $E_\theta$ only depends on a single sample $\mathbf{z}$, so eq.~\ref{eq:z-gd} computes the exact gradient of $E_\theta$.
\item The online optimisation is over latent variables rather than parameters. There are usually much fewer latent variables than parameters, so the update is quicker.
\end{enumerate}
As in MAML, we implicitly perform second-order optimisation by back-propagating through the latent optimisation steps that compute $\hat{\vz}_i$ when optimising eq.~\ref{eq:cs-opt}.
We empirically observed that this dramatically improves the efficiency of latent optimisation, with only 3-5 gradient descent steps being sufficient to improve upon baseline methods.
Unlike~\citet{bora2017compressed}, we also train the generator $G_\theta$.
Merely minimising eq.~\ref{eq:cs-opt} would fail: the generator can exploit $\mathbf{F}$ by mapping all $G_\theta(\mathbf{z})$ into the null space of $\mathbf{F}$.
This trivial solution always gives zero measurement error, but may contain no useful information.
Our solution is to enforce the RIP (eq.~\ref{eq:rip}) via training, by minimising the \emph{measurement loss}:
\begin{equation}
\begin{split}
\mathcal{L}_F &= \expt{\mathbf{x}_1, \mathbf{x}_2}{\left( \norm{\mathbf{F} \, (\mathbf{x}_1 - \mathbf{x}_2) }_2 - \norm{\mathbf{x}_1 - \mathbf{x}_2}_2 \right)^2}
\end{split}
\label{eq:rip-reg}
\end{equation}
$\mathbf{x}_1$ and $\mathbf{x}_2$ can be sampled in various ways.
While the choice is not unique, it is important to sample from both the data distribution $\pd{\mathbf{x}}$ and generated samples $G_\theta(\mathbf{z})$, so that the trained RIP holds for both real and generated data.
In our experiments, we randomly sampled one image from the data and two generated images, taken at the beginning and at the end of latent optimisation, and averaged the losses over the 3 pairs formed by these 3 points, as a form of ``triplet loss''.
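For a linear measurement matrix, the measurement loss of eq.~\ref{eq:rip-reg} with this triplet sampling can be written as in the following sketch; the function and argument names are our own illustrative choices.
\begin{verbatim}
import torch

def rip_loss(F, x_real, x_gen0, x_genT):
    """Relaxed RIP loss: one real batch and two generated batches
    (taken at the start and end of latent optimisation) give three
    pairs, whose losses are averaged -- the 'triplet' described above."""
    def term(a, b):
        d = (a - b).flatten(1)                    # pairwise differences
        proj = d @ F.t()                          # F (x1 - x2)
        return ((proj.norm(dim=1) - d.norm(dim=1)) ** 2).mean()
    return (term(x_real, x_gen0) + term(x_real, x_genT)
            + term(x_gen0, x_genT)) / 3.0
\end{verbatim}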
Our algorithm is summarised in Algorithm \ref{alg:m-cs}.
Since Algorithm \ref{alg:m-cs} still uses a random measurement matrix $\mathbf{F}$, it can be used as any other CS algorithm when ground truth reconstructions are available for training the generator.
\begin{algorithm}[tb]
\caption{Compressed Sensing with Meta Learning}
\begin{algorithmic}
\STATE {\bfseries Input:} minibatches of data $\{\mathbf{x}_i\}_{i=1}^N$, random matrix $\mathbf{F}$, generator $G_\theta$, learning rate $\alpha$, number of latent optimisation steps $T$
\STATE Initialize generator parameters $\theta$
\REPEAT
\FOR{$i=1$ {\bfseries to} $N$}
\STATE Measure the signal $\mathbf{m}_i \leftarrow \mathbf{F} \, \mathbf{x}_i$
\STATE Sample $\hat{\vz}_i \sim p_\mathbf{z}(\mathbf{z})$
\FOR{$t=1$ {\bfseries to} $T$}
\STATE Optimise $\hat{\vz}_i \leftarrow \hat{\vz}_i - \alpha \, \frac{\partial}{\partial \mathbf{z}} E_\theta(\mathbf{m}_i, \hat{\vz}_i)$
\ENDFOR
\ENDFOR
\STATE $\mathcal{L}_G = \frac{1}{N} \sum_{i=1}^N E_\theta(\mathbf{m}_i, \hat{\vz}_i)$
\STATE Compute $\mathcal{L}_F$ using eq.~\ref{eq:rip-reg}
\STATE Update $\theta \leftarrow \theta - \frac{\partial}{\partial \theta} (\mathcal{L}_G + \mathcal{L}_F)$
\UNTIL{reaches the maximum training steps}
\end{algorithmic}
\label{alg:m-cs}
\end{algorithm}
\subsection{Deep Compressed Sensing with Learned Measurement Function}
In Algorithm \ref{alg:m-cs}, we use the RIP property to train the generator. We can use the same approach and enforce the RIP property to learn the measurement function $\mathbf{F}$ itself, rather than using a random projection.
\subsubsection{Learning Measurement Function}
\label{sec:dcs_vanilla}
We start by generalising the measurement matrix $\mathbf{F}$ (eq.~\ref{eq:cs}), and define a parametrised measurement function $\mathbf{m} \leftarrow F_\phi(\mathbf{x})$.
The model introduced in the previous section corresponds to a linear function $F_\phi(\mathbf{x}) = \mathbf{F} \, \mathbf{x}$; now both $F_\phi$ and $G_\theta$ can be deep neural networks.
Similar to CS, the central problem in this generalised setting is inverting the measurement function to recover the signal $\mathbf{x} \leftarrow F_\phi^{-1}(\mathbf{m})$ via minimising the measurement error similar to eq.~\ref{eq:cs-err}:
\begin{equation}
E_\theta(\mathbf{m}, \mathbf{z}) = \norm{\mathbf{m} - F_\phi \left(G_\theta (\mathbf{z}) \right)}^2_2
\label{eq:dcs-err}
\end{equation}
The distance preserving property as a counterpart of the RIP can be enforced by minimising a loss similar to eq.~\ref{eq:rip-reg}:
\begin{equation}
\mathcal{L}_F = \expt{\mathbf{x}_1, \mathbf{x}_2}{ \left( \norm{F_\phi(\mathbf{x}_1 - \mathbf{x}_2) }_2 - \norm{\mathbf{x}_1 - \mathbf{x}_2}_2 \right)^2}
\label{eq:f-rip-reg}
\end{equation}
Minimising $\mathcal{L}_F$ provides a relaxation of the constraint specified by the RIP (eq.~\ref{eq:rip}).
When $\mathcal{L}_F$ is small, the projection from $F_\phi$ better preserves the distance between $\mathbf{x}_1$ and $\mathbf{x}_2$.
This relaxation enables us to transform the RIP into a training objective for the measurements, which can then be integrated into training other model components. Empirically, we found this relaxation leads to high quality reconstruction.
The rest of the algorithm is identical to Algorithm \ref{alg:m-cs}, except that we also update the measurement function's parameters $\phi$.
Consequently, different schemes can be employed to coordinate updating $\theta$ and $\phi$, which will be discussed more in section \ref{sec:train}.
This extended algorithm is summarised in Algorithm \ref{alg:dcs}.
We call it Deep Compressed Sensing (DCS) to emphasise that both the measurement and reconstruction can be deep neural networks.
Next, we turn to generalising the measurements to properties other than the RIP.
\begin{algorithm}[tb]
\caption{Deep Compressed Sensing}
\begin{algorithmic}
\STATE {\bfseries Input:} minibatches of data $\{\mathbf{x}_i\}_{i=1}^N$, measurement function $F_\phi$, generator $G_\theta$, learning rate $\alpha$, number of latent optimisation steps $T$
\STATE Initialize parameters $\theta$ and $\phi$
\REPEAT
\FOR{$i=1$ {\bfseries to} $N$}
\STATE Measure the signal $\mathbf{m}_i \leftarrow F_\phi(\mathbf{x}_i)$
\STATE Sample $\hat{\vz}_i \sim p_\mathbf{z}(\mathbf{z})$
\FOR{$t=1$ {\bfseries to} $T$}
\STATE Optimise $\hat{\vz}_i \leftarrow \hat{\vz}_i - \alpha \, \frac{\partial}{\partial \mathbf{z}} E_\theta(\mathbf{m}_i, \hat{\vz}_i)$
\ENDFOR
\ENDFOR
\STATE $\mathcal{L}_G = \frac{1}{N} \sum_{i=1}^N E_\theta(\mathbf{m}_i, \hat{\vz}_i)$
\STATE Compute $\mathcal{L}_F$ using eq.~\ref{eq:f-rip-reg}
\STATE Option 1 : joint update $\theta \leftarrow \theta - \frac{\partial}{\partial \theta} (\mathcal{L}_G + \mathcal{L}_F)$
\STATE Option 2 : alternating update
\STATE \hspace{2cm} $\theta \leftarrow \theta - \frac{\partial}{\partial \theta} \mathcal{L}_G \qquad
\phi \leftarrow \phi - \frac{\partial}{\partial \phi} \mathcal{L}_F
$
\UNTIL{reaches the maximum training steps}
\end{algorithmic}
\label{alg:dcs}
\end{algorithm}
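A condensed PyTorch transcription of one joint-update step of Algorithm \ref{alg:dcs} (option 1) is sketched below; the optimiser, batching, renormalisation of $\hat{\vz}$ (see section \ref{sec:train}), the detaching choices and all names are our own, and the distance-preserving loss follows eq.~\ref{eq:f-rip-reg} with the triplet sampling described earlier.
\begin{verbatim}
import torch

def dcs_train_step(G, F_net, optimiser, x, z_dim, T=3, alpha=0.1):
    """One joint-update step of Algorithm 2 (option 1); names are ours."""
    m = F_net(x)                                # measure the signals
    z = torch.randn(x.size(0), z_dim, requires_grad=True)
    x_start = G(z).detach()                     # sample before optimisation
    for _ in range(T):                          # meta-learned latent steps
        err = ((m - F_net(G(z))) ** 2).sum(dim=1).mean()
        grad, = torch.autograd.grad(err, z, create_graph=True)
        z = z - alpha * grad                    # differentiable, as in MAML
        z = z / z.norm(dim=1, keepdim=True)     # renormalise the latents
    x_end = G(z)
    L_G = ((m - F_net(x_end)) ** 2).sum(dim=1).mean()

    def rip(a, b):                              # distance-preserving loss
        d = (a - b).flatten(1).norm(dim=1)
        p = F_net(a - b).flatten(1).norm(dim=1) # F applied to the difference
        return ((p - d) ** 2).mean()

    x_end_d = x_end.detach()
    L_F = (rip(x, x_start) + rip(x, x_end_d) + rip(x_start, x_end_d)) / 3
    optimiser.zero_grad(); (L_G + L_F).backward(); optimiser.step()
    return float(L_G), float(L_F)
\end{verbatim}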
\subsubsection{Generalised CS 1: CS-GAN}
\label{sec:dcs-gan}
Here we consider an extreme case: a \emph{one-dimensional} measurement that only encodes how likely an input is to be a real data point rather than a fake one sampled from the generator.
One way to formulate this is to train the measurement function $F_\phi$ using the following loss instead of eq.~\ref{eq:f-rip-reg}:
\begin{equation}
\mathcal{L}_F =
\begin{cases}
\norm{F_\phi(\mathbf{x}) - 1}_2^2 & \mathbf{x} \sim p_\text{data}(\mathbf{x}) \\
\norm{F_\phi(\hat{\mathbf{x}})}_2^2 & \hat{\mathbf{x}} \sim G_\theta(\hat{\mathbf{z}}), \forall \hat{\mathbf{z}}
\end{cases}
\label{eq:lsgan_lf}
\end{equation}
Algorithm \ref{alg:dcs} then becomes the Least Squares Generative Adversarial Network (LSGAN, \citealp{mao2017least}) with latent optimisation; the two are exactly equivalent when latent optimisation is disabled ($T=0$).
LSGAN is an alternative to the original GAN \cite{goodfellow2014generative} that can be motivated from Pearson $\chi^2$ Divergence.
To demonstrate a closer connection with original GANs \cite{goodfellow2014generative}, we instead focus on another formulation whose measurement function is a binary classifier (the discriminator).
This is realised by using a binary classifier $D_\phi$ as the measurement function, where we can interpret $D_\phi(\mathbf{x})$ as the probability that $\mathbf{x}$ comes from the dataset. In this case, the measurement function is equivalent to the \emph{discriminator} in GANs.
Consequently, we change the squared loss in eq.~\ref{eq:dcs-err} to the matching cross-entropy measurement loss \cite{bishop2006pattern} (ignoring the expectation over $\mathbf{x}$ for brevity):
\begin{equation}
\mathcal{L}_F = - t(\mathbf{x}) \, \ln \left[ D_\phi(\mathbf{x}) \right] - (1-t(\mathbf{x})) \, \ln \left[ 1 - D_\phi(\mathbf{x}) \right]
\label{eq:csgan_reg}
\end{equation}
where the binary scalar $t(\mathbf{x})$ is an indicator function that identifies whether $\mathbf{x}$ is a real data point:
\begin{equation}
t(\mathbf{x}) =
\begin{cases}
1 & \mathbf{x} \sim p_\text{data}(\mathbf{x}) \\
0 & \mathbf{x} \sim G_\theta(\mathbf{z}), \forall \mathbf{z}
\end{cases}
\label{eq:gan-indicator}
\end{equation}
Similarly, a cross-entropy measurement error is employed to quantify the discrepancy between $D_\phi (G_\theta (\mathbf{z}))$ and the scalar measurement $m = D_\phi(\mathbf{x})$:
\begin{equation}
\begin{split}
E_\theta(m, \mathbf{z}) &= -m \, \ln\left[D_\phi (G_\theta (\mathbf{z})) \right] \\
& \quad - (1-m) \, \ln \left[ 1 - D_\phi (G_\theta (\mathbf{z})) \right]
\end{split}
\label{eq:csgan-err-0}
\end{equation}
At the minimum of $\mathcal{L}_F = 0$ (eq.~\ref{eq:csgan_reg}), the optimal measurement function is achieved by the perfect classifier:
\begin{equation}
D_\phi(\mathbf{x}) =
\begin{cases}
1 & \mathbf{x} \sim p_\text{data}(\mathbf{x}) \\
0 & \mathbf{x} \sim G_\theta(\mathbf{z}), \forall \mathbf{z}
\end{cases}
\end{equation}
We can therefore simplify eq.~\ref{eq:csgan-err-0} by replacing $m$ with its target value $1$ as in teacher-forcing \cite{williams1989learning}:
\begin{equation}
E(m, \mathbf{z}) = -\ln\left[ D_\phi (G_\theta (\mathbf{z})) \right]
\label{eq:csgan-err}
\end{equation}
This objective recovers the vanilla GAN formulation with the commonly used non-saturating loss~\cite{goodfellow2014generative}, which we have derived as a measurement error. When latent optimisation is disabled ($T=0$), Algorithm \ref{alg:dcs} is identical to a vanilla GAN.
In our experiments (section \ref{sec:exp-gan}), we observed that the additional latent optimisation steps introduced from the CS perspective significantly improved GAN training.
We conjecture this is because latent optimisation moves the representation to areas more likely to generate realistic images as deemed by the discriminator.
Since the gradient descent process remains local, the latent representations are still spread broadly in latent space, which avoids mode collapse.
Although a sufficiently powerful generator $G_\theta$ can transform the source $\pz{\mathbf{z}}$ into an arbitrarily complex distribution, a more informative source, implicitly manifested in the optimised $\mathbf{z}$, may significantly reduce the complexity required of $G_\theta$, thus striking a better trade-off in terms of the overall computation.
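For concreteness, one alternating-update step of the CS-GAN can be sketched as follows, assuming the discriminator outputs probabilities; the $10^{-8}$ stabiliser, optimisers and names are ours, the latent steps descend the measurement error of eq.~\ref{eq:csgan-err}, and the transport cost anticipates eq.~\ref{eq:reg-transport}.
\begin{verbatim}
import torch

def cs_gan_step(G, D, opt_g, opt_d, x, z_dim, T=3, alpha=0.1, beta=3.0):
    """One alternating update; D is assumed to output probabilities."""
    z0 = torch.randn(x.size(0), z_dim)
    z = z0.clone().requires_grad_(True)
    for _ in range(T):                           # discriminator-guided steps
        err = -torch.log(D(G(z)) + 1e-8).mean()  # measurement error
        grad, = torch.autograd.grad(err, z, create_graph=True)
        z = z - alpha * grad
        z = z / z.norm(dim=1, keepdim=True)      # renormalise the latents
    x_fake = G(z)
    # generator: measurement error plus optimal-transport regulariser
    L_G = (-torch.log(D(x_fake) + 1e-8).mean()
           + beta * ((z - z0) ** 2).sum(dim=1).mean())
    opt_g.zero_grad(); L_G.backward(); opt_g.step()
    # discriminator: binary cross-entropy on real vs optimised fakes
    L_D = (-torch.log(D(x) + 1e-8).mean()
           - torch.log(1 - D(x_fake.detach()) + 1e-8).mean())
    opt_d.zero_grad(); L_D.backward(); opt_d.step()
\end{verbatim}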
\subsubsection{Generalised CS 2: Semi-supervised GANs}
\label{sec:dcs_class}
So far, we have shown two extreme cases of Deep Compressed Sensing: in one case, the distance preserving measurements (section \ref{sec:dcs_vanilla}) essentially encode all information for recovering the original signals;
on the other hand, the CS-GAN (section \ref{sec:dcs-gan}) has one-dimensional measurements that only indicates whether signals are real or fake.
We now seek a middle ground, by using measurements that preserve class information for labelled data.
We generalise CS-GAN by replacing the binary classifier (discriminator) $D_\phi$ with a multi-class classifier $C_\phi$.
For data with $K$ classes, this classifier outputs $K+1$ class probabilities, with the $(K+1)$-th class reserved for ``fake'' data that comes from the generator.
This specification is the same as the classifier used in semi-supervised GANs (SGANs, \citet{salimans2016improved}).
Consequently, we extend the binary indicator function in eq.~\ref{eq:gan-indicator} to a multi-class indicator, so that its $k$-th element $t^k(\mathbf{x}) = 1$ when $\mathbf{x}$ is in class $k$.
The $k$-th output of the classifier, $C_\phi^k(\mathbf{x})$, indicates the predicted probability that $\mathbf{x}$ is in the $k$-th class, and the multi-class cross-entropy is used for the measurement loss and measurement error:
\begin{align}
\mathcal{L}_F &= -\sum_{k=1}^{K+1} t^k(\mathbf{x}) \, \ln \left[ C_\phi^k(\mathbf{x}) \right] \label{eq:gc-reg}\\
E(\mathbf{m}, \mathbf{z}) &= -\sum_{k=1}^{K+1} t^k(\mathbf{x}) \, \ln \left[ C_\phi^k(G_\theta(\mathbf{z})) \right]
\end{align}
When latent optimisation is disabled ($T = 0$), the model is similar to other semi-supervised GANs \cite{salimans2016improved,acgan}.
However, when $T > 0$ the online optimisation moves latent representations towards regions representing particular classes. This provides a novel way of training conditional GANs.
Compared with conditional GANs which concatenate labels to latent variables \cite{mirza2014conditional}, optimising latent variables is more adaptive and uses information from the entire model.
Compared with Batch-Norm based methods \cite{miyato2018cgans}, the information for conditioning is presented in the target measurements, and does not need to be trained as Batch-Norm statistics \cite{ioffe2015batch}.
Since both of these methods use separate sources (label inputs or batch statistics) to provide the condition, their latent variables tend to retain no information about the condition.
Our model, on the other hand, distils the conditioning information into the latent representation, which results in a semantically meaningful latent space (Figure~\ref{fig:samples-gc}).
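A hedged sketch of the class-preserving losses (eq.~\ref{eq:gc-reg} and the accompanying measurement error) is given below; the random choice of target classes for the generator and all names are our own illustrative assumptions.
\begin{verbatim}
import torch
import torch.nn.functional as nnf

def sgan_losses(C, G, x, labels, z, num_classes):
    """Class-preserving losses; index `num_classes` is the fake class."""
    fake = torch.full((z.size(0),), num_classes, dtype=torch.long)
    L_F = (nnf.cross_entropy(C(x), labels)          # real data: true classes
           + nnf.cross_entropy(C(G(z).detach()), fake))  # generated: fake
    target = torch.randint(0, num_classes, (z.size(0),))  # requested classes
    E = nnf.cross_entropy(C(G(z)), target)          # measurement error for G
    return L_F, E
\end{verbatim}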
\begin{table}[t]
\caption{A family of DCS models differentiated by the properties of measurements in comparison with CS. The CS measurement matrix does not need training, so it does not have a training loss.}
\label{tab:sum}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lcc}
\toprule
Model & Property & Loss \\
\midrule
CS & RIP & N/A \\
DCS & trained RIP & eq.~\ref{eq:f-rip-reg} \\
CS-GAN & validity preserving & eq.~\ref{eq:csgan_reg} \\
CS-SGAN & class preserving & eq.~\ref{eq:gc-reg} \\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table}
\subsection{Optimising Models}
\label{sec:train}
The three models we derived as examples in the DCS framework are summarised in Table \ref{tab:sum} along side CS.
The main difference between them lies in the training objective used for the measurement functions, $\mathcal{L}_F$.
Once $\mathcal{L}_F$ is specified, the generator objective $\mathcal{L}_G$, in the form of a measurement error, follows suit.
When $\mathcal{L}_F$ and $\mathcal{L}_G$ are adversarial, such as in the CS-GAN, $F_\phi$ and $G_\theta$ need to be optimised separately as in GANs.
This is implemented as the alternating update option in Algorithm \ref{alg:dcs}.
In optimising the latent variables (eq.~\ref{eq:z-gd}), we normalise $\hat{\vz}$ after each gradient descent step, as in \cite{bojanowski18a}.
We treat the step size $\alpha$ in latent optimisation as a parameter and back-propagate through it in optimising the model loss function.
An additional technique we found useful in stabilising CS-GAN training is to penalise the distance $\mathbf{z}$ moves as an optimisation cost and add it to $\mathcal{L}_G$:
\begin{equation}
\mathcal{L}_O = \beta \cdot \norm{\hat{\vz} - \mathbf{z}_0}_2^2
\label{eq:reg-transport}
\end{equation}
where $\beta$ is a scalar controlling the strength of this regulariser.
This regulariser encourages small moves of $\mathbf{z}$ in optimisation, and can be interpreted as approximating an \emph{optimal transport} cost \cite{villani2008optimal}.
We found that values of $\beta$ between $1.0$ and $10.0$ made little difference in training, and used $\beta=3.0$ in our experiments with CS-GAN.
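These two ingredients, the learned step size and the transport cost, amount to only a few lines; in the sketch below the log-scale parametrisation of $\alpha$ is our own assumption (it simply keeps the step size positive), and \texttt{err\_fn} stands for whichever measurement error the model uses.
\begin{verbatim}
import torch

log_alpha = torch.nn.Parameter(torch.zeros(()))  # learned step size
beta = 3.0                                       # transport-cost weight

def latent_step(err_fn, z, z0):
    """One latent step with a learned step size and transport cost."""
    grad, = torch.autograd.grad(err_fn(z), z, create_graph=True)
    z = z - torch.exp(log_alpha) * grad          # back-propagates through alpha
    z = z / z.norm(dim=-1, keepdim=True)         # keep z on the unit sphere
    transport = beta * ((z - z0) ** 2).sum(dim=-1).mean()
    return z, transport
\end{verbatim}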
\section{Experiments}
\subsection{Deep Compressed Sensing for Reconstruction}
\label{sec:exp-dcs}
We first evaluate the DCS model using the MNIST \cite{yann1998mnist} and CelebA \cite{liu2015faceattributes} datasets.
To compare with the approach in \citet{bora2017compressed}, we used the same generators as in their model.
For the measurement functions, we considered both linear projections and neural networks, in random as well as trained variants, while the generator was always trained jointly with the latent optimisation process.
Unless otherwise specified, we use 3 gradient descent steps for latent optimisation. More details, including hyperparameter values, are reported in the Appendix. Our code will be available at \url{https://github.com/deepmind/deep-compressed-sensing}.
Tables \ref{tab:recons-mnist} and \ref{tab:recons-celeba} summarise the results from our models as well as the baseline model from \citet{bora2017compressed}.
The reconstruction loss for the baseline model is estimated from Figure 1 in \citet{bora2017compressed}.
DCS performs significantly better than the baseline.
In addition, while the baseline model used hundreds or thousands of gradient-descent steps with several re-starts, we only used 3 steps without any re-starting, achieving orders of magnitude higher efficiency.
Interestingly, for fixed $F$, random linear projections outperformed neural networks as measurement functions on both datasets and across different neural network structures (rows 2 and 3 of Tables \ref{tab:recons-mnist} and \ref{tab:recons-celeba}).
This empirical result is consistent with the optimality of random projections described in the compressed sensing literature and the more general Johnson-Lindenstrauss lemma \cite{donoho2006compressed,candes2006stable,johnson1984extensions}.
The advantage of neural networks manifested itself when $F_\phi$ was optimised; this variant reached the best performance in all scenarios.
Consistent with the argument of \cite{weiss2007learning}, we observed that random projections are sub-optimal for highly structured signals such as images, as seen in the improved performance when the measurement matrices were optimised (row 4 of Tables \ref{tab:recons-mnist} and \ref{tab:recons-celeba}).
The reconstruction performance further improved when the linear measurement projections were replaced by neural networks (row 5 of Tables \ref{tab:recons-mnist} and \ref{tab:recons-celeba}).
Examples of reconstructed MNIST images from different models are shown in Figure \ref{fig:recons}.
Unlike autoencoder-based methods, our models were not trained with any pixel reconstruction loss, which we only use for testing. Despite this, our results are comparable with the recently proposed ``Uncertainty Autoencoders'' \cite{grover2018uncertainty}. We obtain worse MNIST reconstructions: with 10 and 25 measurements, our best model achieved per-image reconstruction errors of 5.3 and 3.4, compared with their 3.8 and 2.5 (estimated from their figure 4). However, we achieved better CelebA results: with 20 and 50 measurements, we obtain errors of 23.4 and 18.5, compared with their 27 and 22 (estimated from their Figure 6).
\begin{table}[t]
\caption{Reconstruction loss on MNIST test data using different measurement functions. All rows except the first are from our models. ``$\pm$'' shows the standard deviation across test samples. (L) indicates learned measurement functions. Lower is better.}
\label{tab:recons-mnist}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{l|ccc}
\toprule
Model & 10 meas. & 25 meas. & steps \\
\midrule
Baseline & $54.8$ & $17.2$ & $10 \times 1000$\\
Linear& $10.8 \pm 3.8$ & $6.9 \pm 2.7$ & 3 \\
NN & $12.5 \pm 2.2$ & $10.2 \pm 1.7$ & 3\\
Linear(L)& $6.5 \pm 2.1$ & $4.0 \pm 1.4$ & 3\\
NN(L) & $\mathbf{5.3 \pm 1.9}$ & $\mathbf{3.4 \pm 1.2}$ & 3\\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table}
\begin{table}[t]
\caption{Reconstruction loss on CelebA test data using different measurement functions. All rows except the first are from our models. ``$\pm$'' shows the standard deviation across test samples. (L) indicates learned measurement functions. Lower is better.}
\label{tab:recons-celeba}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{l|ccc}
\toprule
Model & 20 meas. & 50 meas. & steps \\
\midrule
Baseline & $156.8$ & $82.3$ & $2\times 500$ \\
Linear & $34.7 \pm 7.9$ & $27.1 \pm 6.1 $ & 3\\
NN & $46.1 \pm 8.9$ & $41.2 \pm 8.3$ & 3\\
Linear(L) & $26.2 \pm 5.9$ & $20.5 \pm 4.3$ & 3\\
NN(L) & $\mathbf{23.4 \pm 5.8}$ & $\mathbf{18.5 \pm 4.3}$ & 3\\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.2\textwidth]{figures/mnist_recons.png}
\caption{Reconstructions using 10 measurements from random linear projection (top), trained linear projection (middle), and trained neural network (bottom).}
\label{fig:recons}
\end{figure}
\subsection{CS-GANs}
\label{sec:exp-gan}
To evaluate our proposed CS-GANs, we first trained a small model on MNIST to demonstrate intuitively the advantage of latent optimisation.
For quantitative evaluation, we trained larger and more standard models on CIFAR10 \cite{krizhevsky2009learning}, and evaluate them using the Inception Score (IS) \cite{salimans2016improved} and Fréchet Inception Distance (FID) \cite{heusel2017gans}.
To our knowledge, latent optimisation has not previously been used to improve GANs, so our approach is orthogonal to existing methods such as \citet{arjovsky2017wasserstein,miyato2018spectral}.
We first compare our model with vanilla GANs, which is a special case of the CS-GAN (see section \ref{sec:dcs-gan}).
We use the same MNIST architectures as in section \ref{sec:exp-dcs}, but changed the measurement function to a GAN discriminator (section \ref{sec:dcs-gan}).
We use the alternating update option in Algorithm \ref{alg:dcs} in this setting.
All other hyper-parameters are the same as in previous experiments.
We use this relatively weak model to reveal failure modes as well as advantages of the CS-GAN.
Figure \ref{fig:samples-csgan-mnist} shows samples from models with the same setting but different latent optimisation iterations.
The three panels show samples from models using 0, 3 and 5 gradient descent steps, respectively; the model using 0 steps is equivalent to a vanilla GAN.
Optimising latent variables exhibits no mode collapse, one of the common failure modes of GAN training.
\begin{figure}
\centering
\includegraphics[width=0.42\textwidth]{figures/GAN_mnist.png}
\caption{Samples from CS-GANs using 0 (left), 3 (central) and 5 (right) gradient descent steps in latent optimisation. The CS-GAN using 0 step was equivalent to a vanilla GAN.}
\label{fig:samples-csgan-mnist}
\end{figure}
To confirm this advantage, we evaluate our method more systematically across a set of 144 hyper-parameter configurations (similar to \citet{kurach2018gan}). We use the CIFAR dataset, which contains various categories of natural images, whose features from an Inception Network \cite{ioffe2015batch} are meaningful for evaluating the IS and FID.
Other than the number of gradient descent steps (0 vs.~3) the model architectures and training procedures were identical.
The evolution of IS and FID during training is plotted in Figure \ref{fig:csgan-curves}.
CS-GANs achieved better performance in both IS and FID, with lower variance across the range of hyper-parameters.
The blue horizontal lines at the bottom of Fig.~\ref{fig:csgan-curves} (left) and at the top of Fig.~\ref{fig:csgan-curves} (right) show failed vanilla GANs; none of the CS-GANs diverged in training.
\begin{table}[t]
\caption{Comparison with Spectral Normalised GANs.}
\label{tab:sn-table}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{l|ccc}
\toprule
& SN-GAN & SN-GAN (ours) & CS+SN-GAN \\
\midrule
IS & $7.42 \pm 0.08$ & $7.34 \pm 0.07$ & $\mathbf{7.80 \pm 0.05}$\\
FID & $29.3$ & $29.53 \pm 0.36$ & $\mathbf{23.13 \pm 0.50}$\\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table}
We also applied our latent optimisation method on Spectral-Normalised GANs (SN-GANs) \cite{miyato2018spectral}, which use Batch Normalisation \cite{ioffe2015batch} for the generator and Spectral Normalisation for the discriminator.
We compare our model with SN-GAN in Table \ref{tab:sn-table}: the SN-GAN column reproduces the numbers from \cite{miyato2018spectral}, and the next column gives numbers from our replication of the same baseline.
Our results demonstrate that deeper architectures, Batch Normalisation and Spectral Normalisation can further improve CS-GAN and that CS-GAN can improve upon a competitive baseline, SN-GAN.
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[width=0.45\columnwidth]{figures/vanilla_is.png} &
\includegraphics[width=0.45\columnwidth]{figures/vanilla_fid.png}
\end{tabular}
\caption{Inception Score (higher is better) and FID (lower is better) during CIFAR training.}
\label{fig:csgan-curves}
\end{figure}
\subsection{CS-SGANs}
We now experimentally assess our approach of using latent optimisation in semi-supervised GANs, the CS-SGAN.
We illustrate this extension with the MNIST dataset, and leave it to future work to study other applications.
We keep all the hyper-parameters the same as in section \ref{sec:exp-gan}, except that the number of measurements is changed to 11, corresponding to the 10 MNIST classes plus 1 class reserved for generated samples. Samples from CS-SGAN can be seen in Figure \ref{fig:samples-gc} (left).
Figure~\ref{fig:samples-gc} (right) illustrates the structure of the latent space with t-SNE~\cite{maaten2008visualizing}, computed from 2000 random samples, where class labels are colour-coded.
The latent space forms separated regions representing different digits.
It is impossible to obtain such a clustered latent space in typical conditional GANs \cite{mirza2014conditional,miyato2018cgans}, where labels are supplied as separate inputs while the random source only provides label-independent variations.
In contrast, in our model the labels are distilled into the latent representation via optimisation, leading to a more interpretable latent space.
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[width=0.23\textwidth]{figures/gc_samples_all.png} &
\includegraphics[width=0.21\textwidth]{figures/latent_tsne}
\end{tabular}
\caption{Left: samples from the generative classifier. Right: t-SNE illustration of the generator's latent space, with class labels colour-coded.}
\label{fig:samples-gc}
\end{figure}
\section{Discussion}
We present a novel framework for combining compressed sensing and deep neural networks.
In this framework we trained both the measurement and generation functions, as well as the latent optimisation (i.e., reconstruction) procedure itself via meta-learning.
Inspired by \citet{bora2017compressed}, our approach significantly improves upon the reconstruction performance and speed obtained in that work.
In addition, we derived a family of models, including a novel GAN model, by expanding the set of properties we consider for the measurement function (Table~\ref{tab:sum}).
Our method differs from existing algorithms that aim to combine compressed sensing with deep networks in that our approach preserves the online minimisation of measurement errors in generic neural networks.
Previous attempts that combine CS and deep learning generally fall into two categories.
One category of methods interprets taking compressed measurements and reconstructing from these measurements as an encoding-decoding problem and formulate the model as an autoencoder \citep{mousavi2015deep, kulkarni2016reconnet, mousavi2017deepcodec, grover2018uncertainty, mousavi2018data,lu2018convcsnet}.
Another category of methods are designed to mimic principled iterative CS algorithms using specialised network architectures \citep{metzler2017learned,sun2016deep}.
In contrast, our framework maintains the separation of measurements from generation but still uses generic neural networks.
Therefore, both the measurements and latent representation of the generator can be flexibly optimised for different, even adversarial, objectives, while taking advantage of powerful neural network architectures.
Moreover, we can train measurement functions with properties that are difficult or impossible to obtain from random or hand-crafted projections, thus broadening the range of problems that can be solved by minimising measurement errors online.
In other words, learning the measurement can be used as a useful stepping stone for learning complex tasks where the cost function is difficult to design directly.
Our approach can also be interpreted as training implicit generative models, where explicit minimisation of divergences is replaced by statistical tests \cite{mohamed2016learning}.
We have illustrated this idea in the context of relatively simple tasks, but anticipate that complex tasks such as style transfer \cite{zhu2017unpaired}, as well as applications in areas that have already seen uses of CS, including MRI \cite{lustig2007sparse} and unsupervised anomaly detection \cite{schlegl2017unsupervised}, may further benefit from our approach.
\subsubsection*{Acknowledgments}
We thank Shakir Mohamed and Jonathan Hunt for insightful discussions. We also appreciate the feedback from the anonymous reviewers.
\section{Introduction}
\label{sect:introduction}
Complex networks are frequently used to describe evolving, constantly changing systems that consist of elements and complicated functional relations between them. Simplified models have been created to study and compare certain network properties; one example is random Boolean networks (RBNs). RBNs are generic \cite{gershenson04}, which is why they have been applied in many different fields.
Gene regulatory networks (GRNs) are models describing the structure and behavior of the transcriptional network responsible for regulating gene expression in a cell \cite{costa08, mnw}. GRNs are both robust and stochastic: they must respond to diverse stimuli and they are highly modular, as exemplified by the hierarchical regulatory interactions in yeast transcriptional networks. Analysis of GRNs has provided evidence that diverse stimuli may cause modifications of gene interactions and network topology. During GRN evolution, edges are created and destroyed, and in this way the gene expression is altered.
In \cite{mlb} Liu and Bassler proposed an RBN modification in which the network topology is time-dependent and the observed coevolution encompasses some effects noticed in GRNs. Here, we extend the original Liu and Bassler model, most notably by incorporating the concept of hierarchy. In our approach there are no robust genes; instead, a part of the network edges is assumed to be robust, i.e. these edges stay unchanged during the system's evolution.
The paper is organised as follows: in section \ref{sect:model} we describe our model, including the algorithm of network evolution (section \ref{sect:algorithm}). Section \ref{sect:information} presents the applied information measures. Section \ref{sect:simulations} contains the simulation description, results and discussion. Finally, in section \ref{sect:conclusions} we conclude our studies.
\section{Model of Hierarchical Adaptive Random Boolean Network (HARBN)}
\label{sect:model}
Let us consider a system of coupled nodes in the form of a directed network that corresponds to a GRN. For simplicity we will assume that the internal node variables $\sigma_n(t)$ take values $0$ or $1$ and evolve in time according to randomly chosen Boolean functions $f_n$. The arguments of the function $f_n$ are the internal variables $\sigma_m(t)$ of all nodes $m$ such that there is a connection from node $m$ to node $n$.
During simulations of an adaptive RBN (ARBN), changes of the network state and topology are measured. A simulation consists of an {\it a priori} defined number of epochs. In each epoch the network's attractor is found (i.e. a periodic orbit of the variables $\sigma_n(t)$ that is reached by the system after a transient time), one node is chosen at random, and according to the Activity-Dependent Rewiring Rule (ADRR) (described in section \ref{sect:rule}) the number of incoming connections to this node is changed.
We extend Liu and Bassler's model in two ways. First, we consider a small number of ARBNs, called subnetworks. Different subnetworks are sparsely connected by directed edges, called interlinks. We build our model in such a way that the number of interlinks is smaller than the number of edges within the subnetworks. The created network is therefore hierarchical in terms of connection density, because certain groups of nodes are much more densely connected in internal blocks (subnetworks) than in the rest of the network. During a single epoch the network's attractor is found and for each subnetwork a node is chosen randomly. Connections of such a node are changed according to the ADRR. New edges may arise only between nodes in the same subnetwork. The interlinks are permanent edges: they are created at the beginning of each simulation and they cannot be changed. Secondly, we alter the ADRR by introducing a resilience parameter $\alpha$ (see section \ref{sect:rule}); the original ADRR had no resilience parameter and corresponds to $\alpha = 0$ in our approach.
\subsection{Activity-Dependent Rewiring Rule}
\label{sect:rule}
Let us define the network resilience parameter, which decides on the type of changes in network topology, as $\alpha \in [0, 0.5]$.
If the node's mean state $\left\langle \sigma_n\right\rangle$ during the attractor is at most $\alpha$ or at least $(1-\alpha)$, the node is considered frozen and one new incoming edge is added to it. The new edge starts at a node chosen at random from the nodes of the same subnetwork that do not already link to the considered node. Otherwise, the node is considered active and one of its incoming edges is chosen at random and deleted.
For non-zero values of $\alpha$ the original ADRR is thus extended: nodes whose mean state is merely close to $0$ or $1$, rather than exactly $0$ or $1$, are also treated as frozen.
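A minimal Python sketch of the rule, with our own data-structure choices (incoming edges stored as a set of source nodes), reads:
\begin{verbatim}
import random

def adrr(node, mean_state, in_edges, subnetwork_nodes, alpha=0.0):
    """Activity-Dependent Rewiring Rule with resilience parameter alpha.
    in_edges: set of source nodes currently linked to `node`; rewiring
    stays inside the node's own subnetwork, and interlinks are untouched."""
    if mean_state <= alpha or mean_state >= 1.0 - alpha:   # frozen node
        candidates = [n for n in subnetwork_nodes
                      if n != node and n not in in_edges]
        if candidates:                                     # add one edge
            in_edges.add(random.choice(candidates))
    elif in_edges:                                         # active node
        in_edges.remove(random.choice(list(in_edges)))     # delete one edge
    return in_edges
\end{verbatim}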
\subsection{Adaptive Algorithm for Hierarchical RBNs}
\label{sect:algorithm}
Our algorithm for the coevolution of hierarchical RBNs is as follows:
\begin{enumerate}
\item Generate $M$ uniform Boolean networks (subnetworks) containing $N_M$ nodes each with $K_i$ directed incoming edges starting in a randomly chosen group of nodes belonging to the same subnetwork. Then generate $K_M$ edges between subnetworks (interlinks). (For each node generate necessary Boolean functions with a bias parameter $p=1/2$.)
\item Generate a random initial state $S(0)$ of all internal node variables $\sigma_n(0)$ and find the network's attractor length using the following algorithm \cite{mlb} (a code sketch is given after the remarks below):
\begin{description}
\item [$a.$] Define comparison moments. Here: \\ $\boldsymbol{T}=\{0; 100; 1,000; 10,000; 100,000\}$. Set $k=0$ and $i=1$.
\item [$b.$] Synchronically update the states of the network's nodes: $S(i)$. If $S(i)$ equals $S(T_k)$ the attractor length is $(i - T_k)$. Go to point 3. Otherwise continue.
\item [$c.$] If $i$ equals $T_{k+1}$, increment $k$. If $k$ is higher than $4$ end this subalgorithm with the attractor length equal to $(T_4 - T_3)$. Otherwise, increment $i$ and go back to point 2b.
\end{description}
\item Choose a node from each subnetwork and calculate its mean state during one attractor cycle.
\item According to the ADRR, change the topology of each subnetwork. Do not delete interlinks. In case there are no connections to delete, randomly choose another node and repeat this point.
\item Generate new Boolean functions for each node.
\item If the predefined number of epochs is not reached, go back to point $2$.
\end{enumerate}
Remarks:
\begin{itemize}
\item The interlinks are generated as follows: first, the highest possible number of interlinks is distributed equally among ordered pairs of subnetworks (because edges are directed, each pair of subnetworks appears twice); secondly, the remaining interlinks are generated between random pairs.
\item The described method allows finding the attractor length equal to $90,000$ at most.
\item Advancing through the steps 2-6 is called an epoch (after \cite{mlb}). Updating the states of all nodes in step 2b. is called an iteration.
\end{itemize}
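A compact Python sketch of the attractor-search subalgorithm of step 2 is given below; the handling of the final comparison moment follows the remark that $90,000$ is the maximum detectable length, and the function names are ours.
\begin{verbatim}
def attractor_length(update, state, T=(0, 100, 1000, 10000, 100000)):
    """Attractor search of step 2: compare the trajectory against
    snapshots taken at the comparison moments in T. `update` maps a
    state tuple to the next one (synchronous update of all nodes)."""
    snapshots = {T[0]: state}
    k, i, s = 0, 1, state
    while True:
        s = update(s)
        if s == snapshots[T[k]]:
            return i - T[k]              # period found
        if k + 1 < len(T) and i == T[k + 1]:
            k += 1
            snapshots[T[k]] = s          # new comparison snapshot
        if i >= T[-1]:
            return T[-1] - T[-2]         # cap: 90,000 at most
        i += 1
\end{verbatim}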
\section{Information Measures of RBNs}
\label{sect:information}
Measuring the amount of information transmitted by a network is an important tool that facilitates exploring a system's features. There are many approaches to this, see e.g. \cite{blf,pks}. Here we define the network {\it activity information} $I$ as a sum of transformed activities of all nodes. The activity $A_n$ of a node $n$ is the number of changes of the node's state during the network attractor divided by the attractor length. There is no information stored in frozen nodes (nodes with $A_n=0$) or in nodes whose state changes in every iteration ($A_n=1$); such nodes should not contribute to the network information. On the other hand, nodes which change their state as often as they keep it ($A_n=0.5$) contribute the most because of their unpredictable behavior. The node activity can be identified with the probability of changing the node's state; the activity $A_n$ is then related to the parameter $a$ described in \cite{blf} as the system's ``self-overlap'': $A_n=1-a$. Here we introduce a natural definition of the node {\it activity information} as:
\begin{equation}
I_n=-A_n\log(A_n)-(1-A_n)\log(1-A_n)
\end{equation}
The total network activity information is given as $I=\sum_{n} I_n$. For brevity we further write {\it network information} instead of network activity information.
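For one attractor cycle stored as a binary array, the measure can be computed as in the following sketch; the cyclic comparison, which counts the wrap-around change, and all names are our own choices.
\begin{verbatim}
import numpy as np

def activity_information(states):
    """Total network activity information I from a binary state
    history `states` of shape (attractor_length, N)."""
    changed = states != np.roll(states, -1, axis=0)  # cyclic comparison
    A = changed.mean(axis=0)                         # per-node activity A_n
    with np.errstate(divide="ignore", invalid="ignore"):
        I_n = -A * np.log(A) - (1 - A) * np.log(1 - A)
    return float(np.nan_to_num(I_n).sum())           # A_n in {0,1} gives 0
\end{verbatim}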
\section{Simulations}
\label{sect:simulations}
The simulations of different structures of the HARBN model consisted of $1000$ epochs repeated $100$ times. We let $MxN_MxK_M$ denote a network consisting of $M$ subnetworks, each of $N_M$ nodes, linked by $K_M$ interlinks. Here we show the results for $60$- and $80$-node networks. Our simulations were conducted in $3$ parts. First, we reproduced the results achieved in \cite{mlb} ($M=1$, $\alpha=0$). Then, we divided networks into $2$--$4$ subnetworks, linked them by $0$--$80$ interlinks, and compared the results for ARBNs and HARBNs. Finally, the system properties for non-zero resilience values $\alpha$ were explored. An example structure of a simulated network after $1000$ epochs is shown in Figure \ref{fig:example}.
\begin{figure}[tb]
\begin{centering}
\includegraphics[width=0.5\textwidth]{4x15x10x999.eps}
\caption{Example structure after $1000$ epochs of a network $4x15x10$.}
\label{fig:example}
\end{centering}
\end{figure}
Each realization of an ARBN or HARBN exhibits a different structure and different information parameters. However, identical networks tend to oscillate (after the initial transient period) near the same mean steady-state (m.s.s.) levels, whereas networks with different numbers of nodes, subnetworks or interlinks tend to reach different m.s.s. values. Therefore, for each network type we define: the m.s.s. incoming connectivity $K_{ss}$, the m.s.s. network information $I$, the m.s.s. node information $IPN$ (information $I$ per node), the m.s.s. edge information $IPE$ (information $I$ per edge) and the geometric m.s.s. attractor length $T$. Unless stated otherwise, arithmetic means are used. In order to determine all of the above parameters, the $200$ initial epochs of each realization were discarded as transient periods. $K_{ss}$ is calculated as follows:
\begin{itemize}
\item calculate mean incoming connectivity in an epoch $\rightarrow$ $K_{in}$,
\item calculate mean $K_{in}$ starting from 201th epoch in a realization $\rightarrow$ $\left<K_{in}\right>$,
\item calculate mean $\left<K_{in}\right>$ over all realizations $\rightarrow$ $K_{ss}$.
\end{itemize}
The information parameters $I$ and $IPN$ are calculated likewise. $IPE$ is computed by dividing $IPN$ by $K_{ss}$.
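All three averages follow the same pattern; a sketch, assuming the per-epoch values are stored in an array of shape (realizations, epochs):
\begin{verbatim}
import numpy as np

def mean_steady_state(per_epoch, transient=200):
    """Mean steady-state value, e.g. K_ss: discard the first `transient`
    epochs, average each realization over epochs, then over realizations."""
    return float(per_epoch[:, transient:].mean(axis=1).mean())
\end{verbatim}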
Now we shall discuss the influence of network topology on system properties. Because of the small number of investigated networks, additional large-scale simulations will be necessary to confirm our findings.
\subsection{Mean Steady-State Connectivity}
\label{sect:mss-con}
Let $X$ denote the ratio of the number of interlinks to the total number of edges. For small $X$ values, different network types differentiate themselves in terms of $K_{ss}$ by the size of a subnetwork (Figure \ref{fig:Kss-X}), and $K_{ss}$ decreases with growing subnetwork size. As $X$ increases, the subnetworks become more and more strongly linked. For high $X$ values, different network types start to differentiate themselves by the size of the whole network, i.e. the subnetworks are strongly mutually dependent. We estimate that for $X\gtrsim0.40$ distinct subnetworks no longer exist; the resulting $K_{ss}$ values are then independent of the subnetwork size.
\begin{figure}[tb]
\begin{centering}
\includegraphics[width=0.9\textwidth]{Kss-X.eps}
\caption{The mean steady-state connectivity $K_{ss}$ as a function of the ratio of the interlinks to the total number of edges for various networks.}
\label{fig:Kss-X}
\end{centering}
\end{figure}
\subsection{Information per Node}
\label{sect:ipn}
The calculated information per node $IPN$ also exhibits a different ordering for small and large $X$ values (Figure \ref{fig:IPN-X}). For $X \ll 0.1$, networks (apart from those with a single subnetwork) are differentiated by the number of distinct subnetworks. When $X >0.1$, the information $IPN$ differentiates the networks by the size of the whole network, with $IPN$ larger in smaller networks. In the interval $0<X<0.1$, especially for the $80$-node networks, non-monotonic behaviour is observable. For the $4{\times}20$ network the information $IPN$ starts at a rather low value; a small increase of the ratio $X$ causes a quick rise of $IPN$, with a maximum at $X\approx 0.05$, followed by a moderate decrease up to $X\approx 0.1$. The behaviour of $IPN$ for the $2{\times}40$ network is different: $IPN$ starts from a high value at $X=0$, quickly falls to a minimum at $X\approx 0.05$, and afterwards increases steadily.
\begin{figure}[tb]
\begin{centering}
\includegraphics[width=0.9\textwidth]{IPN-X.eps}
\caption{The mean node information $IPN$ as a function of the ratio of the interlinks to the total number of edges for various networks.}
\label{fig:IPN-X}
\end{centering}
\end{figure}
\subsection{Information per Edge}
\label{sect:ipe}
The behaviour of the third network parameter, $IPE$ (Figure \ref{fig:IPE-X}), combines features of the previous two observables. For $X\approx0$, networks are differentiated first by the number of distinct subnetworks (fewer subnetworks, higher $IPE$) and second by the size of an individual subnetwork (with one exception, the $1{\times}N$ networks). On the other hand, the ordering by network size for higher $X$ is very weak, and $IPE$ appears to be equal for all investigated systems with the same $X$ value; it depends neither on the total network size nor on the number of subnetworks.
\begin{figure}[tb]
\begin{centering}
\includegraphics[width=0.9\textwidth]{IPE-X.eps}
\caption{The mean edge information $IPE$ as a function of the ratio of the interlinks to the total number of edges for various networks. $IPE$ has been calculated by dividing the mean node information $IPN$ by the mean steady-state in-degree connectivity $K_{ss}$.}
\label{fig:IPE-X}
\end{centering}
\end{figure}
\subsection{Geometric Mean Attractor Length}
\label{sect:attractor}
Figure \ref{fig:T-X} shows the relationship between the geometric mean attractor length $T$ and the ratio $X$. The lengths of the attractors grow with the total number of nodes. Moreover, structures of the same size with more subnetworks tend to possess longer attractors. Dividing a network into subnetworks creates separate, unconnected parts ($X \approx 0$), and such structures reach the highest $T$ values. This is related to the attractor search subalgorithm: the attractor is found for the whole network, so its separate parts multiply and elongate the attractor length. A small number of interlinks connects the subnetworks and significantly decreases the attractor length, leading to higher flexibility; the smallest attractors are achieved for $X$ ranging from around $0.1$ to around $0.25$. Since the interlinks themselves are fixed and cannot be rewired, a further increase of the interlink ratio stiffens the network and leads to longer attractor lengths $T$.
\begin{figure}[ht]
\begin{centering}
\includegraphics[width=0.9\textwidth]{T-X.eps}
\caption{The geometric mean attractor length $T$ as a function of the ratio of the interlinks to the total number of edges for various networks.}
\label{fig:T-X}
\end{centering}
\end{figure}
\subsection{Resilience Parameter}
\label{sect:resilience}
In order to explore non-zero values of the resilience parameter $\alpha$, different network structures with $10$ and $40$ interlinks were analysed; these numbers of interlinks correspond to $X\approx 0.05$ and $X\approx 0.25$, respectively. Effects of the hierarchical structure and of the interactions between different subnetworks were observed. Figure \ref{fig:IPE-a} shows that the parameter $\alpha$ differentiates networks of different types. First of all, the curves group by the number of interlinks: there are three levels, for $0$, $10$ and $40$ interlinks, whose order primarily follows from the values shown in Figure \ref{fig:IPE-X}. Among networks with the same number of interlinks, higher mean edge information corresponds to networks with larger subnetworks. Comparing the $3{\times}20{\times}10$ and $4{\times}20{\times}10$ networks we can see the next level of ordering: the smaller network exhibits higher $IPE$. We have also observed (not shown in the paper) that non-zero values of $\alpha$ lead to higher $K_{ss}$ values. Therefore the
information per node $IPN$ grows with the steady-state connectivity $K_{ss}$ faster than the information per edge $IPE$.
\begin{figure}[ht]
\begin{centering}
\includegraphics[width=0.9\textwidth]{IPE-a.eps}
\caption{The mean edge information $IPE$ as a function of the resilience parameter $\alpha$ for several different network types. $IPE$ has been calculated by dividing the mean node information $IPN$ by the mean steady-state in-degree connectivity $K_{ss}$.}
\label{fig:IPE-a}
\end{centering}
\end{figure}
\section{Conclusions}
\label{sect:conclusions}
The model of hierarchical adaptive random Boolean networks (HARBN) has been introduced and numerically explored. The system consists of subnetworks connected by fixed interlinks, while the internal topology of the subnetworks can evolve depending on individual node activity. We have observed that the main natural feature of ARBNs, namely their adaptability, is preserved in HARBNs, which can evolve towards stable configurations. When the ratio $X$ of interlinks to the total number of edges is of the order of $X\approx 0.4$, the interlinks efficiently re-connect the whole network (as if separate parts did not exist). However, such a strongly connected system is less flexible and displays longer attractors. The shortest attractors are observed for $X$ ranging from $0.1$ to $0.25$, depending on the network structure.
The mean node information ($IPN$) and the mean edge information ($IPE$) grow in HARBNs with the increase of the ratio $X$ as well as with the resilience parameter $\alpha$, and $IPE$ tends to reach the same value for all network types in the case of many interlinks. Adding a new node to the network decreases $IPN$ and, moreover, leads to fewer incoming connections per node; on the other hand, adding a new interlink increases the $IPE$ values. We conclude that the introduced HARBNs may successfully be used to model GRNs with a modular structure and that they describe well the processes of evolution and speciation.
\subsection{Acknowledgements}
The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under Grant Agreement No. 317534 (the Sophocles project) and from the Polish Ministry of Science and Higher Education Grant No. 2746/7.PR/2013/2. J.A.H. has also been partially supported by the Russian Science Foundation, proposal $\#$14--21--00137, and by the European Union COST TD1210 KNOWeSCAPE action.
\label{sect:bib}
\bibliographystyle{unsrt}
\section{Introduction}\label{intro}
Entanglement plays a fundamental role in quantum information, being
recognized as the essential resource for quantum computing,
teleportation, and cryptographic protocols. In the framework of
quantum information with continuous variables (CV)
\cite{vLB_rev,Napoli} the possibility of generating and manipulating
entanglement allowed the realization of a variety of quantum
protocols, such as teleportation, cryptography, dense coding and
entanglement swapping. In these protocols the source of entanglement
is the bipartite twin-beam state of two modes of radiation, usually
generated by parametric down-conversion in $\chi^{(2)}$ crystals.
However, recent experimental progress \cite{Exps} shows that the
coherent manipulation of entanglement between more than two modes may
be achieved with current technology. This opens the opportunity to
realize a true quantum information network, in which the information
can be stored, manipulated and distributed among many parties, in a
fashion resembling the current classical telecommunication networks.
In a realistic implementation, entanglement needs to be transmitted
along physical channels, such as optical fibers or the atmosphere. As
a matter of fact, the propagation and the influence of the environment
unavoidably lead to degradation of entanglement, owing to decoherence
effects induced by losses and thermal noise. In this scenario, it is
worthwhile to study the entanglement properties and the possible
applications of multipartite systems in noisy environments, which will
be the subject of this paper.
\par
A prominent class of CV states is constituted by the Gaussian states.
They can be theoretically characterized in a convenient way, and
generated and manipulated experimentally in a variety of physical
systems. In a quantum information setting, entangled Gaussian states
provide the basis for the quantum information protocols mentioned
above. The basic reason for this is that the QED vacuum and radiation
states at thermal equilibrium are themselves Gaussian states. This
observation, in combination with the fact that the evolutions
achievable with current technology are described by Hamiltonian
operators at most bilinear in the fields, accounts for the fact that
the states commonly produced in labs are Gaussian. Indeed, bilinear
evolutions preserve the Gaussian character. As we already mentioned,
the most widely used source of CV entanglement are the twin-beams, which
belong to the class of bipartite Gaussian states. In a
group-algebraic language, they are the coherent states of the group
${\rm SU}(1,1)$, {\em i.e.}, the states evolved from vacuum via a unitary
realization of the group. Within the class of Gaussian states, the
simplest generalization of twin-beams to more than two modes are the
coherent states of the group ${\rm SU}(m,1)$. Indeed, these states can be
generated by multimode parametric processes in second order nonlinear
crystals, with Hamiltonians that are at most bilinear in the fields.
In particular, these processes involve $m+1$ modes of the field
$a_0,a_1,\dots,a_m$, with mode $a_0$ that interacts through a
parametric-amplifier-like Hamiltonian with the other modes, whereas
the latter interact with each other only via a beam-splitter-like
Hamiltonian \cite{vecchio,puri}. In the framework of CV quantum
information, the first proposal to produce such states has been given
in Ref.~\cite{vLB_tlc}, where a half of a two-mode squeezed vacuum
state interacts with $m$ vacua via a proper sequence of beam
splitters. Other unitary realizations of the algebra of ${\rm SU}(m,1)$
have been proposed, in optical settings \cite{como,chirkin} as well as
with cold atoms \cite{nics} or optomechanical systems \cite{cams}. In
these schemes the Hamiltonian of the system, rather than involving a
sequence of two-mode interactions, is realized via simultaneous
multimode interactions. Experimental realizations of tripartite
entanglement in the optical domain have been recently reported
\cite{Exps}.
\par
In this work we do not focus on any specific implementation of the
${\rm SU}(m,1)$ evolution. Rather, we will analyze the entanglement
properties of ${\rm SU}(m,1)$ coherent states in a unified fashion valid
for a generic Hamiltonian of this kind. As we will see in
Sec.~\ref{s:SUm1}, this is allowed by the observation that the
coherent states of ${\rm SU}(m,1)$ have a common structure, which can be
conveniently written in the Fock representation of the field
\cite{puri}. In particular, the degradation effects of both the
thermal background in the generation process and of losses and thermal
photons in the propagation will be outlined. The
robustness of these states against noise will be analyzed in
Sec.~\ref{s:NoisySUm1} where it will be also compared with the
bipartite case.
\par
As already mentioned, one of the main results in CV quantum
communication concerned the realization of the teleportation protocol
(for a recent experiment see \cite{furu05}). The natural
generalization of standard teleportation to many parties corresponds
to a telecloning protocol \cite{murao}. Teleportation is based on the
coherent states of ${\rm SU}(1,1)$, which provide the shared entangled
states supporting the protocol. Thus, in order to implement a
multipartite version of this protocol, one is naturally led to
consider a shared entangled state produced by a generic ${\rm SU}(m,1)$
interaction. The telecloning protocol will be analyzed in detail in
Sec.~\ref{s:tlc}. Concerning cloning with CV, there are general
results to assess the optimality of $n\rightarrow m$ symmetric cloning
of coherent states \cite{cerf}. Optimal local unitary realizations of
such schemes have been proposed in \cite{braunetal,fiurasek}, and an
experimental realization of $1\rightarrow 2$ cloning has been recently
reported \cite{leuchs}. Concerning telecloning, existing proposals
are about optimal $1\rightarrow m$ symmetric cloning of pure Gaussian
states, using a particular coherent state of ${\rm SU}(m,1)$ as support
\cite{vLB_tlc}. Recently, a proposal which makes use of partially
disembodied transport has also been reported \cite{zhang}. In view of
the realization of a quantum information network, one is naturally led
to consider the possibility of retrieving different amounts of
information from different clones. This means that one may consider
the possibility of producing clones that differ from one another, in
what is called asymmetric cloning. Examples of optimal $1\rightarrow
2$ asymmetric cloning are given in Ref.~\cite{fiurasek,josab}, where
local and non-local realizations are considered. In this work, we will
see how the telecloning protocol involving a generic coherent state of
${\rm SU}(m,1)$ provides the first example of a completely asymmetric
$1\rightarrow m$ cloning of pure Gaussian states. In this sense, we
provide a generalization of the proposal in Ref.~\cite{vLB_tlc} to the
asymmetric case. Moreover, we find an expression for the maximum
fidelity achievable by one clone when the fidelities of the others are
fixed to prescribed values, thus giving explicitly the trade-off
between the qualities of the different clones.
\par
In Sec.~\ref{s:NoisyTLC} we will analyze the effect of noise in each
step of the telecloning protocol. As expected, the presence of both
thermal noise and losses unavoidably leads to a degradation of the
cloning performance. Nevertheless, we will show that the protocol can
be optimized in order to reduce these degradation effects. In
particular, one may optimize not only the energy of the entangled
support, but also the location of the source of entanglement itself.
Remarkably, when only losses are considered, this optimization
completely cancels the degrading effects of noise on the fidelity of
the clones. This happens for finite propagation times which, however,
diverge as the number of modes increases.
\par
We conclude the paper with Sec.~\ref{esco}, where the main results
will be summarized.
\section{Multimode parametric
interactions: ${\rm\bf SU} \boldsymbol{(m,1)}$
coherent states}\label{s:SUm1}
Let us consider the set of bilinear Hamiltonians expressed by
\begin{equation}
H_m=\sum_{l<k=1}^{m} \gamma_{kl}^{(1)}\, a_k a^{\dag}_l
+ \sum_{k=1}^{m} \gamma_{k}^{(2)}\, a_k a_0 + h. c.
\label{Hm}\;,
\end{equation}
where $[a_k,a_l]=0$, $[a_k,a^\dagger_l]=\delta_{k,l}$
($k,l=0,\dots,m$) are independent bosonic modes. A conserved quantity
is the difference $D$ between the total photon number of the modes
$a_1,\dots,a_m$ and that of the mode $a_0$, in formula
\begin{eqnarray}
D= \sum_{k=1}^m a^\dag_k a_k - a_0^\dag a_0
\label{cons}\;.
\end{eqnarray}
The transformations induced by Hamiltonians (\ref{Hm}) correspond to
the unitary representation of the ${\rm SU}(m,1)$ algebra \cite{puri}.
Therefore, the set of states obtained from the vacuum coincides with
the set of ${\rm SU}(m,1)$ coherent states {\em i.e.}
\begin{eqnarray}
|\gr{\Psi}_m\rangle\equiv
\pexp{-i H_{m} t} |\gr{0}\rangle = \pexp{\sum_{k=1}^m
\beta_{k} a_k^\dag a_0^\dag - h.c.} |\gr{0}\rangle
\label{PsiAux1}\;,
\end{eqnarray}
where $\beta_{k}$ are complex numbers, parameterizing the state, which
are related to the coupling constants $\gamma_{kl}^{(1)}$ and
$\gamma_{k}^{(2)}$ in \refeq{Hm}. Upon defining
$${\cal C}_{k} = \beta_{k}\frac{\tanh\left(\sqrt{\sum_{r=1}^m
|\beta_{r}|^2}\right)}{\sqrt{\sum_{r=1}^m |\beta_{r}|^2}}\:,$$
$|\gr{\Psi}_m\rangle$ in Eq.~(\ref{PsiAux1}) can be explicitly
written as
\begin{align}
|\gr{\Psi}_m\rangle =& \sqrt{{\cal Z}_m}\sum_{\{\gr{n}\}}
\frac{{\cal C}_1^{n_1} {\cal C}_2^{n_2}... {\cal C}_m^{n_m}\: \sqrt{(n_1+n_2+...
+n_m)!}}{\sqrt{n_1! n_2! ... n_m!}}\:
|\sum_{k=1}^m n_k ;\{\gr{n}\} \rangle
\label{Psi}\;,
\end{align}
where $\{\gr{n}\}=\{n_1,n_2,...,n_m\}$. The sums over $\gr{n}$ are
extended over natural numbers and ${\cal Z}_m= 1-\sum_{k=1}^m
|{\cal C}_k|^2$ is a normalization factor. We see that for $m=1$ one
recovers the twin-beam state. Notice that, being interested in the
entanglement properties and applications of states $|\gr{\Psi}_m\rangle$, we can
take the coefficients ${\cal C}_k$ as real numbers. In fact, one can set
to zero the phase associated with each ${\cal C}_k$ by performing
a proper local unitary operation on mode $a_k$, which in turn does not
affect the entanglement of the state. Calculating the expectation
values of the number operators $N_k=\langle a^\dag_k a_k \rangle$ on
the multipartite state $|\gr{\Psi}_m\rangle$, one may re-express the coefficients in
\refeq{Psi} as follows:
\begin{align}
{\cal C}_k=\left(\frac{N_k}{1+N_0}\right)^{1/2} \;,
\qquad {\cal Z}_m=\frac{1}{1+N_0}
\qquad (k=1,\dots m)
\label{calCs}\;.
\end{align}
In order to obtain \refeq{calCs} we have considered \refeq{cons} with
$D=0$ (vacuum input), from which it follows that
\begin{align}
N_0=\sum_{k=1}^m N_k \;,
\label{cons2}
\end{align}
and repeatedly used the following identity: \begin{align} \sum_{n=0}^\infty
x^n\frac{(n+a)!}{n!}=a!(1-x)^{-1-a} \;.
\label{SeriesIdentity}
\end{align}
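As a simple consistency check, the normalization of $|\gr{\Psi}_m\rangle$ follows from the multinomial theorem: grouping the terms of \refeq{Psi} by the total excitation number $n=\sum_k n_k$ one finds
\begin{align}
\langle\gr{\Psi}_m|\gr{\Psi}_m\rangle={\cal Z}_m\sum_{n=0}^\infty
\Big(\sum_{k=1}^m |{\cal C}_k|^2\Big)^{\!n}
=\frac{{\cal Z}_m}{1-\sum_{k=1}^m|{\cal C}_k|^2}=1\;.
\end{align}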
The case $D\neq0$ will be considered in the next section, in which the
effects of thermal background will be taken into account. The basic
property of states in \refeq{Psi} is their full inseparability, {\em
i.e.}, they are inseparable for any grouping of the modes. To prove
this statement first notice that, being evolved with a bilinear
Hamiltonian from the vacuum, the states $|\gr{\Psi}_m\rangle$ are pure Gaussian
states. They are completely characterized by their covariance matrix
$\boldsymbol\sigma$, whose entries are defined as
\begin{align}
\label{defCOV}
[\boldsymbol\sigma]_{kl} &= \frac12 \langle \{R_k,R_l\} \rangle -
\langle R_l \rangle
\langle R_k \rangle\,,
\end{align}
where $\{A,B\}=AB+BA$ denotes the anticommutator, ${\boldsymbol R}=
(q_0,p_0,\ldots,q_m,p_m)^{\scriptscriptstyle T}$ and the position and momentum operator
are defined as $q_k = (a_k + a_k^\dag)/\sqrt2$ and $p_k = (a_k -
a_k^\dag)/\sqrt2$. The covariance matrix for the states $|\gr{\Psi}_m\rangle$ reads
as follows:
\begin{align}
\label{CovPsi}
\boldsymbol\sigma_{m} &= \left(
\begin{array}{ccccc}
\boldsymbol {\cal N}_0 & \boldsymbol {\cal A}_1 & \boldsymbol {\cal A}_2 & \ldots & \boldsymbol {\cal A}_m \\
\boldsymbol {\cal A}_1 & \boldsymbol {\cal N}_1 & \boldsymbol {\cal B}_{1,2} & \ldots & \boldsymbol {\cal B}_{1,m} \\
\boldsymbol {\cal A}_2 & \boldsymbol {\cal B}_{1,2} & \boldsymbol {\cal N}_2 & \ddots & \vdots \\
\vdots & \vdots & \ddots & \ddots & \boldsymbol {\cal B}_{m-1,m} \\
\boldsymbol {\cal A}_m & \boldsymbol {\cal B}_{1,m} & \ldots & \boldsymbol {\cal B}_{m-1,m} & \boldsymbol {\cal N}_m \\
\end{array}
\right)\,,
\end{align}
where the entries are given by the following $2\times 2$ matrices
($k=0,\dots,m$, $h=1,\dots,m$, $j=2,\dots,m$ and $0 < i < j$)
\begin{align}
\label{CovPsiAux}
\boldsymbol {\cal N}_k=(N_k+\frac12)\,\openone \qquad
\boldsymbol {\cal A}_h= \sqrt{N_h(N_0+1)}\,\mathbb{P} \qquad
\boldsymbol {\cal B}_{i,j} = \sqrt{N_i\,N_j}\,\openone \;,
\end{align}
with $\openone={\rm Diag}(1,1)$ and $\mathbb{P}={\rm Diag}(1,-1)$.
Since $|\gr{\Psi}_m\rangle$ are pure states, full inseparability can be demonstrated
by showing that the Wigner function does not factorize for any
grouping of the modes, which in turn is ensured by the explicit
expression of the covariance matrix $\boldsymbol\sigma_m$ given above (as soon
as $N_h\neq0$).
\section{Effect of noise on the generation and propagation of ${\rm\bf SU} \boldsymbol{(m,1)}$
coherent states} \label{s:NoisySUm1}
In view of possible applications of the coherent states of ${\rm SU}
(m,1)$ to a real quantum communication scenario, it is worthwhile to
analyze the degrading effects on their entanglement that may arise
when generation and propagation in a noisy environment is taken into
account. Unfortunately, a manageable necessary and sufficient
entanglement criterion for the general case of a Gaussian multipartite
state is still lacking. Thus, in order to study quantitatively the
effects of noise we must limit ourselves to the case when only three
modes are involved (insights for the general $m$-mode case will be
given in the following Sections). In fact, up to three modes the
partial transpose criterion introduced in \cite{ppt,duan,simon} is
necessary and sufficient for separability \cite{ppt3}. It says that a
Gaussian state described by a covariance matrix $\boldsymbol\sigma$ is fully
inseparable if and only if the matrices
$\omega_k=\boldsymbol\sigma-\frac{i}{2}{\widetilde {\boldsymbol J}}_k$ ($k=0,1,2$) are
non-positive definite, where ${\widetilde {\boldsymbol J}}_k=\boldsymbol \Lambda_k {\boldsymbol J}
\boldsymbol \Lambda_k$ with $\boldsymbol \Lambda_0={\rm Diag}(1,-1,1,1,1,1)$, $\boldsymbol \Lambda_1
={\rm Diag}(1,1,1,-1,1,1)$, $\boldsymbol \Lambda_2 ={\rm Diag}(1,1,1,1,1,-1)$
and
\begin{align}
\label{InsepMat}
\gr{J} = \left(\begin{array}{cc} \boldsymbol{0} &-
\openone_3 \\
\openone_3
&\boldsymbol{0} \end{array}\right)\;,
\end{align}
and $\openone_n$ is the $n\times n$ identity matrix. This criterion has
been applied in Refs.~\cite{ppt3,chen} in order to assess the
separability of the CV tripartite state proposed in
Ref.~\cite{vLB_tlpnet} when thermal noise is taken into account. In
Ref.~\cite{cams} the entanglement properties of a state generated via
a ${\rm SU} (2,1)$ evolution when one of the modes starts from thermal
background have also been numerically addressed.
\par
Let us now analyze if the generation process of states $|\gr{\Psi}_m\rangle$ is
robust against thermal noise. This means that we have to study the
separability properties of a state generated by a ${\rm SU} (m,1)$
interaction starting from a thermal background rather then from the
vacuum, in formulae $\varrho=e^{-iH_mt}\varrho_\nu\, e^{iH_mt}$, where
$\varrho$ and $\varrho_\nu$ are the density matrix of the evolved
state and of a thermal state, respectively. We may call these states
thermal coherent states of ${\rm SU} (m,1)$. First notice that, since the
thermal state is Gaussian, the thermal coherent states will be Gaussian
too, and their covariance matrix $\boldsymbol\sigma_{m,{\rm th}}$ may be
immediately identified from \refeq{CovPsi}. In fact, in the phase
space identified by the vector ${\boldsymbol R}$, every ${\rm SU} (m,1)$ evolution
will act as a symplectic operation ${\boldsymbol {\cal S}}$ on the
covariance matrix of the input state, {\em i.e.}, $\boldsymbol\sigma_{\rm
out}={\boldsymbol {\cal S}}^T\,\boldsymbol\sigma_{\rm in}{\boldsymbol {\cal
S}}$. Recalling that the covariance matrix of a thermal state can
be written as $\boldsymbol\sigma^{\rm th}=(2\,\nu+1)\boldsymbol\sigma_v$, being
$\boldsymbol\sigma_v=\openone/2$ the covariance matrix of vacuum and $\nu$ the mean
thermal photon number, we obtain
\begin{align}
\label{sigma_th}
\boldsymbol\sigma_{m,{\rm th}}=(2\nu+1)\boldsymbol\sigma_m\;.
\end{align}
Let us now apply the separability criterion recalled above to
$\boldsymbol\sigma_{3,{\rm th}}$. Concerning the first mode,
from an explicit calculation of the minimum eigenvalue of matrix
$\omega_0$ it follows that
\begin{equation}
\lambda^{\rm min}_0= \nu+(1+2 \nu)
\left[N_0-\sqrt{N_0(N_0+1)}\right]\;.
\label{c4:TThAvalMin}
\end{equation}
As a consequence, mode $a_0$ is separable from the others when
\begin{equation}
\nu > N_0+\sqrt{N_0(N_0+1)} \;.
\label{c4:TThTrsh1}
\end{equation}
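As a concrete example, for $N_0=1$ Inequality (\ref{c4:TThTrsh1}) tells us that mode $a_0$ remains inseparable from the others as long as the thermal background satisfies $\nu<1+\sqrt{2}\simeq2.41$ mean photons.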
Calculating the characteristic polynomial of matrix $\omega_1$ one
deals with the following pair of cubic polynomials
\begin{multline}
q_1(\lambda,N_0,N_1,N_2,\nu) =
\lambda^3 - 2\left[ 2(1+ N_0)+ \nu(3+4N_0)
\right]\lambda^2\\
+4\left[1+ N_1+2N_2+
\nu(4+4N_1+6N_2+ \nu (3+4N_0))\right]\lambda\\
-8 \nu\left[1+N_1+ \nu(2+ \nu
+2N_1)\right]
\;,
\end{multline}
\vspace{-1cm}
\begin{multline}
q_2(\lambda,N_0,N_1,\nu) =
\lambda^3
-2\left[1+2N_0+ \nu(3+4N_0) \right]\lambda^2
\\
+4\left[ N_1+2 \nu(1+N_0)+ \nu^2(3+4N_0)
\right]\lambda\\
-8(1+ \nu)( \nu^2-2N_1-2\nu N_1)
\;.
\end{multline}
While the first polynomial admits only positive roots, the second one
develops a negative root below a certain threshold. The separability
thresholds of the three modes involved can be summarized in the
following inequality:
\begin{equation}
\nu > N_k+\sqrt{N_k(N_k+1)} \;.
\label{c4:TThGenericThrs}
\end{equation}
If Inequality (\ref{c4:TThGenericThrs}) is satisfied for a given $k$,
then mode $a_k$ is separable. Clearly, it follows that the state
$|\gr{\Psi}_2\rangle$ evolved from vacuum ({\em i.e.}, $ \nu=0$) is fully
inseparable, as expected from Section \ref{s:SUm1}. Remarkably,
Inequality (\ref{c4:TThGenericThrs}) is the same as for the twin beam
evolved from noise \cite{serale}, which means that the entanglement of
the thermal coherent states of ${\rm SU}(2,1)$ is as robust against noise
as it is for the case of the thermal coherent states of ${\rm SU}(1,1)$.
\par
Let us now consider the evolution of the state $|\gr{\Psi}_2\rangle$ in three
independent noisy channels characterized by loss rate $\Gamma$ and
thermal photons $\mu$, equal for the three channels. The covariance
matrix $\boldsymbol\sigma_2(t)$ at time $t$ is given by a convex combination of
the ideal $\boldsymbol\sigma_2(0)$ [{\em i.e.}, $\boldsymbol\sigma_2$ in \refeq{CovPsi}]
and of the stationary covariance matrix
$\boldsymbol\sigma_{\infty,2}=(\mu+\mbox{$\frac12$})\openone_6$
\begin{equation}
\boldsymbol\sigma_2(t)=e^{-\Gamma t}\,\boldsymbol\sigma_2+ (1-e^{-\Gamma t})\,\boldsymbol\sigma_{\infty,2}
\;.
\label{c4:3mCMEvol}
\end{equation}
Consider for the moment a purely dissipative environment, namely
$\mu=0$. Applying the separability criterion above to
$\boldsymbol\sigma_2(t)$, one can show that it describes a fully inseparable
state for every time $t$. In fact, we have that the minimum eigenvalue
of $\omega_0$ is given by
\begin{equation}
\lambda^{\rm min}_0= 2e^{-\Gamma t}\left[N_0-\sqrt{N_0(N_0+1)}\right]\,.
\label{c4:LChAvalMin1}
\end{equation}
Clearly, $\lambda_0^{\rm min}$ is negative at every time $t$, implying
that mode $a_0$ is always inseparable from the others. Concerning mode
$a_1$, the characteristic polynomial of $\omega_1(t)$ factorizes into two
cubic polynomials:
\begin{subequations}
\label{c4:LossyChCubic}
\begin{align}
q_1(\lambda,\Gamma,N_0,N_1,N_2) &=
-\lambda^3+4\left[ 1+ e^{-\Gamma t}N_0 \right]\lambda^2 \nonumber\\
&\hspace{1cm}+4\left[ -1-e^{-\Gamma t}( 2N_1 + 3N_2
- e^{-\Gamma t} N_0) \right]\lambda + 8e^{-\Gamma t}N_2(1-e^{-\Gamma t})\;,
\\
q_2(\lambda,\Gamma,N_0,N_1,N_2) &=
-\lambda^3 + 2\left[1+2e^{-\Gamma t}N_0 \right]\lambda^2 \nonumber\\
&\hspace{3cm}
+ 4\left[ -e^{-\Gamma t}(2N_1+N_2)+ e^{-2\Gamma t}N_0 \right]\lambda
- 8e^{-2\Gamma t}N_1 \;.
\end{align}
\end{subequations}
While the first polynomial has only positive roots, the second one
admits a negative root at every time. Due to the symmetry of state
$|\gr{\Psi}_2\rangle$ the same observation applies to mode $a_2$, hence full
inseparability follows. This result resembles again the case of the
twin beam state in a two-mode channel \cite{duan,kim_stefano}. In other
words, the behavior of the coherent states of ${\rm SU}(2,1)$ in a purely
lossy environment is the same as the behavior of the coherent states
of ${\rm SU}(1,1)$, concerning their entanglement properties.
\par
\begin{figure}[b]
\includegraphics[width=5cm]{FigThresh.ps}
\caption{Separability thresholds for modes $a_0$ (continuous line) and $a_1$ (dashed line) according to \refeq{threshold0} and \refeq{threshold1} for the case of $N_1=N_2=N=1$. The behavior of these curves is similar if different values of $N$ are considered.} \label{f:thresh}
\end{figure}
When thermal noise is taken into account ($\mu\neq0$) separability
thresholds arise, which again resembles the two-mode channel case. Concerning
mode $a_0$, the minimum eigenvalue of matrix $\omega_0(t)$ is
negative when
\begin{equation}
\label{threshold0}
t < \frac{1}{\Gamma}
\ln\left(1+\frac{\sqrt{\frac{1}{2}N_{tot}(\frac{1}{2}N_{tot}+1)}-\frac{1}{2}N_{tot}}{\mu}\right)
\;, \end{equation} where $N_{tot}=N_0+N_1+N_2$.
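As a numerical illustration, for $N_{tot}=2$ and $\mu=1/2$ the bound \refeq{threshold0} gives $t<\Gamma^{-1}\ln[1+2(\sqrt{2}-1)]\simeq0.60\,\Gamma^{-1}$, i.e. mode $a_0$ remains inseparable for roughly $0.6$ lifetimes of the channel.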
Remarkably, this threshold is exactly the same as in the two-mode case \cite{duan}, if
one considers both of them as functions of the total mean photon
number of the TWB and of state $|\gr{\Psi}_2\rangle$ respectively. This
consideration confirms the robustness of the entanglement of the
tripartite state $|\gr{\Psi}_2\rangle$. Concerning mode $a_1$, the characteristic
polynomial of $\omega_1(t)$ factorizes again into two
cubic polynomials. As above, one of the two always has positive
roots, while the other one admits a negative root for time $t$ below a
certain threshold, in formulae:
\begin{multline}
-8e^{-2\Gamma t}N_1+8(e^{-\Gamma t}-1)e^{-\Gamma t}(e^{-\Gamma t} N_0
- 2N_1 - N_2)\mu \\
+8(e^{-\Gamma t}-1)^2(1+2e^{-\Gamma t}N_0)\mu^2
- 8(e^{-\Gamma t}-1)^3 \mu^3<0 \;.
\label{threshold1}
\end{multline}
Mode $a_2$ is subject to an identical separability threshold, upon
the replacement $N_1\leftrightarrow N_2$. In Fig.~\ref{f:thresh} we
compare the separability thresholds given by \refeq{threshold0} and
\refeq{threshold1}. As it is apparent from the plot, modes $a_1$ and
$a_2$ become separable faster than mode $a_0$; hence the threshold
for full inseparability of $|\gr{\Psi}_2\rangle$ is given by \refeq{threshold1}.
\par
We conclude that the entanglement properties of the coherent states of
${\rm SU}(2,1)$ in a noisy environment resemble the twin-beam case both in
generation and during propagation. This may be relevant for
applications, as the robustness of the twin beam is at the basis of
current applications in bipartite CV quantum information.
\section{Telecloning}\label{s:tlc}
We now show how the multipartite states $|\gr{\Psi}_m\rangle$ introduced in
Sec.~\ref{s:SUm1} can be used in a quantum communication scenario. In
particular, we show that the states $|\gr{\Psi}_m\rangle$ allow one to achieve optimal
symmetric and asymmetric $1\rightarrow m$ telecloning of pure Gaussian
states. Optimal symmetric telecloning has been in fact already
proposed in \cite{vLB_tlc} using a shared state produced by a
particular ${\rm SU}(m,1)$ interaction. Also a protocol performing
optimal $1\rightarrow 2$ asymmetric telecloning of coherent states has
already been suggested in Ref.~\cite{josab}, where the shared state is
produced by a suitable bilinear Hamiltonian which generates a
${\rm SU}(2,1)$ evolution operator. Here we consider the general
$1\rightarrow m$ telecloning of Gaussian pure states in which the
shared entanglement is realized by a generic coherent state of ${\rm SU}
(m,1)$. First recall that a single-mode pure Gaussian state can
always be written as
\begin{align}
\label{1Mpure}
|\xi,\alpha\rangle &= S_b(\xi)\,D_b(\alpha)|0\rangle \;,
\end{align}
where $S_b(\xi)= \pexp{\frac12 \xi (b^\dag)^2-\frac12\xi^{*} b^2}$ and
$D_b(\alpha)=\pexp{\alpha b^\dag -\alpha^{*} b}$ are the squeezing and
the displacement operator respectively, whereas $b$ is the mode to be
cloned. We emphasize that our goal is to create $m$ clones of state
$\varrho_{\rm in}=|\xi,\alpha\rangle\langle\xi,\alpha|$ in a non-universal
fashion, {\em i.e.} the information that we clone is encoded only in
the coherent amplitude $\alpha$. In other words, we consider the
knowledge of the squeezing parameter $\xi$ as a part of the protocol,
as in the case of local cloning of Gaussian pure states \cite{cerf23}.
The telecloning protocol is schematically depicted in
Fig.~\ref{f:tlc}. As a shared entangled state we consider the
following \cite{note1}:
\begin{align}
\label{shared}
|\gr{\Phi}_m\rangle=S_{a_0}(\xi^{*})\otimes S_{a_1}(\xi) \otimes \ldots \otimes S_{a_m}(\xi) |\gr{\Psi}_m\rangle \;.
\end{align}
\begin{figure}[t]
\includegraphics[width=10.9cm]{figmtlc.ps}
\caption{Schematic diagram of the telecloning scheme. After the
preparation of the state $|\gr{\Phi}_m\rangle$, a conditional measurement is made
on the mode $a_0$, which corresponds to the joint measurement of the
sum- and difference-quadratures on two modes: mode $a_0$ itself and
another reference mode $b$, which is excited in a pure Gaussian
state $|\xi,\alpha\rangle$, to be teleported and cloned. The result
$z$ of the measurement is classically sent to the parties who want
to prepare approximate clones, where suitable displacement
operations (see text) on modes $a_1,\dots,a_m$ are performed. We
indicate with $\nu$ and $\mu$ the mean numbers of thermal photons in
generation and propagation, whereas $\Delta$ accounts for
the non-unit efficiency in the detection stage. The effective
propagation times $\tau_0$ and $\tau_c$ (see Sec.~\ref{s:NoisyTLC})
are related to the losses during propagation.
\label{f:tlc}}
\end{figure}
After the preparation of the state $|\gr{\Phi}_m\rangle$, a joint
measurement is made on modes $a_0$ and $b$, which corresponds to measuring
the complex photocurrent $Z = b + a_0^\dag$ (double-homodyne
detection), as in the teleportation protocol. The
measurement is described by the POVM $\{\Pi(z)\}_{z\in {\mathbb C}}$,
acting on the mode $a_0$, whose elements are given by \begin{align}
\Pi (z) &= \pi^{-1} D_{a_0}(z)\: \varrho_{\rm in}^{\scriptscriptstyle T} D_{a_0}^\dag (z) \nonumber \\
&= \pi^{-1} S_{a_0}(\xi^{*})D_{a_0}(z')D_{a_0}(\alpha^{*})\,
\ket{0}\bra{0}\, D^\dag_{a_0}(\alpha^{*}) D_{a_0}^\dag
(z')S^\dag_{a_0}(\xi^{*}) \;,
\label{povm}
\end{align}
where $z$ is the measurement outcome, $z'=z \cosh r +e^{-i\theta}
z^{*} \sinh r $, $\xi=re^{i\theta}$ and $^T$ denotes transposition.
The probability distribution of the outcomes is given by
\begin{align}
P (z) &= \hbox{Tr}_{0\dots m} \left[|\gr{\Phi}_m\rangle\langle\gr{\Phi}_m|\:
\Pi(z) \bigotimes_{h=1}^m
\mathbb{I}_h \right] \nonumber \\
&= \frac{1}{\pi(1+N_0)} \: \exp
\left\{-\frac{|z'+\alpha^*|^2}{1+N_0}\right\}
\label{Pz}\;,
\end{align}
where $\mathbb{I}_h$ is the identity operator acting on mode $a_h$.
The conditional state of the remaining modes then reads
\begin{align}
\varrho_z &= \frac{1}{P(z)}\: \hbox{Tr}_{0}
\left[|\gr{\Phi}_m\rangle\langle\gr{\Phi}_m|\:
\Pi(z) \bigotimes_{h=1}^m
\mathbb{I}_h \right] \nonumber \\
&= \bigotimes_{h=1}^m S_{a_h}(\xi)|{\cal C}_h(z'^{*}+\alpha)\rangle
\langle{\cal C}_h(z'^{*}+\alpha)| S^\dag_{a_h}(\xi) \label{RhoZ}\;,
\end{align}
where $|{\cal C}_h(z'^{*}+\alpha)\rangle$ denotes a coherent state (of
the usual Heisenberg--Weyl group) with amplitude
${\cal C}_h(z'^{*}+\alpha)$. After the measurement, the conditional state
should be transformed by a further unitary operation, depending on the
outcome of the measurement. In our case, this is a $m$-mode product
displacement $U_z = \bigotimes_{h=1}^m D_h^{\scriptscriptstyle T}(z)$. This is a local
transformation, which generalizes to $m$ modes the procedure already
used in the original CV teleportation protocol. The overall state is
obtained by averaging over the possible outcomes
$$
\varrho_{1\dots m}=\int_{\mathbb C} d^2 z\: P (z) \:
\tau_z\:,$$ where $\tau_z=U_z\: \varrho_z\:
U_z^\dag$. Thus, the partial traces
$\varrho_h=\hbox{Tr}_{1,\dots,h-1,h+1,\dots,m}[\varrho_{1\dots m}]$ read as follows
\begin{align}
\varrho_h = S_h(\xi)\,\left[\int_{\mathbb C} d^2z\: P (z) \:
|\alpha\,{\cal C}_h+z'^*\,({\cal C}_h-1)\rangle\langle
\alpha\,{\cal C}_h+z'^*\,({\cal C}_h-1) |\,\right] S^\dag_h(\xi)\;.
\end{align}
Upon a change of the integration variable we obtain the following
expression for the clones:
\begin{align}
\label{clonesAUX}
\varrho_h = S_h(\xi)\,\left[\int_{\mathbb C} d^2w\:\frac{1}{\pi\, n_h}\exp \left\{ -\frac{|w-\alpha|^2}{n_h} \right\} \:
|w\rangle\langle
w |\,\right] S^\dag_h(\xi)\;,
\end{align}
where we defined
\begin{align}
\label{noise}
n_h=\left(\sqrt{N_0+1}-\sqrt{N_h}\right)^2\,.
\end{align}
From expression (\ref{clonesAUX}) one immediately recognizes that
the clones are given by thermal states $\varrho_{\rm th}(n_h)$, with mean
photon number $n_h$, displaced and squeezed by the amounts $\alpha$ and
$\xi$ respectively, {\em i.e.}:
\begin{align}
\label{clones}
\varrho_h=
S_h(\xi)\,D_h(\alpha)\,\varrho_{\rm th}(n_h)\,D^\dag_h(\alpha)\,S^\dag_h(\xi)\;.
\end{align}
As a consequence, we see that the protocol acts like a proper
covariant Gaussian cloning machine \cite{cerf23}, and that the noise
introduced by the cloning process is entirely quantified by the
thermal photons $n_h$, which in turn depend only on the value of the
mean photon numbers $N_h$ of the shared state. The fidelity $F_h$
between the $h$-th clone and the initial state $|\xi,\alpha\rangle$
does not depend on the latter and is given by
\begin{align}
F_h=\frac{1}{1+n_h}\,.
\label{CloFid}
\end{align}
\par
The expression of the clones in \refeq{clones} shows that they can
be either equal to or different from one another, depending on the
values of the $n_h$'s. In other words, a remarkable feature of this
scheme is that it is suitable to realize both symmetric cloning, when
$n_1=\dots=n_m=n$, and asymmetric cloning, when the $n_h$'s are not all equal.
This arises as a consequence of the possible asymmetry of the state
that supports the telecloning. To our knowledge, this is the first
example of a completely asymmetric $1\rightarrow m$ cloning machine
for continuous variable systems.
\par
Concerning the symmetric cloning one has that this scheme saturates
the bound given in Ref.~\cite{cerf}, hence ensuring the optimality of
the protocol. In fact, the minimum added noise for a symmetric
$1\rightarrow m$ cloner of coherent states is given by
$n=\frac{m-1}{m}$, which in our case can be attained by setting
$N_1=\dots=N_m=N^{\rm opt}$, where
\begin{align}
N^{\rm opt}=\frac{1}{m(m-1)}
\label{NOptId}
\end{align}
It follows that the fidelity is optimal, namely $F=m/(2m-1)$. It is
not surprising that this result is the same as the one obtained in
Ref.~\cite{vLB_tlc}. In fact, as already mentioned, the latter uses
as support a specific ${\rm SU} (m,1)$ coherent state, generated with a particular
interaction built from single mode squeezers and beam-splitters. Our
calculation extends this result to any ${\rm SU} (m,1)$ coherent state
used to support the telecloning protocol.
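As concrete figures, for $m=2$ the optimal choice \refeq{NOptId} requires $N^{\rm opt}=1/2$ photons per mode, giving $F=2/3$, whereas for $m=3$ one needs only $N^{\rm opt}=1/6$, giving $F=3/5$.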
\par
\subsection{Asymmetric cloning} \label{ss:asym}
Consider now asymmetric cloning. In this case one deals with a true quantum
information distributor, in which the information encoded in an
original state may be distributed asymmetrically between many parties
according to the particular task one desires to attain. In this
scenario, a particularly relevant question concerns the
maximum fidelity achievable by one party, say $F_1$, once the
fidelities $F_j$ ($j=2,\dots,m$) of the other ones are fixed. Thanks
to \refeq{CloFid} we see that this is equivalent to the issue of
finding the minimum noise $n_1$ introduced by the cloning process for
fixed $n_j$'s ($n_j\neq1$). The optimization has to be performed under the
constraint given by \refeq{cons2}, which allows one to write $n_1$ as a
function of the $n_j$'s and of the total mean photon number $N_0$ (the
sums run for $j=2,\dots,m$):
\begin{align}
\label{n1}
n_1=\left[\sqrt{N_0+1}-\sqrt{N_0-\sum_j\left(\sqrt{N_0+1}-\sqrt{n_j}\right)^2}\right]^2
\;.
\end{align}
The minimum noise $n_1^{\rm min}$ is then
found by setting $N_0$ such that
\begin{align}
\label{EqA}
(N_0+1)(m-1)(m-2)-2\,\sqrt{N_0+1}(m-1){\scriptstyle \sum_j}\sqrt{n_j}
+{\scriptstyle \sum_j}n_j
+\left({\scriptstyle \sum_j}\sqrt{n_j}\right)^2-1=0
\,.
\end{align}
For $m=2$ one obtains that the optimal choice for $N_0$ is given by $N_0^{\rm
opt}=n_2+1/(4n_2)$. It follows that the minimum noise $n_1^{\rm min}$
allowed by our telecloning protocol for fixed $n_2$ is given by
$n_1^{\rm min}=1/(4n_2)$. Hence we recover the result of
Ref.~\cite{josab} for the fidelities:
\begin{align}
\label{Fid12}
F_1^{\rm max}=\frac{4(1-F_2)}{4-3F_2}\;.
\end{align}
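For instance, \refeq{Fid12} shows that demanding a fidelity $F_2=4/5$ for the second clone leaves at most $F_1^{\rm max}=1/2$ for the first one.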
Notice that if one requires $F_2=1$ then $F_1=0$, that is no
information is left to prepare a non-trivial clone on mode $a_1$. We
remark that the result in \refeq{Fid12} shows that the protocol introduced above,
besides reaching the optimal bound in the symmetric case, is optimal
also in the case of asymmetric $1\rightarrow 2$ cloning
\cite{fiurasek}. Coming back to the general case we see from
Eqs.(\ref{n1}) and (\ref{EqA}) that for $m\ge 3$ the minimum noise
$n_1$ is given by
\begin{align}
n_1^{\rm min}=\frac{1}{(m-2)^2}\left\{
{\scriptstyle \sum_j}\sqrt{n_j}-\sqrt{(m-1)\left[
({\scriptstyle \sum_j}\sqrt{n_j})^2-
(m-2){\scriptstyle \sum_j}n_j-
(m-2)
\right]}
\right\}^2
\label{n1_min}
\;,
\end{align}
and it is attained for the following optimal choice of $N_0$
\begin{align}
N_0^{\rm opt}=\frac{1}{(m-1)^2(m-2)^2}\left\{
(m-1){\scriptstyle \sum_j}\sqrt{n_j}-\sqrt{(m-1)\left[
({\scriptstyle \sum_j}\sqrt{n_j})^2-
(m-2){\scriptstyle \sum_j}n_j-
(m-2)
\right]}
\right\}^2-1
\;.
\label{N0_opt}
\end{align}
Substituting \refeq{n1_min} in \refeq{CloFid} one then obtains the
maximum fidelity $F_1^{\rm max}$ achievable for $F_j$ fixed.
Summarizing, if one fixes the fidelities $F_j$ (for $j=2,\dots,m$)
then the thermal photons $n_j$ are given by \refeq{CloFid}, which in
turn determine the mean photon numbers $N_j$ and $N_1$ of the state
that supports the telecloning via Eqs.~(\ref{noise}), (\ref{n1_min})
and (\ref{N0_opt}). This choice guarantees that the fidelity $F_1$ is
the maximum achievable with the telecloning protocol described above,
thus providing the optimal trade-off between the qualities of the
different clones.
\par
As an example, consider the fully asymmetric $1\rightarrow3$
telecloning and fix the pair of fidelities $F_2$ and $F_3$. Specializing
the formulae above we have that, choosing the state
$|\gr{\Phi}_3\rangle$ such that:
\begin{align}
N_1 =\sqrt{\left(\frac{1}{F_2}-1\right)\left(\frac{1}{F_3}-1\right)}-\frac12 \,, \qquad
N_{2\,(3)} =
\left[
\sqrt{\frac{1}{F_{3\,(2)}}-1}
-\sqrt{N_1}
\right]^2
\,,
\end{align}
then the fidelity of the first clone is the maximal allowed by our scheme, in formulae:
\begin{align}
\label{Fid123}
F_1^{\rm max}=\left\{
1+\left[
\sqrt{\frac{1}{F_2}-1}+\sqrt{\frac{1}{F_3}-1}-\sqrt{2\left(
2\sqrt{\left(\frac{1}{F_2}-1\right)\left(\frac{1}{F_3}-1\right)}-1
\right)}
\right]^2
\right\}^{-1}\;.
\end{align}
Notice that $F_1^{\rm max}$ in \refeq{Fid123} is valid iff the fixed
fidelities $F_2$ and $F_3$ satisfy the relation
$F_2\le4(1-F_3)/(4-3F_3)$, which coincides with the optimal relation
given by \refeq{Fid12}. In other words, the optimal bound imposed by
quantum mechanics to $1\rightarrow 2$ telecloning is automatically
incorporated into the bound (\ref{Fid123}) for $1\rightarrow 3$
telecloning of our scheme. When $F_2=F_3=3/5$ (that is, the
bound for an optimal symmetric $1\rightarrow 3$ cloner) we have that
$F_1^{\rm \max}=3/5$, as one may expect from the discussion above
concerning the symmetric cloning case. Remarkably, when $F_2=F_3=2/3$
(that is, the bound for an optimal symmetric $1\rightarrow 2$ cloner)
one has that $F_1^{\rm \max}=1/3>0$. This means that, even if two
optimal clones have been produced, there still remains some quantum
information to produce a non-trivial third clone. A similar situation
occurs for the case of cloning with discrete variables, as pointed out
in Ref.~\cite{IblisdirAsym}.
\par
Similar results occur for generic $m$. In fact, it can be
immediately shown by inspection that substituting $n_j=(m-1)/m$ (that
is, the bound for the noise introduced by an optimal symmetric
$1\rightarrow m$ cloner) in \refeq{n1_min} one obtains $n_1^{\rm
min}=(m-1)/m$. Hence optimal symmetric cloning is recovered.
Similarly, substituting $n_j=(m-2)/(m-1)$ (that is, the bound for the
noise introduced by an optimal symmetric $1\rightarrow (m-1)$ cloner) in
\refeq{n1_min} one obtains $n_1^{\rm min}=(m-1)/(m-2)$, from which a
fidelity $F_1^{\rm max}=\frac{m-2}{2m-3}>0$ follows. This confirms
that the production of $(m-1)$ optimal clones still leaves some quantum
information at one's disposal to produce an additional non-trivial clone.
An explanation for this effect may be found by recalling that for large
$m$ the optimal cloner coincides with an optimal measurement on the
original state followed by $m$ reconstructions \cite{cerf}. As a consequence, one
may expect that the production (reconstruction) of $(m-1)$ optimal
clones leaves information ({\em i.e.}, the measurement result) for the
reconstruction of further ones.
\par
A question strictly related to the one faced above, and probably more
significant from an information distribution viewpoint, is the
following. Suppose that one wants to distribute the information
encoded in the original state by fixing the ratio between the noise
that affects all the $m$ clones, and not by fixing the fidelities of
$(m-1)$ clones. More specifically, suppose that one wants to give the
minimum noise to, say, the first clone ($n_j>n_1$ for every
$j=2,\dots,m$). Now fix the noise that affects the other clones by
fixing their ratio $q_j$ with respect to the first one, that is
$n_j=q_j\,n_1$. What, then, is the minimum noise $n_1^{\rm min}$
allowed by our protocol for fixed $q_j$? Solving \refeq{n1_min} for
$n_1$, one may find the following closed expression for $n_1^{\rm min}$ as a
function of $q_j$:
\begin{align}
n_1^{\rm min}=\frac{m-1}{\left(1+\sum_j\sqrt{q_j}\right)^2-(m-1)\left(1+\sum_jq_j\right)}
\;.
\end{align}
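As a consistency check, for equally noisy clones ($q_j=1$ for all $j$) the numerator equals $m-1$, while the denominator reduces to $m^2-(m-1)m=m$, so that $n_1^{\rm min}=(m-1)/m$ and the optimal symmetric cloner is recovered.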
The state $|\gr{\Psi}_m\rangle$ that provides this optimal result is simply obtained
by choosing the $N_1$ and $N_j$'s that follow from substituting back
$n_j=q_j\,n_1^{\rm min}$ in \refeq{N0_opt} and \refeq{noise}.
\par
As a final remark we point out that a general bound for the fidelities
in a fully asymmetric $1\rightarrow m$ cloning of coherent states has
not yet been derived when $m\ge 3$. As a consequence, we cannot judge
if the telecloning process introduced above is in general optimal or
not for $m\ge3$. Nevertheless, there are valuable indications for its
optimality, {\em i.e.} the fact that it is optimal in the case of
$m=2$, and, as we have already pointed out, it is optimal for any $m$
in the symmetric case. In addition, as already mentioned, our
telecloning protocol allows one to build a non-trivial additional clone
when $(m-1)$ optimal ones have been produced.
\section{Telecloning in a noisy environment} \label{s:NoisyTLC}
The protocol described in the previous section refers to the case
of ideal generation and propagation of the states $|\gr{\Psi}_m\rangle$, as well as
to double-homodyne detection with unit quantum efficiency. In order to
take into account the possible losses and noise in the various steps,
it is useful to reformulate the whole protocol in the phase space.
Consider the characteristic function associated with the states $|\gr{\Psi}_m\rangle$:
$
\chi[\boldsymbol\sigma_m](\boldsymbol \Lambda)=\exp\{-\frac12\boldsymbol \Lambda^T\boldsymbol\sigma_m\boldsymbol \Lambda
\}$. The covariance matrix $\boldsymbol\sigma_m$ given in \refeq{CovPsi} can be
written according to the following bipartite structure
\begin{align}
\label{BipSigma_m}
\boldsymbol\sigma_m=
\begin{pmatrix}
{\boldsymbol A} & {\boldsymbol C} \\
{\boldsymbol C}^T & {\boldsymbol B}
\end{pmatrix}
\,,
\end{align}
where ${\boldsymbol A}$ is a $2\times2$ matrix corresponding to mode $a_0$, while
${\boldsymbol B}$ and ${\boldsymbol C}$ are $2m\times 2m$ and $2 \times 2m$ matrices
respectively. Consider now a generic Gaussian POVM, acting on modes
$a_0$ and $b$, defined by a covariance matrix ${\boldsymbol M}$ and a vector of
first moments ${\boldsymbol X}$, {\em i.e.}, $ \chi[{\boldsymbol M},{\boldsymbol X}](\boldsymbol \Lambda)=\exp\{
-\frac12\boldsymbol \Lambda^T{\boldsymbol M}\boldsymbol \Lambda-i\boldsymbol \Lambda^T{\boldsymbol X}\}$. The case of the
ideal double-homodyne measurement introduced above corresponds to
\begin{align}
{\boldsymbol M}=\mathbb{P}\boldsymbol\sigma_{\rm in}\mathbb{P} \,,\qquad {\boldsymbol X}=\mathbb{P}\overline{\boldsymbol X} +{\boldsymbol Z} \,,
\end{align}
where $\boldsymbol\sigma_{\rm in}$ and $\overline{\boldsymbol X}$ are the covariance
matrix and the vector of first moments of the input state (mode $b$),
whereas ${\boldsymbol Z}=\{\re z,\im z\}$ is the measurement result [we recall
that $\mathbb{P}={\rm Diag}(1,-1)$]. Then, the state conditioned on the
result ${\boldsymbol Z}$ is given by a Gaussian state with covariance matrix
\begin{align}
\boldsymbol\sigma_c={\boldsymbol B}-{\boldsymbol C}^T({\boldsymbol A}+{\boldsymbol M})^{-1}{\boldsymbol C}
\end{align}
and vector of displacements ${\boldsymbol H}={\boldsymbol C}^T({\boldsymbol A}+{\boldsymbol M})^{-1}{\boldsymbol X}$. The
protocol is now completed with the proper generalized local displacement
introduced in the previous section, {\em i.e.}, $U_z =
\bigotimes_{h=1}^m D_h^{\scriptscriptstyle T}(z)$. Averaging over all the possible outcomes we
finally obtain the following expression for the covariance matrix of
the Gaussian state at the output \cite{besancon}:
\begin{align}
\label{SigmaOut}
\boldsymbol\sigma={\boldsymbol B}+\mathbb{J}^T\mathbb{P}({\boldsymbol A}+{\boldsymbol M})\mathbb{P}\mathbb{J}-\mathbb{J}^T\mathbb{P}{\boldsymbol C}-{\boldsymbol C}^T\mathbb{P}\mathbb{J}
\;,
\end{align}
where $\mathbb{J}$ is given by the $2\times2m$ matrix $\mathbb{J}=(\openone,\dots,\openone)$.
\par
As already pointed out in Sec.~\ref{s:NoisySUm1}, if we consider a
realistic scenario for the application of the telecloning protocol, we
must take into account that the generation and the propagation of the
states $|\gr{\Psi}_m\rangle$ are affected by thermal background and losses. In
particular, concerning propagation we can consider that modes
$a_1,\dots,a_m$ propagate in noisy channels characterized by the same
losses $\Gamma_c$. We may then define an effective propagation time
$\tau_c = \Gamma_c t$ equal for all the clones, while the effective
propagation time $\tau_0 = \Gamma_0 t$ for mode $a_0$ is left
different from $\tau_c$. Consider in fact a scenario in which one has
two distant locations (see Fig.~\ref{f:tlc}): the sending station,
where the double-homodyne measurement is performed, and the receiving
station, where the clones are eventually retrieved. The distance
between the two stations can be viewed as a total effective
propagation time $\tau_{\mbox{\tiny $T$}}$ which can be written as $\tau_{{\mbox{\tiny $T$}}}
=\tau_0+\tau_c$. Then, the choice made above corresponds to the
possibility of choosing at will, for a given $\tau_{\mbox{\tiny $T$}}$, which
modes ($a_1,\dots,a_m$ or $a_0$) will be affected by the unavoidable
noise that separates the sending and the receiving station and to
which extent. With a slight abuse of language, we may say that one can
choose whether to put the source of the entangled state $|\gr{\Psi}_m\rangle$ near
the sending station ($\tau_{{\mbox{\tiny $T$}}}=\tau_c$), near the receiving one
($\tau_{{\mbox{\tiny $T$}}}=\tau_0$), or somewhere in between. A similar strategy
has been pursued in \cite{welsch} to optimize the CV teleportation
protocol in a noisy environment. In the following, we will see how to
determine both the optimal location and the optimal $|\gr{\Psi}_m\rangle$ for a
given amount of noise. For the sake of simplicity, the thermal photons
$\mu$ will be taken equal in all the noisy channels. As is natural to
expect, in the generation process all the modes will also be
considered to be affected by the same amount of noise, characterized
by $\nu$ mean thermal photons. As a consequence, the matrix
$\boldsymbol\sigma_m$ in \refeq{BipSigma_m} should be substituted by its
noisy counterpart (see, e.g., Ref.~\cite{Napoli}):
\begin{align}
\boldsymbol\sigma_{m,{\rm n}}=\mathbb{G}^{1/2}\boldsymbol\sigma_{m,{\rm th}}\mathbb{G}^{1/2}+(1-\mathbb{G})\boldsymbol\sigma_{\infty,m}\,,
\end{align}
where we have used \refeq{sigma_th}, and defined
\begin{align}
\mathbb{G}=e^{-\tau_0}\openone\oplus_{j=1}^m\,e^{-\tau_c}\openone
\qquad \boldsymbol\sigma_{\infty,m}=(\mu+\mbox{$\frac12$})\openone_{2(m+1)}\;.
\end{align}
Performing the calculation explicitly, upon defining
$\gamma_c=e^{-\tau_c}$, $\gamma_0=e^{-\tau_0}$, $\kappa=\mu+\mbox{$\frac12$}$ and $\zeta=1+2\nu$,
we obtain:
\begin{align}
\label{sigma_n}
\boldsymbol\sigma_{m,\rm n}=
\begin{pmatrix}
\widetilde{\boldsymbol A} & \widetilde{\boldsymbol C} \\
\widetilde{\boldsymbol C}\,\!^T & \widetilde{\boldsymbol B}
\end{pmatrix}
\,,
\end{align}
where $\widetilde{\boldsymbol A}=\zeta\,[\gamma_0\boldsymbol {\cal N}_0 + \frac\kappa\zeta(1-\gamma_0)\openone]$, $\widetilde{\boldsymbol C} = \zeta\,\sqrt{\gamma_0\,\gamma_c}\,{\boldsymbol C}$ and
\begin{align}
\widetilde{\boldsymbol B} = \zeta
\left(
\begin{array}{ccccc}
\gamma_c\boldsymbol {\cal N}_1 + \frac\kappa\zeta(1-\gamma_c)\openone & \gamma_c\boldsymbol {\cal B}_{1,2} & \ldots & \gamma_c \boldsymbol {\cal B}_{1,m} \\
\gamma_c\boldsymbol {\cal B}_{1,2} & \gamma_c\boldsymbol {\cal N}_2 + \frac\kappa\zeta(1-\gamma_c)\openone & \ddots & \vdots \\
\vdots & \ddots & \ddots & \gamma_c\boldsymbol {\cal B}_{m-1,m} \\
\gamma_c\boldsymbol {\cal B}_{1,m} & \ldots &\gamma_c \boldsymbol {\cal B}_{m-1,m} & \gamma_c\boldsymbol {\cal N}_m + \frac\kappa\zeta(1-\gamma_c)\openone\\
\end{array}
\right)\,.
\end{align}
A non-unit efficiency $\eta$ in the detection stage corresponds to
having the covariance matrix of the double-homodyne detection given by
$\widetilde{\boldsymbol M}=\mathbb{P}\boldsymbol\sigma_{\rm in}\mathbb{P}+\frac12 \Delta\openone$
(where $\Delta=\frac{1-\eta}{\eta}$). Finally, considering an initial
coherent state \cite{note2} and recalling
\refeq{SigmaOut}, we have
$\widetilde{\boldsymbol M}=\frac12 (1+\Delta)\openone$, whereas
the covariance matrix of the $m$ output modes now reads:
\begin{align}
\label{SigmaOutNoise}
\boldsymbol\sigma_{\rm
n}=\widetilde{\boldsymbol B}+\mathbb{J}^T\mathbb{P}(\widetilde{\boldsymbol A}+\widetilde{\boldsymbol M})\mathbb{P}\mathbb{J}-\mathbb{J}^T\mathbb{P}\widetilde{\boldsymbol C}-\widetilde{\boldsymbol C}\,\!^T\mathbb{P}\mathbb{J}
\;,
\end{align}
which in turn gives the following covariance matrix for the $h$-th clone:
\begin{align}
\label{Reduced_n}
\boldsymbol\sigma_{h,{\rm n}}=
\left(\frac{1}{F_h}-\frac12\right)
\openone\,.
\end{align}
In the equation above, $F_h$ represents the fidelity between the $h$-th clone and the
original coherent state:
\begin{align}
\label{FidNoise}
F_h &=\left\{\det\left[\boldsymbol\sigma_{h,{\rm n}}+\mbox{$\frac12$}\openone
\right]\right\}^{-1/2} \nonumber \\
&= \left\{1+\frac\Delta2+2\kappa+\zeta\left[
\gamma_0\left(N_0+\frac12-\frac\kappa\zeta\right)
+\gamma_c\left(N_h+\frac12-\frac\kappa\zeta\right)
-2\,\sqrt{\gamma_0\,\gamma_c\,N_h(N_0+1)}\right] \right\}^{-1} \,.
\end{align}
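As a consistency check, in the ideal limit $\gamma_0=\gamma_c=1$, $\mu=\nu=\Delta=0$ (so that $\kappa=\frac12$ and $\zeta=1$), \refeq{FidNoise} reduces to $F_h=[2+N_0+N_h-2\sqrt{N_h(N_0+1)}]^{-1}=(1+n_h)^{-1}$, i.e. one recovers \refeq{CloFid} with $n_h$ given by \refeq{noise}.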
\subsection{Optimization of the symmetric protocol} \label{ss:optim}
In order to clarify the implications of formula (\ref{FidNoise}), let us focus our
attention on the case of symmetric cloning (recall that in this case
$N_1=\ldots=N_m=N$). Upon defining $x=\frac\kappa\zeta-\frac12$,
$\gamma_{\mbox{\tiny $T$}}=e^{-\tau_{\mbox{\tiny $T$}}}=\gamma_0\gamma_c$ and the following
function
\begin{align}
\label{f}
f(N,\gamma_0;x,\gamma_{\mbox{\tiny $T$}})=\frac{\gamma_{\mbox{\tiny $T$}}}{\gamma_0}(N-x)+\gamma_0(m\,N-x)-2\,\sqrt{\gamma_{\mbox{\tiny $T$}}\, N(m\,N+1)}
\;,
\end{align}
the fidelity reads
\begin{align}
\label{NFidSym}
F=\left\{\zeta\,f(N,\gamma_0;x,\gamma_{\mbox{\tiny $T$}})+2\,\kappa+1+\frac{\Delta}{2}
\right\}^{-1}\,.
\end{align}
Our aim is now to optimize, for a fixed amount of noise, the
shared state $|\gr{\Psi}_m\rangle$ and the location of its source between the sending and
the receiving stations. Namely, one has to find the values of $N$ and $\gamma_0$
that maximize the fidelity $F$ for fixed $\gamma_{\mbox{\tiny $T$}}$, $\kappa$,
$\zeta$, and $\Delta$. This, in turn, amounts to minimizing
$f(N,\gamma_0;x,\gamma_{\mbox{\tiny $T$}})$ for fixed $\gamma_{\mbox{\tiny $T$}}$ and $x$.
The domain over which the minimization is performed is the region $N>0$,
$\gamma_{\mbox{\tiny $T$}}<\gamma_0<1$. We will see that the possibility of varying
$\gamma_0$ proves crucial in adapting the ideal cloning
protocol, presented in Sec.~\ref{s:tlc}, to a noisy environment.
\par
Calculating the stationary points of $f(N,\gamma_0;x,\gamma_{\mbox{\tiny $T$}})$, one finds:
\begin{align}
s_1&=\left\{ N=\frac{x}{1-x(m-1)}\,,\,\gamma_0=\sqrt{\frac{\gamma_{\mbox{\tiny $T$}}\,x}{x+1}} \right\}\,, \nonumber \\
s_2&=\left\{ N=\frac{x}{m[1+x(m-1)]}\,,\,\gamma_0=\sqrt{\frac{\gamma_{\mbox{\tiny $T$}}\,(1+m\,x)}{m\,x}} \right\}\,.
\end{align}
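\par
The stationary points can be checked symbolically; the following short
\texttt{sympy} sketch (our own verification, with arbitrary sample values
satisfying the domain conditions given below) confirms that the gradient of
$f$ vanishes at both $s_1$ and $s_2$:
\begin{verbatim}
import sympy as sp

N, g0 = sp.symbols('N gamma0', positive=True)
m, x, gT = 3, sp.Rational(1, 5), sp.Rational(1, 10)  # sample values
f = gT/g0*(N - x) + g0*(m*N - x) - 2*sp.sqrt(gT*N*(m*N + 1))

s1 = {N: x/(1 - x*(m - 1)), g0: sp.sqrt(gT*x/(x + 1))}
s2 = {N: x/(m*(1 + x*(m - 1))), g0: sp.sqrt(gT*(1 + m*x)/(m*x))}
for s in (s1, s2):
    print([sp.simplify(sp.diff(f, v).subs(s)) for v in (N, g0)])
# both lines print [0, 0]
\end{verbatim}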
The points $s_1$ and $s_2$ belong to the domain for $\left\{
\gamma_{\mbox{\tiny $T$}}<\frac{x}{x+1}\,, x<\frac{1}{m-1} \right\}$ and for
$\left\{ \gamma_{\mbox{\tiny $T$}}<\frac{m\,x}{m\,x+1}\,, \forall x \right\}$,
respectively. By evaluating the Hessian matrix associated with
$f(N,\gamma_0;x,\gamma_{\mbox{\tiny $T$}})$, it follows that neither $s_1$ nor $s_2$ is an extremal
point. As a consequence, one has to look for the minimum of
$f(N,\gamma_0;x,\gamma_{\mbox{\tiny $T$}})$ along the boundary of the minimization domain. Three local minima
are found in the three regions parametrized by
$\gamma_0=\gamma_{\mbox{\tiny $T$}}$, $\gamma_0=1$, and $N\rightarrow\infty$, whereas the fourth extremum is a maximum.
In particular, in the first region the minimum is attained for
\begin{align}
N=\left\{
\begin{array}{ll}
\dfrac{1}{m(m\gamma_{\mbox{\tiny $T$}}-1)} & \;\gamma_{\mbox{\tiny $T$}}>1/m \\
-\dfrac{\gamma_{\mbox{\tiny $T$}}}{m\gamma_{\mbox{\tiny $T$}}-1} & \;\gamma_{\mbox{\tiny $T$}}<1/m
\end{array} \right.
\end{align}
(note that each expression is positive in its regime of validity).
Concerning the second and third regions, one finds that the minima
are located at
\begin{align}
N=-\frac{\gamma_{\mbox{\tiny $T$}}}{m(\gamma_{\mbox{\tiny $T$}}-m)}
\end{align}
and at
\begin{align}
\gamma_0=\sqrt{\frac{\gamma_{\mbox{\tiny $T$}}}{m}}\;,
\end{align}
respectively. By evaluating
the value of $f(N,\gamma_0;x,\gamma_{\mbox{\tiny $T$}})$ at the minima, one eventually attains the global
maximum $F^{\rm max}$ of the fidelity. A summary of the results is given in
Tab.~\ref{optim}, where we have reintroduced the effective propagation
times $\tau_{{\mbox{\tiny $T$}}}$, $\tau_0$ and defined the following quantities:
\begin{align}
F^a &= \frac{2\,m}{
-2-4\,\nu+m\left\{\Delta+2\left[2+\mu+\nu+(\nu-\mu)e^{-\tau_{\mbox{\tiny $T$}}}
\right] \right\}} \label{Fa}\,,\\
F^b &= 2\left\{\Delta+2(2+\mu+\nu)-2\,(1+\mu+\nu)e^{-\tau_{\mbox{\tiny $T$}}}
\right\}^{-1} \label{Fb}\,,\\
F^c &= \bigg\{
2+\frac\Delta2+2\mu-\sqrt{\frac{e^{-\tau_{\mbox{\tiny $T$}}}}{m}}\left[1+\mu+\nu+m(\mu-\nu)
\right] \bigg\}^{-1} \label{Fc}\,.
\end{align}
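\par
The case analysis summarized in Tab.~\ref{optim} can be packaged into a
small routine. The sketch below (Python, valid for $m\ge2$; names and
structure are our own illustration) returns the appropriate branch of the
optimal fidelity for given noise parameters and propagation time:
\begin{verbatim}
import numpy as np

def optimal_fidelity(tauT, m, mu, nu, Delta):
    """Dispatch over the rows of Tab. [optim] (symmetric protocol)."""
    zeta = 1.0 + 2.0 * nu
    x = (mu - nu) / zeta            # x = kappa/zeta - 1/2
    e = np.exp(-tauT)
    Fa = 2*m / (-2 - 4*nu + m*(Delta + 2*(2 + mu + nu + (nu - mu)*e)))
    Fb = 2.0 / (Delta + 2*(2 + mu + nu) - 2*(1 + mu + nu)*e)
    Fc = 1.0 / (2 + Delta/2 + 2*mu
                - np.sqrt(e/m)*(1 + mu + nu + m*(mu - nu)))
    if tauT < np.log(m):
        return Fa                   # first row: tau0_opt = tauT
    if x <= 0:
        return Fc                   # second row: N -> infinity
    if x < 1.0/(m - 1) and tauT < np.log((1 + x)**2/(m*x**2)):
        return Fc                   # third row
    return Fb                       # fourth and fifth rows
\end{verbatim}
As a check, for $\mu=\nu=\Delta=0$ and $\tau_{\mbox{\tiny $T$}}<\ln m$ the routine
returns $m/(2m-1)$, i.e., the optimal symmetric-cloning fidelity.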
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline $x$ & $\tau_{\mbox{\tiny $T$}}$ & $\tau_0^{\rm opt}$ & $N^{\rm opt}$ &
$F^{\rm max}$ \vspace{0cm}\\
\hline\hline
\rule[-4mm]{0mm}{1.1cm}
$\forall x$ &
$0<\tau_{\mbox{\tiny $T$}}<\ln m$ &
$\tau_{\mbox{\tiny $T$}}$ &
${\displaystyle \frac{1}{m(m\,e^{-\tau_{\mbox{\tiny $T$}}}-1)}}$ &
$F^a$ \\
\hline
\rule[-4mm]{0mm}{1.1cm}
$-\frac12<x<0$ &
$\tau_{\mbox{\tiny $T$}}>\ln m$ &
$\mbox{$\frac12$}(\tau_{\mbox{\tiny $T$}}+\ln m)$ &
$N\rightarrow\infty$ &
$F^c$ \\
\hline
\multirow{2}*{\rule[-2mm]{0mm}{1cm} $0<x<\frac{1}{m-1}$}
\rule[-4mm]{0mm}{1.1cm}&
${\displaystyle \ln m<\tau_{\mbox{\tiny $T$}}<\ln\left[\frac{(1+x)^2}{m\,x^2}\right]}$ &
$\mbox{$\frac12$}(\tau_{\mbox{\tiny $T$}}+\ln m)$ &
$N\rightarrow\infty$ &
$F^c$ \\
\cline{2-5} & \rule[-4mm]{0mm}{1.1cm}
$\tau_{\mbox{\tiny $T$}}>\ln \frac{(1+x)^2}{m\,x^2}$ &
$\tau_{\mbox{\tiny $T$}}$ &
${\displaystyle \frac{e^{-\tau_{\mbox{\tiny $T$}}}}{1-m\,e^{-\tau_{\mbox{\tiny $T$}}}}
}$ &
$F^b$ \\
\hline
\rule[-4mm]{0mm}{1.1cm}
$x>\frac{1}{m-1}$ &
$\tau_{\mbox{\tiny $T$}}>\ln m$ &
$\tau_{\mbox{\tiny $T$}}$ &
${\displaystyle \frac{e^{-\tau_{\mbox{\tiny $T$}}}}{1-m\,e^{-\tau_{\mbox{\tiny $T$}}}}}$ &
$F^b$ \\
\hline
\end{tabular}
\end{center}
\caption{ Values of the optimized $N^{\rm opt}$ and $\tau_0^{\rm
opt}$ for fixed values of $\tau_{\mbox{\tiny $T$}}$ and $x$. The value reached
by the fidelity $F^{\rm max}$ for these optimal choices is given
in the last column. \label{optim}}
\end{table}
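\par
The boundary analysis leading to Tab.~\ref{optim} can also be cross-checked
by brute force. The sketch below (our own check; the parameters are
arbitrary sample values falling in the $F^c$ regime) minimizes $f$ on a
grid and recovers $\gamma_0^{\rm opt}=\sqrt{\gamma_{\mbox{\tiny $T$}}/m}$, with the
minimum running to the large-$N$ edge of the grid:
\begin{verbatim}
import numpy as np

def f(N, g0, x, gT, m):
    return gT/g0*(N - x) + g0*(m*N - x) - 2*np.sqrt(gT*N*(m*N + 1))

m, mu, nu = 2, 0.03, 0.01
x  = (mu - nu)/(1 + 2*nu)
gT = 0.3                      # tauT = -ln(0.3) ~ 1.20 > ln 2
Ns  = np.linspace(1e-4, 50, 4000)
g0s = np.linspace(gT + 1e-4, 1.0, 400)
vals = f(Ns[:, None], g0s[None, :], x, gT, m)
iN, ig = np.unravel_index(np.argmin(vals), vals.shape)
print(Ns[iN], g0s[ig], np.sqrt(gT/m))
# N runs to the grid edge, and g0 ~ 0.387 = sqrt(gT/m)
\end{verbatim}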
\par
An inspection of Tab.~\ref{optim} shows very interesting features of
the telecloning protocol in the presence of noise. It is immediately
apparent that the optimal value $N^{\rm opt}$ is significantly
different from the optimal value in the ideal case [\refeq{NOptId}];
as a matter of fact, $N^{\rm opt}$ is divergent in some cases.
Remarkably, the homodyne detection efficiency $\Delta$ plays no role in
the optimization of $N$ and $\tau_0$, whereas the thermal noises $\mu$
and $\nu$ affect the optimal choices only through the combination $x$.
Furthermore, one may note that what we have called the
best location of the source (that is, $\tau_0^{\rm opt}$) is never
given by the simple choices $\tau_0=0$ or $\tau_0=\tau_{\mbox{\tiny $T$}}/2$. In
order to clarify this point, let us first consider the case $\tau_0=0$,
which can be physically implemented by homodyning mode $a_0$
immediately after the generation of $|\gr{\Psi}_m\rangle$, and then letting the
other modes propagate to the receiving station, where they are
eventually displaced. An immediate calculation shows that in this case
the fidelity (\ref{NFidSym}) is maximized for $N^{\rm
opt}=1/[m(m\,e^{\tau_{\mbox{\tiny $T$}}}-1)]$ and is given by
\begin{align}
F^{\rm max}(\tau_0=0)=\frac{2\,m}{m\left[\Delta+2(2+\mu+\nu)\right]-2\,e^{-\tau_{\mbox{\tiny $T$}}}\left[1+2\,\nu+m(\mu-\nu)\right]}\;.
\label{FeasF1}
\end{align}
Concerning the case $\tau_0=\tau_{\mbox{\tiny $T$}}/2$, whose physical
implementation simply amounts to placing the source of $|\gr{\Psi}_m\rangle$ in the middle
of the transmission line, one finds that the fidelity is maximized for
$N^{\rm opt}=1/[m(m-1)]$ and reads
\begin{align}
F^{\rm max}(\tau_0=\tau_{\mbox{\tiny $T$}}/2)
=\frac{2\,m}{m(4+\Delta+4\,\mu)-2e^{-\tau_{\mbox{\tiny $T$}}/2}\left[1+2\,\nu+2\,m(\mu-\nu)\right]}\;.
\label{FeasF2}
\end{align}
Notice that only in this case does the optimization over $N$ lead to the
same $|\gr{\Psi}_m\rangle$ as in the ideal case (see \refeq{NOptId}). A comparison
of the last two instances with the optimal one shows how
significantly the choice of $\tau_0$ affects the value of the clones'
fidelity. In Figs.~\ref{f:OptVsFeas}, \ref{f:OptVsFeas2} and
\ref{f:OptVsFeas3} we compare the two fidelities given in
Eqs.~(\ref{FeasF1}) and (\ref{FeasF2}) with the one given in
Tab.~\ref{optim} (see captions for details). We clearly see that the
optimized fidelity is much larger than the other two, thus providing
cloning beyond the classical limit for longer propagation times
$\tau_{\mbox{\tiny $T$}}$. As is apparent from Fig.~\ref{f:OptVsFeas3}, we have
$F^b < \frac12$ $\forall \tau_{\mbox{\tiny $T$}}$. Indeed, it can be shown
analytically that $F^b<\frac12$ in any regime for which $F^{\rm
max}=F^b$.
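\par
The gap between the three strategies is easily quantified. The short
sketch below evaluates the three fidelities at a sample point, with
parameters as in Fig.~\ref{f:OptVsFeas}; the numerical values quoted in
the final comment are our own evaluation:
\begin{verbatim}
import numpy as np

mu, nu, Delta, m, tauT = 0.03, 0.01, 0.02, 2, 0.3   # tauT < ln 2
e = np.exp(-tauT)

F_opt = 2*m / (-2 - 4*nu + m*(Delta + 2*(2 + mu + nu + (nu - mu)*e)))
F_zero = 2*m / (m*(Delta + 2*(2 + mu + nu))
                - 2*e*(1 + 2*nu + m*(mu - nu)))      # Eq. (FeasF1)
F_mid = 2*m / (m*(4 + Delta + 4*mu)
               - 2*np.exp(-tauT/2)*(1 + 2*nu + 2*m*(mu - nu)))  # (FeasF2)
print(F_opt, F_mid, F_zero)    # ~0.656 > ~0.626 > ~0.603
\end{verbatim}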
\begin{figure}[h]
\vspace{2cm}
\setlength{\unitlength}{0.4cm}
\centerline{%
\begin{picture}(15,0)
\put(0,0){\makebox(-3,0)[c]{\epsfxsize=5cm\epsffile{OVsF_rpm2.eps}}}
\put(3,-4.4){$\tau_{\mbox{\tiny $T$}}$}
\put(-8.5,3.8){$F^{\rm max}$}
\put(2,2.8){\footnotesize $m=2$}
\end{picture}
\begin{picture}(15,0)
\put(0,0){\makebox(-3,0)[c]{\epsfxsize=5cm\epsffile{OVsF_rpm5.eps}}}
\put(3,-4.4){$\tau_{\mbox{\tiny $T$}}$}
\put(-8.5,3.8){$F^{\rm max}$}
\put(1.6,2.5){\footnotesize $m=5$}
\end{picture}
\begin{picture}(-4,0)
\put(0,0){\makebox(-3,0)[c]{\epsfxsize=5cm\epsffile{OVsF_rpm10.eps}}}
\put(3,-4.4){$\tau_{\mbox{\tiny $T$}}$}
\put(-8.5,3.8){$F^{\rm max}$}
\put(1.3,2.5){\footnotesize $m=10$}
\end{picture}
}
\vspace{1.8cm}
\caption{Comparison of the fidelities given in Eqs.~(\ref{FeasF1}) (dotted line) and (\ref{FeasF2}) (dashed line) with the one
given in Tab.~\ref{optim} (solid line). As an example, we have
chosen the following parameters: $\mu=0.03$, $\nu=0.01$, and
$\Delta=0.02$ ($x=1/51$). The plots refer to the cases
$m=2,5,10$, and the vertical lines correspond to $\tau_{\mbox{\tiny $T$}}=\ln 2$,
$\tau_{\mbox{\tiny $T$}}=\ln 5$, and $\tau_{\mbox{\tiny $T$}}=\ln 10$, respectively.
According to Tab.~\ref{optim}, the optimal fidelity is given by
\refeq{Fa} and \refeq{Fc} to the left and to the right of the
vertical lines, respectively.}
\label{f:OptVsFeas}
\end{figure}
\begin{figure}[h]
\vspace{1.5cm}
\setlength{\unitlength}{0.4cm}
\centerline{%
\begin{picture}(15,0)
\put(0,0){\makebox(-3,0)[c]{\epsfxsize=5cm\epsffile{OVsF_rnm2.eps}}}
\put(3,-4.4){$\tau_{\mbox{\tiny $T$}}$}
\put(-8.5,3.8){$F^{\rm max}$}
\put(2,2.8){\footnotesize $m=2$}
\end{picture}
\begin{picture}(15,0)
\put(0,0){\makebox(-3,0)[c]{\epsfxsize=5cm\epsffile{OVsF_rnm3.eps}}}
\put(3,-4.4){$\tau_{\mbox{\tiny $T$}}$}
\put(-8.5,3.8){$F^{\rm max}$}
\put(1.6,2.8){\footnotesize $m=3$}
\end{picture}
\begin{picture}(-4,0)
\put(0,0){\makebox(-3,0)[c]{\epsfxsize=5cm\epsffile{OVsF_rnm5.eps}}}
\put(3,-4.4){$\tau_{\mbox{\tiny $T$}}$}
\put(-8.5,3.8){$F^{\rm max}$}
\put(1.3,2.5){\footnotesize $m=5$}
\end{picture}
}
\vspace{1.8cm}
\caption{Comparison of the fidelities given in Eqs.~(\ref{FeasF1}) (dotted line) and (\ref{FeasF2}) (dashed line) with the one
given in Tab.~\ref{optim} (solid line). As an example of the case
$\mu<\nu$, we have chosen the following parameters: $\mu=0.05$,
$\nu=0.2$, and $\Delta=0.05$ ($x=-3/28$). The plots refer to
the cases $m=2,3,5$, and the vertical lines correspond to
$\tau_{\mbox{\tiny $T$}}=\ln 2$, $\tau_{\mbox{\tiny $T$}}=\ln 3$, and $\tau_{\mbox{\tiny $T$}}=\ln 5$,
respectively. According to Tab.~\ref{optim}, the optimal fidelity
is given by \refeq{Fa} and \refeq{Fc} to the left and to the right
of the vertical lines, respectively.}
\label{f:OptVsFeas2}
\end{figure}
\begin{figure}[h]
\vspace{1.5cm}
\setlength{\unitlength}{0.4cm}
\centerline{%
\begin{picture}(15,0)
\put(0,0){\makebox(-3,0)[c]{\epsfxsize=5cm\epsffile{OVsF_rgm2.eps}}}
\put(3,-4.4){$\tau_{\mbox{\tiny $T$}}$}
\put(-8.5,3.8){$F^{\rm max}$}
\put(2,2.8){\footnotesize $m=2$}
\end{picture}
\begin{picture}(-5,0)
\put(0,0){\makebox(-3,0)[c]{\epsfxsize=5cm\epsffile{OVsF_rgm3.eps}}}
\put(3,-4.4){$\tau_{\mbox{\tiny $T$}}$}
\put(-8.5,3.8){$F^{\rm max}$}
\put(1.7,2.6){\footnotesize $m=3$}
\end{picture}
}
\vspace{1.8cm}
\caption{Comparison of the fidelities given in Eqs.~(\ref{FeasF1}) (dotted line) and (\ref{FeasF2}) (dashed line) with the one
given in Tab.~\ref{optim} (solid line). As an example, we have
chosen the following parameters: $\mu=0.6$, $\nu=0.01$, and
$\Delta=0.1$ ($x=59/102$). The plots refer to the cases
$m=2,3$, and the vertical lines correspond to $\tau_{\mbox{\tiny $T$}}=\ln 2$
and $\tau_{\mbox{\tiny $T$}}=\ln 3$, respectively. According to
Tab.~\ref{optim}, for $m=2$ the optimal fidelity is given
by \refeq{Fa} and \refeq{Fc} to the left and to the right of the
vertical line, respectively; for $m=3$, it is instead
given by \refeq{Fb} to the right of the vertical line. }
\label{f:OptVsFeas3}
\end{figure}
\par
Besides the features pointed out above, the most striking property of the
proposed telecloning protocol is that it saturates the bound for
optimal cloning even in the presence of losses, for propagation times
$\tau_{\mbox{\tiny $T$}}<\ln m$, a threshold that diverges as the number of modes increases.
More specifically, consider the first row in Tab.~\ref{optim} and set
$\mu=\nu=\Delta=0$. Then, one has that for $\tau_{\mbox{\tiny $T$}}<\ln m$ the
maximum fidelity is given by $F^{\rm max}=m/(2m-1)$. That is, the
optimal fidelity for symmetric cloning can still be attained by
carefully choosing $N$ and $\tau_0$. Such a result cannot be achieved
by letting the input state propagate directly to the receiving station
and then cloning it locally \cite{besancon}. Thus, in the context of our protocol the
entangled resource significantly enhances the capacity to distribute
quantum information. This is due to the fact that the transmission
through a lossy channel of an unknown coherent state irreversibly
degrades the information encoded in it, thus preventing the local
construction of optimal clones at the receiving station. On the other
hand, multimode entanglement is robust against this type of noise and,
even though it is degraded along the transmission line, it remains
sufficient to provide optimal cloning. In fact, there is no need for an
infinite amount of entanglement to perform an optimal telecloning process.
\par
Concerning the case of longer transmission times, {\em i.e.}
$\tau_{\mbox{\tiny $T$}}>\ln m$, the fidelity reads (again for $\mu=\nu=\Delta=0$)
\begin{align}
F^{\rm max}=\left(2-\sqrt{\frac{e^{-\tau_{\mbox{\tiny $T$}}}}{m}}\right)^{-1}.
\label{LossyFLong}
\end{align}
Since $0<\sqrt{e^{-\tau_{\mbox{\tiny $T$}}}/m}<1$, Eq.~(\ref{LossyFLong}) shows that the fidelity is always greater than the
classical bound $F=\frac12$, which in turn means that the state used
to support the protocol is entangled for any $\tau_{\mbox{\tiny $T$}}$. This is
reminiscent of the result already pointed out in
Sec.~\ref{s:NoisySUm1}, where full inseparability was proved
for any $\tau_{\mbox{\tiny $T$}}$ for $m=2$ (notice that $\tau_0=\tau_{\mbox{\tiny $T$}}/2$ in the case
of Sec.~\ref{s:NoisySUm1}). Here, we have proved that the same conclusion
holds for any $m$.
\par
Another interesting feature in the case $\tau_{\mbox{\tiny $T$}}<\ln m$ is that
$F^{\rm max}$ does not depend on $\tau_{\mbox{\tiny $T$}}$ if $\mu=\nu$. Moreover,
it turns out that for $\mu=0$ and $\nu\neq0$ it is better to let the
entangled resource propagate (up to $\tau_{\mbox{\tiny $T$}}=\ln m$) than to use
it immediately after the generation. This effect may be naively
understood by considering that the entangled state generated for
$\nu\neq0$ is mixed and, as a consequence, the propagation in a purely
dissipative environment acts as a sort of purification process on
it. As is apparent from \refeq{Fa}, this effect is present whenever
$\mu<\nu$ (see also Fig.~\ref{f:OptVsFeas2}).
\par
Finally, a comment is in order concerning the scaling of the fidelity with respect
to the number of modes $m$. We have already pointed out that, for the
case $\mu=\nu=\Delta=0$, the fidelity remains optimal for times
$\tau_{\mbox{\tiny $T$}}$ diverging with the number of modes. However, when
thermal noise is added ($\mu,\nu,\Delta\neq0$), the fidelity drops below
the classical value $F=\frac12$ for times $\tau_{\mbox{\tiny $T$}}$ that become
smaller as $m$ increases, as is apparent from
Figs.~\ref{f:OptVsFeas}, \ref{f:OptVsFeas2} and \ref{f:OptVsFeas3}.
Indeed, this is consistent with the fact that the optimal fidelity itself
approaches the classical value $F=\frac12$ as $m$ increases. Hence, for
large $m$, even a small amount of thermal noise is enough to cancel the
benefits due to quantum entanglement.
\section{Conclusions}\label{esco}
In this paper we have dealt with the properties and applications of a
class of multimode states of radiation, the coherent states of the group
${\rm SU}(m,1)$, which represent a potential resource for multiparty
quantum communication, as recent theoretical and experimental
investigations have shown. In particular, the common structure of these
multimode states allowed us to consider a $1\rightarrow m$ telecloning
scheme in which a generic coherent state of ${\rm SU}(m,1)$ plays the
role of the entangled resource. Exploiting the possible asymmetry of
${\rm SU}(m,1)$ coherent states, we have suggested the first example, in
the framework of CV systems, of a fully asymmetric $1\rightarrow m$
cloning, and have found the optimal relation, within our scheme,
between the different fidelities of the clones. In particular, we have
shown that when $(m-1)$ optimal clones are produced (in accordance with
the general bound imposed by quantum mechanics), some quantum
information still remains at our disposal. In fact, our protocol is able
to use the remaining information to realize a non-trivial $m$-th
clone. Our asymmetric scheme is aimed at the distribution of quantum
information among many parties \cite{qid}, and may find application
for quantum cryptographic purposes \cite{gisin}.
\par
In view of possible applications of our protocol in realistic
situations, we have considered the effects of noise at the various
stages of the protocol, {\em i.e.}, the presence of thermal photons in
the generation process, thermal noise and losses during propagation,
and non-unit efficiency in the detection. We have derived the
fidelities of the clones as functions of the noise parameters, which
in turn allowed for an adaptive modification of the protocol to counter
the detrimental effects of noise. In particular, we have shown that the
optimal entangled resource in the presence of noise is significantly
different from the one in the ideal case. The location of the
source also plays a prominent role: we have demonstrated that the
optimal location is neither in the middle between the sender and the
receiver, nor at the sender station. A striking feature of the
optimized protocol is that, even in the presence of losses along the
propagation line, the clones' fidelity remains maximal for propagation
times below a threshold that diverges as the number of modes increases,
a result which is not achievable by means of direct transmission
followed by local cloning. We then conclude that our optimized
telecloning protocol is robust against noise.
\section*{Acknowledgments}
The authors are grateful to S.~Olivares for fruitful discussions. This work has
been partially supported by MIUR (FIRB RBAU014CLC-002) and by INFM
(PRA-CLON).
\section{Introduction}\label{sec1}
The coupled-cluster (CC) theory is considered to be the gold standard of electronic structure calculations in atoms and molecules~\cite{Kaldor, Nataraj}. It owes this title to its ability to capture electron correlation effects to a much better extent than other well-known many-body approaches, such as configuration interaction (CI)~\cite{Bishop}, at a given level of truncation. This feature has led to accurate calculations of many properties in both atomic and molecular systems (see, for example, Refs.~\cite{BKS,MA}). We shall focus on the application of this method to evaluate molecular properties that are useful for probing fundamental physics, specifically the permanent electric dipole moment (PDM) and the parity- and time-reversal-violating electric dipole moment of the electron (eEDM) \cite{Landau,Luders}. The molecular PDM is a very interesting property, and it enters the sensitivity of an eEDM experiment through the polarizing factor \cite{HgX,FFCC}. The PDM is also an extremely relevant property in the ultracold sector, and molecules with large PDMs find innumerable applications in that domain. For example, the SrF molecule possesses a fairly large PDM and hence gives rise to long-range, tunable, and anisotropic dipole-dipole interactions. This aspect, in combination with the fact that SrF is laser-coolable, makes the molecule important for applications such as exploring new quantum phases and quantum computing \cite{Shuman}.
\begin{figure}[h!]
\centering
\includegraphics[width=13.00cm,height=7.0cm]{lecc_new.jpg}
\caption{Goldstone diagrams representing the linear terms of the expectation-value expression in the LERCCSD method. The notations $i, j, k, \cdots$ denote hole lines, while $a, b, c, \cdots$ denote particle lines. Diagram (i) corresponds to the contribution from the DF method, (ii) is from the $OT_1$ term, (iii) and (iv) are from $T_1^\dag O T_1$, and (v) through (viii) are diagrams for $T_1^\dag O T_2$, with (v) and (vii) corresponding to direct terms and (vi) and (viii) to exchange terms. Sub-figures (ix) to (xvi) include the direct and exchange diagrams from $T_2^\dag O T_2$. We also note that the hermitian conjugates of the diagrams given above are not explicitly sketched here. }
\label{fig:figure1}
\end{figure}
\begin{table}
\centering
\caption{\label{tab:table1} Contributions from the DF, LERCCSD, and nLERCCSD methods to the $\mathcal{E}_\mathrm{eff}$s (in GV/cm) and PDMs (in Debye) of the HgX, SrF, and BaF molecules from the present work (denoted as `This work' in the table). Results for the two properties from other works are also presented for comparison.}
\begin{tabular}{llcc}
\hline
\hline
Molecule& Method & PDM&$\mathcal{E}_\mathrm{eff}$\\
\hline
SrF & CASSCF-MRCI \cite{Jardali} & 3.36&\\
& CASSCF-RSPT2 \cite{Jardali} & 3.61&\\
& Z-vector \cite{Sasmal} & 3.45&\\
& LERCCSD \cite{AEM,FFCC} & 3.6&2.17\\
& FFCCSD \cite{FFCC} & 3.62&2.16\\
& X2C-MRCI \cite{Hao} & 3.20&\\
& X2C-FSCC \cite{Hao} & 3.46&\\
&DF ({\bf This work}) & 2.99&1.54\\
& LERCCSD ({\bf This work}) & 3.57& 2.15\\
& nLERCCSD ({\bf This work}) & 3.60 & 2.16 \\
& Experiment~\cite{SrFexpt} & 3.4676(1)& \\
BaF & MRCI \cite{Tohme} & 2.96&\\
& LERCCSD \cite{AEM} & 3.4&6.50\\
& FFCCSD \cite{FFCC} & 3.41 & 6.46\\
& X2C-MRCI \cite{Hao} & 2.90 &\\
& X2C-FSCC \cite{Hao} & 3.23 &\\
& Z-vector \cite{Talukdar} & 3.08 &\\
& ECP-RASSCF \cite{Kozlov} & & 7.5\\
& RASCI \cite{Nayak} & &7.28\\
& MRCI \cite{Meyer} & & 5.1\\
& MRCI \cite{Meyer2} & & 6.1\\
&DF ({\bf This work}) & 2.61&4.81\\
& LERCCSD {\bf (This work)} &3.32&6.45\\
& nLERCCSD {\bf (This work)} & 3.37 & 6.39 \\
&Experiment (PDM)~\cite{BaFexpt} & 3.17(3) &\\
HgF & CI \cite{Yu Yu} & 4.15 & 99.26\\
& LERCCSD \cite{AJP} &2.61 &\\
& MRCI \cite{Meyer} & & 68\\
& MRCI \cite{Meyer2} & & 95 \\
&DF ({\bf This work}) & 4.11&105.69\\
& LERCCSD \cite{HgX} & &115.42\\
& FFCCSD \cite{FFCC} & 2.92 & 116.37\\
& LERCCSD {\bf (This work)} & 3.25 & 114.93\\
& nLERCCSD {\bf (This work)} & 3.45 & 113.77 \\
HgCl & CI \cite{Wadt} & 3.28 & \\
& LERCCSD \cite{AJP} & 2.72 & \\
& LERCCSD \cite{HgX} & & 113.56 \\
& FFCCSD \cite{FFCC} & 2.96 & 114.31 \\
&DF ({\bf This work}) & 4.30&104.33\\
& LERCCSD {\bf (This work)} & 3.26 & 112.51 \\
& nLERCCSD {\bf (This work)} & 3.45 & 110.94 \\
HgBr & CI \cite{Wadt} & 2.62 & \\
& LERCCSD \cite{AJP} & 2.36 & \\
& LERCCSD \cite{HgX} & & 109.29 \\
& FFCCSD \cite{FFCC} & 2.71 & 109.56 \\
&DF ({\bf This work}) & 4.14&99.72\\
& LERCCSD {(\bf This work)} & 2.62 & 109.38 \\
& nLERCCSD {(\bf This work)} & 2.94 & 107.42 \\
HgI & LERCCSD \cite{AJP} & 1.64 &\\
& LERCCSD \cite{HgX} & & 109.3\\
& FFCCSD \cite{FFCC} & 2.06 & 109.56\\
&DF ({\bf This work}) & 3.61&99.27\\
& LERCCSD {(\bf This work)} & 1.50 & 110.00\\
& nLERCCSD {(\bf This work)} & 2.01 & 107.38 \\
\hline \hline
\end{tabular}
\\
* The bond lengths chosen in our work are 2.00686~\AA, 2.42~\AA, 2.62~\AA, 2.81~\AA, 2.075~\AA, and 2.16~\AA\ for HgF, HgCl, HgBr, HgI, SrF,
and BaF, respectively. We used Dyall's quadruple zeta (QZ) basis for Hg and I, Dunning's correlation consistent polarized valence quadruple zeta (cc-pVQZ) basis for the halide atoms (F, Cl, and Br), and Dyall's QZ functions augmented with Sapporo's diffuse functions for Sr and Ba.
\end{table}
\begin{sidewaystable}
\begin{center}
\caption{\label{tab:table2} Individual correlation contributions to the effective electric fields (in GV/cm) of the mercury monohalides (HgX; X$=$F, Cl, Br, and I), SrF, and BaF, from the LERCCSD (abbreviated as `L') and nLERCCSD (denoted by `nL') methods. In the first column, $A$ could be $O$ (which corresponds to LERCCSD diagrams) or $O_{x-y}$ (which is associated with nLERCCSD diagrams), where `x' and `y' stand for the corresponding particle or hole line for a given term. The values are all rounded off to two decimal places for HgX, while numbers that are extremely small in the case of SrF and BaF are given in scientific notation instead. }
\begin{tabular}{l|l|cc|cc|cc|cc|cc|cc}
\hline
\hline
\multicolumn{2}{c|}{Molecule} & \multicolumn{2}{c|}{HgF} & \multicolumn{2}{c|}{HgCl}& \multicolumn{2}{c|}{HgBr}& \multicolumn{2}{c|}{HgI}& \multicolumn{2}{c|}{SrF}& \multicolumn{2}{c}{BaF} \\ \hline
Term&Diagram&L&nL&L&nL&L&nL&L&nL&L&nL&L&nL\\ \hline
&&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c}{}\\
DF&Fig. 1(i)&\multicolumn{2}{c|}{105.69}&\multicolumn{2}{c|}{104.33}&\multicolumn{2}{c|}{99.72}&\multicolumn{2}{c|}{99.27}&\multicolumn{2}{c|}{1.54}&\multicolumn{2}{c}{4.81} \\
&&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c}{}\\ \hline
&&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c}{}\\
$AT_1$&Fig. 1(ii)&17.09&13.11&17.05&12.21&19.83&14.76&23.85&15.71&0.63&0.61&1.79&1.60\\
&&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c}{}\\ \hline
&&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c}{}\\
$T_1^\dag A T_1$&Fig. 1(iii)&$-$1.85&$-$0.28&$-$2.01&$-$0.25&$-$2.65&$-$0.62&$-$3.66&$-$0.41&$-$1.86$\times 10^{-2}$&$-$1.00$\times 10^{-3}$&$-$7.65$\times 10^{-2}$&$-$2.00$\times 10^{-4}$\\
&Fig. 1(iv)&$-$1.41&0.16&$-$1.40&0.28&$-$1.21&0.47&$-$1.56&1.16&$-$9.01$\times 10^{-3}$&4.80$\times 10^{-4}$&$-$6.47$\times 10^{-2}$&7.60$\times 10^{-3}$\\
&&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c}{}\\ \hline
&&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c}{}\\
$T_1^\dag A T_2$&Fig. 1(v)&1.19&0.93&0.65&0.29&0.38&$-$0.11&0.38&$-$0.27&2.73$\times 10^{-3}$&1.02$\times 10^{-3}$&9.46$\times 10^{-3}$&2.51$\times 10^{-3}$\\
&Fig. 1(vi)&0.05&0.08&0.06&0.05&$-$0.01&$-$0.07&$-$0.03&$-$0.09&$-$4.91$\times 10^{-4}$&$-$7.93$\times 10^{-4}$&$-$1.49$\times 10^{-3}$&$-$2.39$\times 10^{-3}$\\
&Fig. 1(vii)&0.61&0.58&0.92&0.85&0.66&0.32&0.57&0.19&1.43$\times 10^{-2}$&1.48$\times 10^{-2}$&7.04$\times 10^{-2}$&7.13$\times 10^{-2}$\\
&Fig. 1(viii)&$-$1.31&$-$1.27&$-$1.24&$-$1.18&$-$0.91&$-$0.63&$-$1.26&$-$0.98&9.63$\times 10^{-3}$&9.91$\times 10^{-3}$&$-$2.49$\times 10^{-2}$&$-$2.32$\times 10^{-2}$\\
&&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c}{}\\ \hline
&&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c}{}\\
$T_2^\dag A T_2$&Fig. 1(ix)&$-$2.50&$-$2.46&$-$2.54&$-$2.49&$-$2.68&$-$2.65&$-$2.93&$-$2.89&8.58$\times 10^{-3}$&6.15$\times 10^{-3}$&3.22$\times 10^{-2}$&2.17$\times 10^{-2}$\\
&Fig. 1(x)&$-$0.17&$-$0.17&$-$0.15&$-$0.14&$-$0.14&$-$0.13&$-$0.13&$-$0.11&$-$2.17$\times 10^{-3}$&$-$1.93$\times 10^{-3}$&$-$6.87$\times 10^{-3}$&$-$6.83$\times 10^{-3}$\\
&Fig. 1(xi)&$-$1.22&$-$1.40&$-$1.50&$-$1.47&$-$1.65&$-$1.85&$-$1.96&$-$1.99&$-$1.96$\times 10^{-2}$&$-$2.17$\times 10^{-2}$&$-$7.54$\times 10^{-2}$&$-$7.71$\times 10^{-2}$\\
&Fig. 1(xii)&$-$0.17&$-$0.17&$-$0.15&$-$0.14&$-$0.14&$-$0.13&$-$0.13&$-$0.11&$-$2.17$\times 10^{-3}$&$-$1.93$\times 10^{-3}$&$-$6.87$\times 10^{-3}$&$-$6.83$\times 10^{-3}$\\
&Fig. 1(xiii)&$-$1.64&$-$1.57&$-$1.67&$-$1.57&$-$1.70&$-$1.58&$-$1.84&$-$1.69&$-$1.38$\times 10^{-3}$&$-$1.96$\times 10^{-3}$&$-$3.20$\times 10^{-4}$&$-$9.61$\times 10^{-4}$\\
&Fig. 1(xiv)&$-$0.10&$-$0.10&$-$0.10&$-$0.10&$-$0.10&$-$0.10&$-$0.10&$-$0.10&$-$5.39$\times 10^{-4}$&$-$5.53$\times 10^{-4}$&$-$2.28$\times 10^{-3}$&$-$2.33$\times 10^{-3}$\\
&Fig. 1(xv)&0.77&0.74&0.36&0.37&0.08&0.12&$-$0.37&$-$0.21&4.42$\times 10^{-3}$&4.51$\times 10^{-3}$&2.82$\times 10^{-3}$&3.18$\times 10^{-3}$\\
&Fig. 1(xvi)&$-$0.10&$-$0.10&$-$0.10&$-$0.10&$-$0.10&$-$0.10&$-$0.10&$-$0.10&$-$5.39$\times 10^{-4}$&$-$5.53$\times 10^{-4}$&$-$2.28$\times 10^{-3}$&$-$2.33$\times 10^{-3}$\\
&&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c}{}\\ \hline
\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c}{}\\
\multicolumn{2}{c|}{Total}&114.93&113.77&112.51&110.94&109.38&107.42&110.00&107.38&2.15&2.16&6.45&6.39\\
\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c}{}\\
\hline \hline
\end{tabular}
\end{center}
\end{sidewaystable}
The extremely tiny eEDM is yet to be detected. Upper bounds to it are extracted by a combination of relativistic many-body theory and experiment \cite{ACME2018}. These bounds, in turn, help to constrain several theories that lie beyond the Standard Model of particle physics, for example, supersymmetric theories \cite{Cesarotti}. A knowledge of the eEDM also aids in understanding the underlying physics that describes the matter-antimatter asymmetry in the universe~\cite{Fuyuto}. The theoretical molecular property of interest to eEDM searches is the effective electric field, $\mathcal{E}_\mathrm{eff}$. It is the internal electric field experienced by an electron due to the other electrons and the nuclei in a molecule. An accurate estimate of this quantity is used in setting or improving an upper bound on the eEDM (for example, Ref. \cite{MA}), or in proposing a new candidate for molecular eEDM experiments (for example, Ref. \cite{HgX}). This quantity can only be obtained using a \textit{relativistic} many-body theory~\cite{BPD}. Calculating the PDM provides information on the polarizing factor for molecules that are proposed for eEDM searches but for which the property has not been measured.
There have been several calculations of $\mathcal{E}_\mathrm{eff}$ for various molecules using the singles and doubles excitation approximation in the relativistic CC theory (the RCCSD method); see, for example, Refs.~\cite{SPbF,SRaF}. In our earlier RCCSD calculations \cite{MA,HgX,Sunaga,HgA,RaH,YbOH}, the expectation-value expression was truncated to its linear terms (referred to as the LERCCSD method).
Later, the calculations performed for HgX (X=F, Cl, Br, and I), SrF, and BaF, besides other molecules, were verified by using the finite-field energy-derivative approach of the RCCSD theory (the FFRCCSD method)~\cite{FFCC}, by adding the interaction Hamiltonians along with the residual Coulomb interaction operator. The LERCCSD and the FFRCCSD approaches showed excellent agreement (within 1 percent) in the values of $\mathcal{E}_\mathrm{eff}$. The results for the PDMs obtained with these methods were comparable for SrF and BaF, both overestimating the property with respect to the experimental values, but they differed substantially for HgX (by as much as 20 percent for HgI). The shortcomings of the above FFRCCSD method were that the accuracy of the results depended on numerical differentiation, and that orbital relaxation effects were neglected, since the perturbation was not included at the Dirac-Fock (DF) level itself in order to avoid breaking Kramers symmetry in the presence of a time-reversal symmetry violating eEDM interaction; this has to eventually be compensated for with further iterations.
\begin{sidewaystable}
\begin{center}
\caption{\label{tab:table3} Correlation contributions to the PDMs (in Debye) of the mercury monohalides (HgX; X$=$F, Cl, Br, and I), SrF, and BaF. The notation is the same as in Table \ref{tab:table2}. The entry `NC' stands for the nuclear contribution to the PDM. }
\begin{tabular}{l|l|cc|cc|cc|cc|cc|cc}
\hline
\hline
\multicolumn{2}{c|}{Molecule} & \multicolumn{2}{c|}{HgF} & \multicolumn{2}{c|}{HgCl}& \multicolumn{2}{c|}{HgBr}& \multicolumn{2}{c|}{HgI}& \multicolumn{2}{c|}{SrF}& \multicolumn{2}{c}{BaF} \\ \hline
Term&Diagram&L&nL&L&nL&L&nL&L&nL&L&nL&L&nL\\ \hline
&&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c}{}\\
DF&Fig. 1(i)&\multicolumn{2}{c|}{$-$767.04}&\multicolumn{2}{c|}{$-$925.61}&\multicolumn{2}{c|}{$-$1002.31}&\multicolumn{2}{c|}{$-$1075.83}&\multicolumn{2}{c|}{$-$375.75}&\multicolumn{2}{c}{$-$578.39}\\
&&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c}{}\\ \hline
&&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c}{}\\
$AT_1$&Fig. 1(ii)&$-$0.60&$-$0.78&$-$0.83&$-$1.01&$-$1.26&$-$1.54&$-$1.92&$-$2.33&0.63&0.65&0.80&0.82\\
&&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c}{}\\ \hline
&&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c}{}\\
$T_1^\dag A T_1$&Fig. 1(iii)&0.21&0.04&0.26&0.06&0.34&0.11&0.48&0.23&0.14&$-$0.01&0.19&$-$0.02\\
&Fig. 1(iv)&$-$0.45&0.05&$-$0.48&0.07&$-$0.62&0.13&$-$0.79&0.26&$-$0.18&$-$0.01&$-$0.23&$-$0.02\\
&&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c}{}\\ \hline
&&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c}{}\\
$T_1^\dag A T_2$&Fig. 1(v)&0.10&0.11&0.13&0.13&0.20&0.19&0.30&0.29&2.44$\times 10^{-2}$&2.53$\times 10^{-2}$&3.04$\times 10^{-2}$&3.13$\times 10^{-2}$\\
&Fig. 1(vi)&0.01&0.01&0.00&0.00&0.00&0.00&0.00&0.00&$-$1.99$\times 10^{-3}$&2.13$\times 10^{-3}$&2.40$\times 10^{-3}$&2.55$\times 10^{-3}$\\
&Fig. 1(vii)&0.01&0.01&0.01&$-$0.01&$-$0.01&$-$0.03&0.01&$-$0.01&9.48$\times 10^{-3}$&9.15$\times 10^{-3}$&9.46$\times 10^{-3}$&9.42$\times 10^{-3}$\\
&Fig. 1(viii)&0.02&0.01&0.01&0.01&0.02&0.03&0.04&0.04&1.47$\times 10^{-3}$&1.45$\times 10^{-3}$&$-$4.02$\times 10^{-3}$&4.10$\times 10^{-3}$\\
&&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c}{}\\ \hline
&&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c}{}\\
$T_2^\dag A T_2$&Fig. 1(ix)&1.19&1.19&1.48&1.47&1.66&1.66&1.84&1.85&0.82&0.82&0.98&0.97\\
&Fig. 1(x)&$-$0.01&$-$0.01&0.00&0.00&0.00&0.00&0.00&0.00&9.92$\times 10^{-4}$&1.57$\times 10^{-3}$&$-$2.43$\times 10^{-3}$&$-$2.99$\times 10^{-3}$\\
&Fig. 1(xi)&1.14&1.16&1.40&1.41&1.57&1.62&1.73&1.81&0.79&0.78&0.95&0.94\\
&Fig. 1(xii)&$-$0.01&$-$0.01&0.00&0.00&0.00&0.00&0.00&0.00&9.22$\times 10^{-4}$&1.57$\times 10^{-3}$&$-$2.43$\times 10^{-3}$&$-$2.99$\times 10^{-3}$\\
&Fig. 1(xiii)&$-$1.26&$-$1.25&$-$1.53&$-$1.52&$-$1.72&$-$1.70&$-$1.91&$-$1.89&$-$0.84&$-$0.84&$-$1.01&$-$1.01\\
&Fig. 1(xiv)&0.01&0.01&0.01&0.01&0.00&0.00&0.01&0.00&7.35$\times 10^{-3}$&7.37$\times 10^{-3}$&$-$8.38$\times 10^{-3}$&0.01\\
&Fig. 1(xv)&$-$1.23&$-$1.21&$-$1.51&$-$1.48&$-$1.70&$-$1.67&$-$1.91&$-$1.85&$-$0.83&$-$0.83&$-$0.99&$-$0.98\\
&Fig. 1(xvi)&0.01&0.01&0.01&0.01&0.00&0.00&0.01&0.00&7.35$\times 10^{-3}$&7.37$\times 10^{-3}$&$-$8.38$\times 10^{-3}$&0.01\\
&&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c}{}\\ \hline
\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c}{}\\
\multicolumn{2}{c|}{NC}&\multicolumn{2}{c|}{771.15}&\multicolumn{2}{c|}{929.91}&\multicolumn{2}{c|}{1006.45}&\multicolumn{2}{c|}{1079.44}&\multicolumn{2}{c|}{378.74}&\multicolumn{2}{c}{581.00}\\
\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c}{}\\ \hline
\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c}{}\\
\multicolumn{2}{c|}{Total}&3.25&3.45&3.26&3.45&2.62&2.94&1.50&2.01&3.57&3.60&3.32&3.37\\
\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c|}{}&\multicolumn{2}{c}{}\\
\hline \hline
\end{tabular}
\end{center}
\end{sidewaystable}
Here, we intend to calculate the values of $\mathcal{E}_\mathrm{eff}$ and the PDM by including the non-linear terms in the expectation-value expression of the RCCSD method (the nLERCCSD method). We adopt the intermediate-diagram approach, as discussed in Refs.~\cite{Bartlett,Yash}, to implement these non-linear RCC terms. For this purpose, we have undertaken molecules that are very relevant for eEDM studies. HgX molecules were identified as promising candidates for future eEDM searches, owing to their extremely large effective electric fields as well as their experimental advantages~\cite{HgX}. A recent work that proposes to laser-cool HgF has opened new avenues for an upcoming eEDM experiment with the molecule~\cite{HgFlc}. Another very important molecule in this regard is BaF, and two eEDM experiments are simultaneously underway for this system~\cite{15,15prime}. Among the molecules mentioned so far, experimental values of the PDM are available only for BaF. We also present results for the PDM of SrF, as it was the first molecule to be laser-cooled~\cite{Shuman}, and a very precise measurement of this quantity has been reported~\cite{SrFexpt}.
\section{Theory and Implementation}\label{sec2}
In the RCC theory, the wave function of a molecular state is expressed as \cite{Cizek}
\begin{eqnarray}
\arrowvert \Psi \rangle = e^{T} \arrowvert \Phi_{0} \rangle,
\end{eqnarray}
where $T$ is the cluster operator and $\arrowvert \Phi_{0} \rangle$ is the reference state obtained by mean-field theory. We use the Dirac-Coulomb Hamiltonian in our calculations, and $\arrowvert \Phi_0 \rangle$ is obtained using the DF method. In the RCCSD method, we approximate $T = T_1 + T_2$ with subscripts $1$ and $2$ indicating singles and doubles excitations, respectively, and they are given using the second-quantization operators as
\begin{eqnarray}
T_1 &=& \sum_{i,a} t_i^a a_a^{\dag}a_i \ \ \ \text{and} \ \ \
T_2 = \frac{1}{2} \sum_{i,j,a,b} t_{ij}^{ab} a_a^{\dag}a_b^{\dag}a_j a_i ,
\end{eqnarray}
where the notations $i, j$ are used to denote holes and $a, b$ to denote particles, $t_i^a$ is the one-hole, one-particle excitation amplitude, and $t_{ij}^{ab}$ is the two-hole, two-particle excitation amplitude.
We have employed the UTChem~\cite{utchem,utchem2} program for the DF calculations, the atomic-orbital to molecular-orbital integral transformations, and the generation of the property integrals, and the Dirac08~\cite{dirac} code to obtain the RCCSD excitation operator amplitudes. It is important to reiterate that all the non-linear terms were included in the RCCSD amplitude equations.
\begin{figure*}[t]
\centering
\begin{tabular}{cccccc}
\includegraphics[width=1.3cm,height=1.4cm]{pp1.png} & \includegraphics[width=1.6cm,height=1.4cm]{pp2.png} & \includegraphics[width=1.6cm,height=1.4cm]{pp3.png} &
\includegraphics[width=1.6cm,height=1.4cm]{pp4.png} &
\includegraphics[width=1.6cm,height=1.4cm]{pp5.png} &
\includegraphics[width=2.2cm,height=1.4cm]{pp6.png}\\
(i) & (ii) & (iii) & (iv) & (v) & (vi) \\ \\
\includegraphics[width=2.2cm,height=1.4cm]{pp7.png} & \includegraphics[width=2.2cm,height=1.4cm]{pp8.png} & \includegraphics[width=2.2cm,height=1.4cm]{pp9.png} &
\includegraphics[width=2.2cm,height=1.4cm]{pp10.png} &
\includegraphics[width=2.2cm,height=1.4cm]{pp11.png} &
\includegraphics[width=2.2cm,height=1.4cm]{pp12.png}\\
(vii) & (viii) & (ix) & (x) & (xi) & (xii)\\ \\
\includegraphics[width=2.2cm,height=1.4cm]{pp13.png} & \includegraphics[width=2.2cm,height=1.4cm]{pp14.png} & \includegraphics[width=2.2cm,height=1.4cm]{pp15.png} &
\includegraphics[width=2.2cm,height=1.4cm]{pp16.png} &
\includegraphics[width=2.2cm,height=1.4cm]{pp17.png} &
\includegraphics[width=2.2cm,height=1.4cm]{pp18.png}\\
(xiii) & (xiv) & (xv) & (xvi) & (xvii) & (xviii)\\ \\
\includegraphics[width=2.2cm,height=1.4cm]{pp19.png} & \includegraphics[width=2.2cm,height=1.4cm]{pp20.png} & \includegraphics[width=2.2cm,height=1.4cm]{pp21.png} &
\includegraphics[width=2.2cm,height=1.4cm]{pp22.png} &
\includegraphics[width=2.2cm,height=1.4cm]{pp23.png} &
\includegraphics[width=2.2cm,height=1.4cm]{pp24.png}\\
(xix) & (xx) & (xxi) & (xxii) & (xxiii) & (xxiv)\\ \\
\includegraphics[width=2.2cm,height=1.4cm]{pp25.png} & \includegraphics[width=2.2cm,height=1.4cm]{pp26.png} & \includegraphics[width=2.2cm,height=1.4cm]{pp27.png} &
&&\\
(xxv) & (xxvi) & (xxvii) & & & \\
\end{tabular}
\caption{The effective one-body terms representing particle-particle (p-p) diagrams considered in this work. $i, j, k, \cdots$ and $a, b, c, \cdots$ refer to holes and particles, respectively. The symbol of the operator, $O_{p-p}$, is not mentioned explicitly in the diagrams, and the property vertex is the dashed line ending with an `$o$' in each diagram. }
\label{fig:figure2}
\end{figure*}
\begin{figure*}[t]
\centering
\begin{tabular}{cccccc}
\includegraphics[width=1.4cm,height=1.4cm]{hh1.png} & \includegraphics[width=1.6cm,height=1.4cm]{hh2.png} & \includegraphics[width=1.6cm,height=1.4cm]{hh3.png} &
\includegraphics[width=1.6cm,height=1.4cm]{hh4.png} &
\includegraphics[width=1.6cm,height=1.4cm]{hh5.png} &
\includegraphics[width=2.2cm,height=1.4cm]{hh6.png}\\
(i) & (ii) & (iii) & (iv) & (v) & (vi) \\ \\
\includegraphics[width=2.2cm,height=1.4cm]{hh7.png} & \includegraphics[width=2.2cm,height=1.4cm]{hh8.png} & \includegraphics[width=2.2cm,height=1.4cm]{hh9.png} &
\includegraphics[width=2.2cm,height=1.4cm]{hh10.png} &
\includegraphics[width=2.2cm,height=1.4cm]{hh11.png} &
\includegraphics[width=2.2cm,height=1.4cm]{hh12.png}\\
(vii) & (viii) & (ix) & (x) & (xi) & (xii)\\ \\
\includegraphics[width=2.2cm,height=1.4cm]{hh13.png} & \includegraphics[width=2.2cm,height=1.4cm]{hh14.png} & \includegraphics[width=2.2cm,height=1.4cm]{hh15.png} &
\includegraphics[width=2.2cm,height=1.4cm]{hh16.png} &
\includegraphics[width=2.2cm,height=1.4cm]{hh17.png} &
\includegraphics[width=2.2cm,height=1.4cm]{hh18.png}\\
(xiii) & (xiv) & (xv) & (xvi) & (xvii) & (xviii)\\ \\
\includegraphics[width=2.2cm,height=1.4cm]{hh19.png} & \includegraphics[width=2.2cm,height=1.4cm]{hh20.png} & \includegraphics[width=2.2cm,height=1.4cm]{hh21.png} &
\includegraphics[width=2.2cm,height=1.4cm]{hh22.png} &
\includegraphics[width=2.2cm,height=1.4cm]{hh23.png} &
\includegraphics[width=2.2cm,height=1.4cm]{hh24.png}\\
(xix) & (xx) & (xxi) & (xxii) & (xxiii) & (xxiv)\\ \\
\includegraphics[width=2.2cm,height=1.4cm]{hh25.png} & \includegraphics[width=2.2cm,height=1.4cm]{hh26.png} & \includegraphics[width=2.2cm,height=1.4cm]{hh27.png} &&&\\
(xxv) & (xxvi) & (xxvii) & & & \\
\end{tabular}
\caption{The effective one-body terms representing the hole-hole (h-h) diagrams that are included in this work. The notations are the same as in the figure for the particle-particle diagrams. The property operator, $O_{h-h}$ is not explicitly mentioned in each of the diagrams, just as in Fig. \ref{fig:figure2}. }
\label{fig:figure3}
\end{figure*}
\begin{figure*}[t]
\centering
\begin{tabular}{cccccc}
\includegraphics[width=1.4cm,height=1.4cm]{ph1.png} & \includegraphics[width=1.6cm,height=1.4cm]{ph2.png} & \includegraphics[width=1.6cm,height=1.4cm]{ph3.png} &
\includegraphics[width=2.2cm,height=1.4cm]{ph4.png} &
\includegraphics[width=2.2cm,height=1.4cm]{ph5.png} &
\includegraphics[width=2.2cm,height=1.4cm]{ph6.png}\\
(i) & (ii) & (iii) & (iv) & (v) & (vi) \\ \\
\includegraphics[width=2.2cm,height=1.4cm]{ph7.png} & \includegraphics[width=2.2cm,height=1.4cm]{ph8.png} & \includegraphics[width=2.2cm,height=1.4cm]{ph9.png} &
\includegraphics[width=2.2cm,height=1.4cm]{ph10.png} &
\includegraphics[width=2.2cm,height=1.4cm]{ph11.png} &
\includegraphics[width=2.2cm,height=1.4cm]{ph12.png}\\
(vii) & (viii) & (ix) & (x) & (xi) & (xii)\\ \\
\includegraphics[width=2.2cm,height=1.4cm]{ph13.png} & \includegraphics[width=2.2cm,height=1.4cm]{ph14.png} & \includegraphics[width=2.2cm,height=1.4cm]{ph15.png} &
\includegraphics[width=2.2cm,height=1.4cm]{ph16.png} &
\includegraphics[width=2.2cm,height=1.4cm]{ph17.png} &
\includegraphics[width=2.2cm,height=1.4cm]{ph18.png}\\
(xiii) & (xiv) & (xv) & (xvi) & (xvii) & (xviii)\\ \\
\includegraphics[width=2.2cm,height=1.4cm]{ph19.png} & \includegraphics[width=2.2cm,height=1.4cm]{ph20.png} & \includegraphics[width=2.2cm,height=1.4cm]{ph21.png} &
\includegraphics[width=2.2cm,height=1.4cm]{ph22.png} &
\includegraphics[width=2.2cm,height=1.4cm]{ph23.png} &
\includegraphics[width=2.2cm,height=1.4cm]{ph24.png}\\
(xix) & (xx) & (xxi) & (xxii) & (xxiii) & (xxiv)\\ \\
\includegraphics[width=2.2cm,height=1.4cm]{ph25.png} & \includegraphics[width=2.2cm,height=1.4cm]{ph26.png} &
\includegraphics[width=2.2cm,height=1.4cm]{ph27.png}&
\includegraphics[width=2.2cm,height=1.4cm]{ph28.png}&
&\\
(xxv) & (xxvi) & (xxvii)&(xxviii) & & \\
\end{tabular}
\caption{The list of the effective one-body terms representing the particle-hole (p-h) diagrams in this work. The notations are the same as in the particle-particle and the hole-hole diagrams. }
\label{fig:figure4}
\end{figure*}
The expectation value of an operator, $O$, in the (R)CC method, can be written as follows
\begin{eqnarray}
\langle {O} \rangle = \frac{\langle \Psi \arrowvert O \arrowvert \Psi \rangle}{\langle \Psi \arrowvert \Psi \rangle}
&=& \frac{\langle \Phi_0 \arrowvert e^{T^\dag} {O} e^T \arrowvert \Phi_0 \rangle}{\langle \Phi_0 \arrowvert e^{T^\dag} e^T \arrowvert \Phi_0 \rangle} \nonumber \\
&=& \langle \Phi_0 \arrowvert (e^{T^\dag} O e^T) \arrowvert \Phi_0 \rangle_c,
\label{eqpr}
\end{eqnarray}
where the subscript `$c$' means that each term is fully contracted~\cite{Kvas} or, in diagrammatic terminology, connected \cite{Bartlett}.
The PDM of a molecule is determined as \cite{AEM}
\begin{eqnarray}
\mu = \langle \Psi \arrowvert D \arrowvert \Psi \rangle + \sum_A Z_A r_A,
\end{eqnarray}
where $D$ is the electric dipole operator, the index $A$ runs over the nuclei, $Z_A$ is the atomic number of the $A^{th}$ nucleus, and $r_A$ is the position vector from the origin to the site of the $A^{th}$ nucleus. The first term in the above expression is the electronic contribution, while the second term is the nuclear contribution.
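\par
As an aside, the nuclear term is elementary to evaluate. The sketch below (our own illustration) computes $\sum_A Z_A r_A$ for a diatomic molecule, assuming that the coordinate origin is placed on the halogen nucleus; with the bond lengths adopted in this work, this choice reproduces the `NC' entries of Table \ref{tab:table3}:
\begin{verbatim}
EA_TO_DEBYE = 4.80320  # 1 e*Angstrom in Debye

def nuclear_dipole(Z_heavy, R_angstrom):
    """Nuclear contribution sum_A Z_A r_A (in Debye) for a diatomic,
    with the origin on the lighter (halogen) nucleus."""
    return Z_heavy * R_angstrom * EA_TO_DEBYE

print(nuclear_dipole(80, 2.00686))  # HgF: ~771.15 D
print(nuclear_dipole(38, 2.075))    # SrF: ~378.74 D
print(nuclear_dipole(56, 2.16))     # BaF: ~581.00 D
\end{verbatim}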
Similarly, $\mathcal{E}_{\mathrm{eff}}$ is evaluated as
\begin{eqnarray}
\mathcal{E}_{\mathrm{eff}}
&=& \langle \Psi \arrowvert \sum_{i=1}^{N_e} \beta \Sigma_i \cdot E^{\mathrm{intl}}_i \arrowvert \Psi \rangle ,
\end{eqnarray}
where the summation is over the number of electrons, $N_e$, $\beta$ is one of the Dirac matrices (also known as $\gamma_0$ in the literature), $\Sigma$ is the $(4\times4)$ version of the Pauli matrices, and $E^{\mathrm{intl}}_i$ is the internal electric field experienced by the $i^{th}$ electron, given by the negative of the gradient of the sum of the electron-nucleus and electron-electron interaction potentials. Since the expression given above involves evaluating integrals over the two-body Coulomb operator, $\frac{1}{r_{ij}}$, and is therefore complicated, we take recourse to an effective eEDM Hamiltonian instead of the one introduced above~\cite{BPD}. It follows that
\begin{eqnarray}
\mathcal{E}_{\mathrm{eff}} = - 2ic \langle \Psi \arrowvert \sum_{i=1}^{N_e} \beta \gamma_5 p_i^2 \arrowvert \Psi \rangle ,
\end{eqnarray}
where $\gamma_5$ is the product of the gamma matrices (given by $i\gamma_0\gamma_1\gamma_2\gamma_3$), while $p_i$ is the momentum of the $i^{th}$ electron.
In the LERCCSD method, the following expression has been used in the evaluation of the expectation values
\begin{eqnarray}
\langle {O} \rangle = \langle \Phi_0 \arrowvert (1+T_1+T_2)^{\dag} O (1+T_1+T_2) \arrowvert \Phi_0 \rangle_c.
\end{eqnarray}
These terms are represented using Goldstone diagrams and are shown in Fig. \ref{fig:figure1}. Note that $OT_2$ and its hermitian conjugate are zero, due to the Slater-Condon rules \cite{Slater,Condon}. Diagrammatically, such a term would have at least two open lines; that is, it is not fully connected. The evaluation of properties using the LERCCSD approximation misses contributions from many correlation effects that arise at third order in relativistic many-body perturbation theory (the RMBPT method). On the other hand, it is not possible to evaluate Eq. (\ref{eqpr}) exactly even in the RCCSD method, as it is a non-terminating expression. However, it is possible to demonstrate the importance of contributions from the leading-order non-linear terms corresponding to the third- and fourth-order effects of the RMBPT method. It is still extremely challenging to perform direct calculations incorporating the non-linear terms of Eq. (\ref{eqpr}) in heavier molecules, due to the amount of computation involved. In order to tackle this issue, we adopt an additional computational step by breaking the non-linear terms into intermediate parts, as described more elaborately below. Further, we parallelize the program using the Message Passing Interface (MPI), and the calculations scale well with the number of processors of a computer.
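\par
To make the structure of these contractions concrete, the following minimal \texttt{numpy} sketch evaluates the $OT_1$ and $T_1^\dag O T_1$ contributions for random amplitudes and property integrals. The tensor shapes and index labels are purely illustrative and are not taken from our production codes:
\begin{verbatim}
import numpy as np

nh, npart = 4, 6                     # toy numbers of holes and particles
rng = np.random.default_rng(0)
t1 = rng.standard_normal((nh, npart))          # t_i^a (taken real)
O_ph = rng.standard_normal((npart, nh))        # <a|O|i>
O_pp = rng.standard_normal((npart, npart))
O_pp = 0.5 * (O_pp + O_pp.T)                   # <a|O|b>, hermitian (real)
O_hh = rng.standard_normal((nh, nh))
O_hh = 0.5 * (O_hh + O_hh.T)                   # <i|O|j>, hermitian (real)

# O T_1 plus its hermitian conjugate, Fig. 1(ii)
ot1 = 2.0 * np.einsum('ia,ai->', t1, O_ph)

# T_1^dag O T_1, Figs. 1(iii) and (iv); the hole-line
# contraction enters with a minus sign
t1ot1 = (np.einsum('ia,ab,ib->', t1, O_pp, t1)
         - np.einsum('ia,ij,ja->', t1, O_hh, t1))
\end{verbatim}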
The approach can be understood by revisiting the diagrams in Fig. \ref{fig:figure1}. As an example, we consider sub-figure (ii). The property vertex has one incoming particle line and one outgoing hole line; we define it as a particle-hole vertex. Such particle-hole vertices can be found in sub-figures (v) to (viii) too. In the intermediate-diagram formalism, the vertex $O$ is removed and replaced successively by each of the particle-hole (p-h) diagrams (more precisely, their hermitian conjugates) of Fig. \ref{fig:figure4}. We assign the notation $O_{p-h}$ to such a vertex. This sequence of operations already generates 26 diagrams from $O_{p-h}T_1$, and includes terms that occur in the RMBPT method. We note that $O_{p-h}T_1$ in the case of the hermitian conjugate of sub-figure (i) of Fig. \ref{fig:figure4} gives back the LERCCSD diagram for $OT_1$. Similar to $O_{p-h}$, we also construct the analogous $O_{h-h}$ and $O_{p-p}$ diagrams (as given in Figs. \ref{fig:figure3} and \ref{fig:figure2}, respectively), and generate more terms. We note that the property vertex of the DF diagram in Fig. \ref{fig:figure1} is \textit{not} replaced with any intermediate diagram, as otherwise there would be repetition of diagrams in the calculations. Further, one has to be careful to avoid repetition of diagrams while contracting effective $O$ operators with the $T$ RCC operators. For example, it can be shown that $T_1^{\dagger}OT_1$ diagrams can appear twice, through the $T_1^{\dagger}O_{p-h}$ and $T_1^{\dagger}O_{p-p/h-h}T_1$ terms. Such diagrams are identified by careful analysis and their double counting is removed manually.
As can be seen from the above discussion, some of the diagrams that arise in this procedure demand up to the order of $n_h^3n_p^3$ in computational cost, for $n_h$ holes and $n_p$ particles. The intermediate-diagram approach therefore systematically takes into account non-linear terms while simultaneously cutting down drastically on the computational cost, as compared to a direct brute-force evaluation of a non-linear expectation-value expression. This can be understood through the following example. Replacing $O$ of Fig.~\ref{fig:figure1}(v) by the property vertex of sub-figure (xxv) from Fig.~\ref{fig:figure4} entails a computational cost of $\sim\mathcal{O}(n_p^4n_h^4)$ for the direct evaluation of such a diagram. However, the intermediate-diagram approach leads to a cost of $\sim\mathcal{O}(n_p^2n_h^2+n_p^3n_h^3)$. This becomes especially relevant when we perform computations on heavy systems and with high-quality basis sets, such as those chosen for this work. For instance, the RCC calculations on HgF involved $n_h = 89$ and $n_p=429$, and therefore the computational cost with the intermediate-diagram approach is nearly five orders of magnitude smaller than a brute-force evaluation of the same diagram (without considering any molecular point-group symmetries). A similar level of reduction in computational cost is found for the heaviest molecule, HgI, too. We add at this point that we have exploited the $C_8$ double point-group symmetry in our nLERCCSD code, as we had done for the earlier LERCCSD program~\cite{MA,Yanai}. This aspect also substantially lessens the computational cost, as it restricts the number of matrix elements to be evaluated based on group-theoretic considerations. For example, $OT_1$ involves computing matrix elements of the form $\langle a \arrowvert O \arrowvert i \rangle$. Given that we have 89 holes and 429 particles, the number of possible matrix elements is $\sim 3.8 \times 10^5$. However, since we impose the restriction that both $i$ and $a$ should belong to the same irreducible representation, we need to evaluate only $\sim 7.2 \times 10^4$ matrix elements. Similar considerations for the more complicated terms involving $T_2$ lead to evaluating far fewer matrix elements.
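\par
The gain from intermediates can be demonstrated on a toy contraction of the $T_1^\dag\,O_{\rm eff}\,T_2$ type, where the effective one-body vertex is itself built from the bare vertex and two $T_2$ amplitudes. In the sketch below (our own illustration, with random tensors and toy dimensions), the one-shot contraction and the factorized route give identical results at very different formal scalings:
\begin{verbatim}
import numpy as np

nh, npart = 6, 10
rng = np.random.default_rng(1)
t1 = rng.standard_normal((nh, npart))             # t_i^a
t2 = rng.standard_normal((nh, nh, npart, npart))  # t_ij^ab
O  = rng.standard_normal((npart, nh))             # bare vertex <d|O|l>

# One-shot contraction: naive cost ~ O(n_h^4 n_p^4)
direct = np.einsum('ia,klcd,dl,kjcb,ijab->', t1, t2, O, t2, t2,
                   optimize=False)

# Intermediate route: fold O into an effective one-body vertex first
X     = np.einsum('klcd,dl->kc', t2, O)          # ~ O(n_h^2 n_p^2)
O_eff = np.einsum('kc,kjcb->bj', X, t2)          # ~ O(n_h^2 n_p^2)
factored = np.einsum('ia,bj,ijab->', t1, O_eff, t2)

assert np.isclose(direct, factored)
\end{verbatim}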
\section{Results and Discussions}\label{sec3}
To carry out the calculations on the considered molecules, we have chosen the bond lengths as 2.00686~\AA, 2.42~\AA, 2.62~\AA, 2.81~\AA, 2.075~\AA, and 2.16~\AA\ for HgF, HgCl, HgBr, HgI, SrF, and BaF, respectively \cite{HgFR,HgXR,SrFR1,SrFR2,BaFR1,BaFR2}. It is to be noted that the chosen values for the HgX molecules are from theory, while those for SrF and BaF are from experiment. Also, we opted for Dyall's quadruple zeta (QZ) basis for Hg and I~\cite{DyallHg}, Dunning's correlation consistent polarized valence quadruple zeta (cc-pVQZ) basis for the halide atoms (F, Cl, and Br)~\cite{Dunning}, and Dyall's QZ functions augmented with Sapporo's diffuse functions~\cite{Sap} for Sr and Ba. We chose Dyall's basis for Hg and I as it is among the most reliable and widely used sets of basis functions for heavy atoms. We did not add diffuse functions, as doing so increases the computational cost drastically for QZ-quality basis sets. Moreover, it was found that the inclusion of diffuse functions changes the effective electric field by around 2.5 percent for HgF, and it is expected to lead to a similar difference for the heavier HgX systems~\cite{FFCC}. In the foreseeable future, however, such computations could be performed to improve the calculated values of the PDMs. To contain the steep computational costs incurred due to our choice of QZ basis sets as well as performing all-electron calculations, we cut off the high-lying virtuals above 1000 atomic units (a.u.) for HgX and BaF. At such a high cut-off value, we can expect that the missing contributions would be minimal, and possibly even negligible.
In Table~\ref{tab:table1}, we present our results for HgX, SrF, and BaF, all obtained using QZ basis sets. We discuss the trends in the PDMs and $\mathcal{E}_\mathrm{eff}$s across HgX in the three different approaches, namely the DF, LERCCSD, and nLERCCSD methods, while briefly making comparisons with the FFRCCSD method from Ref.~\cite{FFCC} wherever relevant, and also examine the correlation effects in the properties from lower- to all-order methods. The SrF and BaF molecules are treated as stand-alone systems. Firstly, we observe that the effect of the non-linear corrections is to increase the PDM and decrease the effective electric field (except in the case of SrF, where the difference is still within 0.5 percent). We find from Table~\ref{tab:table1} that for SrF and BaF, the nLERCCSD method yields PDMs that are very close to the LERCCSD values (within 1.5 percent for both molecules), but not in better agreement with experiment. However, the nLERCCSD values agree well (within 1.2 percent) with the results from the earlier work that used the FFRCCSD approach, which also employed a QZ-quality basis with diffuse functions. Such a comparison cannot be made for the HgX molecules, as the available FFRCCSD data use a double zeta (DZ) quality basis. For the HgX systems, we observe that, unlike in the cases of SrF and BaF, the difference between the LERCCSD and the nLERCCSD results widens from about 6 percent for HgF and HgCl to about 25 percent for HgI. The values of $\mathcal{E}_\mathrm{eff}$ for SrF and BaF show that the LERCCSD, nLERCCSD, and FFRCCSD methods all agree to within 1 percent. In the case of the HgX molecules, the LERCCSD and the nLERCCSD results are found to differ by at most 2.5 percent. We chose HgF as a representative molecule and performed FFRCCSD calculations with a QZ basis, and found that its effective electric field is 110.87 GV/cm, which is lower than the nLERCCSD value by 2.5 percent.
The individual contributions arising from the diagrams given in Fig.~\ref{fig:figure1} to the effective electric fields and PDMs of the HgX, SrF, and BaF molecules are given in Tables~\ref{tab:table2} and \ref{tab:table3}. The tables give the LERCCSD contributions, where the property vertex is $O$, as well as the nLERCCSD values, where the property vertex can be of the $p$--$p$, $h$--$h$, or $p$--$h$ type, depending on the diagram. For example, the contribution from sub-figure (ii) of Fig.~\ref{fig:figure1} for the nLERCCSD case involves a $p$--$h$ vertex, that is, $O_{p-h}T_1$, and therefore includes the contributions from the 26 diagrams in Fig.~\ref{fig:figure4}. In general, $O$ or $O_{x-y}$ (where `$x$' and `$y$' could be $p$ or $h$) is the eEDM Hamiltonian for $\mathcal{E}_\mathrm{eff}$ (given in Table~\ref{tab:table2}), while it is the dipole operator for the PDM (presented in Table~\ref{tab:table3}).
Table~\ref{tab:table2} shows that for all the systems, the $AT_1$ term always dominates among the correlation terms, where $AT_1$ stands for $OT_1$ in the LERCCSD case and for $O_{x-y}T_1$ in the nLERCCSD case. For the effective electric fields of the HgX molecules, in the LERCCSD case, there are strong cancellations between the positive $AT_1$ term and the negative $T_1^\dag A T_1$ and $T_2^\dag A T_2$ terms. Nevertheless, the final nLERCCSD and LERCCSD values match within 2.5 percent, since in the former case the $AT_1$ values are significantly lower than in the latter, and the $T_1^\dag A T_1$ sector provides a far smaller contribution. In the case of SrF, the $AT_1$ terms are comparable in the LERCCSD and nLERCCSD scenarios, and therefore the inclusion of non-linear terms does not change its effective electric field, while for BaF, we observe a mechanism similar to that for the HgX systems. We observe a different trend for the PDMs, in Table~\ref{tab:table3}. As the DF value and the nuclear contribution are the same for a given molecule, whether it is an LERCCSD or an nLERCCSD calculation, the interplay between the $AT_1$ and $T_1^\dag A T_1$ terms decides the importance of the non-linear terms. For HgX, the $AT_1$ term in the nLERCCSD calculations is always slightly larger in magnitude than in the LERCCSD ones, while the net contributions from the $T_1^\dag A T_1$ terms, which are less significant, behave the other way round. The resulting non-linear effects are not so important for SrF and BaF, as seen in the earlier paragraph, while for the HgX molecules they become significant, changing the PDM by up to about 25 percent for HgI.
We now conduct a survey of previous works on the effective electric fields and PDMs of the molecules that we have considered, in Table~\ref{tab:table1}. For the effective electric field of BaF, we observe that the effective core potential-restricted active space SCF (ECP-RASSCF) \cite{Kozlov} and restricted active space CI (RASCI) \cite{Nayak} methods give larger values, while the MRCI result from Ref.~\cite{Meyer2} is slightly lower, with respect to our nLERCCSD value. A discussion of the previous works on the effective electric fields of HgX and our improved estimate of the quantity using the LERCCSD approach has already been presented in Ref.~\cite{HgX}, and hence we re-direct the reader to that earlier work. Our nLERCCSD results improve over the earlier LERCCSD and FFRCCSD results, as both of those were calculated using a DZ quality basis. Most of the works that calculate PDMs and are not mentioned in the table, including Refs.~\cite{Torring,Langhoff,Mestdagh,Allouche,Kobus}, have been expounded in detail in our earlier works~\cite{AEM,FFCC}, and therefore we only discuss the more recent works in this paragraph. The differences between the LERCCSD results for the PDMs of HgX in our earlier work and those in the present work are due to the choice of basis (DZ basis functions~\cite{HgX,AJP} in the former, as against a QZ basis in the current work). We observe that the PDM values for SrF obtained by using the complete active space self-consistent field (CASSCF) approach with multi-reference CI (MRCI) and with second-order Rayleigh-Schr\"{o}dinger perturbation theory (RSPT2) \cite{Jardali} (which agrees with our nLERCCSD as well as FFRCCSD results) underestimate and overestimate the experimental result, respectively. The results for the PDMs of SrF and BaF from Hao \textit{et al.}~\cite{Hao}, using the exact two-component Hamiltonian Fock-space coupled-cluster (X2C-FSCC) formalism, and for the PDM of SrF from Sasmal \textit{et al.}~\cite{Sasmal}, using a relativistic Z-vector coupled-cluster approach (with both works employing a QZ basis), agree closely with the experimental values. However, we also note that while the Z-vector approach predicts the PDM of SrF very accurately, it underestimates that of BaF~\cite{Talukdar}. This existing difference in the PDMs of SrF and BaF between the nLERCCSD and FFRCCSD approaches on one side and the Z-vector RCCSD approach on the other could possibly be resolved in future works that employ even more refined methods.
\begin{figure}[t]
\centering
\includegraphics[width=10.0cm, height=7cm]{scaling2.png}
\caption{Plot showing the scaling behaviour of the program, in the property evaluating expression, for a representative system, SrF, with the number of processors. The X-axis gives the number of processors, while the Y-axis is the speedup, $S_p = t_1/t_p$, where $t_p$ is the time taken with $p$ processors. We have used a double zeta quality basis for this purpose, and tested up to 192 processors, as the parallelism in our code is limited by the number of virtual orbitals, which is 208 in this case.}
\label{fig5}
\end{figure}
We now check the scalability of our code, which was parallelized using MPI. We do so by testing it with the SrF molecule, using a DZ basis. The test calculates both the effective electric field and the PDM of the molecule. As the code is structured in a way that the extent of parallelization is limited by the number of virtuals, which is 208 in this case, we chose to study scaling up to 192 processors (across 8 nodes, with 24 processors employed per node). The details of the computer that we used (the VIKRAM-100 super-computing facility at Physical Research Laboratory, Ahmedabad, India) are as follows: it is a 100-teraflop IBM nx360 M5 machine with 1848 processors. Each node has 24 processors (two Intel Xeon E5-2670 v3, each with 12 cores) and a memory of 256 GB RAM. The inter-process communication is via a 100\% non-blocking FAT Tree Topology FDR (56 Gbits/sec) Infiniband network. We use an Intel 15.2 compiler, and the impi5.0 and mkl libraries. As Figure~\ref{fig5} shows, our calculations indicate that the code is scalable up to this mark. In the figure, we plot the speedup against the number of processors, where the speedup is defined as $S_p = t_1/t_p$, with $t_p$ referring to the time taken for a computation with $p$ processors. Performing the computations in serial mode takes about 6.5 days, while calculations with 4 processors consume around 2 days. The code takes only 2.17 hours to finish with 192 processors. The walltime approaches saturation after 96 processors (going from 2.51 hours to 2.17 hours between 96 and 192 processors), and hence the optimal number of processors to use is around 96, where we still obtain a speed-up from 6.5 days to 2.51 hours. The walltimes are reliable as estimates, but not extremely accurate, as the computations were performed on a shared cluster, and the speeds depend upon factors such as the number of users and the type of jobs running during the interval in which our computations were done, although we took utmost care to ensure that no other application ran on the same node(s) as ours. However, our analysis is sufficient for the purpose of broadly demonstrating that our code is scalable to a reasonably large number of processors.
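For reference, the speedup and parallel efficiency implied by these timings can be reproduced with the short Python sketch below; it is a minimal illustration whose only inputs are the four walltimes quoted in this paragraph.
\begin{verbatim}
# A minimal sketch (assuming only the four walltimes quoted in the
# text) that reproduces the speedup S_p = t_1/t_p and the parallel
# efficiency E_p = S_p/p.
walltimes_hours = {
    1: 6.5 * 24.0,    # serial run: about 6.5 days
    4: 2.0 * 24.0,    # about 2 days on 4 processors
    96: 2.51,         # 2.51 hours on 96 processors
    192: 2.17,        # 2.17 hours on 192 processors
}
t1 = walltimes_hours[1]
for p, tp in sorted(walltimes_hours.items()):
    speedup = t1 / tp
    efficiency = speedup / p
    print(f"p = {p:3d}: S_p = {speedup:6.1f}, E_p = {efficiency:.2f}")
\end{verbatim}
The efficiency obtained in this way drops from about 0.65 at 96 processors to about 0.37 at 192 processors, which is the saturation behaviour visible in Figure~\ref{fig5}.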
We also estimate the errors in our calculations. We first examine the error due to the choice of basis. We use QZ quality basis sets for our calculations, and as no 5-zeta basis is available for us to carry out such an estimate, we calculate the effective electric fields and the PDMs at the DZ level of basis with our nLERCCSD code. We find that the percentage fraction difference between the DZ and QZ bases for $\mathcal{E}_\mathrm{eff}$ is around 3, 4, 5, and 7 percent for HgF, HgCl, HgBr, and HgI, respectively. We do not anticipate the difference between the DZ and QZ estimates to be over 10 percent for SrF and BaF either. Therefore, we do not expect the difference between results from a basis set of higher quality than QZ and those from a QZ basis to exceed 10 percent. Based on similar considerations, we estimate the error due to the choice of basis for the PDM to be at most 15 percent. Next, we look at the errors due to the ignored non-linear terms. They are expected to be negligible, and we ascribe to them a conservative estimate of 2 percent, which is the percentage fraction difference between the DF values and the current nLERCCSD values for the HgX molecules. Lastly, we comment on the importance of triple and other higher excitations. Based on our previous works and the error estimates in them, we expect the contribution of these excitations to be around 3 percent for the purposes of calculating $\mathcal{E}_\mathrm{eff}$~\cite{FFCC}, but it could become important for the PDMs. In conclusion, we linearly add the uncertainties and arrive at an error estimate for the effective electric fields of about 15 percent. Setting an error estimate for the PDMs is less straightforward, as seen above, but we do not anticipate it to exceed 20 percent.
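For clarity, the linear addition of the individual uncertainties discussed above for $\mathcal{E}_\mathrm{eff}$ can be summarized as
\begin{equation*}
\Delta \mathcal{E}_\mathrm{eff} \lesssim \underbrace{10\%}_{\text{basis set}} + \underbrace{2\%}_{\text{neglected non-linear terms}} + \underbrace{3\%}_{\text{higher excitations}} = 15\%,
\end{equation*}
while the corresponding budget for the PDMs starts from the larger basis-set component of 15 percent and is therefore bounded at about 20 percent.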
\section{Conclusion}\label{sec4}
We have investigated the contributions from the non-linear terms of the property evaluation expression of the relativistic coupled-cluster theory in the determination of the permanent electric dipole moments and the effective electric fields due to the electron electric dipole moment of SrF, BaF, and the mercury monohalides (HgX, with X = F, Cl, Br, and I). We find that the inclusion of these terms at the singles and doubles excitation approximation brings the permanent electric dipole moments (PDMs) of SrF and BaF closer to the previously calculated finite-field relativistic coupled-cluster values, which were found to have overestimated the PDMs of the two molecules with respect to the available measurements. The non-linear terms considerably change the PDMs of the HgX systems. For all of the chosen molecules, the non-linear terms are found not to significantly change the values of the effective electric fields with respect to the results from the linear expectation value approach. However, this result is a consequence of several cancellations at work. Since an accurate estimation of these quantities is of immense interest for probing new physics in electron electric dipole moment studies using molecules, our analysis demonstrates the importance of considering non-linear terms in relativistic coupled-cluster theory for their evaluation. We have also presented the scaling behaviour of our code for a representative molecule, SrF, and discussed the error estimates.
\section*{Acknowledgments}
All the computations were performed on the VIKRAM-100 cluster at PRL, Ahmedabad.
\section*{Nomenclature}\label{Nomenclature}
The mathematical symbols used throughout this paper are classified below as follows.
\subsection*{Sets}
\begin{description} [\IEEEsetlabelwidth{5000000}\IEEEusemathlabelsep]
\item[${\Psi}^N$] Set of indexes of all nodes of the distribution grid.
\item[${\Psi}^{SS}$] Set of indexes of nodes that are substations of the distribution grid.
\item[${\Omega}$] Set of indexes of failure scenarios.
\item[${\Omega}^{resilience}$] Set of indexes of failure scenarios associated with resilience.
\item[${\Omega}^{routine}$] Set of indexes of routine failure scenarios.
\item[${\cal C}$] Set of indexes of failure states.
\item[${\cal D}$] Set of indexes of typical days.
\item[${\mathfrak{D}_{jec}}$] Set of indexes of buses in each ``island'' $e$ when investment decision $j$ is taken for contingency state $c$.
\item[${E}_{jc}$] Set of indexes of islands if investment decision $j$ is taken under contingency state $c$.
\item[${H}$] Set of indexes of all storage devices (including existing and candidates).
\item[${H}^C$] Set of indexes of candidate storage devices.
\item[${\cal J}^{L,on}_j$] Set of indexes of candidate line segments that are built for the investment plan $j$.
\item[${\cal J}^{L,off}_j$] Set of indexes of candidate line segments that are not built for the investment plan $j$.
\item[${\cal L}$] Set of indexes of all lines (including existing and candidates).
\item[${\cal L}^{C}$] Set of indexes of candidate line segments.
\item[${\cal L}^{E}$] Set of indexes of existing line segments.
\item[${Rel}_c$] Set of indexes of relevant investments under contingency state $c$.
\item[${Rel}^{L,on}_{jc}$] Set of indexes of candidate line segments that are built for the investment plan $j$ that is relevant to failure state $c$.
\item[${Rel}^{L,off}_{jc}$] Set of indexes of candidate line segments that are not built for the investment plan $j$ that is relevant to failure state $c$.
\item[$T$] Set of indexes of operation periods during each typical day.
\end{description}
\subsection*{Indexes}
\begin{description} [\IEEEsetlabelwidth{5000000}\IEEEusemathlabelsep]
\item[$c$] Index of failure state.
\item[$d$] Index of typical days.
\item[$e$] Index of the islands that are formed under a contingency state $c$.
\item[${h}$] Index of storage devices.
\item[${j}$] Index of investment decisions.
\item[$l$] Index of lines.
\item[$n$] Index of buses.
\item[$s$] Index of scenarios.
\item[$t$] Index of time periods.
\item[$t^0$] Index of the first time period of a day type $d$.
\end{description}
\subsection*{Parameters}
\begin{description} [\IEEEsetlabelwidth{5000000}\IEEEusemathlabelsep]
\item[${\alpha^{CVaR}}$] CVaR parameter.
\item[${\delta}$] Number of hours in a time period $t$.
\item[$\eta$] Round trip efficiency of batteries.
\item[${\lambda}$] Risk aversion user-defined parameter (between 0 and 1).
\item[${\rho_s}$] Probability of scenario $s$.
\item[$C^{Imb}$] Cost of imbalance.
\item[$C^{L,fix}_l$] Fixed investment cost of candidate line $l$.
\item[$C^{SD,fix}_h$] Fixed investment cost of candidate storage device $h$.
\item[$C^{SD,var}_h$] Variable investment cost of candidate storage device $h$.
\item[${D^{peak}_i}$] Peak demand of bus $i$.
\item[${D_{ntd}}$] Demand of bus $n$, at time period $t$ of typical day $d$.
\item[${{f}^{bat}_{h,t,d}}$] Percentage of state of charge of battery $h$ at time period $t$ of day type $d$.
\item[${{f}^{load}_{\tau,d}}$] Percentage of peak load at time period $\tau$ of day type $d$.
\item[${\overline{F}_l}$] Maximum capacity of line $l$.
\item[$\overline{G}^{Tr}_n$] Limit of injection in substation $n$.
\item[${k}_s$] Number of time periods of failure scenario $s$.
\item[${M}$] Sufficiently large number.
\item[$\overline{P}^{in}_h$] Maximum charging of storage device $h$ per time period.
\item[$\overline{P}^{out}_h$] Maximum discharging of storage device $h$ per time period.
\item[$pf$] Power factor.
\item[${r^{len}_l}$] Length of line $l$.
\item[$\overline{S}$] Number of hours to fully charge storage devices.
\item[${\underline{V}}$] Minimum voltage.
\item[${\overline{V}}$] Maximum voltage.
\item[${W_{d}}$] Number of days of type $d$ in one year.
\item[${x}^{state}_{cs}$] Parameter that is equal to 1 if scenario $s$ corresponds to failure state $c$, and equal to 0 otherwise. Note that each scenario $s$ can correspond to only one failure state $c$.
\item[${Z^L_{l}}$] Impedance of line $l$.
\end{description}
\subsection*{Decision Variables}
\begin{description}
[\IEEEsetlabelwidth{5000000}\IEEEusemathlabelsep]
\item[${\Delta^{+}_{ntd}}$] Positive imbalance in bus $n$ at time period $t$ of day type $d$.
\item[${\Delta^{-}_{ntd}}$] Negative imbalance in bus $n$ at time period $t$ of day type $d$.
\item[$\zeta_{td}$] CVaR auxiliary variable that represents the value at risk at time period $t$ of day type $d$.
\item[$\psi^{CVaR}_{tds}$] CVaR auxiliary variable.
\item[$f_{ltd}$] Flow in line $l$ at time period $t$ of day type $d$.
\item[$g^{Tr}_{ntd}$] Injection via substation $n$ at time period $t$ of day type $d$.
\item[$L^{\dagger}_{tds}$] Load shedding at time period $t$ of day type $d$ of scenario $s$.
\item[$L_{jec}$] Load shedding in island $e$ for relevant investment $j$ under failure state $c$.
\item[$p^{in}_{htd}$] Charging of storage device $h$ at time period $t$ of day type $d$.
\item[$p^{out}_{htd}$] Discharging of storage device $h$ at time period $t$ of day type $d$.
\item[$SOC_{htd}$] State of charge of storage device $h$ at time period $t$ of day type $d$.
\item[$SOC^{aux}_{hjec}$] State of charge of storage device $h$ that belongs to island $e$ for relevant investment $j$ under contingency state $c$.
\item[$SOC^{ref}_{h}$] Reference state of charge of storage device $h$.
\item[$v_{ntd}$] Voltage in bus $n$ at time period $t$ of day type $d$.
\item[$x^{ind}_{jc}$] Binary variable that indicates which relevant investment option $j$ has been taken under contingency state $c$.
\item[$x^{L,fix}_l$] Binary investment in line $l$.
\item[$x^{SD,fix}_{h}$] Binary investment in storage device $h$.
\item[$x^{SD,var}_{h}$] Continuous investment in storage device $h$.
\end{description}
\vspace{-0.5cm}
\section{Introduction}\label{Introduction}
\IEEEPARstart{D}{istribution} grid assets represent a significant portion of the overall power system costs and, in the US, the highest share of capital investments of investor-owned utilities \cite{EEI2019}. Given this prominent role, utilities are periodically required to justify to regulators their proposed investments and the corresponding impact on consumer rates \cite{Cooke2018}. Typical drivers of those investments in the grid include expected load growth, hosting capacity, and improvements in reliability performance.
In practice, grid investments driven by load growth can be justified using quantitative approaches based on load flow simulations or, as done by Pacific Gas and Electric (PG\&E) in California, using more advanced methodologies that forecast future feeder demands at different locations combined with consumer behavior under different meteorological seasons \cite{PGE2021_GNA}. Similarly, a hosting capacity analysis is often required to justify the corresponding grid investments, which can be a highly regulated process in some US jurisdictions, such as Minnesota, Hawaii, California, and New York \cite{Schwartz2020}.
In the reliability investments case, the process is slightly different. First, utilities are often evaluated by the reliability performance of their feeders and required to report standardized reliability metrics \cite{Cooke2018}, such as the System Average Interruption Frequency Index (SAIFI), System Average Interruption Duration Index (SAIDI), Customer Average Interruption Frequency Index (CAIFI), and Customer Average Interruption Duration Index (CAIDI) \cite{IEEEStd1366}. Based on this ex-post reliability evaluation, utilities can suggest new investments to improve their performance. For example, in California, PG\&E publishes an annual report with reliability metrics in its service territory, including potential grid investments to improve them \cite{PGE2021_AnnualReliability}. In Illinois, utilities are requested to publish annual reliability performance reports and present a 3-year plan for reliability investments \cite{Illinois2020}, very similarly to Ohio \cite{Ohio2021_1}, where utilities report metrics of their worst performing feeders \cite{Ohio2021_2}. Commonwealth Edison (ComEd) has a detailed process to propose grid investments \cite{ComEd2021_InvestmentsProposal}, with ``system performance'' (reliability) being one among seven capital investment categories presented to the regulator. ``System performance'' includes investments that can improve the reliability of the system based on characteristics such as historical failure data as well as the material condition and age of system elements.
In short, the current practices of the industry show that distribution reliability investments are (1) based on an ex-post analysis of performance and (2) determined by empirical knowledge. Unlike other drivers of grid investments, such as load growth or hosting capacity, no forward-looking optimization or simulation analysis is carried out. A forward-looking reliability assessment is already a usual practice in bulk power systems, in which forward-looking reliability indices, such as the loss of load expectation (LOLE) and/or the expected energy not served (EENS), are defined as requirements of the system \cite{NationalGrid2017_SecurityofSupply}.
Existing practices are even more limited when it comes to resilience investments. However, given the projected increase in the frequency, intensity, and duration of extreme weather hazards \cite{USGCRP_2017} and their consequences for power supply and delivery \cite{DOE_2013}, resilience has become a central topic in the power systems community over the last few years. Despite the broad definition of resilience provided by FERC \cite{FERC2018_resilienceDef} - ``{\it the ability to withstand and reduce the magnitude and/or duration of disruptive events, which includes the capability to anticipate, absorb, adapt to, and/or rapidly recover from such an event}'' - resilience-related standards and metrics are still to be developed \cite{Vugrin2017}. In the absence of a consensus on resilience metrics, utilities continue to rely on traditional reliability indices, conceived to capture routine failures instead of HILP events \cite{Schwartz2019_UtilityInvestmentsResilience} and to be used in ex-post evaluation. Therefore, the methods currently used by industry to plan the upgrade of distribution systems do not consider the risk associated with HILP events, which are much less predictable and much more impactful than routine events.
Thus, there is a need for analytical methodologies to support utilities' investment decisions, under reliability and resilience programs, that can capture forward-looking risk mitigation benefits and can demonstrate to regulators the added resilience value of different investment options. This paper presents a practical and scalable methodology to fill this gap and demonstrates it using Target Feeders from Commonwealth Edison (ComEd) Reliability Program.
\vspace{-0.4cm}
\subsection{Literature Review}
Different metrics \cite{reliability_guide} and methods \cite{allan_billinton_1996} were developed in the past to perform reliability assessment in power systems, particularly in stochastic simulation environments, and were later integrated into optimization methodologies addressing, for example, the expansion planning of distribution networks \cite{Munoz2016,Munoz2018}. Recently, however, due to an increasing number of occurrences of natural disasters, a great deal of attention has been devoted to taking resilience into consideration while planning and operating power systems. In this paper, we propose a methodology to plan the expansion of large-scale distribution systems while considering not only reliability but also resilience, in the form of risk-aversion.
Several works have proposed approaches to tackle the distribution grid planning problem in recent years. In \cite{Moradijoz2018}, the authors propose a bilevel mixed-integer program that optimizes the distribution system expansion while taking into account the presence of Electric Vehicles (EVs). While the first level determines investments in the grid, the second level manages the charging and discharging strategies of parked EVs so as to maximize the revenue of parking lots that provide grid services. In \cite{Amjady2018}, line reinforcement, distributed energy resources (DERs), and dispatchable units are candidate investments to be selected by the proposed methodology while facing uncertainty in DER output and demand, and while neglecting reliability and resilience against failures of system elements. In \cite{Li2013}, a game-theoretical approach is presented to tackle the distribution planning problem. In \cite{Arasteh2019}, the distribution system expansion planning problem is addressed while considering the private investor (PI) who owns distributed generation, the distribution company (DISCO), and the demand response provider (DRP) as different players with different objectives. While the DISCO performs line reinforcements to improve reliability and to decrease costs by minimizing the expected energy not served associated with line failures, the DRP and PI aim to maximize the conditional value at risk (CVaR) of their profits under uncertainty in the availability of demand response and in renewable generation. In \cite{Ahmadian2019}, particle swarm optimization and tabu search are integrated into an algorithm that plans the expansion of distribution networks. In \cite{Zhao2020}, the distribution system planning problem is addressed by a stochastic optimization approach that determines investments in substations, feeders, and batteries while considering battery degradation and facing uncertainty in electricity prices and demand. In \cite{Troitzsch2020}, the flexibility to reduce peak demands provided by thermal building systems is considered while planning the distribution grid expansion. In \cite{Fan2020}, the distribution system expansion problem is addressed via a model that considers EVs and uncertainty in renewable energy sources.
Security under high impact and low probability (HILP) events has been a recent topic of concern in the context of expansion planning methodologies. At the transmission level, for example, a two-stage stochastic Mixed-Integer NonLinear Programming (MINLP) model is formulated in \cite{Romero2013} to determine the investment plan that increases resilience while considering seismic activity. Moreover, in \cite{Lagos2020}, an approach that leverages simulation techniques and optimization is proposed to define the portfolio of investments needed to deal with potential earthquake events.
In addition, relevant works have also considered resilience while planning investments at the distribution level. In \cite{Nazemi2020}, seismic hazards are considered in a model that decides the siting and sizing of storage devices. In \cite{Lin2018}, a trilevel model is proposed to select lines to be hardened to reduce the vulnerability of the distribution system to intentional or unintentional attacks.
Finally, \cite{Barnes2019} proposes an approach to address the expansion planning (selecting network upgrades) of large-scale distribution systems with a focus on preparing the grid to withstand extreme events specifically related to ice and wind storms.
\subsection{Contributions}
In this paper, we propose a practical methodology to plan the expansion of large-scale distribution systems while minimizing a convex combination of the expected value and the CVaR of loss of load costs. Our results show that objective functions based on traditional risk-neutral metrics, e.g., the expected energy not served (EENS), produce expansion plans that neglect the consequences of HILP events. Consistent risk-aversion strategies can only be achieved through the inclusion of risk-based objectives. Unlike the previously mentioned works, we propose a methodology that can simultaneously (i) be general enough to consider routine events (related to reliability) and extreme events (related to resilience), regardless of their cause, while allowing the planner to place more importance on reliability or resilience according to their level of risk aversion, (ii) consider not only traditional investments in line segments but also investments in storage devices, and (iii) be scaled to realistic large-scale distribution systems. Finally, we demonstrate our method using distribution planning information taken from a current US utility distribution system.
The contributions of this paper can be summarized as:
\begin{enumerate}
\item To propose a distribution system expansion planning model that accounts for reliability and resilience metrics while allowing the system planner to define their own level of risk-aversion. In this manner, the trade-off between focusing on reliability or on resilience can be evaluated so as to determine the most appropriate portfolio of investments in new line segments and storage devices.
\item To reformulate the proposed model based on realistic assumptions in order to improve the scalability of the proposed methodology. As a result, our proposed model can be solved for realistically sized large-scale systems while considering several failure scenarios, which can be based on historical data.
\end{enumerate}
The remainder of the paper is organized as follows. Section II presents a conventional scenario-based approach to formulate the problem under consideration in this paper. Section III describes the steps taken to alleviate the computational burden of the model presented in the previous section. Section IV presents case studies, and, finally, Section V concludes the paper.
\section{Conventional scenario-based approach }\label{sec.MathematicalFormulation}
Next, we present a methodology to select the optimal portfolio of investments to upgrade the distribution system with the objective of alleviating the impact of routine failures and the damage associated with HILP events. To achieve that, we consider not only the minimization of the expected value of the cost of loss of load, but also the CVaR of this cost over a range of failure scenarios (considering failures of line segments of the grid). In a conventional scenario-based approach, this problem can be formulated as follows.
\begin{align}
& \underset{{\substack{\Delta^-_{ntds},\Delta^+_{ntds},\zeta_{td},\psi^{CVaR}_{tds},\\f_{ltds},g^{Tr}_{ntds},p^{in}_{htds},p^{out}_{htds},\\SOC_{htds},v_{ntds},x^{L,fix}_{l}, x^{SD,fix}_{h},x^{SD,var}_{h}}}}{\text{Minimize}} \hspace{0.1cm} \sum_{l \in {\cal L}^C} C^{L,fix}_lx^{L,fix}_{l}
\notag\\
&\hspace{0pt} + \sum_{h \in H^C} \Bigl[ C^{SD,fix}_h x^{SD,fix}_{h} + C^{SD,var}_h x^{SD,var}_{h} {\color{black}\overline{S}}\overline{P}^{in}_h \Bigr] \notag \\
&\hspace{0pt}+ \sum_{d \in {\cal D}}W_d\sum_{t \in T}\Biggl[
pf C^{Imb} \sum_{n \in \Psi^N \setminus \Psi^{SS}} \Bigl[ \Delta^-_{n,t,d,1} + \Delta^+_{n,t,d,1} \Bigr] \Biggr ] \notag \\
&\hspace{0pt} + (1-\lambda) pf C^{Imb} \sum_{d \in {\cal D}} W_d \sum_{t \in T} \sum_{s \in \Omega \setminus{\{1\}}} \rho_s \sum_{n \in \Psi^N \setminus \Psi^{SS}} \Bigl[ \Delta^-_{ntds} \notag\\
&+ \Delta^+_{ntds} \Bigr]+ \lambda ~ pf ~ C^{Imb} \sum_{d \in {\cal D}} W_d \sum_{t \in T} \Bigl[ \zeta_{td}\notag\\
&\hspace{99pt} + \sum_{s \in \Omega \setminus{\{1\}}} \frac{\rho_s}{1-\alpha^{CVaR}} \psi^{CVaR}_{tds} \Bigr] \label{ScenarioBasedFormulation_1}\\
& \text{subject to:}\notag\\
& \psi^{CVaR}_{tds} + \zeta_{td} \geq \sum_{n \in \Psi^N \setminus \Psi^{SS}} \Bigl[ \Delta^-_{ntds} + \Delta^+_{ntds} \Bigr]; \forall d \in {\cal D}, \notag\\
&\hspace{153pt} t \in T, s \in \Omega \setminus \{1\} \label{ScenarioBasedFormulation_2}\\
& \psi^{CVaR}_{tds} \geq 0; \forall d \in {\cal D}, t \in T, s \in \Omega \label{ScenarioBasedFormulation_3}\\
& x^{L,fix}_l \in \{0,1\}; \forall l \in {\cal L}^C \label{ScenarioBasedFormulation_4}\\
& x^{SD,fix}_h \in \{0,1\}; \forall h \in H^C \label{ScenarioBasedFormulation_5}\\
& 0 \leq x^{SD,var}_h \leq x^{SD,fix}_h \overline{x}^{SD}_h; \forall h \in H^C \label{ScenarioBasedFormulation_6}\\
& 0\leq g^{Tr}_{ntds} \leq \overline{G}^{Tr}_n; \forall n \in \Psi^{SS}, d \in {\cal D}, t \in T, s \in \Omega \label{ScenarioBasedFormulation_7}\\
& \underline{V} \leq v_{ntds}\leq \overline{V}; \forall n \in \Psi^N, d \in {\cal D}, t \in T, s \in \Omega \label{ScenarioBasedFormulation_8}\\
& -y_{ltds} \overline{F}_l \leq f_{ltds} \leq y_{ltds} \overline{F}_l; \forall l \in {\cal L}^E, d \in {\cal D}, t \in T, \notag\\
& \hspace{205pt} s \in \Omega \label{ScenarioBasedFormulation_9}\\
& -y_{ltds} x^{L,fix}_l \overline{F}_l \leq f_{ltds} \leq y_{ltds} x^{L,fix}_l \overline{F}_l; \forall l \in {\cal L}^C, \notag\\
& \hspace{144pt}d \in {\cal D}, t \in T, s \in \Omega \label{ScenarioBasedFormulation_10}\\
& -M(1-y_{ltds}) \leq Z^L_l r^{len}_l f_{ltds} - \bigl( v_{fr(l),t,d,s} \notag\\
& \hspace{5pt} - v_{to(l),t,d,s} \bigl) \leq M(1-y_{ltds}); \forall l \in {\cal L}^{E}, d \in {\cal D}, t \in T, \notag\\
& \hspace{200pt} s \in \Omega \label{ScenarioBasedFormulation_11}\\
& - M(1-y_{ltds}) - M(1-x^{L,fix}_{l}) \leq Z^L_l r^{len}_l f_{ltds} \notag\\
& \hspace{5pt}- \bigl( v_{fr(l),t,d,s} - v_{to(l),t,d,s} \bigl) \leq M(1-y_{ltds}) \notag\\
& \hspace{26pt}+ M(1-x^{L,fix}_{l}); \forall l \in {\cal L}^{C}, d \in {\cal D}, t \in T, s \in \Omega \label{ScenarioBasedFormulation_12}\\
& \sum_{l \in {\cal L}|to(l)=n} f_{ltds} - \sum_{l \in {\cal L}|fr(l)=n} f_{ltds} + g^{Tr}_{ntds} = 0; \notag\\
& \hspace{97pt} \forall n \in {\Psi}^{SS}, d \in {\cal D}, t \in T, s \in \Omega \label{ScenarioBasedFormulation_13}\\
& \sum_{l \in {\cal L}|to(l)=n} f_{ltds} - \sum_{l \in {\cal L}|fr(l)=n} f_{ltds} = \sum_{h \in H_n} p^{in}_{htds} \notag\\
& \hspace{40pt} - \sum_{h \in H_n} p^{out}_{htds} - \Delta^-_{ntds} + \Delta^+_{ntds} + D_{ntd};\notag \\
& \hspace{72pt} \forall n \in {\Psi}^{N} \setminus {\Psi}^{SS}, d \in {\cal D}, t \in T, s \in \Omega \label{ScenarioBasedFormulation_14}\\
& SOC_{h|T|ds} = SOC_{ht^{0}ds}; \forall h \in H, d \in {\cal D}, s \in \Omega \label{ScenarioBasedFormulation_15}\\
& SOC_{htds} = SOC_{ht^{0}ds} + \eta \delta p^{in}_{htds} - \delta p^{out}_{htds}; \forall h \in H, \notag \\
& \hspace{143pt} d \in {\cal D}, t=1, s \in \Omega \label{ScenarioBasedFormulation_16}\\
& SOC_{htds} = SOC_{h,t-1,d,s} + \eta \delta p^{in}_{htds} - \delta p^{out}_{htds}; \notag \\
& \hspace{82pt} \forall h \in H, d \in {\cal D}, t \in T|t\geq2, s \in \Omega \label{ScenarioBasedFormulation_17}\\
& 0 \leq SOC_{htds} \leq \overline{S}\overline{P}^{in}_h; \forall h \in H \setminus H^C, d \in {\cal D}, t \in T, \notag\\
& \hspace{205pt} s \in \Omega\label{ScenarioBasedFormulation_18}\\
& 0 \leq SOC_{htds} \leq \overline{S} x^{SD,var}_h \overline{P}^{in}_h; \forall h \in H^C, d \in {\cal D}, \notag\\
& \hspace{185pt} t \in T, s \in \Omega\label{ScenarioBasedFormulation_19}\\
& 0 \leq p^{in}_{htds} \leq \overline{P}^{in}_h; \forall h \in H \setminus H^C, d \in {\cal D}, t \in T, s \in \Omega \label{ScenarioBasedFormulation_20}\\
& 0 \leq p^{out}_{htds} \leq \overline{P}^{out}_h; \forall h \in H \setminus H^C, d \in {\cal D}, t \in T, s \in \Omega \label{ScenarioBasedFormulation_21}\\
& 0 \leq p^{in}_{htds} \leq x^{SD,var}_h \overline{P}^{in}_h; \forall h \in H^C, d \in {\cal D}, t \in T, \notag\\
&\hspace{201pt} s \in \Omega \label{ScenarioBasedFormulation_22}\\
& 0 \leq p^{out}_{htds} \leq x^{SD,var}_h \overline{P}^{out}_h; \forall h \in H^C, d \in {\cal D}, t \in T, \notag\\
&\hspace{201pt} s \in \Omega \label{ScenarioBasedFormulation_23}
\end{align}
The optimization problem \eqref{ScenarioBasedFormulation_1}--\eqref{ScenarioBasedFormulation_23} is a two-stage stochastic program formulated as a mixed-integer linear programming (MILP) model. The first-stage decisions determine the investments in new line segments and storage devices. The second-stage decisions are associated with the operation under each failure scenario.
The objective function to be minimized in \eqref{ScenarioBasedFormulation_1} comprises the investment cost in new line segments and storage devices, the cost of imbalance in the base case (scenario $s=1$), and a convex combination of the expected value and the CVaR of the imbalance cost associated with a set of failure scenarios. Constraints \eqref{ScenarioBasedFormulation_2} and \eqref{ScenarioBasedFormulation_3} model the behavior of the variables $\psi^{CVaR}_{tds}$ and $\zeta_{td}$, which are related to the CVaR of the imbalance cost present in the objective function. Constraints \eqref{ScenarioBasedFormulation_4} and \eqref{ScenarioBasedFormulation_5} express the binary nature of the investment variables $x^{L,fix}_l$ and $x^{SD,fix}_h$, which correspond to the installation of new line segments and storage devices, respectively. Constraints \eqref{ScenarioBasedFormulation_6} limit the continuous variable associated with the capacity of the candidate storage devices to an upper bound that depends on whether $x^{SD,fix}_h$ assumes a value equal to one. Constraints \eqref{ScenarioBasedFormulation_7} limit the amount of power injected from the main transmission grid into the substations $n \in \Psi^{SS}$ of the distribution grid. Constraints \eqref{ScenarioBasedFormulation_8} impose voltage bounds for each bus of the distribution grid. Constraints \eqref{ScenarioBasedFormulation_9} and \eqref{ScenarioBasedFormulation_10} enforce transmission capacity limits for existing and candidate line segments, respectively, whereas constraints \eqref{ScenarioBasedFormulation_11} and \eqref{ScenarioBasedFormulation_12} relate power flows to voltages (also for existing and candidate lines) in a linear fashion, as often done in distribution planning models (see, e.g., \cite{Haffner2008,Munoz2016}). Constraints \eqref{ScenarioBasedFormulation_13} and \eqref{ScenarioBasedFormulation_14} ensure nodal power balance for substations and other buses, respectively. Constraints \eqref{ScenarioBasedFormulation_15}--\eqref{ScenarioBasedFormulation_17} model the state of charge (SOC) variation along the different periods. Constraints \eqref{ScenarioBasedFormulation_18} and \eqref{ScenarioBasedFormulation_19} impose SOC capacities for existing and candidate storage devices, respectively. Constraints \eqref{ScenarioBasedFormulation_20} and \eqref{ScenarioBasedFormulation_21} enforce limits on the charging and discharging of existing storage devices, while \eqref{ScenarioBasedFormulation_22} and \eqref{ScenarioBasedFormulation_23} do the same for candidate storage devices.
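These CVaR-related elements follow the well-known linear reformulation of Rockafellar and Uryasev: for a discrete loss $L_s$ realized with probability $\rho_s$, the CVaR at confidence level $\alpha^{CVaR}$ can be expressed as
\begin{equation*}
\mathrm{CVaR}_{\alpha^{CVaR}}(L) = \min_{\zeta}\; \zeta + \frac{1}{1-\alpha^{CVaR}} \sum_{s} \rho_s \left[ L_s - \zeta \right]^{+},
\end{equation*}
where $[\cdot]^{+} = \max\{\cdot,0\}$ is linearized by the auxiliary variables $\psi^{CVaR}_{tds} \geq L_s - \zeta_{td}$, $\psi^{CVaR}_{tds} \geq 0$, exactly as in \eqref{ScenarioBasedFormulation_2} and \eqref{ScenarioBasedFormulation_3}; at the optimum, $\zeta_{td}$ takes the value at risk, consistently with its definition in the Nomenclature.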
\section{Scalability-oriented reformulation }\label{sec.Scalability}
The scenario-based formulation \eqref{ScenarioBasedFormulation_1}--\eqref{ScenarioBasedFormulation_23} can explicitly evaluate the cost of pre- and post-failure loss of load under a range of scenarios, as it accounts for optimal power flow (OPF)-related constraints both for the base case and for each failure scenario. However, for medium-sized systems and a reasonable number of scenarios, solving \eqref{ScenarioBasedFormulation_1}--\eqref{ScenarioBasedFormulation_23} is prohibitive due to the large number of constraints, in particular the time-coupling ones associated with battery operation during outages. In this section, we rewrite formulation \eqref{ScenarioBasedFormulation_1}--\eqref{ScenarioBasedFormulation_23} to address these scalability issues by considering three assumptions that are based on industry practice.
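To give a rough sense of the sizes involved, the short sketch below (in Python) counts the continuous operating variables that formulation \eqref{ScenarioBasedFormulation_1}--\eqref{ScenarioBasedFormulation_23} replicates for every scenario; all instance dimensions in it are hypothetical placeholders and not the data of the feeders studied in Section IV.
\begin{verbatim}
# Back-of-the-envelope count of the second-stage (per-scenario)
# variables in the scenario-based model. All instance sizes below
# are hypothetical placeholders, not the feeders of Section IV.
n_buses, n_lines, n_storage = 2000, 2100, 10
n_days, n_periods, n_scenarios = 4, 24, 500

per_period = (n_lines          # flows f
              + n_buses        # voltages v
              + 3 * n_storage  # p_in, p_out, SOC
              + 2 * n_buses)   # imbalances Delta+ and Delta-
total = per_period * n_days * n_periods * n_scenarios
print(f"about {total:,} continuous variables from OPF replication")
\end{verbatim}
Even for this modest illustrative instance, the count is in the hundreds of millions, which motivates the reformulation below.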
\vspace{-0.4cm}
\subsection{Assumptions}
\textit{Assumption 1: Storage operation during outages}. Here we distinguish routine ($\Omega^{routine}$)
from resilience ($\Omega^{resilience}$) outage events. The first correspond to spontaneous equipment failures that cannot be predicted nor anticipated by storage operation. Thus, we assume that storage is operated with other (economic) objectives and that, when a routine failure occurs, the existing storage SOC can be mobilized to mitigate it. The second are extreme events (e.g., storms, floods, wildfires) that can be predicted hours ahead. In this case, when the event occurs, it is assumed that the operators have preventively charged the batteries up to their maximum capacity.
\textit{Assumption 2: Power flow constraints during outages}. We consider that the loss of load associated with a particular failure state can be modelled without writing the respective OPF-related constraints. This means that if a pre-outage state satisfies the steady-state load flow limits, any re-configuration of the network to mitigate an outage will also satisfy those limits. The realistic assumption behind this is that utilities only propose new ties as candidates after evaluating the peak condition of different topology realizations.
\textit{Assumption 3}: We assume that the number of candidate assets is very small in comparison with the number of outages and the grid size (utilities often evaluate a few investment options in grids with thousands of nodes).
\vspace{-0.3cm}
\subsection{Scalability Approach}
\textit{Assumption 1} allows us to model storage operation during failure events exclusively as a function of (i) the battery capacity and (ii) the SOC at the time $t$ when the failure occurs. \textit{Assumption 2} allows us to evaluate the loss of load as a function of those two variables and the duration $k_s$ of the outage when no reconfiguration can reconnect the portion of the grid that is disconnected by the failed line. With these two assumptions, an outage scenario $s$ can be represented as a failure state of the grid $c$, starting at time $t$ and with a duration $k_s$.
This separation between scenario and failure state allows us to reduce the dimensionality of the problem. Considering \textit{Assumption 3}, for each failure state $c$ there is only a small subset of relevant investments ($Rel_c$) that can mitigate the loss of load, regardless of the starting time $t$ and the duration $k_s$ of the outage. For example, investments in Zone A are irrelevant to mitigating the loss of load in Zone B when there is a failure in the line between Zones A and B.
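A minimal sketch of how the islands and the sets $Rel_c$ can be enumerated in a pre-processing step is shown below; it uses the networkx package on a hypothetical five-bus feeder, and the relevance criterion adopted (keeping only the candidate combinations that change the island structure created by the failure) is an illustrative assumption rather than the exact rule applied to the utility data.
\begin{verbatim}
# Illustrative enumeration of islands and of the relevant-investment
# set Rel_c for one failure state c, on a hypothetical 5-bus feeder.
from itertools import chain, combinations
import networkx as nx

existing = [(0, 1), (1, 2), (2, 3), (3, 4)]  # radial feeder, bus 0 = substation
candidates = [(1, 4), (0, 3)]                # candidate tie lines
failed = {(2, 3)}                            # failure state c: line 2-3 out

def islands(built):
    g = nx.Graph()
    g.add_nodes_from(range(5))
    g.add_edges_from(e for e in chain(existing, built) if e not in failed)
    return {frozenset(comp) for comp in nx.connected_components(g)}

base = islands(built=[])
rel_c = [j for r in range(1, len(candidates) + 1)
         for j in combinations(candidates, r)
         if islands(j) != base]   # j is relevant if it changes the islanding
print("islands with no investment:", base)
print("Rel_c:", rel_c)
\end{verbatim}
In this toy case, buses 3 and 4 are islanded by the failure, and every candidate combination that includes at least one tie line reconnects them, so all non-empty combinations belong to $Rel_c$.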
\subsection{Model Formulation}
Following this scalability approach, we consider the set ${\cal C}$ of all failure states of the grid, and we relate scenarios and failure states using the binary parameter $x^{state}_{cs}$. For each $s \in \Omega$, this parameter is set to 1 for exactly one index $c$ within ${\cal C}$, so as to indicate the failure state associated with each scenario. The parameter $k_s$ represents the duration of the failure state $c$ in the outage scenario $s$. Following \textit{Assumption 1}, the SOC at time $t$ is calculated separately, based on an economic objective (e.g., a price signal), and modeled as a parameter $f^{bat}_{htd}$ in both the base case and the failure scenarios. It is important to note that $f^{bat}_{htd}$ is used to determine the storage investment (which remains a variable). Still under \textit{Assumption 1}, the storage is modeled with its maximum SOC in response to extreme failure scenarios. Following \textit{Assumption 2}, the loss of load can be assessed by the energy balance within the multiple network islands that result from the failure states. This assessment is similar to the expansion planning decision-making framework provided in Section \ref{sec.MathematicalFormulation}, but defines the set of indexes of islanded buses $\mathfrak{D}_{jec}$ for each possible portfolio of investments $j$ and failure state $c$, where $e \in E_{jc}$ and $E_{jc}$ is the set of indexes of islands created by failure state $c$. As mentioned in the scalability approach, we define the set of relevant investments $Rel_c$, which contains the indexes $j$ of the investment combinations that are relevant to failure state $c$. In addition, we also create the sets ${Rel}^{L,on}_{jc}$ and ${Rel}^{L,off}_{jc}$, which contain the indexes of the line segments that are built and not built, respectively, under the relevant investment combination $j$ associated with failure state $c$.
The model \eqref{ScenarioBasedFormulation_1}--\eqref{ScenarioBasedFormulation_23} is rewritten as follows.
\begin{align}
& \underset{{\substack{\Delta^+_{ntd},\Delta^-_{ntd},\zeta_{td},\psi^{CVaR}_{tds},\\f_{ltd},g^{Tr}_{ntd},L_{jec},L^{\dagger}_{tds},\\p^{in}_{htd},p^{out}_{htd}, SOC_{htd}, \\SOC^{aux}_{hjec}, SOC^{ref}_{h}, v_{ntd},\\ x^{ind}_{jc}, x^{L,fix}_{l}, x^{SD,fix}_{h}, x^{SD,var}_{h}}}}{\text{Minimize}} \hspace{0.1cm} \sum_{l \in {\cal L}^C} \Bigl[ C^{L,fix}_lx^{L,fix}_{l} \Bigr] \notag\\
&\hspace{0pt} + \sum_{h \in H^C} \Bigl[ C^{SD,fix}_h x^{SD,fix}_{h} + C^{SD,var}_h x^{SD,var}_{h} {\color{black}\overline{S}}\overline{P}^{in}_h \Bigr] \notag \\
&\hspace{0pt}+ \sum_{d \in {\cal D}}W_d\sum_{t \in T}\Biggl[
pf C^{Imb} \sum_{n \in \Psi^N \setminus \Psi^{SS}} \Bigl[ \Delta^-_{ntd} + \Delta^+_{ntd} \Bigr] \Biggr ] \notag \\
&\hspace{0pt}+ (1-\lambda) pf C^{Imb} \sum_{d \in D} W_d \sum_{t \in T} \sum_{s \in \Omega} \rho_s L^{\dagger}_{tds}\notag\\
&\hspace{0pt}+ \lambda ~ pf ~ C^{Imb} \sum_{d \in D} W_d \sum_{t \in T} \Bigl[ \zeta_{td} \notag\\
&\hspace{110pt} + \sum_{s \in \Omega} \frac{\rho_s}{1-\alpha^{CVaR}} \psi^{CVaR}_{tds} \Bigr] \label{RepairV2_v7_1}\\
& \text{subject to:}\notag\\
& \psi^{CVaR}_{tds} + \zeta_{td} \geq L^{\dagger}_{tds}; \forall d \in {\cal D}, t \in T, s \in \Omega \label{RepairV2_v7_2}\\
& \psi^{CVaR}_{tds} \geq 0; \forall d \in {\cal D}, t \in T, s \in \Omega \label{RepairV2_v7_3}\\
& x^{ind}_{jc} \in \{0,1\}; \forall c \in {\cal C}, j \in {Rel}_c \label{RepairV2_v7_3_a}\\
& x^{L,fix}_{l} \in \{0,1\}; \forall l \in {\cal L}^C \label{RepairV2_v7_4}\\
& x^{SD,fix}_h \in \{0,1\}; \forall h \in H^C \label{RepairV2_v7_5}\\
& 0 \leq x^{SD,var}_h \leq x^{SD,fix}_h \overline{x}^{SD}_h; \forall h \in H^C\label{RepairV2_v7_6}\\
& L^{\dagger}_{tds} \geq \sum_{c \in {\cal C}} x^{state}_{cs} \sum_{j \in Rel_c} \sum_{e \in E_{jc}}\Bigl[ \Bigl [ \sum_{\tau=t}^{min\{t+k_s,|T|\}} L_{jec} f^{load}_{\tau,d} \Bigr ] \notag\\
& \hspace{5pt} - \sum_{h \in {\cal H}_{jec}} SOC^{aux}_{hjec} f^{bat}_{htd} \Bigr ]; \forall t \in T, d \in {\cal D}, s \in \Omega^{routine}\label{RepairV2_v7_7}\\
& L^{\dagger}_{tds} \geq \sum_{c \in {\cal C}} x^{state}_{cs} \sum_{j \in Rel_c} \sum_{e \in E_{jc}}\Bigl[ \Bigl [ \sum_{\tau=t}^{min\{t+k_s,|T|\}} L_{jec} f^{load}_{\tau,d} \Bigr ] \notag\\
& \hspace{14pt}- \sum_{h \in {\cal H}_{jec}} SOC^{aux}_{hjec} \Bigr ];\forall t \in T, d \in {\cal D}, s \in \Omega^{resilience}\label{RepairV2_v7_8}\\
& L^{\dagger}_{tds} \geq 0; \forall t \in T, d \in {\cal D}, s \in \Omega|s\geq2\label{RepairV2_v7_9}\\
& L^{\dagger}_{tds} = 0; \forall t \in T, d \in {\cal D}, s = 1\label{RepairV2_v7_10}\\
& \sum_{j \in Rel_c} x^{ind}_{jc} = 1; \forall c \in {\cal C}\label{RepairV2_v7_11}\\
&-M\sum_{l \in Rel^{L,on}_{jc}}(1-x^{L,fix}_l)
- M\sum_{l \in Rel^{L,off}_{jc}}x^{L,fix}_l
\notag\\
&\hspace{25pt}\leq x^{ind}_{jc} - 1 \leq M\sum_{l \in Rel^{L,on}_{jc}}(1-x^{L,fix}_l)
\notag\\
&\hspace{60pt}+ M\sum_{l \in Rel^{L,off}_{jc}}x^{L,fix}_l
;\forall c \in {\cal C}, j \in Rel_c\label{RepairV2_v7_12}\\
&-M (1-x^{ind}_{jc}) \leq SOC^{ref}_{h} - SOC^{aux}_{hjec} \notag\\
&\hspace{5pt} \leq M (1-x^{ind}_{jc});\forall c \in {\cal C}, j \in Rel_c, e \in E_{jc}, h \in {\cal H}_{jec}\label{RepairV2_v7_13}\\
&-M x^{ind}_{jc} \leq SOC^{aux}_{hjec} \leq M x^{ind}_{jc}; \forall c \in {\cal C}, j \in Rel_c, \notag\\
&\hspace{150pt}e \in E_{jc}, h \in {\cal H}_{jec}\label{RepairV2_v7_14}\\
&-M (1-x^{ind}_{jc}) \leq \Bigl[ \sum_{i \in \mathfrak{D}_{jec}} D_{i}^{peak} \Bigr] - L_{jec} \notag\\
&\hspace{48pt} \leq M (1-x^{ind}_{jc}); \forall c \in {\cal C}, j \in Rel_c, e \in E_{jc} \label{RepairV2_v7_15}\\
& L_{jec} \geq 0; \forall c \in {\cal C}, j \in Rel_c, e \in E_{jc} \label{RepairV2_v7_16}\\
& 0\leq g^{Tr}_{ntd} \leq \overline{G}^{Tr}_n; \forall n \in \Psi^{SS}, d \in {\cal D}, t \in T \label{RepairV2_v7_17}\\
& \underline{V} \leq v_{ntd}\leq \overline{V}; \forall n \in \Psi^N, d \in {\cal D}, t \in T \label{RepairV2_v7_18}\\
& -y_{ltd,0} \overline{F}_l \leq f_{ltd} \leq y_{ltd,0} \overline{F}_l; \forall l \in {\cal L}^E, d \in {\cal D}, t \in T \label{RepairV2_v7_19}\\
& \sum_{l \in {\cal L}|to(l)=n} f_{ltd} - \sum_{l \in {\cal L}|fr(l)=n} f_{ltd} + g^{Tr}_{ntd} = 0; \notag\\
&\hspace{125pt} \forall n \in {\Psi}^{SS}, d \in {\cal D}, t \in T \label{RepairV2_v7_20}\\
& \sum_{l \in {\cal L}|to(l)=n} f_{ltd} - \sum_{l \in {\cal L}|fr(l)=n} f_{ltd} = \sum_{h \in H_n} p^{in}_{htd} \notag\\
&\hspace{0pt} - \sum_{h \in H_n} p^{out}_{htd} - \Delta^-_{ntd} + \Delta^+_{ntd} + D_{ntd};\forall n \in {\Psi}^{N} \setminus {\Psi}^{SS},\notag\\
&\hspace{170pt} d \in {\cal D}, t \in T \label{RepairV2_v7_21}\\
& -M(1-y_{ltd,0}) \leq Z^L_l r^{len}_l f_{ltd} - \bigl( v_{fr(l),t,d} \notag\\
&\hspace{11pt}- v_{to(l),t,d} \bigl) \leq M(1-y_{ltd,0}); \forall l \in {\cal L}^{E}, d \in {\cal D}, t \in T \label{RepairV2_v7_22}\\
& SOC_{h|T|d} = SOC_{ht^{0}d}; \forall h \in H, d \in {\cal D}\label{RepairV2_v7_23}\\
& SOC_{htd} = SOC_{ht^{0}d} + \eta \delta p^{in}_{htd} - \delta p^{out}_{htd}; \forall h \in H, \notag\\
&\hspace{172pt}d \in {\cal D}, t=1 \label{RepairV2_v7_24}\\
& SOC_{htd} = SOC_{h,t-1,d} + \eta \delta p^{in}_{htd} - \delta p^{out}_{htd}; \forall h \in H,\notag\\
&\hspace{147pt} d \in {\cal D}, t \in T|t\geq2 \label{RepairV2_v7_25}\\
& 0\leq SOC^{ref}_{h} \leq \overline{S}\overline{P}^{in}_h; \forall h \in H \setminus H^C\label{RepairV2_v7_26}\\
& 0\leq SOC^{ref}_{h} \leq \overline{S} x^{SD,var}_h \overline{P}^{in}_h; \forall h \in H^C\label{RepairV2_v7_27}\\
& SOC_{htd} = SOC^{ref}_{h} f^{bat}_{htd}; \forall h \in H, d \in {\cal D}, t \in T\label{RepairV2_v7_28}\\
& 0\leq p^{in}_{htd} \leq \overline{P}^{in}_h; \forall h \in H \setminus H^C, d \in {\cal D}, t \in T\label{RepairV2_v7_29}\\
& 0\leq p^{out}_{htd} \leq \overline{P}^{out}_h; \forall h \in H \setminus H^C, d \in {\cal D}, t \in T\label{RepairV2_v7_30}\\
& 0\leq p^{in}_{htd} \leq x^{SD,var}_h \overline{P}^{in}_h; \forall h \in H^C, d \in {\cal D}, t \in T\label{RepairV2_v7_31}\\
& 0\leq p^{out}_{htd} \leq x^{SD,var}_h \overline{P}^{out}_h; \forall h \in H^C, d \in {\cal D}, t \in T\label{RepairV2_v7_32}
\end{align}
The objective function to be minimized \eqref{RepairV2_v7_1} and constraints \eqref{RepairV2_v7_2}--\eqref{RepairV2_v7_6} are similar to \eqref{ScenarioBasedFormulation_1}--\eqref{ScenarioBasedFormulation_6}. One difference is that, in \eqref{RepairV2_v7_1}, $\Delta^-_{ntd}$ and $\Delta^+_{ntd}$ correspond to imbalances only under the base case condition, where no failure takes place. Also, the loss of load for period $t$ of each typical day $d$ that belongs to each scenario $s$ is represented by $L^{\dagger}_{tds}$, which is bounded for routine failure scenarios in \eqref{RepairV2_v7_7} and for resilience failure scenarios in \eqref{RepairV2_v7_8}. Moreover, constraints \eqref{RepairV2_v7_3_a} enforce the binary nature of the decision variables $x^{ind}_{jc}$, which indicate which relevant investment combination $j$ is realized under each failure state $c$. For each scenario $s \in \Omega^{routine}$, the right-hand side of constraint \eqref{RepairV2_v7_7} corresponds to the loss of load under the respective failure state $c$, which is assigned to scenario $s$ via the only $x^{state}_{cs}$ equal to $1$ among all $c \in {\cal C}$. This loss of load results from summing, across the relevant investment combinations and the islands created by the line outages, the islanded demand during the failure period minus the available SOC of the batteries connected to the respective islands. Analogously, the right-hand side of constraints \eqref{RepairV2_v7_8} represents the loss of load for resilience scenarios. The salient feature in \eqref{RepairV2_v7_8} is that the whole capacity of the storage device can be used under a resilience scenario. This assumption is realistic, as extreme events (such as natural disasters) can usually be predicted with enough lead time to charge the batteries to their full potential and provision their capacities to respond to the adverse conditions. Constraints \eqref{RepairV2_v7_9} ensure the non-negativity of the loss-of-load variables $L^{\dagger}_{tds}$, while constraints \eqref{RepairV2_v7_10} enforce the loss of load to be zero for the most likely scenario, in which no element fails, as in the base case condition. Constraints \eqref{RepairV2_v7_11} indicate that exactly one of the possible investment combinations in lines is chosen and therefore has an impact on failure state $c$. Constraints \eqref{RepairV2_v7_12} associate the combination of lines that are installed (whose indexes are in $Rel^{L,on}_{jc}$) and not installed (whose indexes are in $Rel^{L,off}_{jc}$) with the variable $x^{ind}_{jc}$. Constraints \eqref{RepairV2_v7_13} and \eqref{RepairV2_v7_14} indicate which storage devices are associated with each island created after an outage, according to the investment decision. Constraints \eqref{RepairV2_v7_15} associate the loss of load of each island (represented by the variable $L_{jec}$) with the summation of the peak demands of the islanded buses according to the investment made. Note that the islanded peak demand $L_{jec}$ is multiplied by a factor $f^{load}_{\tau,d}$ in \eqref{RepairV2_v7_7} and \eqref{RepairV2_v7_8} to adjust it to the demand of time period $\tau$. Constraints \eqref{RepairV2_v7_16} ensure the non-negativity of the variables $L_{jec}$. Constraints \eqref{RepairV2_v7_17}--\eqref{RepairV2_v7_32} represent the base case operating condition analogously to \eqref{ScenarioBasedFormulation_7}--\eqref{ScenarioBasedFormulation_23}.
The salient features in \eqref{RepairV2_v7_16}--\eqref{RepairV2_v7_32} with respect to \eqref{ScenarioBasedFormulation_7}--\eqref{ScenarioBasedFormulation_23} are the inclusion of the decision variables $SOC^{ref}_h$ and of constraints \eqref{RepairV2_v7_26}, which enforce a predetermined hourly profile for each storage device, dictated by the parameters $f^{bat}_{htd}$. The values of $f^{bat}_{htd}$ are determined a priori by optimizing storage charging and discharging considering only the energy price variation within the different typical days. This assumption of fixed hourly SOC profiles is reasonable, as batteries are usually operated to avoid higher energy costs rather than to provision capacity for potential routine failures.
In the case of resilience failures, as mentioned above, the full capacity of the storage devices can be used.
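To make the role of the weighting parameter $\lambda$ in \eqref{RepairV2_v7_1} concrete, the following minimal sketch (plain Python with NumPy; the scenario losses, probabilities, and confidence level are hypothetical placeholders, not data from our case study) evaluates the convex combination of the expected value and the CVaR of the cost of loss of load. In the optimization model itself, the CVaR is embedded through the auxiliary variables $\zeta_{td}$ and $\psi^{CVaR}_{tds}$; here we simply evaluate the CVaR of a discrete cost distribution directly.
\begin{verbatim}
import numpy as np

def cvar(costs, probs, alpha=0.95):
    # expected cost in the worst (1 - alpha) probability tail
    order = np.argsort(costs)[::-1]       # worst scenarios first
    c, p = costs[order], probs[order]
    tail, acc, val = 1.0 - alpha, 0.0, 0.0
    for ci, pi in zip(c, p):
        w = min(pi, tail - acc)           # tail mass taken from this scenario
        val += w * ci
        acc += w
        if acc >= tail:
            break
    return val / tail

# hypothetical scenario losses of load (MWh) and probabilities
loss = np.array([0.0, 1.2, 3.5, 46.0])    # last entry: rare extreme event
prob = np.array([0.90, 0.06, 0.03, 0.01])
cost = 1500.0 * loss                      # VoLL = $1.50/kWh = $1500/MWh

for lam in (0.0, 0.5, 1.0):
    obj = (1 - lam) * prob @ cost + lam * cvar(cost, prob)
    print("lambda =", lam, "-> risk-adjusted cost =", round(obj, 1))
\end{verbatim}
Under these placeholder numbers, moving from $\lambda=0$ to $\lambda=1$ shifts the objective from the small expected cost to the large tail cost driven by the extreme-event scenario, which is precisely what pushes the model towards resilience-oriented investments.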
\begin{comment}
\subsection{Reducing investment-related constraints}
Despite not having the burden of optimizing the power for each scenario, model \eqref{RepairV2_v7_1}--\eqref{RepairV2_v7_32} can easily become intractable due to considering the set of indexes of all possible combinations of candidate assets ${\cal J}$ in constraints \eqref{RepairV2_v7_3_a}, \eqref{RepairV2_v7_7}, \eqref{RepairV2_v7_8}, \eqref{RepairV2_v7_11}--\eqref{RepairV2_v7_16}. Nevertheless, for a particular failure state $c$, not all investment combinations $j \in {\cal J}$ matter. Rather, just a few combinations of investments are relevant to failure state $c$. For example, a line investment in the north part of a distribution grid may not be able to prevent a loss of load associated with a failure in the south part of the grid. Hence, we create the set $Rel_c$, which contains the indexes $j$ of the investment combinations that are relevant to failure state $c$. In addition, we also create sets ${Rel}^{L,on}_{jc}$ and ${Rel}^{L,off}_{jc}$, which contain the indexes of line segments that are built and not built, respectively, under relevant investment combination $j$ associated with failure state $c$. Thus, the proposed model is formulated as follows.
\begin{align}
& \underset{{\substack{\Delta^+_{ntd},\Delta^-_{ntd},\zeta_{td},\psi^{CVaR}_{tds},\\f_{ltd},g^{Tr}_{ntd},L_{jec},L^{\dagger}_{tds},\\p^{in}_{htd},p^{out}_{htd}, SOC_{htd}, \\SOC^{aux}_{hjec}, SOC^{ref}_{h}, v_{ntd},\\ x^{ind}_{jc}, x^{L,fix}_{l}, x^{SD,fix}_{l}, x^{SD,var}_{l}}}}{\text{Minimize}} \hspace{0.1cm} \sum_{l \in {\cal L}^C} \Bigl[ C^{L,fix}_lx^{L,fix}_{l} \Bigr] \notag\\
&\hspace{0pt} + \sum_{h \in H^C} \Bigl[ C^{SD,fix}_h x^{SD,fix}_{h} + C^{SD,var}_h x^{SD,var}_{h} {\color{black}\overline{S}}\overline{P}^{in}_h \Bigr] \notag \\
&\hspace{0pt}+ \sum_{d \in {\cal D}}W_d\sum_{t \in T}\Biggl[
pf C^{Imb} \sum_{n \in \Psi^N \setminus \Psi^{SS}} \Bigl[ \Delta^-_{ntd} + \Delta^+_{ntd} \Bigr] \Biggr ] \notag \\
&\hspace{0pt}+ (1-\lambda) pf C^{Imb} \sum_{d \in D} W_d \sum_{t \in T} \sum_{s \in \Omega} \rho_s L^{\dagger}_{tds}\notag\\
&\hspace{0pt}+ \lambda ~ pf ~ C^{Imb} \sum_{d \in D} W_d \sum_{t \in T} \Bigl[ \zeta_{td} \notag\\
&\hspace{110pt} + \sum_{s \in \Omega} \frac{\rho_s}{1-\alpha^{CVaR}} \psi^{CVaR}_{tds} \Bigr] \label{RepairV2_v8_1}\\
& \text{subject to:}\notag\\
& \text{Constraints \eqref{RepairV2_v7_2}, \eqref{RepairV2_v7_3}, \eqref{RepairV2_v7_4}--\eqref{RepairV2_v7_6}, \eqref{RepairV2_v7_9}, \eqref{RepairV2_v7_10},}\notag\\
& \hspace{165pt} \text{and \eqref{RepairV2_v7_17}--\eqref{RepairV2_v7_32} } \\
& x^{ind}_{jc} \in \{0,1\}; \forall c \in {\cal C}, j \in {Rel}_c \label{RepairV2_v7_3_a_modified}\\
& L^{\dagger}_{tds} \geq \sum_{c \in {\cal C}} x^{state}_{cs} \sum_{j \in Rel_c} \sum_{e \in E_{jc}}\Bigl[ \Bigl [ \sum_{\tau=t}^{min\{t+k_s,|T|\}} L_{jec} f^{load}_{\tau,d} \Bigr ] \notag\\
& \hspace{5pt} - \sum_{h \in {\cal H}_{jec}} SOC^{aux}_{hjec} f^{bat}_{htd} \Bigr ]; \forall t \in T, d \in {\cal D}, s \in \Omega^{routine}\label{RepairV2_v7_7_modified}\\
& L^{\dagger}_{tds} \geq \sum_{c \in {\cal C}} x^{state}_{cs} \sum_{j \in Rel_c} \sum_{e \in E_{jc}}\Bigl[ \Bigl [ \sum_{\tau=t}^{min\{t+k_s,|T|\}} L_{jec} f^{load}_{\tau,d} \Bigr ] \notag\\
& \hspace{14pt}- \sum_{h \in {\cal H}_{jec}} SOC^{aux}_{hjec} \Bigr ];\forall t \in T, d \in {\cal D}, s \in \Omega^{resilience}\label{RepairV2_v7_8_modified}\\
& \sum_{j \in Rel_c} x^{ind}_{jc} = 1; \forall c \in {\cal C}\label{RepairV2_v7_11_modified}\\
& \sum_{l \in Rel^{L,on}_{jc}}(1-x^{L,fix}_l) + \sum_{l \in Rel^{L,off}_{jc}}x^{L,fix}_l \notag\\
& \hspace{84pt} \leq M (1-x_{jc}^{ind}) ;\forall c \in {\cal C}, j \in Rel_c \label{RepairV2_v7_12_modified}\\
&-M (1-x^{ind}_{jc}) \leq SOC^{ref}_{h} - SOC^{aux}_{hjec} \notag\\
&\hspace{5pt} \leq M (1-x^{ind}_{jc});\forall c \in {\cal C}, j \in Rel_c, e \in E_{jc}, h \in {\cal H}_{jec}\label{RepairV2_v7_13_modified}\\
&-M x^{ind}_{jc} \leq SOC^{aux}_{hjec} \leq M x^{ind}_{jc}; \forall c \in {\cal C}, j \in Rel_c, \notag\\
&\hspace{150pt}e \in E_{jc}, h \in {\cal H}_{jec}\label{RepairV2_v7_14_modified}\\
&-M (1-x^{ind}_{jc}) \leq \Bigl[ \sum_{i \in \mathfrak{D}_{jec}} D_{i}^{peak} \Bigr] - L_{jec} \notag\\
&\hspace{48pt} \leq M (1-x^{ind}_{jc}); \forall c \in {\cal C}, j \in Rel_c, e \in E_{jc} \label{RepairV2_v7_15_modified}\\
& L_{jec} \geq 0; \forall c \in {\cal C}, j \in Rel_c, e \in E_{jc} \label{RepairV2_v7_16_modified}
\end{align}
\end{comment}
\section{Case study}
\begin{figure}[!h]
\centering
\includegraphics[width=0.25\textwidth]{Figs/Candidates.pdf}
\caption{Distribution system map.}
\label{Fig.systemMap}
\end{figure}
The proposed methodology is illustrated in this section using a distribution network from ComEd in Illinois, USA. This system (depicted in Fig.~\ref{Fig.systemMap}) has 1435 customers and a peak load of 3.5\,MW, and is composed of 2055 nodes, 2062 existing lines, and 2 substations. In addition, we consider 13 candidate lines and 9 candidate nodes to receive storage investments. Each candidate line has an investment cost of \$158K
per mile and each storage device costs \$660/kWh. Our methodology {\color{black} was
implemented on an Ubuntu Linux server with two Intel\textsuperscript{\textregistered} Xeon\textsuperscript{\textregistered} E5-2680 processors @ 2.40GHz and
64 GB of RAM, using Python 3.8 with Pyomo, and solved via CPLEX 12.9.}
\begin{figure*}[!h]
\centering
\includegraphics[width=0.7\textwidth]{Figs/VoLL1e5.pdf}
\caption{Investment plans for different levels of risk aversion considering VoLL=\$1.50/kWh.}
\label{Fig.investmentPans_1e5Dollar/kWh}
\end{figure*}
To model the load, we considered 4 typical days, representing the electricity demand in the different meteorological seasons. We combined the peak demand with the demand profile reported by the U.S. Energy Information Administration in \cite{US_EIA_demandProfile} (considering Illinois in Zone 4 of MISO).
Routine failures of the network in Fig.~\ref{Fig.systemMap} were modeled based on ComEd's historical outages from February 1998 to November 2020. Additionally, we model three major events with a failure rate of 0.0143 times/year (equivalent to once every 70 years). The first involves a simultaneous failure of two line segments in the north part of the network that disconnects 46\% of consumers for 3 hours. The second involves one of the substations and affects 55\% of consumers for 1 hour. The third mimics a recent extreme event, caused by a storm in Illinois in August 2020 (described in \cite{ComEd2021_InvestmentsProposal}), that, according to ComEd's data, simultaneously affected 5 line segments for 58 hours.
\begin{comment}
Illinois is in MISO Zone 4
\end{comment}
\begin{figure}[!h]
\centering
\includegraphics[width=0.4\textwidth]{Figs/Failure.pdf}
\caption{Extreme failure in August 2020 -- lines out-of-service and respective number of customers affected in the system under consideration.}
\label{Fig.extremeFailure}
\end{figure}
Considering these failures and the investment costs, we obtained investment plans for three levels of risk aversion: $\lambda=0$, $\lambda=0.5$, and $\lambda=1$. The first ($\lambda=0$) is a risk-neutral plan, considering only the expected value of the loss of load in \eqref{RepairV2_v7_1}. The second ($\lambda=0.5$) has a medium level of risk aversion, as it considers both the expected value and the CVaR of the cost of loss of load with equal weights in \eqref{RepairV2_v7_1}, while the third plan ($\lambda=1$) has the highest level of risk aversion, exclusively minimizing the CVaR of the cost of loss of load.
It is important to note that this cost is highly dependent on the user-defined value of loss of load (VoLL), modeled by the parameter $C^{Imb}$. For routine outages, this economic value can be obtained by tools such as the Interruption Cost Estimate (ICE) Calculator \cite{ICE_calculator}. For the purpose of demonstrating our methodology, we obtain plans for VoLL=\$1.50/kWh and VoLL=\$5.00/kWh.
Table \ref{tab:investmentResults} presents the investment results associated with the different levels of risk aversion and values of loss of load, together with the respective annual expected value and CVaR of the cost of loss of load. In addition, Fig. \ref{Fig.investmentPans_1e5Dollar/kWh} illustrates the investments made for all considered values of $\lambda$ when the VoLL is equal to \$1.50/kWh. As expected, a larger VoLL increases both the expected value and the CVaR of the cost of loss of load and motivates investments to avoid more expensive load shedding. In addition, higher levels of risk aversion ($\lambda=0.5$ and $\lambda=1$) substantially decrease the annual costs associated with the CVaR of loss of load.
\begin{table*}[htbp]
\footnotesize
\centering
\caption{Investments associated with each level of risk aversion and value of loss of load.}
\begin{tabular}{c c c c c c c c c }
\toprule
\multicolumn{1}{c}{\multirow{2}[4]{*}{Value of }} & \multicolumn{1}{c}{\multirow{5}[8]{*}{$\lambda$}} & \multicolumn{1}{c}{Annual } & \multicolumn{1}{c}{Annual } & \multicolumn{1}{c}{\multirow{2}[4]{*}{Total}} & \multicolumn{1}{c}{\multirow{2}[4]{*}{Total}} & \multicolumn{1}{c}{\multirow{2}[4]{*}{Number}} & \multicolumn{1}{c}{\multirow{2}[4]{*}{Installed}} & \multicolumn{1}{c}{\multirow{2}[4]{*}{Computing}} \\
\multicolumn{1}{c}{} & & \multicolumn{1}{c}{expected value } & \multicolumn{1}{c}{CVaR} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\
\multicolumn{1}{c}{loss of} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{(loss of load)} & \multicolumn{1}{c}{(loss of load)} & \multicolumn{1}{c}{investments} & \multicolumn{1}{c}{investments} & \multicolumn{1}{c}{of } & \multicolumn{1}{c}{storage} & \multicolumn{1}{c}{times} \\
\multicolumn{1}{c}{load} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{ costs } & \multicolumn{1}{c}{ costs } & \multicolumn{1}{c}{in lines } & \multicolumn{1}{c}{in storage } & \multicolumn{1}{c}{installed} & \multicolumn{1}{c}{capacity} & \multicolumn{1}{c}{\multirow{2}[2]{*}{(s)}} \\
\multicolumn{1}{c}{(\$/kWh) } & \multicolumn{1}{c}{} & \multicolumn{1}{c}{(\$k/year)} & \multicolumn{1}{c}{(\$k/year)} & \multicolumn{1}{c}{(\$k)} & \multicolumn{1}{c}{(\$k)} & \multicolumn{1}{c}{lines} & \multicolumn{1}{c}{(MWh)} & \multicolumn{1}{c}{} \\
\midrule
1.50 & 0 & {\color{white}0}71.31 & 11,388,684.38 & 256.80 & {\color{white}0,00}0.00 & {\color{white}0}6 & {\color{white}0}0.00 & {\color{white}0,}380.05 \\
1.50 & 0.5 & {\color{white}0}61.88 & {\color{white}00,00}1,237.58 & 572.80 & 1,038.20 & 11 & {\color{white}0}1.60 & 1,926.94 \\
1.50 & 1 & {\color{white}0}57.52 & {\color{white}00,00}1,150.49 & 572.80 & 4,609.60 & 11 & {\color{white}0}7.00 & 2,727.73 \\
\midrule
5.00 & 0 & 216.05 & 37,962,281.25 & 476.50 & {\color{white}0,00}0.00 & {\color{white}0}9 & {\color{white}0}0.00 & {\color{white}0,}445.73 \\
5.00 & 0.5 & 185.76 & {\color{white}00,00}3,715.13 & 824.20 & 5,942.40 & 13 & {\color{white}0}9.00 & 2,106.29 \\
5.00 & 1 & 183.65 & {\color{white}00,00}3,673.09 & 824.20 & 7,438.70 & 13 & 11.30 & 2,216.20 \\
\bottomrule
\end{tabular}%
\label{tab:investmentResults}%
\end{table*}%
\subsection{Simulation of system performance under an extreme failure}
For all obtained expansion plans, we have simulated the system performance under the extreme failure reported by ComEd in August 2020. For illustrative purposes, we have limited this failure to 12 hours of a summer day. In Fig. \ref{Fig.resilienceTrapezoids}, we depict how much of the demand was served for each plan considering VoLL=\$1.50/kWh and VoLL=\$5.00/kWh, respectively. Compared to the plan obtained for $\lambda=0$, the plan attained for $\lambda=1$ can serve up to 12\% more of the demand during the extreme event when considering VoLL=\$1.50/kWh. This difference increases to 29\% for VoLL=\$5.00/kWh. In fact, since the plan for $\lambda=0$ is risk-neutral and therefore only captures the effect of expected outages during normal operating conditions, its performance under this extreme failure is the same as not investing at all. In Fig. \ref{Fig.totalLoadSheddingVersusStorage}, we compare the investment made in storage to the total load not served during the simulated day with the extreme event. As can be seen, higher levels of risk aversion and VoLL significantly decrease the total load not served.
\begin{comment}
\begin{figure}[!h]
\centering
\includegraphics[width=0.4\textwidth]{Figs/ResTrapezoide_VoLL_1e50_plot.pdf}
\caption{Hourly served demand under extreme event for investments considering VoLL=\$1.50/kWh.}
\label{Fig.resilienceTrapezoid_1e5Dollar/kWh}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=0.4\textwidth]{Figs/ResTrapezoide_VoLL_5_plot.pdf}
\caption{Hourly served demand under extreme event for investments considering VoLL=\$5.00/kWh.}
\label{Fig.resilienceTrapezoid_5Dollars/kWh}
\end{figure}
\end{comment}
\begin{figure}[!h]
\centering
\includegraphics[width=0.3\textwidth]{Figs/ResTrapezoides_plot.pdf}
\caption{Hourly served demand under extreme event for investments considering VoLL=\$1.50/kWh on the left and VoLL=\$5.00/kWh on the right.}
\label{Fig.resilienceTrapezoids}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=0.4\textwidth]{Figs/TotalLoadSheddingVersusStorage.pdf}
\caption{Total load shedding under extreme event versus investment in storage capacity.}
\label{Fig.totalLoadSheddingVersusStorage}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=0.48\textwidth]{Figs/cvar_lol1.5.pdf}
\caption{Out-of-sample analysis---CVaR$_{1\%}$ of hourly energy not served for expansion plans obtained under different levels of risk aversion while considering VoLL=\$1.50/kWh.}
\label{Fig.CVaR_hourlyEnergyNotServed}
\end{figure}
\subsection{Out-of-sample simulation}
We have generated 1000 annual scenarios of operation to evaluate the performance of the six obtained expansion plans in an out-of-sample analysis. For each hour of each scenario, we generated Bernoulli trials for the line states \textcolor{black}{(1: in service; 0: failure)} with probabilities according to the failure rates used while obtaining the expansion plans. The performance of the obtained expansion plans was then assessed under the realization of the generated scenarios and compared to a base case without investments. This assessment involved computing the hourly and annual energy not served as well as SAIFI and SAIDI for each scenario. In Tables \ref{tab:outOfSampleEnergyNotServedMetrics} and \ref{tab:SAIFI_SAIDI_metrics}, we present the resulting metrics and, in Fig. \ref{Fig.CVaR_hourlyEnergyNotServed}, we present a histogram that shows the distributions of the CVaR of hourly energy not served for the plans obtained under different levels of risk aversion and for the base case. The average metrics in Tables \ref{tab:outOfSampleEnergyNotServedMetrics} and \ref{tab:SAIFI_SAIDI_metrics} are related to reliability, while the CVaR and worst-case metrics are associated with resilience. As can be seen, both reliability and resilience metrics improve significantly when the level of risk aversion and the VoLL increase. In addition, Fig. \ref{Fig.CVaR_hourlyEnergyNotServed} clearly shows that higher levels of risk aversion when determining new investments result in fewer hours with high levels of CVaR of energy not served.
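For concreteness, the following sketch (Python with NumPy; the per-hour failure probabilities, affected-customer counts, and fixed repair duration are hypothetical placeholders rather than our case-study data) outlines the Monte Carlo procedure behind this out-of-sample analysis: hourly Bernoulli trials for line states, followed by the computation of SAIFI/SAIDI-type metrics and the CVaR of the resulting sample.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

n_lines, hours, n_scen = 5, 8760, 1000
p_fail   = np.full(n_lines, 2e-5)             # per-hour failure probability
affected = np.array([100, 250, 60, 400, 90])  # customers hit per line outage
repair_h = 1.0                                # assumed outage duration (h)
n_cust   = 1435

saifi = np.empty(n_scen)
for s in range(n_scen):
    # Bernoulli trials for line states (True = failure), per hour and line
    fails = rng.random((hours, n_lines)) < p_fail
    n_int = fails.sum(axis=0)                 # interruptions per line
    saifi[s] = n_int @ affected / n_cust
saidi = saifi * repair_h                      # fixed-duration assumption

print("average SAIFI:", saifi.mean())
print("CVaR_5% SAIFI:", np.sort(saifi)[int(0.95 * n_scen):].mean())
print("average SAIDI (h):", saidi.mean())
\end{verbatim}
In the actual assessment, the served demand under each realized failure state additionally depends on the installed lines and storage devices of the plan being evaluated; the sketch only illustrates the scenario-generation and metric-aggregation steps.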
\begin{table}[htbp]
\footnotesize
\centering
\caption{Out-of-sample analysis -- Metrics of annual energy not served for expansion plans obtained under different levels of risk aversion and values of loss of load.}
\begin{tabular}{cccccc}
\toprule
\textbf{VoLL} & \multirow{2}[2]{*}{\textbf{Metric}} & \textbf{No } & \multirow{2}[2]{*}{\textbf{$\lambda=0$}} & \multirow{2}[2]{*}{\textbf{$\lambda=0.5$}} & \multirow{2}[2]{*}{\textbf{$\lambda=1$}} \\
\textbf{(\$/kWh)} & & \textbf{Inv.} & & & \\
\midrule
\multirow{9}[6]{*}{1.50} & \textbf{Average annual } & \multirow{3}[2]{*}{20.95} & \multirow{3}[2]{*}{6.09} & \multirow{3}[2]{*}{3.47} & \multirow{3}[2]{*}{2.61} \\
& \textbf{energy not } & & & & \\
& \textbf{served (MWh)} & & & & \\
\cmidrule{2-6} & \textbf{CVaR$_{1\%}$ of } & \multirow{3}[2]{*}{39.03} & \multirow{3}[2]{*}{17.05} & \multirow{3}[2]{*}{13.20} & \multirow{3}[2]{*}{10.36} \\
& \textbf{annual energy} & & & & \\
& \textbf{not served (MWh)} & & & & \\
\cmidrule{2-6} & \textbf{Worst case} & \multirow{3}[2]{*}{44.17} & \multirow{3}[2]{*}{23.21} & \multirow{3}[2]{*}{21.57} & \multirow{3}[2]{*}{17.48} \\
& \textbf{annual energy} & & & & \\
& \textbf{not served (MWh)} & & & & \\
\midrule
\multirow{9}[6]{*}{5.00} & \textbf{Average annual } & \multirow{3}[2]{*}{20.95} & \multirow{3}[2]{*}{4.18} & \multirow{3}[2]{*}{2.36} & \multirow{3}[2]{*}{2.34} \\
& \textbf{energy not } & & & & \\
& \textbf{served (MWh)} & & & & \\
\cmidrule{2-6} & \textbf{CVaR$_{1\%}$ of } & \multirow{3}[2]{*}{39.03} & \multirow{3}[2]{*}{14.05} & \multirow{3}[2]{*}{8.81} & \multirow{3}[2]{*}{8.54} \\
& \textbf{annual energy} & & & & \\
& \textbf{not served (MWh)} & & & & \\
\cmidrule{2-6} & \textbf{Worst case} & \multirow{3}[2]{*}{44.17} & \multirow{3}[2]{*}{22.49} & \multirow{3}[2]{*}{16.08} & \multirow{3}[2]{*}{15.36} \\
& \textbf{annual energy} & & & & \\
& \textbf{not served (MWh)} & & & & \\
\bottomrule
\end{tabular}%
\label{tab:outOfSampleEnergyNotServedMetrics}%
\end{table}%
\begin{table}[htbp]
\footnotesize
\centering
\caption{Out-of-sample analysis -- Metrics of SAIFI and SAIDI for expansion plans obtained under different levels of risk aversion and values of loss of load.}
\begin{tabular}{cccccc}
\toprule
\textbf{VoLL} & \multirow{2}[2]{*}{\textbf{Metrics}} & \textbf{No} & \multirow{2}[2]{*}{\textbf{$\lambda=0$}} & \multirow{2}[2]{*}{\textbf{$\lambda=0.5$}} & \multirow{2}[2]{*}{\textbf{$\lambda=1$}} \\
\textbf{(\$/kWh)} & & \textbf{Inv.} & & & \\
\midrule
\multirow{8}[8]{*}{1.50} & \textbf{Average } & \multirow{2}[2]{*}{1.337} & \multirow{2}[2]{*}{0.432} & \multirow{2}[2]{*}{0.305} & \multirow{2}[2]{*}{0.265} \\
& \textbf{SAIFI} & & & & \\
\cmidrule{2-6} & \multicolumn{1}{l}{\textbf{CVaR$_{5\%}$}} & \multirow{2}[2]{*}{1.901} & \multirow{2}[2]{*}{0.720} & \multirow{2}[2]{*}{0.507} & \multirow{2}[2]{*}{0.439} \\
& \textbf{SAIFI} & & & & \\
\cmidrule{2-6} & \textbf{Average } & \multirow{2}[2]{*}{0.668} & \multirow{2}[2]{*}{0.360} & \multirow{2}[2]{*}{0.284} & \multirow{2}[2]{*}{0.252} \\
& \textbf{SAIDI (h)} & & & & \\
\cmidrule{2-6} & \multicolumn{1}{l}{\textbf{CVaR$_{5\%}$}} & \multirow{2}[2]{*}{0.827} & \multirow{2}[2]{*}{0.544} & \multirow{2}[2]{*}{0.469} & \multirow{2}[2]{*}{0.406} \\
& \textbf{SAIDI (h)} & & & & \\
\midrule
\multirow{8}[8]{*}{5.00} & \textbf{Average } & \multirow{2}[2]{*}{1.337} & \multirow{2}[2]{*}{0.336} & \multirow{2}[2]{*}{0.257} & \multirow{2}[2]{*}{0.253} \\
& \textbf{SAIFI} & & & & \\
\cmidrule{2-6} & \multicolumn{1}{l}{\textbf{CVaR$_{5\%}$}} & \multirow{2}[2]{*}{1.901} & \multirow{2}[2]{*}{0.573} & \multirow{2}[2]{*}{0.421} & \multirow{2}[2]{*}{0.421} \\
& \textbf{SAIFI} & & & & \\
\cmidrule{2-6} & \textbf{Average } & \multirow{2}[2]{*}{0.668} & \multirow{2}[2]{*}{0.302} & \multirow{2}[2]{*}{0.247} & \multirow{2}[2]{*}{0.245} \\
& \textbf{SAIDI (h)} & & & & \\
\cmidrule{2-6} & \multicolumn{1}{l}{\textbf{CVaR$_{5\%}$}} & \multirow{2}[2]{*}{0.827} & \multirow{2}[2]{*}{0.515} & \multirow{2}[2]{*}{0.398} & \multirow{2}[2]{*}{0.393} \\
& \textbf{SAIDI (h)} & & & & \\
\bottomrule
\end{tabular}%
\label{tab:SAIFI_SAIDI_metrics}%
\end{table}%
\section{Conclusions}\label{sec.Conclusions}
In this paper, we propose a scalable risk-based method for reliability and resilience planning of distribution systems. Our results using a ComEd distribution network demonstrate that the proposed method is able to produce investment plans (for a real-scale feeder) that are optimized according to the degree of risk aversion, considering both investment costs and outage frequency and severity.
The proposed method is intended to support ``cost vs risk'' discussions between utilities and regulators by providing an internally consistent framework for evaluating trade-offs and synergies between reliability and resilience investments.
\vspace{-0.5cm}
\bibliographystyle{IEEEtran}
\section{Introduction}\label{s0}
Intuitively, a binary market is a market in which the stock price process $(S_n)_{n=0}^N$ is an adapted stochastic process with strictly positive values and such
that at time $n$ the stock price evolves from $S_{n-1}$ to either $\alpha_n\, S_{n-1}$ or $\beta_n\, S_{n-1} $, where $\beta_n <\alpha_n$. The values $\alpha_n$ and $\beta_n$
depend only on the past. So there are exactly $2^n$ different possible paths for the stock price to evolve up to time $n$.
The study of binary market models is both interesting and useful in order to obtain more information about the behavior of
continuous models. This is indeed the case, as a typical situation that may occur is when a continuous model can
be expressed as a limiting process of a sequence of binary market models. Such a construction comes very naturally for the Black--Scholes models
which are driven by a standard Brownian motion. The key point is to approximate, by means of the Donsker theorem, the Brownian motion by a
random walk consisting of independent Bernoulli random variables with the same parameter.
Moreover, this idea can also be extended to Black--Scholes-type markets that are driven by a process for which
a random walk approximation is available. Examples of this are the fractional Brownian motion and the Rosenblatt process, as one can see in \cite{Sotti} and \cite{Totu} respectively.
In these works, the authors construct a sequence of binary models approximating the fractional Black--Scholes (respectively the Rosenblatt Black--Scholes) by giving an
analogue of Donsker's theorem, which, in this case, means that the fractional Brownian motion (respectively the Rosenblatt process) can be
approximated by a ``disturbed'' random walk.
An important feature for a binary market model that one can study is its arbitrage opportunities. In \cite{Dzh}
Dzhaparidze extensively describes a general mathematical model for the finite binary securities market in which he
gives a complete characterization of the absence of arbitrage by using ideas of Harrison and Pliska \cite{Hapl}. However, interesting binary
models admitting arbitrage opportunities can be found in the literature. Indeed, in \cite{Sotti}
Sottinen showed that the arbitrage persists in the fractional binary markets approximating the fractional Black--Scholes
and such an opportunity is explicitly constructed using the path information starting from time zero. An analogous result for Rosenblatt binary markets
is obtained in \cite{Totu}.
All the above--mentioned results were obtained for a binary market model without transaction costs.
In the present paper we focus our attention on the study of binary market models under transaction costs
$\lambda$ and their arbitrage opportunities. When one introduces transaction
costs, the usual notion of an equivalent martingale measure that is used in a market without friction is replaced
by the concept of a $\lambda$--consistent price system ($\lambda$--CPS). In this work, we aim to give necessary and sufficient
conditions under which a binary market is ``good'' or not. By ``good''
we mean that the parameters of the model are given in such a way that there exist consistent price systems.
Notice that this is not always the case, as one could see from the above discussion. Therefore, we
characterize the smallest transaction costs $\lambda_c$ starting from which one can construct a $\lambda$--CPS.
By using this characterization, we can obtain an explicit expression for $\lambda_c$ when the parameters of the model
are homogeneous in time and space, but also for a large class of
semi--homogeneous cases, i.e.~when the parameters of the model are not necessarily
homogeneous in time but they are still homogeneous in space.
The paper is organized as follows. In Section~2, we start by introducing some notations and definitions concerning binary markets that we will use throughout this work. We recall
necessary and sufficient conditions to exclude arbitrage opportunities for these markets in the frictionless case (see \cite{Dzh}). Finally,
we present the notion of $\lambda$-consistent price system and we state the Fundamental Theorem of Asset Pricing, which permits us to relate the
existence of consistent price systems to the absence of arbitrage opportunities.
In Section~3, we give a brief presentation of the 1-step model, in which all the calculations are explicit. We also show that the results for 1-step models
allow us to obtain a lower bound in the general case.
Section 4 consists of technical lemmas which are used to establish necessary and sufficient conditions for the existence of $\lambda$-CPS.
Section 5 contains the main results. We start with a characterization of the smallest transaction cost $\lambda_c$ (called the ``critical'' $\lambda$)
starting from which one can construct a $\lambda$--consistent price system. In a similar way, we obtain an expression for the set $\Ms(\lambda)$ of all probability measures
inducing a $\lambda$-CPS. These results are a consequence of the necessary and sufficient conditions established in Section 4. We finish this section by proving that
a binary market with critical transaction costs $\lambda_c$ admits arbitrage if and only if the corresponding frictionless market admits arbitrage.
In Section 6, we apply our results to give an explicit formula for the critical transaction costs $\lambda_c$ for homogeneous and
some semi-homogeneous binary markets.
Even though binary models in the setting without transaction costs have already been studied in the literature,
there is no result giving the conditions under which consistent price systems exist
when one passes to the case of transaction costs. This is precisely the goal of this
paper.
\section{Preliminaries}\label{s1}
\subsection{Definitions}
To formalize the notion of a binary market, we first introduce some notations which will be useful throughout this work. For a detailed treatment of this subject see \cite{Dzh} and Section II.1e of \cite{Shir}.
\subsubsection{The market model}
Let $(\Omega,\Fs,{(\Fs_n)}_{n=0}^N, P)$ be a finite filtered probability space. By a binary market we mean a market in which two
assets (a bond $B$ and a stock $S$) are traded at successive times $t_0=0<t_1<\cdots<t_N$. The evolution of the bond and stock is
described by:
$$B_n=(1+r_n)B_{n-1}$$
and
\begin{equation}\label{stock}
S_n= \left(a_n+(1+X_n)\right)\,S_{n-1},\quad \forall n\in\{1,...,N\},
\end{equation}
where $r_n$ and $a_n$ are the interest rate and the drift of the stock in the time interval $[t_n,t_{n+1})$. The value of $S$ at time
$0$ is given by:
$$S_0=s_0=1+a_0+x_0.$$
We may assume, for the sake of simplicity, that the bond plays the role of a num\'eraire, and, in this case, that it
is equal to $1$ at every time $n$.
The process $(X_n)_{n=0}^N$ is an adapted stochastic process starting at
$X_0=x_0$ and such that, at each time $n$, $X_n$ can take only two possible values $u_n$ and $d_n$ with $d_n<u_n$. While $a_n$ from \eqref{stock}
is deterministic, the values of $u_n$ and $d_n$ may depend on the path of $X$ up to time $n-1$. This means that if, for each $n>1$, we denote by
$\vec{X}_{n-1}=(X_{n-1},...,X_0)$ and by
$$E_{n-1}=\{\vec{X}_{n-1}(\omega)\ :\ \omega\in\Omega\}$$
the set of all possible paths up to time $n-1$, then $X_n\in\{u_n(\vec{X}_{n-1}),d_n(\vec{X}_{n-1})\}$. For $n=1$, we have that $E_0=\{x_0\}$. \\
\begin{center}
\begin{tikzpicture}[grow=right]
\tikzstyle{level 0}=[rectangle,rounded corners, draw,level distance=10mm]
\tikzstyle{level 1}=[rectangle,rounded corners, draw,level distance=20mm, sibling distance=16mm]
\tikzstyle{level 2}=[rectangle,rounded corners, draw,level distance=25mm, sibling distance=8mm]
\node[level 0] {\tiny{$x_0$}}
child{node[level 1]{\tiny{$d_1(x_0)$}}
child{node[level 2]{\tiny{$d_2(d_1(x_0),x_0)$}}}
child{node[level 2]{\tiny{$u_2(d_1(x_0),x_0)$}}}
}
child{node[level 1]{\tiny{$u_1(x_0)$}}
child{node[level 2]{\tiny{$d_2(u_1(x_0),x_0)$}}}
child{node[level 2]{\tiny{$u_2(u_1(x_0),x_0)$}}}
};
\node at (0,-2){\minibox{$X_0$}};
\node at (2,-2) {\minibox{$X_1$}};
\node at (4.5,-2) {\minibox{$X_2$}};
\end{tikzpicture}
\end{center}
Now, we put for each $y\in E_{n-1}$:
$$\alpha_n(y)= 1+a_n+u_n(y)\quad\textrm{and}\quad\beta_n(y)=1+a_n+d_n(y),$$
and we assume that $\alpha_n(y)$ and $\beta_n(y)$ are strictly positive for every $y\in E_{n-1}$.\\
\begin{hyp}\label{ass}
We assume in addition that:
\begin{itemize}
\item The filtration ${(\Fs_n)}_{n=0}^N$ coincides with the natural filtration of $(X_n)_{n=0}^N$.
\item For all $\omega\in\Omega$, $\{\omega\}\in\Fs_N$.
\item For all $\omega\in\Omega$, $P(\{\omega\})>0$.
\end{itemize}
\end{hyp}
\begin{remark}
The first two conditions of Assumption~\ref{ass} allow us to identify the spaces $\Omega$ and $E_N$ as well as the spaces of probability measures $\Ps_1(\Omega)$ and $\Ps_1(E_N)$.
We can also identify $\Ps_1(\Omega)$ with:
$$ {[0,1]}_*^{2^N-1} =\left\{\left(q_n(x):1\leq n\leq N,\, x\in E_{n-1}\right): q_n(x)\in[0,1]\right\}$$
by means of the relation:
\begin{equation}\label{qn}
q_n(\Qs,y):=\Qs\left(X_n=u_n(y)\Big|\,\vec{X}_{n-1}=y\right)
\end{equation}
\begin{center}
\begin{tikzpicture}[grow=right]
\tikzstyle{level 0}=[rectangle,rounded corners, draw,level distance=20mm]
\tikzstyle{level 1}=[rectangle,rounded corners, draw,level distance=30mm, sibling distance=20mm]
\node[level 0] {\small{$y$}}
child{node[level 1]{\small{$d_n(y)$}}}
child{node[level 1]{\small{$u_n(y)$}}
edge from parent node[fill=white] {$q_n(\Qs,y)$}};
\end{tikzpicture}
\end{center}
\vspace{.2cm}
When there is no risk of confusion we write $q_n(y)$ instead of $q_n(\Qs,y)$.\\
We use the notation ${[\ ,\ ]}_*$ to emphasize that the coordinates of a vector are associated with nodes of the tree. Thus, for example, when we speak
of continuity of a function on $\Ps_1(\Omega)$, we refer to the continuity of the function viewed as a function on ${[0,1]}_*^{2^N-1}$, i.e.~coordinate by coordinate.
More precisely, we can define the metric $d_\infty$ on $\Ps_1(\Omega)$ as:
$$d_\infty(\Qs,\widehat{\Qs})=\max_{n\in\{1,...,N\}}\left\{\max_{x\in E_{n-1}}|q_n(\Qs,x)-q_n(\widehat{\Qs},x)|\right\}.$$
\end{remark}
\begin{remark}\label{eqprob}
The last condition of Assumption~\ref{ass} implies that:
$$\Qs\sim P \Longleftrightarrow\Qs(\{\omega\})>0,\textrm{ for all }\omega\in\Omega.$$
\end{remark}
\subsubsection{Notations on the binary tree}
In order to simplify the notations, we introduce the extension operators ``$\star u$'' and ``$\star d$'' acting on the nodes of the binary tree of the paths of the process $X$.
For $n\in\{1,...,N\}$, $y=(y_{n-1},...,y_0)\in E_{n-1}$, we define:
\begin{itemize}
\item[]$y\star u^0=y\quad\textrm{and}\quad y\star d^0=y.$
\item[] $y\star u=(u_n(y),y)\quad\textrm{and}\quad y\star d=(d_n(y),y).$
\item[] $y\star u^{i+1}=(y\star u^i)\star u\quad\textrm{and}\quad y\star d^{i+1}=(y\star d^i)\star d\,;\quad$ for $i\in\{0,..., N-n\}$.
\end{itemize}
\vspace{.3cm}
\begin{center}
\begin{tikzpicture}[grow=right]
\tikzstyle{level 0}=[rectangle,rounded corners, draw,level distance=10mm]
\tikzstyle{level 1}=[rectangle,rounded corners, draw,level distance=20mm, sibling distance=20mm]
\tikzstyle{level 2}=[rectangle,rounded corners, draw,level distance=25mm, sibling distance=10mm]
\node[level 0] {\small{$y$}}
child{node[level 1]{\small{$y\star d$}}
child{node[level 2]{\small{$y\star d^2$}}}
child{node[level 2]{\small{$(y\star d)\star u$}}}
}
child{node[level 1]{\small{$y\star u$}}
child{node[level 2]{\small{$(y\star u)\star d$}}}
child{node[level 2]{\small{$y\star u^2$}}}
};
\end{tikzpicture}
\end{center}
\subsection{No arbitrage condition in the frictionless case}\label{ssnac}
We know by Proposition 3.6.2 in \cite{Dzh} that a binary market excludes arbitrage opportunities if and only if for all $n\in\{1,...,N\}$
and $y\in E_{n-1}$, we have:
\begin{equation}\label{nac1}
d_n(y)<-a_n< u_n(y),
\end{equation}
or equivalently:
\begin{equation}\label{nac2}
\beta_n(y)<1< \alpha_n(y).
\end{equation}
This is related to the existence of a probability measure $\Qs^0$ equivalent to $P$ such that
${(S_n)}_{n=0}^N$ is a $\Qs^0$-martingale. It is easy to see that such $\Qs^0$ must satisfy for each $n\in\{1,...,N\}$ and $x\in E_{n-1}$:
\begin{equation}\label{eqme}
\Qs^0\left(X_n=u_n\left(\vec{X}_{n-1}\right)\Big\arrowvert\vec{X}_{n-1}=x\right)= \frac{-a_n-d_n(x)}{u_n(x)-d_n(x)}=\frac{1-\beta_n(x)}{\alpha_n(x)-\beta_n(x)}.
\end{equation}
Moreover, under condition \eqref{nac1}, identity \eqref{eqme} defines a unique equivalent martingale measure. See Chapter 3 of \cite{Dzh} for more details.
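As a quick numerical illustration of \eqref{eqme} (the parameter values below are arbitrary and chosen only for the example), the following Python snippet computes the risk-neutral probability at a single node and verifies the one-step martingale property:
\begin{verbatim}
def emm_prob(alpha_n, beta_n):
    # risk-neutral probability of the "up" move at a node, from (eqme)
    assert beta_n < 1.0 < alpha_n   # no-arbitrage condition (nac2)
    return (1.0 - beta_n) / (alpha_n - beta_n)

q0 = emm_prob(1.08, 0.95)          # = 0.05 / 0.13, about 0.3846
assert abs(q0 * 1.08 + (1.0 - q0) * 0.95 - 1.0) < 1e-12
\end{verbatim}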
\subsection{\texorpdfstring{Transaction costs and $\lambda$-CPS}{}}
Now, we introduce proportional transaction costs $\lambda\in]0,1[$ in our binary market $S$, which means that the bid and ask price of the stock $S$ are modeled by the processes
${((1-\lambda)S_n)}_{n=0}^N$ and ${(S_n)}_{n=0}^N$ respectively. In this framework the notion of consistent price system
replaces the notion of equivalent martingale measure that is used in a market without transaction costs, and one can relate the absence of
arbitrage to the existence of such systems.
\begin{definition}[$\lambda$-consistent price system]
A $\lambda$-consistent price system ($\lambda$-CPS) for the binary market $S$ is a pair $(\Qs,\tilde{S})$ of a probability measure
$\Qs\sim P$ and a process ${(\tilde{S}_n)}_{n=0}^N$ which is a martingale under $\Qs$ such that:
\begin{equation}\label{CPSd}
(1-\lambda)S_n\leq \tilde{S}_n\leq S_n,\qquad \textrm{a.s., for all $n\in\{0,...,N\}$}.
\end{equation}
We denote by $\cS^{\lambda}$ the set of $\lambda$-CPS.
\end{definition}
The following theorem relates the existence of consistent price systems to the absence of arbitrage. A proof can be found, for
example, in \cite{Sch}.
\begin{theorem}[Fundamental Theorem of Asset Pricing in the case of finite $\Omega$]\label{TFAP}
Given a stock price process $S=(S_n)_{n=0}^N$ on a finite probability space and transaction costs $0<\lambda<1$, the following are equivalent:
\begin{enumerate}
\item The process $S$ does not allow for an arbitrage under transaction costs $\lambda$.
\item $\cS^{\lambda}\neq \emptyset$.
\end{enumerate}
\end{theorem}
Now, define $\Ms(\lambda)$, the set of all probability measures $\Qs\sim P$ inducing a $\lambda$-CPS, that is:
$$\Ms(\lambda)=\left\{ \Qs\sim P:\,\exists\, \tilde{S}\textrm{ such that }(\Qs,\tilde{S}) \textrm{ is a $\lambda$-CPS}\right\}.$$
One of the goals of this work is to characterize these sets. The other goal is to characterize the critical transaction costs $\lambda_c$, starting from which
the arbitrage opportunities disappear. Using Theorem~\ref{TFAP}, we can express $\lambda_c$ as:
$$\lambda_c=\inf\left\{\lambda\in[0,1]:\ \exists\ \lambda\textrm{-CPS for } (S,P)\right\}.$$
By definition, if the binary market model with $0$ transaction costs excludes arbitrage opportunities, then $\lambda_c=0$.
\section{\texorpdfstring{The 1-step model and a general lower bound for $\lambda_c$}{}}
We start this paragraph by analyzing the $1$-step model ($N=1$), in which we can explicitly find a simple expression for $\lambda_c$.
Indeed, if we assume that for some $\lambda$ there exists a $\lambda$--CPS $(\Qs,\widetilde{S})$, then, by the martingale property of $\widetilde{S}$ and the
inequality \eqref{CPSd}, we obtain that
\begin{equation}\label{ineq}
0\vee\left(\frac{1-\lambda-\beta_1(x_0)}{\alpha_1(x_0)-\beta_1(x_0)}\right)\leq\Qs(X_1=u_1(x_0))\leq 1\wedge\left(\frac{\frac1{1-\lambda}-\beta_1(x_0)}{\alpha_1(x_0)-\beta_1(x_0)}\right).
\end{equation}
Using Remark~\ref{eqprob}, it directly follows that
$$\lambda>1-\alpha_1(x_0)\ \ \mathrm{and}\ \ \lambda>1-\frac{1}{\beta_1(x_0)}.$$
In the other direction, if we start with some transaction costs $\lambda$ given as above, then we can choose a probability
measure $\cQ$ satisfying \eqref{ineq}, and, hence, find a process $\widetilde{S}$ which is a $\cQ$--martingale and satisfies \eqref{CPSd}.
The argument and a more detailed presentation of this example can be found in \cite{Sch}.
We therefore have that:
\begin{equation}\label{r1s}
\lambda_c^{(1)}=1-\alpha_1(x_0)\wedge\frac{1}{\beta_1(x_0)}\wedge 1.
\end{equation}
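Formula \eqref{r1s} is immediate to evaluate. As a small sketch (Python; the parameter values are arbitrary illustrations):
\begin{verbatim}
def lambda_c_one_step(alpha, beta):
    # lambda_c^{(1)} = 1 - min(alpha_1(x0), 1/beta_1(x0), 1), eq. (r1s)
    return 1.0 - min(alpha, 1.0 / beta, 1.0)

print(lambda_c_one_step(1.10, 0.95))  # 0.0: no arbitrage even without friction
print(lambda_c_one_step(0.97, 0.90))  # 0.03: alpha < 1 forces lambda > 0.03
print(lambda_c_one_step(1.20, 1.05))  # ~0.0476: beta > 1 forces lambda > 1 - 1/beta
\end{verbatim}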
However, we will see in the next sections that it is more complicated to construct a $\lambda$-CPS for a general $N$-step model than to
construct a $\lambda$-CPS for each 1-step sub-binary market and then paste them together. Even so, this naive idea permits us to give a lower bound
for the critical transaction costs $\lambda_c$.
\begin{proposition}[Lower bound for $\lambda_c$]\label{plblc} We have that:
$$\lambda_c\geq\lambda_*=1-\min\limits_{n\in\{1,...,N\}}\left\{\min\limits_{x\in E_{n-1}}\left\{\alpha_n(x)\wedge\frac{1}{\beta_n(x)}\wedge 1\right\}\right\}.$$
\end{proposition}
\begin{proof}
If we take $\lambda>\lambda_c$, then there exists a $\lambda$-CPS $(\Qs,\tilde{S})$. We divide the $N$-step binary market into $2^N-1$ $1$-step binary markets. The restriction of $(\Qs,\tilde{S})$ to each
one of these binary markets is also a $\lambda$-CPS. By using the results for the $1$-step binary markets (equation \eqref{r1s}), we obtain that $\lambda>\lambda_*$.
\end{proof}
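Computing the lower bound $\lambda_*$ only requires a sweep over the nodes of the binary tree. A minimal sketch (Python; the encoding of nodes as path strings over $\{u,d\}$ and the numerical values are our own illustrative choices):
\begin{verbatim}
def lambda_star(alpha, beta):
    # alpha[x], beta[x]: one-step parameters at node x; '' is the root x0
    m = min(min(alpha[x], 1.0 / beta[x], 1.0) for x in alpha)
    return 1.0 - m

# hypothetical 2-step tree: time = length of the path string
alpha = {'': 1.05, 'u': 1.08, 'd': 0.98}
beta  = {'': 0.96, 'u': 1.02, 'd': 0.90}
print(lambda_star(alpha, beta))   # 0.02, driven by alpha('d') = 0.98
\end{verbatim}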
\begin{remark}
If we assume that $\lambda_c=0$, then $\alpha_n(x)\geq1\geq\beta_n(x)$ for all $n\in\{1,...,N\}$ and $x\in E_{n-1}$, and, by definition, the arbitrage
opportunities disappear when we introduce arbitrarily small transaction costs. If in addition $\alpha_n(x)>1>\beta_n(x)$ for all $n\in\{1,...,N\}$ and
$x\in E_{n-1}$, then there are no arbitrage opportunities in the frictionless market.\\
\end{remark}
\section{\texorpdfstring{Necessary and sufficient conditions on the measures inducing $\lambda$-CPS}{}}\label{s2}
In this section we study necessary and sufficient conditions for a probability measure to be in $\Ms(\lambda)$. This is the starting point to understand
the nature of the $\lambda$-CPS for binary markets. We will see how the martingale property imposes constraints on the bid-ask spread intervals, and how to
deduce from these constraints a necessary condition to belong to $\Ms(\lambda)$, which turns out to be sufficient as well.
\subsection{\texorpdfstring{Effective bid-ask spread of $S$}{}}
The goal of this paragraph is to show that if $(\Qs, \tilde{S})$ is a $\lambda$-CPS for the process $S$, then $\tilde{S}$ verifies a condition which is, in general,
stronger than \eqref{CPSd}.\\
\noindent To make this idea clear, we introduce the next lemma, which shows that, by using the properties of the
$\lambda$--CPS, property \eqref{CPSd} at time $n$ implies a more restrictive condition at time $n-1$.
\begin{lemma}\label{ab}
Let $\lambda\in[0,1]$ and $(\cQ,\tilde{S})\in\cS^{\lambda}$. If, for $n>1$ and $y\in E_{n-1}$, there exist
$a,b,\tilde{a},\tilde{b}$ strictly positive such that
\begin{equation}\label{yu}
\frac{\tilde{S}_n(y\star u)}{S_n(y\star u)}\in\left[(1-\lambda)a,b\right]\quad\textrm{and}\quad \frac{\tilde{S}_n(y\star d)}{S_n(y\star d)}\in\left[(1-\lambda)\tilde{a},\tilde{b}\right],
\end{equation}
then
$$
\frac{\tilde{S}_{n-1}(y)}{S_{n-1}(y)}\in\left[(1-\lambda)(\overline{a}\vee1), \overline{b}\wedge1\right]
$$
where
$$\overline{a}=q_n(y)\alpha_n(y)a+(1-q_n(y))\beta_n(y)\tilde{a}$$
and
$$\overline{b}=q_n(y)\alpha_n(y)b+(1-q_n(y))\beta_n(y)\tilde{b}.$$
\end{lemma}
\begin{proof}
Let $n>1$ and $y\in E_{n-1}$ be such that \eqref{yu} holds true. It is enough to prove that
$\frac{\tilde{S}_{n-1}(y)}{S_{n-1}(y)}\in\left[(1-\lambda)\overline{a}, \overline{b}\right]$. Indeed, if this
is the case, then the desired result follows from the fact that
$(1-\lambda)S_{n-1}(y)\leq \tilde{S}_{n-1}(y)\leq S_{n-1}(y)$.\\
By the martingale property, we obtain that:
\begin{equation}\label{eqmk}
\tilde{S}_{n}(y\star d)=\frac{\tilde{S}_{n-1}(y)- q_{n}(y)\,\tilde{S}_{n}(y\star u)}{1-q_{n}(y)},
\end{equation}
\noindent which, combined with \eqref{yu}, gives us that:
\begin{equation}\label{eqpk1}
\frac{\tilde{S}_{n-1}(y)-(1-q_{n}(y))\tilde{b}S_{n}(y\star d)}{q_{n}(y)} \leq\tilde{S}_{n}(y\star u)
\end{equation}
\noindent and
\begin{equation}\label{eqpk2}
\tilde{S}_{n}(y\star u)\leq \frac{\tilde{S}_{n-1}(y)- (1-\lambda)(1-q_{n}(y))\tilde{a}S_{n}(y\star d)}{q_{n}(y)}.
\end{equation}
Then, \eqref{eqpk1} together with \eqref{yu} implies that the left-hand side of \eqref{eqpk1} is
smaller than or equal to $bS_{n}(y\star u)$. Likewise, \eqref{eqpk2} combined with
\eqref{yu} implies that the right-hand side of \eqref{eqpk2} is bigger than or equal to
$(1- \lambda)aS_{n}(y\star u)$.
From this and by using that $S_{n}(y\star u)=\alpha_{n}(y)S_{n-1}(y)$ and that
$S_{n}(y\star d)=\beta_{n}(y)S_{n-1}(y)$, we obtain that:
\begin{equation*}
(1-\lambda)\, \overline{a}\leq \frac{\tilde{S}_{n-1}(y)}{S_{n-1}(y)}\leq \overline{b}.
\end{equation*}
\end{proof}
Now, starting from the result presented in the above lemma, but iterated for every time point,
we introduce, for each $n\in\{1,...,N+1\}$, the functions $\rho_n^+$ and $\rho_n^-$ as follows.
The functions $\rho_{N+1}^+,\rho_{N+1}^-: E_{N}\rightarrow \Rb_+$ are defined by putting:
$$\rho_{N+1}^+=\rho_{N+1}^-\equiv 1.$$
For $n\in\{1,...,N\}$, the functions $\rho_{n}^+,\rho_{n}^-:\Ps_1(\Omega)\times E_{n-1}\rightarrow \Rb_+$ are defined by means of a backward recurrence relation.
More precisely, for each $\Qs\in\Ps_1(\Omega)$ and $x\in E_{n-1}$, we put:
$$\rho_{n}^+(\Qs,x)=1\wedge\left[\,q_{n}(x)\,\alpha_{n}(x)\,\rho_{n+1}^+(\Qs,x\star u)+(1-q_{n}(x))\,\beta_{n}(x)\,\rho_{n+1}^+(\Qs,x\star d)\right],$$
and
$$\rho_{n}^-(\Qs,x)=1\vee\left[\,q_{n}(x)\,\alpha_{n}(x)\,\rho_{n+1}^-(\Qs,x\star u)+(1-q_{n}(x))\,\beta_{n}(x)\,\rho_{n+1}^-(\Qs,x\star d)\right].$$
For $n=N$ we need to replace $(\Qs,x\star\cdot)$ by $x\star\cdot$ in this recurrence relation.\\
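The backward recurrence defining $\rho_n^+$ and $\rho_n^-$ is straightforward to implement. The following sketch (Python; we reuse the illustrative encoding of nodes as path strings over $\{u,d\}$ and identify $\Qs$ with its coordinates $q_n(x)$) computes both families at once:
\begin{verbatim}
def rho_pm(alpha, beta, q, N):
    # alpha[x], beta[x], q[x]: parameters of the step leaving node x,
    # where x is a path string of length n-1 ('' is x0).
    # Missing children (terminal nodes) take rho_{N+1}^+ = rho_{N+1}^- = 1.
    rp, rm = {}, {}
    for n in range(N, 0, -1):                  # backward in time
        for x in (y for y in alpha if len(y) == n - 1):
            up = q[x] * alpha[x] * rp.get(x + 'u', 1.0)
            dn = (1 - q[x]) * beta[x] * rp.get(x + 'd', 1.0)
            rp[x] = min(1.0, up + dn)          # rho_n^+ = min(1, r_n^+)
            up = q[x] * alpha[x] * rm.get(x + 'u', 1.0)
            dn = (1 - q[x]) * beta[x] * rm.get(x + 'd', 1.0)
            rm[x] = max(1.0, up + dn)          # rho_n^- = max(1, r_n^-)
    return rp, rm
\end{verbatim}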
In the following proposition, we establish that, as expected from their construction, the quantities
$(1-\lambda)\rho_{n+1}^-(\Qs,y)S_n(y)$ and $\rho_{n+1}^+(\Qs,y)S_n(y)$
represent the extremities of the effective bid-ask spread interval at time $n$ at position $y$.
\begin{proposition}[Effective bid-ask spread of $S$]\label{pcn}
If $\lambda\in[0,1]$ and $(\Qs,\tilde{S})\in\Ss^\lambda$, then, for each $n\in\{0,...,N\}$ and $y\in E_{n}$:
$$\frac{\tilde{S}_{n}(y)}{S_{n}(y)}\in\left[(1-\lambda)\rho_{n+1}^-(\Qs,y),\rho_{n+1}^+(\Qs,y)\right].$$
\end{proposition}
\begin{proof}
We prove the result by backward recurrence. For $n=N$, the statement is true by the definitions of $\lambda$-CPS, $\rho_{N+1}^{+}$ and $\rho_{N+1}^{-}$.
Now, we suppose that the result is true for some $n\in\{1,...,N\}$ and we prove it for $n-1$. We fix $y\in E_{n-1}$.
\begin{center}
\begin{tikzpicture}[grow=right]
\tikzstyle{level 0}=[rectangle,rounded corners, draw,level distance=20mm]
\tikzstyle{level 1}=[rectangle,rounded corners, draw,level distance=35mm, sibling distance=25mm]
\tikzstyle{level 2}=[rectangle,rounded corners, draw,level distance=45mm, sibling distance=15mm]
\node[level 0] {\small{$y$}}
child{node[level 1]{\small{$y\star d$}}
child{node[level 2]{\small{$y\star d^2$}}}
child{ node[level 2] {\small{$(y\star d)\star u$}}
edge from parent node[fill=white] {$q_{n+1}(y\star d)$}
}
}
child{node[level 1]{\small{$y\star u$}}
child {node[level 2]{\small{$(y\star u)\star d$}}}
child {node[level 2] {\small{$y\star u^2$}}
edge from parent node[fill=white] {$q_{n+1}(y\star u)$}
}
edge from parent node[fill=white] {$q_{n}(y)$}
};
\end{tikzpicture}
\end{center}
We know by the recurrence hypothesis that:
\begin{equation}\label{eqbak1}
\frac{\tilde{S}_{n}(y\star u)}{S_{n}(y\star u)}\in\left[(1-\lambda)\rho_{n+1}^-(\Qs,y\star u),\rho_{n+1}^+(\Qs,y\star u)\right].
\end{equation}
and
\begin{equation}\label{eqbak2}
\frac{\tilde{S}_{n}(y\star d)}{S_{n}(y\star d)}\in\left[(1-\lambda)\rho_{n+1}^-(\Qs,y\star d),\rho_{n+1}^+(\Qs,y\star d)\right].
\end{equation}
The result follows immediately by applying Lemma~\ref{ab}.
\end{proof}
\subsection{\texorpdfstring{Some properties of the functions $\rho_n^+$ and $\rho_n^-$}{}}
As we have seen in Proposition \ref{pcn} the functions $\rho^+$ and $\rho^-$ encode the effect of the martingale property in the
dynamic of the bid-ask spread intervals. Therefore, it seems important to understand their nature. To this end we present in this paragraph
some useful properties of these functions.
\begin{lemma}\label{la}Let $\Qs$ be a probability measure equivalent to $P$. For each $n\in \{1,...,N\}$ and $x\in E_{n-1}$.
\begin{enumerate}
\item If $\rho_n^+(\Qs,x)=1$, then $\alpha_n(x)>1$.
\item If $\rho_n^-(\Qs,x)=1$, then $\beta_n(x)<1$.
\item If $\rho_n^+(\Qs,x)=\rho_n^-(\Qs,x)=1$, then for $y\in\{x\star u,x \star d\}$:
$$\rho_{n+1}^+(\Qs,y)=\rho_{n+1}^-(\Qs,y)=1\quad\textrm{and}\quad q_n(x)=\frac{1-\beta_n(x)}{\alpha_n(x)-\beta_n(x)}.$$
\end{enumerate}
\end{lemma}
\begin{proof}
For $n=N$, the statements follow immediately as $\alpha_N(x)-\beta_N(x)>0$ and $\rho_{N+1}^+(\Qs,x)=\rho_{N+1}^-(\Qs,x)=1$. Now let $n\in \{1,...,N-1\}$ and $x\in E_{n-1}$.\\
(1) As $\rho_n^+(\Qs,x)=1$, we have that
\begin{equation}\label{rho+}
q_{n}(x)\,\alpha_{n}(x)\,\rho_{n+1}^+(\Qs,x\star u)+(1-q_{n}(x))\,\beta_{n}(x)\,\rho_{n+1}^+(\Qs,x\star d)\geq1.
\end{equation}
From the above inequality we immediately deduce that $\alpha_n(x)>1$. Indeed, if we assume that $\alpha_n(x)\leq 1$, then, since $\rho_{n+1}^+(\Qs,y)\leq1$ for $y\in\{x\star u,x\star d\}$
and $\beta_n(x)<\alpha_n(x)$, inequality (\ref{rho+}) implies that:
$$1\leq q_{n}(x)\,\alpha_{n}(x)\,\rho_{n+1}^+(\Qs,x\star u)+(1-q_{n}(x))\,\beta_{n}(x)\,\rho_{n+1}^+(\Qs,x\star d)<1.$$
Hence, we obtain a contradiction.\\
(2) When $\rho_n^-(\Qs,x)=1$, it follows that:
\begin{equation}\label{rho-}
q_{n}(x)\,\alpha_{n}(x)\,\rho_{n+1}^-(\Qs,x\star u)+(1-q_{n}(x))\,\beta_{n}(x)\,\rho_{n+1}^-(\Qs,x\star d)\leq1.
\end{equation}
As before, we can directly deduce, using the fact that $\rho_{n+1}^-(\Qs,y)\geq1$ for $y\in\{x\star u,x\star d\}$, that $\beta_n(x)<1$.\\
(3) If $\rho_n^+(\Qs,x)=\rho_n^-(\Qs,x)=1$, the inequalities (\ref{rho+}) and (\ref{rho-}) hold simultaneously. Using that
$\rho_{n+1}^+(\Qs,y)\leq1$ and $\rho_{n+1}^-(\Qs,y)\geq1$ for $y\in\{x\star u,x\star d\}$, we can see that this is only possible if
$\rho_{n+1}^+(\Qs,y)=\rho_{n+1}^-(\Qs,y)=1$ for $y\in\{x\star u,x \star d\}$.
We only have to plug this in (\ref{rho+}) and (\ref{rho-}) to obtain that:
$$q_n(x)\alpha_n(x)+(1-q_n(x))\beta_n(x)=1$$
which clearly implies that $q_n(x)=\frac{1-\beta_n(x)}{\alpha_n(x)-\beta_n(x)}$.
\end{proof}
In order to lighten some of the proofs, we introduce some extra notations. For each $n\in\{1,...,N\}$ and $x\in E_{n-1}$, we set:
$$r_n^+(\Qs,x)=q_{n} (x)\alpha_{n} (x)\rho_{n+1}^+(\Qs,x\star u)+ (1-q_{n} (x))\beta_{n}(x)\rho_{n+1}^+(\Qs,x\star d)$$
and
$$r_n^-(\Qs,x)=q_{n} (x)\alpha_{n} (x)\rho_{n+1}^-(\Qs,x\star u)+ (1-q_{n} (x))\beta_{n}(x)\rho_{n+1}^-(\Qs,x\star d).$$
Using these notations, we remark that:
\begin{equation}\label{rvsrho}
\rho_n^+(\Qs,x)=1\wedge r_n^+(\Qs,x)\textrm{ and }\rho_n^-(\Qs,x)=1\vee r_n^-(\Qs,x).
\end{equation}
Note that, from these identities and the definitions, we can deduce the following chain of inequalities:
\begin{equation}\label{irhpmr}
\rho_n^+(\Qs,x)\leq r_n^+(\Qs,x)\leq r_n^-(\Qs,x)\leq \rho_n^-(\Qs,x).
\end{equation}
\begin{remark}[Continuity]
For each \mbox{$n\in\{1,...,N+1\}$} and $x\in E_{n-1}$, the functions $\rho_n^+(\cdot,x)$ and $\rho_n^-(\cdot,x)$ are continuous. The proof follows easily by backward recurrence. Indeed,
for $n=N+1$, we have that $\rho_{N+1}^+=\rho_{N+1}^-\equiv 1$, and hence they are continuous. For the induction step, assuming that $\rho_{n+1}^+$
and $\rho_{n+1}^-$ are continuous, it follows directly from the definition that $\rho_n^+$ and $\rho_n^-$ are continuous as well. As a consequence, the functions $r_n^+(\cdot,x)$ and $r_n^-(\cdot,x)$, for each
$n\in\{1,...,N\}$ and $x\in E_{n-1}$, are also continuous.
\end{remark}
\begin{remark}\label{r1}
Note that for fixed $n\in\{1,...,N\}$ and $x\in E_{n-1}$, the quantities $\rho_{n}^+(\Qs,x)$ and $\rho_{n}^-(\Qs,x)$ depend only on the coordinates of $\Qs$ associated to the nodes of the sub-tree generated by $x$ and not on the whole probability $\Qs$.
\end{remark}
\subsection{Necessary and sufficient condition}
In this paragraph we establish necessary and sufficient conditions for a measure to induce a $\lambda$-CPS.
In order to provide a necessary condition, we define for $n\in\{1,...,N\}$ and $x\in E_{n-1}$:
$$\Delta_n^\lambda(\Qs,x)\equiv \rho_n^+(\Qs,x)-(1-\lambda)\rho_n^-(\Qs,x).$$
Note that $\Delta_n^\lambda(\Qs,x)S_{n-1}(x)$ is the length of the effective bid-ask spread interval at the time $n-1$ at the position $x$.
Thus, the following necessary condition appears in a natural way.
\begin{corollary}[Necessary condition]\label{ccn}
If $\lambda\in[0,1]$ and $\Qs\in\Ms(\lambda)$, then for all $n\in\{1,...,N\}$ and $x\in E_{n-1}$:
$$\Delta_n^\lambda(\Qs,x) \geq 0.$$
\end{corollary}
\begin{proof}
This follows directly from Proposition~\ref{pcn}.
\end{proof}
Now, we establish a sufficient condition, which is in fact the converse of Corollary \ref{ccn}.
\begin{proposition}[Sufficient condition]\label{cpcn}
If for $\lambda>0$ there exists $\Qs\sim P$ such that for all $n\in\{1,...,N\}$ and $x\in E_{n-1}$:
$$\Delta_n^\lambda(\Qs,x) \geq 0,$$
then $\Qs\in\Ms(\lambda)$.
\end{proposition}
\begin{proof}
We fix $\lambda>0$ and $\Qs\sim P$ such that for all $n\in\{1,...,N\}$ and $x\in E_{n-1}$:
$$\Delta_n^\lambda(\Qs,x) \geq 0,$$
and we will construct inductively a process $\tilde{S}={(\tilde{S}_n)}_{n=0}^N$ such that $(\Qs,\tilde{S})$ is a $\lambda$-CPS.
We start by taking:
\begin{equation}\label{ecpnc1}
\tilde{S}_0(x_0)=\tilde{s}_0\in[(1-\lambda)\rho_1^-(\Qs,x_0)s_0,\rho_1^+(\Qs,x_0)s_0].
\end{equation}
We set:
$$d_1(x_0)=\rho_1^+(\Qs,x_0)-\frac{\tilde{s}_0}{s_0},$$
and we note that $0\leq d_1(x_0)\leq \Delta_1^\lambda(\Qs,x_0)$.
Now, for $n\in \{1,...,N\}$, we suppose that we have constructed a $(n-1)$-step martingale $\tilde{S}={(\tilde{S}_k)}_{k=0}^{n-1}$ verifying:
\begin{equation}\label{basc}
\frac{\tilde{S}_{k}(z)}{S_{k}(z)}\in[(1-\lambda)\rho_{k+1}^-(\Qs,z),\rho_{k+1}^+(\Qs,z)],
\end{equation}
for all $k\in\{0,...,n-1\}$ and $z\in E_{k}$. We note that, by defining:
\begin{equation}\label{eddk}
d_{k+1}(z)=\rho_{k+1}^+(\Qs,z)-\frac{\tilde{S}_{k}(z)}{S_{k}(z)},\qquad k\in\{0,...,n-1\},\,z\in E_{k},
\end{equation}
condition \eqref{basc} is equivalent to:
\begin{equation}\label{eqdr}
0\leq d_{k+1}(z)\leq \Delta_{k+1}^\lambda(\Qs,z).
\end{equation}
The goal is to extend $\tilde{S}$ to a $n$-step martingale satisfying \eqref{basc} for $k=n$.
With this purpose in mind, we fix $y\in E_{n-1}$ and we aim to construct $\tilde{S}_{n}(y\star u)$ and $\tilde{S}_{n}(y\star d)$. Since the extension
of $\tilde{S}$ must verify the $\Qs$-martingale property, we only need to choose $\tilde{S}_{n}(y\star u)$ in a proper way and then put:
\begin{equation}\label{emart}
\tilde{S}_n(y\star d)=\frac{\tilde{S}_{n-1}(y)-q_n(y)\tilde{S}_n(y\star u)}{1-q_n(y)}.
\end{equation}
So, we need to prove that we can choose $\tilde{S}_{n}(y\star u)$ in the associated effective bid-ask spread interval, in such a way that $\tilde{S}_{n}(y\star d)$
defined by means of \eqref{emart} is also in the corresponding effective bid-ask spread interval. Equivalently, we need to show that we can choose $d_{n+1}(y\star u)$ such that:
\begin{equation}\label{ed1}
0\leq d_{n+1}(y\star u)\leq \Delta_{n+1}^\lambda(\Qs,y\star u),
\end{equation}
and, by setting:
\begin{equation}\label{esnyu}
\tilde{S}_n(y\star u)=\left(\rho_{n+1}^+(\Qs,y\star u)-d_{n+1}(y\star u)\right)S_n(y\star u),
\end{equation}
we have that $\tilde{S}_{n}(y\star d)$ defined by \eqref{emart} verifies:
\begin{equation}\label{esnyd}
\frac{\tilde{S}_{n}(y\star d)}{S_{n}(y\star d)}\in[(1-\lambda)\rho_{n+1}^-(\Qs,y\star d),\rho_{n+1}^+(\Qs,y\star d)].
\end{equation}
To this end, we will express condition \eqref{esnyd} in terms of $d_{n+1}(y\star u)$ and then prove that this condition is compatible with \eqref{ed1}.
Plugging \eqref{esnyu} in \eqref{emart} and using \eqref{eddk} for $k=n-1$, we obtain that:
\begin{equation*}
\frac{\tilde{S}_{n}(y\star d)}{S_{n}(y\star d)}=\frac{q_n(y)\alpha_n(y)}{(1-q_n(y))\beta_n(y)}\left[\frac{\rho_{n}^+(\Qs,y)-d_{n}(y)}{q_n(y)\alpha_n(y)}-\rho_{n+1}^+(\Qs,y\star u)+d_{n+1}(y\star u)\right],
\end{equation*}
and then, condition \eqref{esnyd} becomes:
\begin{equation}\label{cdn12}
r_n(\Qs,y)- \frac{(1-q_n(y))\beta_n(y) \Delta_{n+1}^\lambda(\Qs,y\star d)}{q_n(y)\alpha_n(y)}\leq d_{n+1}(y\star u)\leq r_n(\Qs,y),
\end{equation}
where
$$r_n(\Qs,y)=\frac{r_n^+(\Qs,y)-\rho_n^+(\Qs,y)+d_n(y)}{q_n(y)\alpha_n(y)}.$$
Note that condition \eqref{cdn12} makes sense, because $\Delta_{n+1}^\lambda(\Qs,y\star d)\geq 0$ by hypothesis. Similarly, condition \eqref{ed1} makes sense since $\Delta_{n+1}^\lambda(\Qs,y\star u)\geq 0$.
Moreover, as $r_n^+(\Qs,y)\geq\rho_n^+(\Qs,y)$ and $d_n(y)\geq 0$, we have that $r_n(\Qs,y)\geq0$. It follows that the right hand side of the inequality \eqref{cdn12}
is compatible with the left hand side of inequality \eqref{ed1}. It remains to prove that the right hand side of inequality \eqref{ed1} is compatible with the left hand side of \eqref{cdn12}, that means:
$$r_n(\Qs,y)- \frac{(1-q_n(y))\beta_n(y) \Delta_{n+1}^\lambda(\Qs,y\star d)}{q_n(y)\alpha_n(y)}\leq\Delta_{n+1}^\lambda(\Qs,y\star u),$$
which is equivalent to:
$$q_n(y)\alpha_n(y)r_n(\Qs,y)\leq r_n^+(\Qs,y)-(1-\lambda)r_n^-(\Qs,y),$$
which is also equivalent to:
$$d_n(y)\leq\Delta_{n}^\lambda(\Qs,y)+(1-\lambda)\left(\rho_n^-(\Qs,y)-r_n^-(\Qs,y) \right),$$
which is true by \eqref{eqdr} and the fact that $\rho_n^-(\Qs,y)\geq r_n^-(\Qs,y)$. We conclude the existence of $d_{n+1}(y\star u)$ verifying \eqref{ed1} and
\eqref{cdn12} and then, by means of \eqref{esnyu} and \eqref{emart}, the existence of $\tilde{S}_{n}(y\star u)$ and $\tilde{S}_{n}(y\star d)$ verifying the desired properties.
Repeating the procedure for each $y\in E_{n-1}$, we succeed in extending $\tilde{S}$ to an $n$-step process with the desired properties.\\
Thus, thanks to a forward recurrence, we can construct $\tilde{S}$ such that $(\Qs,(\tilde{S}_n)_{0\leq n\leq N})$ is a $\lambda$-CPS. This proves the result.
\end{proof}
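By the last two results, checking whether a given measure belongs to $\Ms(\lambda)$ reduces to verifying the sign of $\Delta_n^\lambda(\Qs,x)$ at every node. A sketch (Python, reusing the function \texttt{rho\_pm} from the sketch above):
\begin{verbatim}
def in_M_lambda(alpha, beta, q, N, lam):
    # Q in M(lambda) iff rho_n^+ - (1 - lambda) * rho_n^- >= 0 at all nodes;
    # q must also induce a measure equivalent to P, i.e. 0 < q[x] < 1.
    rp, rm = rho_pm(alpha, beta, q, N)
    return all(rp[x] - (1.0 - lam) * rm[x] >= 0.0 for x in rp)
\end{verbatim}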
\section{Characterizations}\label{s3}
In the previous section, we found a necessary and sufficient condition for a measure $\Qs$ to induce a $\lambda$-CPS. Based on
this condition, in this section we obtain a characterization of the smallest transaction cost $\lambda_c$ necessary to remove arbitrage opportunities.
Similarly, we obtain a characterization of the set $\Ms(\lambda)$ as the preimage of an interval under a continuous function on the space of probability
measures equivalent to $P$. We end this section by studying in depth the set $\Ms(\lambda_c)$.
Before starting with the mentioned characterizations, we define the function $\rho:\Ps_1(\Omega)\rightarrow (0,1]$ by putting:
$$\rho(\Qs)=\min\limits_{n\in\{1,...,N\}}\left[\min\limits_{x\in E_{n-1}}\frac{\rho_n^+(\Qs,x)}{\rho_n^-(\Qs,x)}\right],\quad \Qs\in\Ps_1(\Omega),$$
which will play a crucial role in what follows. Note that this function is continuous, because it is the minimum of a finite number of continuous functions.
\subsection{\texorpdfstring{Characterization of $\lambda_c$}{}}
\begin{theorem}\label{thmc}
We have that:
$$\lambda_c=1-\sup\limits_{Q\sim P}\rho(Q).$$
\end{theorem}
\begin{proof}
We start proving that $\lambda_c\geq 1-\sup\limits_{Q\sim P}\rho(Q)$. By definition of $\lambda_c$, for each $\lambda>\lambda_c$ there exists a $\lambda$-CPS: $(\Qs,\tilde{S})$.
By using Proposition \ref{pcn}, we deduce that for all $n\in\{1,...,N\}$ and $x\in E_{n-1}$:
$$(1-\lambda)\rho_n^-(\Qs,x)\leq \rho_n^+(\Qs,x).$$
We divide by $\rho_n^-(\Qs,x)$ both sides of this inequality and we take the minimum on all $n\in\{1,...,N\}$ and $x\in E_{n-1}$ to obtain:
$$1-\lambda\leq \rho(\Qs)\leq \sup\limits_{Q\sim P}\rho(Q),$$
and then $\lambda\geq 1- \sup\limits_{Q\sim P}\rho(Q)$. The statement follows because the last inequality is true for all $\lambda>\lambda_c$.
Now, we prove that $\lambda_c\leq 1-\sup\limits_{Q\sim P}\rho(Q)$. For this, we take $\lambda<\lambda_c$. By Proposition \ref{cpcn}, for each probability
$\Qs\sim P$, there exist $n\in\{1,...,N\}$ and $x\in E_{n-1}$ such that:
$$1-\lambda>\frac{\rho_n^+(\Qs,x)}{\rho_n^-(\Qs,x)}\geq \rho(\Qs),$$
and then $1-\lambda>\rho(\Qs)$. Since this inequality is true for all $\Qs\sim P$, we deduce that:
$$\lambda<1-\sup\limits_{Q\sim P}\rho(Q).$$
The last inequality being true for all $\lambda<\lambda_c$, the result follows.
\end{proof}
We end this paragraph with the following representation of $\lambda_c$, which is slightly different from that obtained in Theorem \ref{thmc}.
\begin{corollary}\label{lac}
We have that:
$$\lambda_c=1-\sup\limits_{Q\in\Ps_1(\Omega)}\rho(Q).$$
\end{corollary}
\begin{proof}
By Theorem \ref{thmc}, we know that $\lambda_c=1-\sup\limits_{Q\sim P}\rho(Q)$. Since:
$$\{Q: Q\sim P\}\subseteq\Ps_1(\Omega),$$
we have that
$$\sup\limits_{Q\sim P}\rho(Q)\leq \sup\limits_{Q\in\Ps_1(\Omega)}\rho(Q)=\rho(\Qs^*).$$
If $\Qs^*\sim P$, the result follows. If this is not the case, we define, for each $0<\varepsilon<1$, a probability measure $\Qs^\varepsilon\sim P$,
by setting for all $n\in \{1,...,N\}$ and $y\in E_{n-1}$:
\begin{equation*}
q_n^{\varepsilon}(y)=\begin{cases}
q_n^*(y) & \text{if } q_n^*(y)\in(0,1),\\
\varepsilon & \text{if } q_n^*(y) = 0,\\
1-\varepsilon & \text{if } q_n^*(y) =1,
\end{cases}
\end{equation*}
where $q_n^*(y):=q_n(\Qs^*,y)$ and $q_n^{\varepsilon}(y):=q_n(\Qs^{\varepsilon},y)$ with $q_n(\cdot,y)$ given by \eqref{qn}.
As $\rho_n^+$ and $\rho_n^-$ are continuous functions for each $n\in\{1,\ldots,N+1\}$, it follows that $\rho$ is continuous as well, and
therefore
$$\rho(\Qs^{\varepsilon})\xrightarrow[\varepsilon\to0]{} \rho(\Qs^*).$$
Now, since $\Qs^{\varepsilon}\sim P$,
$$\rho(\Qs^{\varepsilon})\leq \sup\limits_{Q\sim P}\rho(Q)\leq \rho(\Qs^*),$$
and we obtain that indeed $\sup\limits_{Q\sim P}\rho(Q)=\rho(\Qs^*)$.
\end{proof}
\begin{remark}
The advantage of Corollary \ref{lac} with respect to Theorem \ref{thmc} lies in the fact that the supremum of $\rho$ over the set $\{\Qs: \Qs\sim P\}$ is
not necessarily attained, while the supremum of $\rho$ taken over $\Ps_1(\Omega)$ is. This will be particularly useful in order to obtain good upper bounds
for $\lambda_c$.
\end{remark}
\begin{remark}\label{r6}
Note that, if $\alpha_n(x)\geq1\geq\beta_n(x)$ for all $n\in\{1,...,N\}$ and $x\in E_{n-1}$, then $\lambda_c=0$. Indeed, if we define the probability $\Qs^*$ by:
$$q_n(\Qs^*,x)=\frac{1-\beta_n(x)}{\alpha_n(x)-\beta_n(x)},\qquad n\in\{1,...,N\},\, x\in E_{n-1},$$
then a direct computation gives $q_n(\Qs^*,x)\alpha_n(x)+(1-q_n(\Qs^*,x))\beta_n(x)=1$ for all $n$ and $x$, from which it follows easily that:
$$\rho(\Qs^*)=1.$$
Using Corollary \ref{lac} we deduce that $\lambda_c=0$.
\end{remark}
\subsection{\texorpdfstring{Characterization of $\mathcal{M}(\lambda)$}{}}
\begin{theorem}\label{thmmlc}
For each $\lambda\in[0,1]$, we have that:
$$\mathcal{M}(\lambda)=\left\{ \Qs\sim P:\,\rho(\Qs)\geq 1-\lambda\right\}=\rho_*^{-1}\left([1-\lambda,1]\right),$$
where $\rho_*$ denotes the restriction of $\rho$ to the set of all probability measures $\Qs\sim P$.
\end{theorem}
\begin{proof}
We fix $\lambda\in[0,1]$. We prove first that:
$$\mathcal{M}(\lambda)\subseteq\left\{ \Qs\sim P:\,\rho(\Qs)\geq 1-\lambda\right\}$$
Indeed, if $\Qs\in\mathcal{M}(\lambda)$, then by Corollary \ref{ccn}, we conclude that for all $n\in\{1,...,N\}$ and
$x\in E_{n-1}$: $\Delta_n^\lambda(\Qs,x)\geq 0$, hence by definition:
$$1-\lambda\leq\frac{\rho_n^+(\Qs,x)}{\rho_n^-(\Qs,x)}\leq 1.$$
It follows that $\rho(\Qs)\geq 1-\lambda$, which proves the first inclusion.
It remains to prove that:
$$\mathcal{M}(\lambda)\supseteq\left\{ \Qs\sim P:\,\rho(\Qs)\geq 1-\lambda\right\}.$$
In order to do this, we take $\Qs\sim P$ such that $\rho(\Qs)\geq 1-\lambda$. This implies that for all $n\in\{1,...,N\}$ and
$x\in E_{n-1}$:
$$\frac{\rho_n^+(\Qs,x)}{\rho_n^-(\Qs,x)}\geq 1-\lambda,$$
and then $\Delta_n^\lambda(\Qs,x)\geq 0$. Using Proposition \ref{cpcn}
we conclude that $\Qs\in\mathcal{M}(\lambda)$. The proof is finished.
\end{proof}
\subsection{\texorpdfstring{Characterization of $\mathcal{M}(\lambda_c)$}{}}
Theorem \ref{thmmlc} provides, for each $\lambda\in[0,1]$, an expression for the set $\Ms(\lambda)$. Obviously, when $\lambda<\lambda_c$
we can be more precise and say that $\Ms(\lambda)=\emptyset$. When $\lambda>\lambda_c$, we can say that $\Ms(\lambda)\neq\emptyset$. But in the transition phase, i.e., when
$\lambda=\lambda_c$, we cannot say a priori whether $\Ms(\lambda)$ is empty or not. Settling this question is the goal of this paragraph.
We start with the next lemma, which is a stronger version of Theorem \ref{thmmlc} for the special case $\lambda=\lambda_c$.
\begin{lemma}\label{qlc}
$\Qs\in\Ms(\lambda_c)$ if and only if $\Qs\sim P$ and $\lambda_c=1-\rho(\Qs)$.
\end{lemma}
\begin{proof}
If $\Qs\in\Ms(\lambda_c)$, then by using Theorems \ref{thmc} and \ref{thmmlc}, we obtain that:
$$\rho(\Qs)\geq 1-\lambda_c=\sup\limits_{Q\sim P}\rho(Q),$$
which implies that $\rho(\Qs)=1-\lambda_c$.\\
The other implication follows from Theorem \ref{thmmlc}.
\end{proof}
We know from this lemma that, if $\Qs\in\Ms(\lambda_c)$, then there exist $n\in\{1,...,N\}$ and $x\in E_{n-1}$ such that:
\begin{equation}\label{minp}
\rho_n^+(\Qs,x)=(1-\lambda_c)\rho_n^{-}(\Qs,x).
\end{equation}
The nodes verifying this identity will be particularly important for characterizing $\Ms(\lambda_c)$. For this reason, we
define for each $\Qs\in\Ps(\Omega)$, the sets:
$$A_n(\Qs)=\left\{x\in E_{n-1}:\, \rho_n^+(\Qs,x)=(1-\lambda_c)\rho_n^{-}(\Qs,x)\right\},\qquad n\in\{1,...,N\},$$
and we put:
$$\nu(\Qs)=\sum\limits_{n=1}^N|A_n(\Qs)|.$$
By definition, $\nu(\Qs)$ is the number of points verifying \eqref{minp}. It follows from the previous lemma that if $\Qs\in\Ms(\lambda_c)$ then $\nu(\Qs)>0$. In that case, we can define:
$$k_\Qs=\max\{n\in\{1,...,N\}:\, A_n(\Qs)\neq\emptyset\}.$$
\begin{lemma}\label{laux}
If $\Qs\in\Ms(\lambda_c)$, then for all $x\in A_{k_\Qs}(\Qs)$:
$$r_{k_\Qs}^+(\Qs,x)>1 \,\textrm{ or }\,r_{k_\Qs}^-(\Qs,x)<1.$$
\end{lemma}
\begin{proof}
To simplify the notation, we set $k=k_\Qs$. Now, we fix $x\in A_k(\Qs)$. If $r_k^+(\Qs,x)\leq 1$, then, by maximality of $k$, we deduce that:
\begin{align*}
r_k^+(\Qs,x)&=q_{k} (x)\alpha_{k} (x)\rho_{k+1}^+(\Qs,x\star u)+ (1-q_{k} (x))\beta_{k}(x)\rho_{k+1}^+(\Qs,x\star d)\\
&>(1-\lambda_c)\left(q_{k} (x)\alpha_{k} (x)\rho_{k+1}^-(\Qs,x\star u)+ (1-q_{k} (x))\beta_{k}(x)\rho_{k+1}^-(\Qs,x\star d)\right)\\
&=(1-\lambda_c) r_k^-(\Qs,x).
\end{align*}
On the other hand, since $x\in A_k(\Qs)$ and $r_k^+(\Qs,x)\leq 1$:
$$r_k^+(\Qs,x)=\rho_k^+(\Qs,x)= (1-\lambda_c)[1\vee r_k^-(\Qs,x)].$$
Combining this identity with the previous inequality, we obtain the result.\\
\end{proof}
Note that, if $\Qs\in\Ms(\lambda_c)$, from the lemma above we can deduce that for each $x\in A_{k_\Qs}(\Qs)$, we have either $\rho_{k_\Qs}^+(\Qs,x)=1$ or $\rho_{k_\Qs}^-(\Qs,x)=1$. However, the last assertion is not a priori stable
under small perturbations of $\Qs$, while the assertion in the lemma is. More precisely, if we start with a point satisfying $r_k^+(\Qs,x)>1$ (respectively $r_k^{-}(\Qs,x)<1$), then by continuity, we can find $\varepsilon>0$ such that if
$d_\infty(\widehat{\Qs},\Qs)\leq \varepsilon$, then $r_k^+(\widehat{\Qs},x)>1$ (respectively $r_k^-(\widehat{\Qs},x)<1$).
Now, we fix $k=k_\Qs$ and $x\in A_{k}(\Qs)$. We will be interested in the behavior, under small perturbations of $\Qs$, of the ratio:
\begin{equation}\label{ratio}
\frac{\rho_{k}^+(\Qs,x)}{\rho_{k}^-(\Qs,x)}=1-\lambda_c.
\end{equation}
We know from the previous discussion that either the numerator or the denominator remains equal to one under small perturbations. If in addition
$\lambda_c>0$, we conclude that either $\rho_{k}^+(\Qs,x)<1$ or $\rho_{k}^-(\Qs,x)>1$. Using this, the next lemma proves in particular that
we can make small perturbations in such a way that the ratio in \eqref{ratio} increases.
\begin{lemma}\label{lmon}Let $\Qs\sim P$, $k\in\{1,...,N\}$ and $y\in E_{k-1}$. If $\rho_k^+(\Qs,y)<1$ (respectively $\rho_k^-(\Qs,y)>1$), then there exist $\ell\geq k$ and $(z,y)\in E_{\ell-1}$ such that for every $\varepsilon>0$ there exists $\Qs^\varepsilon\sim P$ verifying:
\begin{enumerate}
\item\label{lc1} $\lvert q_\ell^\varepsilon(z,y)-q_\ell(z,y)\rvert\leq \varepsilon.$
\item\label{lc2} $q_n^\varepsilon(x)=q_n(x)\textrm{ if and only if } n\neq \ell \textrm{ or } x\neq(z,y).$
\item\label{lc3} $\rho_k^+(\Qs^\varepsilon,y)>\rho_k^+(\Qs,y)\,\, (\textrm{resp. } \rho_k^-(\Qs^\varepsilon,y)<\rho_k^-(\Qs,y)).$
\end{enumerate}
We use the notation $q_n^\varepsilon(\cdot):=q_n(\Qs^\varepsilon,\cdot)$, where $q_n(\Qs^\varepsilon,\cdot)$ is given in \eqref{qn}.
\end{lemma}
\begin{proof}
We give the proof for the assertion concerning the case $\rho_k^+(\Qs,y)<1$ (the proof for the case $\rho_k^-(\Qs,y)>1$ is analogous). We prove this by means of a backward induction on the level $k$.
So, we start with the proof for $k=N$.\\
If $\rho_N^+(\Qs,y)<1$, then:
$$\rho_N^+(\Qs,y)=q_N(y)(\alpha_N(y)-\beta_N(y))+\beta_N(y)<1,$$
and then, it suffices to take $\ell=N$, $(z,y)=y$ and for each $\varepsilon>0$, we choose:
$$q_N^\varepsilon(y)=q_N(y)+\delta(\varepsilon)\textrm{ with } \delta(\varepsilon)=\varepsilon\wedge\left(\frac{1-q_N(y)}{2}\right).$$
Since $q_N(y)\in(0,1)$, we have that $\delta(\varepsilon)>0$ and $q_N^\varepsilon(y)\in(0,1)$. These choices induce a new probability $\Qs^{\varepsilon}\sim P$
which clearly verifies conditions (1)-$N$, (2)-$N$ and (3)-$N$.\\
Now, we suppose that the assertion is true at the level $k+1$ and we prove that it is also true at the level $k$.\\
If $\rho_k^+(\Qs,y)<1$, then:
\begin{equation}\label{rhol1}
\rho_k^+(\Qs,y)=q_k(y)\alpha_k(y)\rho_{k+1}^+(\Qs,y\star u)+(1-q_k(y))\beta_k(y)\rho_{k+1}^+(\Qs,y\star d)<1.
\end{equation}
At this point, there are three situations.
(i) If $\alpha_k(y)\rho_{k+1}^+(\Qs,y\star u)>\beta_k(y)\rho_{k+1}^+(\Qs,y\star d)$, we can take $\ell=k$, $(z,y)=y$ and for each $\varepsilon>0$, we choose:
$$q_k^\varepsilon(y)=q_k(y)+\delta(\varepsilon)\textrm{ with } \delta(\varepsilon)=\varepsilon\wedge\left(\frac{1-q_k(y)}{2}\right),$$
and we can conclude as in the case $k=N$.
(ii) If $\alpha_k(y)\rho_{k+1}^+(\Qs,y\star u)<\beta_k(y)\rho_{k+1}^+(\Qs,y\star d)$, we can take $\ell=k$, $(z,y)=y$ and for each $\varepsilon>0$, we choose:
$$q_k^\varepsilon(y)=q_k(y)-\delta(\varepsilon)\textrm{ with } \delta(\varepsilon)=\varepsilon\wedge\frac{q_k(y)}{2}.$$
Using similar arguments as before, we achieve the proof in this case.
(iii) If $\alpha_k(y)\rho_{k+1}^+(\Qs,y\star u)=\beta_k(y)\rho_{k+1}^+(\Qs,y\star d)$, then
$\rho_{k+1}^+(\Qs,y\star u)<1$ (because $\alpha_k(y)>\beta_k(y)$). Applying the induction hypothesis at the level $k+1$ to $y\star u$, we obtain $\ell\geq k+1$, $(z,y\star u)\in E_{\ell-1}$
and for each $\varepsilon>0$ a probability measure $\Qs^\varepsilon\sim P$ verifying (1)-$(k+1)$, (2)-$(k+1)$ and (3)-$(k+1)$. Conditions (1)-$k$ and (2)-$k$ remain the same. Finally, condition (3)-$k$ follows by
plugging condition (3)-$(k+1)$ into \eqref{rhol1}. The proof is finished.\\
\end{proof}
Before establishing the characterization theorem for $\Ms(\lambda_c)$, we fix some notation which will be useful in the proof. For each $\Qs\in \Ps_1(\Omega)$,
we put:
$$\As(\Qs)=\left\{(n,x): 1\leq n\leq N,\, x\in E_{n-1}\textrm{ s.t. }\frac{\rho_n^+(\Qs,x)}{\rho_n^-(\Qs,x)}=\rho(\Qs)\right\},$$
and we note that, if $\Qs\in\Ms(\lambda_c)$, then:
$$\As(\Qs)=\left\{(n,x): 1\leq n\leq N,\, x\in A_n(\Qs)\right\}.$$
Until now, we know how to perturb a measure $\Qs\in\Ms(\lambda_c)$ in order to increase the ratio \eqref{ratio} for a point $x\in A_{k_\Qs}(\Qs)$. If in addition
we want to perturb the measure in such a way that the sets $\As(\Qs)$ decrease (in the sense of inclusion), we need to look at the quantity:
$$\tilde{\rho}(\Qs)=\min_{(n,x)\notin \As(\Qs)}\frac{\rho_n^+(\Qs,x)}{\rho_n^-(\Qs,x)},$$
with the convention that $\tilde{\rho}(\Qs)=\rho(\Qs)$ if $\frac{\rho_n^+(\Qs,x)}{\rho_n^-(\Qs,x)}=\rho(\Qs)$ for all $n\in\{1,...,N\}$ and $x\in E_{n-1}$.
We define also:
$$\eta(\Qs)=\tilde{\rho}(\Qs)- \rho(\Qs)\geq 0.$$
\begin{theorem}
We have that $\Ms(\lambda_c)\neq \emptyset$ if and only if for all $n\in\{1,...,N\}$ and $x\in E_{n-1}$:
$$\beta_n(x)<1< \alpha_n(x),$$
and in this case $\Ms(\lambda_c)=\{\Qs^0\}$, where $\Qs^0$ is the probability measure defined in Paragraph \ref{ssnac}.
\end{theorem}
\begin{proof}
($\Leftarrow$) If $\beta_n(x)<1< \alpha_n(x)$ for all $n\in\{1,...,N\}$ and $x\in E_{n-1}$, then $\lambda_c=0$ (see Remark \ref{r6}) and the result is a consequence of the
no arbitrage condition in the frictionless case (see Paragraph \ref{ssnac}).\\
($\Rightarrow$) Assume that there exist $n_0\in\{1,...,N\}$ and $x^*\in E_{n_0-1}$ such that $\beta_{n_0}(x^*)\geq 1$ or $\alpha_{n_0}(x^*)\leq 1$. We will
prove that $\Ms(\lambda_c)=\emptyset$. In order to do this, we proceed by contradiction, that is, we suppose that there exists $\Qs\in\Ms(\lambda_c)$.
Since the no arbitrage condition for the frictionless case is not satisfied, we deduce that $\lambda_c>0$.\\
We set $k=k_\Qs$ and we fix $x\in A_k(\Qs)$. Thanks to Lemma \ref{laux}, we know that either $r_k^+(\Qs,x)>1$ or $r_k^-(\Qs,x)<1$.
(i) If $r_k^+(\Qs,x)>1$, by continuity, we can find $\delta_1>0$ such that:
$$d_\infty(\Qs,\widehat{\Qs})\leq\delta_1\Rightarrow r_k^+(\widehat{\Qs},x)>1.$$
(i.1) If $\eta=\eta(\Qs)>0$, again by continuity, we can find $\delta_2>0$ such that:
$$d_\infty(\Qs,\widehat{\Qs})\leq\delta_2\Rightarrow \max_{n\in\{1,...,N\}}\left\{\max_{y\in E_{n-1}}\left|\frac{\rho_n^+(\Qs,y)}{\rho_n^-(\Qs,y)}-\frac{\rho_n^+(\widehat{\Qs},y)}{\rho_n^-(\widehat{\Qs},y)}\right|\right\}\leq\frac{\eta}{2}.$$
Since $\rho_k^-(\Qs,x)>1$, by using Lemma \ref{lmon}, we can associate to $\varepsilon=\delta_1\wedge\delta_2>0$ a probability
measure $\Qs^\varepsilon$ verifying \eqref{lc1}, \eqref{lc2} and \eqref{lc3}. Conditions \eqref{lc1} and \eqref{lc2}, together with the fact that $\varepsilon\leq \delta_1$, imply that:
$$\rho_k^+(\Qs^\varepsilon,x)=\rho_k^+(\Qs,x)=1.$$
Using this and \eqref{lc3}, we obtain that:
$$\frac{\rho_k^+(\Qs^\varepsilon,x)}{\rho_k^-(\Qs^\varepsilon,x)}=\frac{1}{\rho_k^-(\Qs^\varepsilon,x)}>\frac{1}{\rho_k^-(\Qs,x)}=\frac{\rho_k^+(\Qs,x)}{\rho_k^-(\Qs,x)}=1-\lambda_c$$
and for each $m\leq k$ and $y\in E_{m-1}$:
\begin{equation}\label{eaux1}
\frac{\rho_m^+(\Qs^\varepsilon,y)}{\rho_m^-(\Qs^\varepsilon,y)}\geq \frac{\rho_m^+(\Qs,y)}{\rho_m^-(\Qs,y)}.
\end{equation}
For the last assertion, it is crucial that, in passing from $\Qs$ to $\Qs^\varepsilon$, the only change at level $k$ is in the quantity $\rho_k^-(\Qs,x)$,
which decreases.\\
Now, for each $n\in\{1,...,N\}$ and $y\notin A_n(\Qs)$, since $\varepsilon\leq\delta_2$, we have:
$$\frac{\rho_n^+(\Qs^\varepsilon,y)}{\rho_n^-(\Qs^\varepsilon,y)}\geq -\frac{\eta}{2}+\frac{\rho_n^+(\Qs,y)}{\rho_n^-(\Qs,y)}\geq -\frac{\eta}{2}+\tilde{\rho}(\Qs)=\frac{\eta}{2}+ 1-\lambda_c>1-\lambda_c.$$
From this and \eqref{eaux1}, we can deduce that $\rho(\Qs^\varepsilon)=1-\lambda_c$ and that:
$$A_n(\Qs^\varepsilon)\subseteq A_n(\Qs),$$
for each $n\in\{1,...,N\}$, the inclusion being strict for $n=k$.\\
(i.2) If $\eta(\Qs)=0$, we proceed in the same way, but taking $\varepsilon=\delta_1$. The arguments remain the same until \eqref{eaux1}
and from there, using that in this case $k=N$, we can obtain the same conclusion.
(ii) If $r_k^-(\Qs,x)<1$, by continuity, we can find $\delta_3>0$ such that:
$$d_\infty(\Qs,\widehat{\Qs})\leq\delta_3\Rightarrow r_k^-(\widehat{\Qs},x)<1$$
and the arguments are similar to those in (i), but taking now $\varepsilon=\delta_2\wedge\delta_3$ when $\eta(\Qs)>0$ and $\varepsilon=\delta_3$ in the other case. The only difference is that now the probability measure
$\Qs^\varepsilon$ verifies:
$$\rho_k^-(\Qs^\varepsilon,x)=\rho_k^-(\Qs,x)=1\textrm{ and }\rho_k^+(\Qs^\varepsilon,x)\geq\rho_k^+(\Qs,x),$$
but the conclusions are the same.
Summarizing, starting from $\Qs\sim P$ satisfying $\rho(\Qs)=1-\lambda_c$, we construct a probability measure $\Qs^{(1)}=\Qs^\varepsilon\sim P$ such that $\rho(\Qs^{(1)})=1-\lambda_c$ and $\nu(\Qs^{(1)})<\nu(\Qs)$. We repeat the procedure
inductively, starting each time with a probability measure $\Qs^{(i)}\sim P$ satisfying $\rho(\Qs^{(i)})=1-\lambda_c$ and constructing a new probability measure $\Qs^{(i+1)}\sim P$ such that $\rho(\Qs^{(i+1)})=1-\lambda_c$ and
$\nu(\Qs^{(i+1)})<\nu(\Qs^{(i)})$. Necessarily, at some point we arrive at a probability measure $\Qs^{(n_0)}\sim P$ verifying $\rho(\Qs^{(n_0)})=1-\lambda_c$ and $\nu(\Qs^{(n_0)})=0$, which is a contradiction.
\end{proof}
\section{Homogeneous and semi-homogeneous binary markets}
In this section, we are interested in deducing, from the characterization of $\lambda_c$ (Theorem \ref{thmc} or Corollary \ref{lac}), more explicit expressions
in some special cases of binary markets. More precisely, we focus on the following two cases:
\begin{itemize}
\item \textit{Homogeneous case}: We refer to this case when the parameters of the model are homogeneous in time and space, that is:
$$0<\beta_n(x)=\beta<\alpha_n(x)=\alpha,$$
for all $n\in\{1,...,N\}$ and $x\in E_{n-1}$.
\item \textit{Semi-homogeneous case}: We refer to this case when the parameters of the model are not necessarily homogeneous in time, but are still homogeneous in space. That is:
$$0<\beta_n(x)=\beta_n<\alpha_n(x)=\alpha_n,$$
for all $n\in\{1,...,N\}$ and $x\in E_{n-1}$.
\end{itemize}
Henceforth, we assume that our binary markets are semi-homogeneous (this covers the homogeneous case). In this framework, we start by giving an upper bound for $\lambda_c$, and then we prove that
this upper bound coincides with $\lambda_c$ for homogeneous binary markets and also for a large class of semi-homogeneous binary markets. In order to do this, based on the characterization of
$\lambda_c$ given by Corollary \ref{lac}, we construct a probability measure by taking at each time the best ``1-step'' choice, which gives us, by means of $\rho$, a natural upper bound and also a naive
candidate for the critical transaction cost $\lambda_c$.
Let $\Qs^*$ be the probability measure defined by:
$$q_n(\Qs^*,x)=:q_n^*=1_{\{\alpha_n\leq 1\}}+\frac{1-\beta_n}{\alpha_n-\beta_n}1_{\{\beta_n<1<\alpha_n\}},\quad n\in\{1,...,N\},\, x\in E_{n-1},$$
and define the sequences of positive numbers ${\{\varrho_n^+\}}_{n=1}^{N+1}$, ${\{\varrho_n^-\}}_{n=1}^{N+1}$ and ${\{\gamma_n\}}_{n=1}^N$ by setting:
$$\varrho_{N+1}^+=\varrho_{N+1}^-=1$$
and for $n \in\{1,...,N\}$:
$$\gamma_n= \alpha_n 1_{\{\alpha_n\leq 1\}}+\beta_n 1_{\{\beta_n\geq 1\}}+1_{\{\beta_n<1<\alpha_n\}},$$
$$\varrho_n^+=1\wedge\left[\gamma_n\,\varrho_{n+1}^+\right]\quad\textrm{and}\quad\varrho_n^-=1\vee\left[\gamma_n\,\varrho_{n+1}^-\right].$$
The relation between these sequences of numbers and the functions $\rho^+$ and $\rho^-$ is given in the following lemma:
\begin{lemma}\label{lid}
For all $n\in\{1,...,N+1\}$ and $x\in E_{n-1}$:
$$\varrho_n^+= \rho_n^+(\Qs^*,x) \quad\textrm{and}\quad\varrho_n^-=\rho_n^-(\Qs^*,x)$$
\end{lemma}
\begin{proof}
We prove this by using a backward recurrence. By definition, the result is true for $n=N+1$. Now, we assume the result holds for $n+1$ and we prove it for $n$.
By definition of $\rho^+$ and the recurrence step, we obtain that:
$$\rho_{n}^+(\Qs^*,x)=1\wedge\left[\left(\,q_{n}^*\,\alpha_{n}+(1-q_{n}^*)\,\beta_{n}\right)\varrho_{n+1}^+\right],$$
and now, using the definitions of $q_n^*$ and $\gamma_n$:
$$\rho_{n}^+(\Qs^*,x)=1\wedge\left[\gamma_n\,\varrho_{n+1}^+\right].$$
The result follows from the definition of $\varrho_n^+$. The proof for $\rho^-$ is analogous.
\end{proof}
In order to obtain an easy expression for $\varrho_n^+$ and $\varrho_n^-$, we introduce the sets:
$$\Lambda_n=\{1\}\cup\left\{\prod\limits_{\ell=0}^{k}\gamma_{n+\ell}:\, 0\leq k\leq N-n\right\}.$$
\begin{lemma}\label{fcr}
For all $n\in\{1,...,N\}$:
$$\varrho_n^+= \min \Lambda_n \quad\textrm{and}\quad\varrho_n^-= \max \Lambda_n.$$
\end{lemma}
\begin{proof}
The result follows from a simple backward recurrence and the fact that:
$$\Lambda_n=\{1\}\cup \gamma_n\Lambda_{n+1}.$$
\end{proof}
\begin{remark}\label{rme}
If we define the sets:
$$\Lambda_n^*=\left\{\frac{x}{y}:\,x,y\in \Lambda_n\right\}\quad\textrm{and}\quad\Lambda_n^0=\left\{\prod\limits_{\ell=p}^k\gamma_{n+\ell}:\, 0\leq p\leq k\leq N-n\right\},$$
we can see that:
$$\frac{\min \Lambda_n}{\max \Lambda_n}=\min \Lambda_n^*=1\wedge\min\Lambda_n^0\wedge\frac{1}{\max\Lambda_n^0}.$$
\end{remark}
\begin{proposition}[Upper bound in the semi-homogeneous case]\label{ubsh} We have that:
$$\lambda_c\leq 1-1\,\wedge\,\min\Lambda_1^0\,\wedge\,\frac{1}{\max \Lambda_1^0}.$$
\end{proposition}
\begin{proof}
From Corollary \ref{lac} and using Lemmas \ref{lid} and \ref{fcr}, we obtain that:
\begin{equation}
\lambda_c\leq 1- \min\limits_{n\in\{1,...,N\}}\left\{\frac{\min \Lambda_n}{\max \Lambda_n}\right\}.
\end{equation}
The statement follows from this inequality, Remark \ref{rme} and the fact that:
$$\Lambda_N^0\subseteq\Lambda_{N-1}^0\subseteq\cdots\subseteq\Lambda_1^0.$$
\end{proof}
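To make the bound of Proposition \ref{ubsh} concrete, the following sketch (ours, written in Python with function names of our own choosing; it is not part of the original development) computes the upper bound $1-1\wedge\min\Lambda_1^0\wedge\frac{1}{\max\Lambda_1^0}$ directly from the sequences $(\alpha_n)$ and $(\beta_n)$. In the homogeneous example at the end it returns $1-0.9^5$, matching case (1) of Proposition \ref{pec} below.
\begin{verbatim}
import numpy as np

def gamma(alpha, beta):
    # gamma_n = alpha_n if alpha_n <= 1, beta_n if beta_n >= 1, else 1
    if alpha <= 1.0:
        return alpha
    if beta >= 1.0:
        return beta
    return 1.0

def lambda_c_upper_bound(alphas, betas):
    # Lambda_1^0 collects all products gamma_p * ... * gamma_k
    g = [gamma(a, b) for a, b in zip(alphas, betas)]
    N = len(g)
    products = [np.prod(g[p:k + 1]) for p in range(N)
                                    for k in range(p, N)]
    return 1.0 - min(1.0, min(products), 1.0 / max(products))

# Homogeneous market with alpha_n = 0.9 <= 1: returns 1 - 0.9**5
print(lambda_c_upper_bound([0.9] * 5, [0.5] * 5))
\end{verbatim}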
The next proposition covers the homogeneous case, as well as a large class of semi-homogeneous cases.
\begin{proposition}\label{pec}In the semi-homogeneous case:
\begin{enumerate}
\item If $\alpha_n\leq 1$, for all $n\in\{1,...,N\}$, then:
$$\lambda_c=1-\prod_{n=1}^N\alpha_n.$$
\item If $\beta_n\geq 1$, for all $n\in\{1,...,N\}$, then:
$$\lambda_c=1-\prod_{n=1}^N\frac1{\beta_n}.$$
\item If $\beta_n\leq 1\leq \alpha_n$, for all $n\in\{1,...,N\}$, then:
$$\lambda_c=0.$$
\end{enumerate}
\end{proposition}
\begin{proof}
(1) We fix $\Qs\sim P$ and we will prove by a backward recurrence that for each $k\in\{1,...,N\}$:
$$\rho_k^+(\Qs,x)\leq \prod_{n=k}^N\alpha_n,$$
for all $x\in E_{k-1}$. In fact, for $k=N$, the statement is true by definition. Now, we suppose that:
$$\rho_{k+1}^+(\Qs,y)\leq\prod_{n=k+1}^N\alpha_n, $$
for all $y\in E_k$. By using this and the definition of $\rho_k^+$, we have that for each $x\in E_{k-1}$:
$$\rho_{k}^+(\Qs,x)\leq \alpha_k \prod_{n=k+1}^N\alpha_n.$$
This proves our statement. We can deduce that:
$$\rho(\Qs)\leq \prod_{n=1}^N\alpha_n.$$
As $\Qs$ is arbitrary, we obtain that: $\lambda_c\geq 1-\prod\limits_{n=1}^N\alpha_n$.
On the other hand, it follows from the definitions and Lemma \ref{fcr} that:
$$\gamma_n=\alpha_n,\,\varrho_n^+=\prod_{k=n}^N\alpha_k\quad\textrm{and}\quad \varrho_n^-=1,$$
and the result is a consequence of Lemma \ref{fcr}, Remark \ref{rme} and Proposition \ref{ubsh}.
(2) The proof of this case uses the same argument as the previous one.
(3) We already proved this in Remark \ref{r6}. However, we provide here a proof using the results of this section.
Note that by definition:
$$\gamma_n=1,\,\varrho_n^+=\varrho_n^-=1,$$
and then, from Lemma \ref{fcr}, Remark \ref{rme} and Proposition \ref{ubsh}, the result follows.
\end{proof}
\bibliographystyle{plain}
\subsection{Deep Learning for Object Detection and Localisation}
Deep learning has been around for a few decades~\cite{fukushima1980neocognitron,giebel1971feature,lecun1989backpropagation}.
After a period of limited use within computer vision, Krizhevsky et al.\ (2012)~\cite{krizhevsky2012imagenet} demonstrated a vast performance improvement for image classification over previous state-of-the-art methods, using a deep \ac{CNN}.
As a result, the use of \acp{CNN} surged within computer vision.
Early \ac{CNN} based approaches for object localisation~\cite{matan1992reading,nowlan1995convolutional,rowley1998neural,sermanet2013pedestrian} used the same sliding-window approach used by previous state-of-the-art detection systems~\cite{dalal2005histograms,felzenszwalb2010object}.
As \acp{CNN} became larger, and with an increased number of layers, this approach became intractable.
However, Sermanet et al. (2014)~\cite{sermanet2014overfeat} demonstrated that few windows are required, provided the \ac{CNN} is fully convolutional.
Furthermore, as the size of their receptive fields increased, \acp{CNN} either became or were trained to be less sensitive to the precise location and scale of the input.
As a result, obtaining a precise bounding box using a sliding window and non-maximal suppression became difficult.
One early approach attempted to solve this issue by training a separate \ac{CNN} for precise localisation~\cite{vaillant1994original}.
Szegedy et al. (2013)~\cite{szegedy2013deep} modified the architecture of Krizhevsky et al.\ (2012)~\cite{krizhevsky2012imagenet} for localisation by replacing the final layer of the \ac{CNN} with a regression layer.
This layer produces a binary mask indicating whether a given pixel lies within the bounding box of an object.
Schulz and Behnke (2011)~\cite{schulz2011object} previously used a similar approach with a much smaller network for object segmentation.
Girshick et al. (2014)~\cite{girshick2014rich} introduced \ac{RCNN}, which surpassed previous approaches.
The authors used selective search~\cite{uijlings2013selective}, a hierarchical segmentation method, to generate region proposals: possible object locations within an image.
Next, a \ac{CNN} obtains features from each region and a \ac{SVM} classifies each region.
In addition, they used a regression model to improve the accuracy of the bounding box output by learning bounding box adjustments for each class-agnostic region proposal.
He et al. (2015)~\cite{he2015spatial} improved the run-time performance by introducing SPP-net, which uses a \ac{SPP}~\cite{grauman2005pyramid,lazebnik2006beyond} layer after the final convolutional layer.
The convolutional layers operate on the whole image, while the \ac{SPP} layer pools based on the region proposal to obtain a fixed length feature vector for the fully connected layers.
Girshick (2015)~\cite{girshick2015fast} later introduced \ac{FastRCNN} which improves upon \ac{RCNN} and SPP-net and allows the \ac{CNN} to output a location of the bounding box (relative to the region proposal) directly, along with class detection score, thus replacing the \ac{SVM}.
Furthermore, this work enables end-to-end training of the whole \ac{CNN} for both detection and bounding box regression.
We use this approach to achieve state-of-the-art performance on our \emph{People-Art} dataset and detail the method in Section \ref{sec:CNNarchitecture}.
To make \ac{FastRCNN} even faster and less dependent on selective search~\cite{uijlings2013selective}, Lenc and Vedaldi (2015)~\cite{lenc2015r} used a static set of region proposals.
Ren et al. (2015)~\cite{ren2015faster} instead used the output of the existing convolutional layers plus additional convolutional layers to predict regions, resulting in a further increase in accuracy and efficiency.
Redmon et al. (2015)~\cite{redmon2015you} proposed \ac{YOLO}, which operates quicker though with less accuracy than other state-of-art approaches.
A single \ac{CNN} operates on an entire image, divided in a grid of rectangular cells, without region proposals.
Each cell outputs bounding box predictions and class probabilities;
unlike previous work, this occurs simultaneously.
Huang et al. (2015)~\cite{huang2015densebox} proposed a similar system, introducing up-sampling layers to ensure the model performs better with very small and overlapping objects.
\subsection{Cross-Depiction Detection and Matching}
Early work relating to non-photographic images focused on matching hand-drawn sketches.
Jacobs et al.\ (1995)~\cite{jacobs1995fast} used wavelet decomposition of image colour channels to allow matching between a rough colour image sketch and a more detailed colour image.
Funkhouser et al.\ (2003)~\cite{funkhouser2003search} used a distance transform of a binary line drawing, followed by fourier analysis of the distance transforms at fixed radii from the centre of the drawing, to match 2D sketches and 3D projections, with limited performance.
Hu and Collomosse (2013)~\cite{hu2013performance} used a modified version of \ac{HOG}~\cite{dalal2005histograms} to extract descriptors at interest-points in the image: for photographs, these are at Canny edges~\cite{canny1986computational} pixels; for sketches, these are sketch strokes.
Wang et al. (2015)~\cite{wang2015sketch} used a siamese \ac{CNN} configuration to match sketches and 3D model projections, optimising the \ac{CNN} to minimise the distances between sketches and 3D model projections of the same class.
Another cross-depiction matching approach, by Crowley et al. (2015)~\cite{Crowley15}, uses \ac{CNN} generated features to match faces between photos and artwork.
This relies on the success of a general face detector~\cite{parkhi2015deep}, which succeeds on artwork which is ``largely photo-realistic in nature'' but has not been verified on more abstract artwork styles such as cubism.
Other work has sought to use self-similarity to detect patterns across different depictions, such as Shechtman and Irani (2007)~\cite{shechtman2007matching} and Chatfield et al.\ (2009)~\cite{chatfield2009efficient}, who used self-similarity descriptors formed by convolving small regions within an image over a larger region.
This approach is not suitable for identifying (most) objects as a whole: for example, the results show effective matching of people forming a very specific pose, not of matching people as an object class in general.
Recent work has focused on cross-depiction object classification and detection.
Wu et al.\ (2014)~\cite{wu2014learning} improved upon Felzenszwalb et al.'s \ac{DPM}~\cite{felzenszwalb2010object} to perform cross-depiction matching between photographs and ``artwork'' (including ``clip-art'', cartoons and paintings).
Instead of using root and part-based filters and a latent \ac{SVM}, the authors learnt a fully connected graph to better model object structure between depictions, using the \ac{SSVM} formulation of Cho et al.\ (2013)~\cite{cho2013learning}.
In addition, each model has separate ``attributes'' for photographs and ``artwork'': at test-time, the detector uses the maximum response from either ``attribute'' set, to achieve depiction invariance.
This work improved performance for detecting objects in artwork, but depended on a high performing \ac{DPM} to bootstrap the model.
Our dataset is more challenging than the one used, leading to a low accuracy using \ac{DPM}, and hence this approach is also not suitable.
Crowley and Zisserman (2014)~\cite{crowley2014search} evaluate the performance of \acp{CNN} learnt on \acp{photo} for classifying objects in paintings, showing strong performance in spite of the different domain.
Their evaluation excludes people as a class, as people appear frequently in their paintings without labels.
Our \textit{People-Art} dataset addresses this issue: all people are labelled and hence we provide a new benchmark.
We also believe our dataset contains more variety in terms of artwork styles and presents a more challenging problem.
Furthermore, we advance their findings: we show the performance improvement when a \ac{CNN} is fine-tuned for this task rather than simply fine-tuned on \acp{photo}.
\subsection{Detection Performance on People-Art}
We used the tools of Hoiem et al. (2012)~\cite{hoiem2012diagnosing} to analyse the detection performance of the best performing \ac{CNN}.
Since we only have a single class (person), detections have three types based on their \ac{IOU} with a ground truth labelling:
\begin{description}
\item[Cor] correct i.e.\
\begin{math}
IoU \geq 0.5
\end{math}
\item[Loc] false positive caused by poor localisation,
\begin{math}
0.1 \leq IoU < 0.5
\end{math}
\item[BG] a background region,
\begin{math}
IoU < 0.1
\end{math}
\end{description}
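The following sketch (ours, not the diagnostic code of~\cite{hoiem2012diagnosing}; all names are hypothetical) makes this taxonomy concrete:
\begin{verbatim}
def iou(box_a, box_b):
    # intersection over union of two boxes given as (x1, y1, x2, y2)
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def detection_type(detection, ground_truths):
    # classify a detection as 'Cor', 'Loc' or 'BG' by its best IoU
    best = max((iou(detection, gt) for gt in ground_truths),
               default=0.0)
    if best >= 0.5:
        return 'Cor'
    if best >= 0.1:
        return 'Loc'
    return 'BG'
\end{verbatim}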
Figure \ref{fig:detectionTrend} shows the detection trend: the proportion of detection types as the number of detections increases, i.e.\ from reducing the threshold.
At higher thresholds, the majority of incorrect detections are caused by poor localisation; at lower thresholds, background regions dominate.
In total, there are 1088 people labelled in the test set that are not marked as difficult.
The graph in Figure \ref{fig:detectionTrend} shows a grey dashed line corresponding to this number of detections, and Figure \ref{fig:detectionTrend} shows a separate pie chart for this threshold.
The threshold corresponding to this number of detections is significant: with perfect detection, there would be no false positives or false negatives.
This shows that poor localisation is the bigger cause of false positives, though only slightly more so than background regions.
Figure \ref{fig:falsePositiveBackground} shows false positives caused by background regions.
Some are caused by mammals, which is understandable given that these, like people, have faces and bodies.
Other detections have less clear causes.
Figure \ref{fig:falsePositiveLocalisation} shows the false positives caused by poor localisation.
In some of the cases, the poor localisation is caused by the presence of more than one person, which leads to the bounding box covering multiple people.
In other cases, the bounding box does not cover the full extent of the person, i.e.\ it misses limbs or the lower torso.
We believe that this shows the extent to which the range of poses makes detecting people in \ac{artwork} a challenging problem.
\section{Introduction}
\input{include/introduction}
\input{include/oneImagePerStyle}
\section{Related Work}
\input{include/related}
\section{The \textit{People-Art} Dataset and its Challenges}
\label{sec:dataset}
\input{include/dataset}
\section{\ac{CNN} architecture}
\label{sec:CNNarchitecture}
\input{include/architecture}
\section{Experiments}
For both validation and testing, our benchmark is \acf{AP}: we calculate this using the same method as the PASCAL \ac{VOC} detection task~\cite{everingham2007pascal}.
A positive detection is one whose \ac{IOU} overlap with a ground-truth bounding box is greater than 50\%;
duplicate detections are considered false.
Annotations marked as difficult are excluded.
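For reference, a minimal sketch of the \ac{AP} computation (ours; PASCAL \ac{VOC} variants differ in interpolation details, so this is illustrative rather than the exact evaluation code):
\begin{verbatim}
import numpy as np

def average_precision(scores, is_true_positive, n_ground_truth):
    # area under the precision-recall curve, detections sorted by score
    order = np.argsort(-np.asarray(scores))
    tp = np.asarray(is_true_positive, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    recall = cum_tp / n_ground_truth
    precision = cum_tp / (np.arange(len(tp)) + 1.0)
    for i in range(len(precision) - 2, -1, -1):
        precision[i] = max(precision[i], precision[i + 1])  # envelope
    r = np.concatenate(([0.0], recall))
    return float(np.sum((r[1:] - r[:-1]) * precision))
\end{verbatim}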
\subsection{\ac{ROI} Selection and Layer Fixing for \ac{CNN} Fine-Tuning}
\label{sec:ROISelection}
\input{include/experiments/ROI}
\subsection{Performance Benchmarks on the People-Art Dataset}
\label{sec:peopleArtBenchmarks}
\input{include/experiments/performance}
\subsection{Performance Benchmarks on the Picasso Dataset}
\label{sec:PicassoBenchmarks}
\input{include/experiments/performancePicasso}
\subsection{The Importance of Global Structure}
\label{sec:structure}
\input{include/experiments/structure}
\section{Conclusion}
\input{include/conclusion}
\section*{Acknowledgements}
This research was funded in part by EPSRC grant reference EP/K015966/1.
This research made use of the Balena High Performance Computing Service at the University of Bath.
\bibliographystyle{splncs}
\section{INTRODUCTION\label{sec:introduction}}
Deep learning encompasses a broad class of machine learning methods that use multiple layers of nonlinear processing units in order to learn multilevel representations for detection or classification tasks~\cite{lecun2015deep,goodfellow2016deep,schmidhuber2015deep,bronstein2017geometric,deng2014deep}. The main realizations of deep multi-layer architectures are the so-called deep neural networks (DNNs), which correspond to artificial neural networks (ANNs) with multiple layers between input and output layers. DNNs have been shown to perform successfully in processing a variety of signals with an underlying Euclidean or grid-like structure, such as speech, images and videos. Signals with an underlying Euclidean structure usually come in the form of multiple arrays~\cite{lecun2015deep} and are known for their statistical properties such as locality, stationarity and hierarchical compositionality from local statistics~\cite{simoncelli2001natural,field1989statistics}. For instance, %
an image can be seen as a function on Euclidean space (the 2-D plane) sampled from a grid. In this setting, locality is a consequence of local connections, stationarity results from shift-invariance, and compositionality stems from the intrinsic multi-resolution structure of many images~\cite{bronstein2017geometric}.
It has been suggested that such statistical properties can be exploited by convolutional architectures via DNNs, namely (deep) convolutional neural networks (CNNs)~\cite{lecun1998gradient,lecun1990handwritten,bruna2013invariant} which are based on four main ideas: local connections, shared weights, pooling, and multiple layers~\cite{lecun2015deep}. The role of the convolutional layer in a typical CNN architecture is to detect local features from the previous layer that are shared across the image domain, thus largely reducing the parameters compared with traditional fully connected feed-forward ANNs.
Although deep learning models, and in particular CNNs, have achieved highly improved performance on data characterized by an underlying Euclidean structure, many real-world data sets do not have a natural and direct connection with a Euclidean space. Recently there has been interest in extending deep learning techniques to non-Euclidean domains, such as graphs and manifolds~\cite{bronstein2017geometric}.
An archetypal example is social networks, which can be represented as graphs with users as nodes and edges representing social ties between them. In biology, gene regulatory networks represent relationships between genes encoding proteins that can up- or down-regulate the expression of other genes.
In this paper, we illustrate our results through examples stemming from another kind of relational data with no discernible Euclidean structure, yet with a clear graph formulation, namely citation networks, where nodes represent documents and an edge is established if one document cites the other~\cite{lazer2009life}.
To address the challenge of extending deep learning techniques to graph-structured data, a new class of deep learning algorithms, broadly named graph neural networks (GNNs), has been recently proposed~\cite{xu2018how,hamilton2017inductive,bronstein2017geometric}. In this setting, each node of the graph represents a sample, which is described by a feature vector, and we are additionally provided with relational information between the samples that can be formalized as a graph. GNNs are well suited to node (i.e., sample) classification tasks.
For a recent survey of this fast-growing field, see~\cite{wu2020comprehensive}.
Generalizing convolutions to non-Euclidean domains is not straightforward~\cite{defferrard2016convolutional}. Recently,
graph convolutional networks (GCNs) have been proposed~\cite{kipf2017semi} as a subclass of GNNs with convolutional properties.
The GCN architecture combines the full relational information from the graph together with the node features to accomplish the classification task, using the ground truth class assignment of a small subset of nodes during the training phase.
GCNs have shown improved performance for semi-supervised classification of documents (described by their text) into topic areas, outperforming methods that rely exclusively on text information without the use of any citation information, e.g., multilayer perceptron (MLP)~\cite{kipf2017semi}.
However, we would not expect such an improvement to be universal. In some cases, the additional information provided by the graph (i.e., the edges) might not be consistent with the similarities between the features of the nodes. In particular, in the case of citation graphs, it is not always the case that documents cite other documents that are similar in content. As we will show below with some illustrative data sets, in those cases the conflicting information provided by the graph means that a graph-less MLP approach outperforms GCN.
Here, we explore the relative importance of the graph with respect to the features for classification purposes, and propose a geometric measure based on subspace alignment to explain the relative performance of GCN against different limiting cases.
Our hypothesis is that a degree of alignment among the three layers of information available (i.e., the features, the graph and the ground truth) is needed for GCN to perform well, and that any degradation in the information content leads to an increased misalignment of the layers and worsened performance. We will first use randomization schemes to show that the systematic degradation of the information contained in the graph and the features leads to a progressive worsening of GCN performance. Second, we propose a simple spectral alignment measure, and show that this measure correlates with the classification performance in a number of data sets: (i) a constructive example built to illustrate our work; (ii) CORA, a well-known citation network benchmark; (iii) AMiner, a newly constructed citation network data set; and (iv) two subsets of Wikipedia: Wikipedia~\RNum{1}, where GCN outperforms MLP, and Wikipedia~\RNum{2}, where instead MLP outperforms GCN.
\section{RELATED WORK\label{sec:related_work}}
\subsection{Neural Networks on Graphs}
The first attempt to generalize neural networks on graphs can be traced back to Gori~\textit{et al.}~\cite{gori2005new}, who proposed a scheme combining recurrent neural networks (RNNs) and random walk models. Their method requires the repeated application of contraction maps as propagation functions until the node representations reach a stable fixed point. This method, however, did not attract much attention when it was proposed. With the current surge of interest in deep learning, this work has been reappraised in a new and modern form:~\cite{li2015gated} introduced modern techniques for RNN training based on the original graph neural network framework, whereas~\cite{duvenaud2015convolutional} proposed a convolution-like propagation rule on graphs and methods for graph-level classification.
Non-spectral methods have also been successfully proposed. For example,~\cite{atwood2016diffusion} shows how diffusion-based representations can be learned from graph-structured data and used as the basis for node classification by introducing a diffusion-convolution operation. Niepert~\textit{et al.}~\cite{niepert2016learning} convert graphs locally into sequences fed into a conventional 1D CNN, which needs the definition of a node ordering in a pre-processing step.
The first formulation of convolutional neural networks on graphs (GCNNs) was proposed by Bruna~\textit{et al.}~\cite{bruna2014spectral}. These researchers applied the definition of convolutions to the spectral domain of the graph Laplacian. While being theoretically salient, this method is unfortunately impractical due to its computational complexity. This drawback was addressed by subsequent studies~\cite{defferrard2016convolutional}. In particular,~\cite{defferrard2016convolutional} leveraged fast localized convolutions with Chebyshev polynomials.
In~\cite{kipf2017semi}, a GCN architecture was proposed via a first-order approximation of localized spectral filters on graphs. In that work, Kipf and Welling considered the task of semi-supervised transductive node classification where labels are only available for a small number of nodes. Starting with a feature matrix $X$ and a network adjacency matrix $A$, they encoded the graph structure directly using a neural network model $f(X, A)$, and trained on a supervised target loss function $\mathcal{L}$ computed over the subset of nodes with known labels. Their proposed GCN was shown to achieve improved accuracy in classification tasks
on several benchmark citation networks and on a knowledge graph data set. In our study, we examine how the properties of features and the graph interact in the model proposed by Kipf and Welling for semi-supervised transductive node classification in citation networks. The architecture and propagation rules of this method are detailed in Section~\ref{sec:methodGCNs}.
\subsection{Spectral Graph Convolutions}
We now present briefly the key insights introduced by Bruna~\textit{et al.}~\cite{bruna2014spectral} to extend CNNs to the non-Euclidean domain. For an extensive recent review, the reader should refer to~\cite{bronstein2017geometric}.
We study GCNs in the context of a classification task for $N$ samples. Each sample is described by a $C^{0}$-dimensional feature vector, which is conveniently arranged into the feature matrix $X \in R^{N\times C^{0}}$.
Each sample is also associated with the node of a given graph $\mathcal{G}$ with $N$ nodes, with edges representing additional relational (symmetric) information. This undirected graph is described by the adjacency matrix $A \in R^{N\times N}$. The ground truth assignment of each node to one of $F$ classes is encoded into a 0-1 membership matrix $Y \in R^{N\times F}$.
The main hurdle is the definition of a convolution operation on a graph between a filter $g_{w}$ and the node features $X$. This can be achieved by expressing $g_{w}$ onto a basis encoding information about the graph, e.g., the adjacency matrix $A$ or the Laplacian $L=D-A$, where $D=\text{diag}(A\mathbf{1})$. This real symmetric matrix has an eigendecomposition $L=U\Lambda U^{T}$, where $U$ is the matrix of column eigenvectors with associated eigenvalues collected in the diagonal matrix $\Lambda$. The filters can then be expressed in the eigenbasis $U$ of $L$:
\begin{equation}
g_{w}=Ug_{w}(\Lambda)U^{T},
\end{equation}
with the convolution between filter and signal given by:
\begin{equation}
g_{w}\star X=Ug_{w}(\Lambda)U^{T}X.
\end{equation}
The signal is thus projected onto the space of the graph, filtered in the frequency domain, and projected back onto the~nodes.
\subsection{Graph Convolutional Networks\label{sec:methodGCNs}}
A GCN is a semisupervised method, in which a small subset of the node ground truth labels are used in the training phase to infer the class of unlabeled nodes. This type of learning paradigm, where only a small amount of labeled data is available, therefore lies between supervised and unsupervised learning.
Furthermore, the model architecture, and thus the learning, depend explicitly on the structure of the network. Hence the addition of any new data point (i.e., a new node in the network) will require a retraining of the model. GCNs are, therefore, an example of a transductive learning paradigm, where the classifier cannot be generalized to data it has not already seen.
Node classification using a GCN can be seen as a label propagation task: given a set of seed nodes with known labels, the task is to predict which label will be assigned to the unlabeled nodes given a certain topology and attributes.
\paragraph*{Layerwise Propagation Rule and Multilayer Architecture}
\label{sec:twolayerGCN}
Our study uses the multilayer GCN proposed in~\cite{kipf2017semi}.
Given the matrix $X$ with sample features and the (undirected) adjacency matrix $A$ of the graph $\mathcal{G}$ encoding relational information between the samples,
the propagation rule between layers $\ell$ and $\ell+1$ (of size $C^{\ell}$ and $C^{\ell+1}$, respectively) is given by:
\begin{equation}
H^{\ell+1} = \sigma^\ell\left(\widehat{A}H^{\ell}W^{\ell}\right),
\label{eq:layer_propagation}
\end{equation}
where
$H^{\ell}\in R^{N\times C^{\ell}}$ and $H^{\ell+1} \in R^{N\times C^{\ell+1}}$ are matrices of activation in the $\ell^{th}$ and $(\ell+1)^{th}$ layers, respectively;
$\sigma^\ell(\cdot)$ is the threshold activation function for layer $\ell$; and the weights connecting layers $\ell$ and $\ell+1$ are stored in the matrix $W^{\ell}\in R^{C^{\ell}\times C^{\ell+1}}$. Note that the input layer contains the feature matrix $H^{0}\equiv X$.
The graph is encoded in
$\widehat{A}=\tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2}$, where $\tilde{A} = A + I_{N}$ is the adjacency matrix of a graph with added self-loops, $I_{N}$ is the identity matrix, and $\tilde{D} = \text{diag}(\tilde{A} \mathbf{1})$ is a diagonal matrix containing the degrees of $\tilde{A}$. In the remainder of this work (and to ensure comparability with the results in~\cite{kipf2017semi}), we use $\widehat{A}$ as the descriptor of the graph $\mathcal{G}$.
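As a minimal illustration (our own sketch, using dense matrices for clarity; practical implementations use sparse operations), $\widehat{A}$ can be computed as:
\begin{verbatim}
import numpy as np

def normalized_adjacency(A):
    # A_hat = D~^{-1/2} (A + I) D~^{-1/2}
    A_tilde = A + np.eye(A.shape[0])   # add self-loops
    d = A_tilde.sum(axis=1)            # degrees of A~
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt
\end{verbatim}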
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.5\textwidth]{gcn_architecture_v6.pdf}
\caption{\textbf{Schematic of GCN used.} The graph $\widehat{A}$ is applied to the input of each layer $\ell$ before it is funneled into the input of layer $\ell+1$. The process is repeated until the output has dimension $N\times F$ and produces a predicted class assignment. During the training phase, the predicted assignments are compared against a subset of values $\mathcal{Y}_L$ of the ground truth.}
\label{fig:gcn_architecture}
\end{figure}
Following \cite{kipf2017semi},
we implement a two-layer GCN with propagation rule~\eqref{eq:layer_propagation} and different activation functions for each layer, i.e., a rectified linear unit (ReLU) for the first layer and a softmax unit for the output layer:
\begin{align}
\sigma^0 &:\mathrm{ReLU}(x_{i})=\max(x_{i},0) \\
\sigma^1 &:\mathrm{softmax}(x)_{i} = \frac{\exp(x_{i})}{\sum_{i} \exp(x_{i})},
\end{align}
where $x$ is a vector.
The model then takes the simple form:
\begin{equation}
Z = f(X,A) %
= \mathrm{softmax}(\widehat{A}~\mathrm{ReLU}(\widehat{A}XW^{0})~W^{1}),\\
\label{eq:two_layers_rule}
\end{equation}
where the softmax function is applied row-wise and the ReLU is applied element-wise.
Note there is only one hidden layer with $C^1$ units. Hence
$W^{0} \in R^{C^{0}\times C^{1}}$ maps the input with $C^0$ features to the hidden layer
and $W^{1} \in R^{C^{1}\times C^{2}}$ maps these hidden units to the output layer with $C^{2}=F$ units, corresponding to the number of classes of the ground truth.
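The forward pass of Eq.~\eqref{eq:two_layers_rule} then reads, in a NumPy sketch of our own (not the reference implementation of~\cite{kipf2017semi}):
\begin{verbatim}
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def softmax_rows(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))  # stabilized
    return e / e.sum(axis=1, keepdims=True)

def gcn_forward(A_hat, X, W0, W1):
    # Z = softmax(A_hat ReLU(A_hat X W0) W1)
    H1 = relu(A_hat @ X @ W0)             # hidden layer, N x C1
    return softmax_rows(A_hat @ H1 @ W1)  # output, N x F
\end{verbatim}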
In this semi-supervised multiclass classification, the cross-entropy error over all labeled instances is evaluated as follows:
\begin{equation}
\mathcal{L} = -\displaystyle\sum_{l\in \mathcal{Y}_{L}}\displaystyle\sum_{f=1}^{F} Y_{lf} \, \ln{Z_{lf}},
\label{eq:cross_entropy_error}
\end{equation}
where $\mathcal{Y}_{L}$ is the set of nodes that have labels. The
weights of the neural network ($W^{0}$ and $W^{1}$) are trained using gradient descent to minimize the loss~$\mathcal{L}$.
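In code, the masked loss of Eq.~\eqref{eq:cross_entropy_error} can be sketched as follows (ours; \texttt{labeled\_idx} plays the role of $\mathcal{Y}_L$):
\begin{verbatim}
import numpy as np

def masked_cross_entropy(Z, Y, labeled_idx):
    # cross-entropy over the labeled nodes only
    Zl, Yl = Z[labeled_idx], Y[labeled_idx]
    return -np.sum(Yl * np.log(Zl + 1e-12))  # epsilon for stability
\end{verbatim}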
A visual summary of the GCN architecture is shown in Fig.~\ref{fig:gcn_architecture}. The reader is referred to~\cite{kipf2017semi} for details and in-depth analysis. Although GCN was introduced as a simplified form of spectral-based GNNs, it shows a natural connection with spatial-based GNNs, in which graph convolutions are defined by information propagation. Hence our study of the alignment of graph and features is related more widely to graph-feature correlations, such as spatial autocorrelations measured with, e.g., Moran's and Geary's indices~\cite{de1984extreme,waldhor06moran}, which capture how the features of nodes influence each other via network structure.
\section{METHODS\label{sec:methods}}
\subsection{Randomization Strategies\label{sec:randomization_strategy}}
To test the hypothesis that a degree of alignment across information layers is crucial for a good classification performance of GCN, we gradually randomize the node features, the node connectivity, or both. For the randomization to give a meaningful notion of alignment, at least one ingredient needs to be kept constant. Since we focus on the alignment of graph and features, we keep the ground truth constant.
\subsubsection{Randomization of the Graph}
The edges of the graph are randomized by rewiring a percentage $p_{\widehat{A}}$ %
of edge stubs (i.e., ``half-edges'') under the constraint that the degree distribution remains unchanged. This randomization strategy is described in Algorithm~\ref{algo:random_graph} which is based on the configuration model~\cite{newman2003structure}. Once a randomized realization of the graph is produced, the corresponding $\widehat{A}$ is computed.
\begin{algorithm}[htbp!]
\KwIn{A graph $G(V,E)$, where $V$ is the set of nodes and $E$ is the set of edges, and a randomization percentage $0\leq p_{\widehat{A}} \leq 100$.}
\KwOut{A randomized graph $G_{p_{\widehat{A}}}(V,E')$}
\BlankLine
1. Choose a random subset of edges $E_{r}$ from $E$ with $|E_{r}|=\left\lfloor|E|\times p_{\widehat{A}} /100 \right \rfloor$, and denote the unrandomized edges in $E$ as $E_{u}$.
\BlankLine
2. Obtain the degree sequence of nodes from $E_{r}$, and build a stub list $l_{s}$ based on the degree sequence.
\BlankLine
3. Obtain a randomized stub list $l'_{s}$ by shuffling $l_{s}$, and randomized edges $E'_{r}$ by connecting the stubs in the corresponding positions of the two stub lists~$l_{s}$~and~$l'_{s}$.
\BlankLine
4. Compute $E_{u} \cup E'_{r}$, remove multiedges and self-loops, and obtain the final edge set E'.
\BlankLine
5. Generate randomized graph $G_{p_{\widehat{A}}}(V,E')$ from node set V and edge set $E'$.
\caption{Randomization of the Graph}
\label{algo:random_graph}
\end{algorithm}
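A compact Python rendering of Algorithm~\ref{algo:random_graph} could look as follows (our sketch; edges are unordered pairs of node indices):
\begin{verbatim}
import random

def randomize_graph(edges, p):
    # rewire p% of edge stubs, preserving the degree sequence
    n_rand = int(len(edges) * p / 100)
    shuffled = list(edges)
    random.shuffle(shuffled)
    e_rand, e_keep = shuffled[:n_rand], shuffled[n_rand:]
    stubs = [v for e in e_rand for v in e]   # stub list from E_r
    random.shuffle(stubs)
    rewired = set()
    for u, v in zip(stubs[0::2], stubs[1::2]):
        if u != v:                           # drop self-loops
            rewired.add((min(u, v), max(u, v)))
    kept = {(min(u, v), max(u, v)) for u, v in e_keep}
    return list(kept | rewired)              # drops multi-edges
\end{verbatim}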
\subsubsection{Randomization of the Features}
The features were randomized by swapping feature vectors between a percentage $p_{X}$ %
of randomly chosen nodes following the procedure described in Algorithm~\ref{algo:random_features}.
\begin{algorithm}[htbp!]
\KwIn{A feature matrix $X \in R^{N\times C^{0}}$, and a randomization percentage $0\leq p_X\leq 100$.}
\KwOut{A randomized feature matrix $X_{p_{X}}\in R^{N\times C^{0}}$}
\BlankLine
1. Choose at random $N_{r}$ rows from $X$, where
$N_{r}=\left\lfloor N \, p_X/100 \right\rfloor$.
\BlankLine
2. Swap randomly the $N_{r}$ rows to obtain $X_{p_{X}}$.
\caption{Randomization of the Features}
\label{algo:random_features}
\end{algorithm}
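Algorithm~\ref{algo:random_features} amounts to a permutation of a random subset of rows of $X$, e.g.\ (our sketch):
\begin{verbatim}
import numpy as np

def randomize_features(X, p, seed=None):
    # swap the feature vectors of p% of randomly chosen nodes
    rng = np.random.default_rng(seed)
    n_rand = int(X.shape[0] * p / 100)
    rows = rng.choice(X.shape[0], size=n_rand, replace=False)
    X_rand = X.copy()
    X_rand[rows] = X[rng.permutation(rows)]
    return X_rand
\end{verbatim}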
A fundamental difference between the two randomization schemes is that the graph randomization alters its spectral properties as it gradually destroys the graph structure, whereas the randomization of the features preserves its spectral properties in the principal component analysis (PCA) sense, i.e., the principal values are the same but the loadings on the components are swapped. Hence the feature randomization still alters the classification performance because the features are re-assigned to nodes that have a different environment, thereby changing the result of the convolution operation defined by the $H^{\ell}$ activation matrices~\eqref{eq:layer_propagation}.
\subsection{Limiting Cases\label{sec:limit}}
To interrogate the role that the graph plays in the classification performance of a GCN, it is instructive to consider three limiting cases:
\begin{itemize}
\item \textit{No Graph:} $A=\mathbf{0}\mathbf{0}^{T}$. If we remove all the edges in the graph, the classifier becomes equivalent to an MLP, a classic feed-forward ANN. The classification is based solely on the information contained in the features, as no graph structure is present to guide the label propagation.
\item \textit{Complete Graph:} $A=\mathbf{1}\mathbf{1}^{T} - I_{N}$.
In this case, the mixing of features is immediate and homogeneous, corresponding to a mean field approximation of the information contained in the features.
\item \textit{No Features:} $X=I_{N}$. In this case, the label propagation and assignment are purely based on graph topology.
\end{itemize}
An illustration of these limiting cases can be found in the top row of Table~\ref{table:results_gcn_and_limitCases}.
\subsection{Spectral Alignment Measure}
In order to quantify the alignment between the features, the graph and the ground truth, we propose a measure based on the chordal distance between subspaces, as follows.
\subsubsection{Chordal Distance Between Two Subspaces}
Recent work by Ye and Lim~\cite{ye2016schubert} has shown that the distance between two subspaces of different dimension in $\mathbb{R}^{n}$ is necessarily defined in terms of their principal angles.
Let $\mathcal{A}$ and $\mathcal{B}$ be two subspaces of the ambient space $\mathbb{R}^{n}$
with dimensions $\alpha$ and $\beta$, respectively, with $\alpha \leq \beta < n$. The principal angles between $\mathcal{A}$ and $\mathcal{B}$ denoted $0 \leq \theta_{1}\leq\theta_{2}\leq...\leq\theta_{\alpha} \leq \frac{\pi}{2}$ are defined recursively as follows~\cite{bjorck1973numerical,golub2012matrix}:
\begin{align*}
\theta_{1}&=\min_{a_{1}\in \mathcal{A}, b_{1}\in \mathcal{B}} \arccos\left(\frac{|a_{1}^{T}b_{1}|}{\|a_{1}\|\|b_{1}\|}\right), &\\
\theta_{j}&=\min_{\substack{a_{j}\in \mathcal{A}, b_{j}\in \mathcal{B}\\
a_{j}\bot a_{1},...,a_{j-1}\\
b_{j}\bot b_{1},...,b_{j-1}}}
\arccos\left(\frac{|a_{j}^{T}b_{j}|}{\|a_{j}\|\|b_{j}\|}\right), \enskip j=2,...,\alpha.
\end{align*}
If the \textit{minimal} principal angle is small, then the two subspaces are nearly linearly dependent, i.e., almost perfectly aligned.
A numerically stable algorithm that computes the canonical correlations (i.e., the cosines of the principal angles) between subspaces is given in Algorithm~\ref{algo:principal_angles}. %
\begin{algorithm}[htbp!]
\KwIn{matrices $A_{n \times \alpha}$ and $B_{n \times \beta}$ with $\alpha \leq \beta < n$.}
\KwOut{cosines of the principal angles $\theta_{1}\leq \theta_{2}\leq...\leq \theta_{\alpha}$ between $\mathcal{R}(A)$ and $\mathcal{R}(B)$, the column spaces of $A$ and $B$.}
\BlankLine
1. Find orthonormal bases $\mathcal{Q}_{A}$ and $\mathcal{Q}_{B}$ for the column spaces of $A$ and $B$ using the QR decomposition:
$\mathcal{Q}_{A}^{T}\mathcal{Q}_{A} = \mathcal{Q}_{B}^{T}\mathcal{Q}_{B} = I$; $\mathcal{R}(\mathcal{Q}_{A}) = \mathcal{R}(A)$, $\mathcal{R}(\mathcal{Q}_{B}) = \mathcal{R}(B)$.
\BlankLine
2. Compute the singular value decomposition (SVD): %
$\mathcal{Q}_{A}^{T}\mathcal{Q}_{B}$ = $UCV^{T}$.
\BlankLine
3. Extract the diagonal elements of $C$: $C_{ii}=\cos \theta_i$, to obtain the canonical correlations $\{\cos\theta_{1},...,\cos\theta_{\alpha}\}$.
\caption{Principal angles~\cite{bjorck1973numerical,golub2012matrix}}
\label{algo:principal_angles}
\end{algorithm}
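For reference, a minimal \texttt{numpy} sketch of Algorithm~\ref{algo:principal_angles} is given below; recent versions of SciPy also provide \texttt{scipy.linalg.subspace\_angles}, which returns the principal angles directly. The function name is our own.
\begin{verbatim}
import numpy as np

def principal_angle_cosines(A, B):
    """Canonical correlations between the column spaces of A and B."""
    Q_a, _ = np.linalg.qr(A)               # step 1: orthonormal bases
    Q_b, _ = np.linalg.qr(B)
    c = np.linalg.svd(Q_a.T @ Q_b, compute_uv=False)  # step 2: SVD
    return np.clip(c, 0.0, 1.0)            # step 3: guard against round-off
\end{verbatim}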
The principal angles are the basic ingredient of a number of well defined Grassmannian distances between subspaces~\cite{ye2016schubert}. Here we use the chordal distance
given by:%
\begin{equation}
d(\mathcal{A,B}) = \sqrt{\displaystyle\sum_{j=1}^{\alpha}{\sin^{2}\theta_{j}}}.
\label{eq:pairwise_alignment}
\end{equation}
The larger the chordal distance $d(\mathcal{A,B})$ is, the worse the alignment between the subspaces $\mathcal{A}$ and $\mathcal{B}$. %
We remark that the last inequality in $\alpha \leq \beta < n$ is strict. If a subspace spans the whole ambient space (i.e., $\beta=n$), then its distance to all other strict subspaces of $\mathbb{R}^n$ is trivially zero, as it is always possible to find a rotation that aligns the strict subspace with the whole space.
\subsubsection{Alignment Metric\label{sec:Frobenius_norm}}
Our task involves establishing the alignment between \textit{three} subspaces associated with the features $X$, the graph $\widehat{A}$, and the ground truth $Y$.
To do so, we consider the distance matrix containing all the pairwise chordal distances:
\begin{equation}
D(X,\widehat{A},Y) = \begin{bmatrix}
0 & d(X,\widehat{A}) & d(X,Y) \\[0.3em]
d(X,\widehat{A}) & 0 & d(\widehat{A},Y) \\[0.3em]
d(X,Y) & d(\widehat{A},Y) & 0
\end{bmatrix},
\label{eq:matrix_distance}
\end{equation}
and we take the Frobenius norm~\cite{golub2012matrix} of this matrix $D$ as our \textit{subspace alignment measure} (SAM):
\begin{equation}
\mathcal{S}(X,\widehat{A},Y) = \|D(X,\widehat{A},Y) \|_{\text{F}} = \sqrt{\sum_{i=1}^{3}\sum_{j=1}^{3}D_{ij} ^{2}}.
\label{eq:norm_matrix_distance}
\end{equation}
The larger $\|D\|_{\text{F}}$ is, the worse the alignment between the three subspaces.
This alignment measure has a geometric interpretation related to the area of the triangle with sides $d(X,\widehat{A}), d(X,Y), d(\widehat{A},Y)$ (the smaller blue shaded triangle in Fig.~\ref{fig:pyramid}).
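A sketch of the resulting computation is given below, reusing the \texttt{principal\_angle\_cosines} helper sketched above and assuming the inputs are basis matrices spanning the three (truncated) subspaces.
\begin{verbatim}
import numpy as np

def chordal_distance(A, B):
    cos_t = principal_angle_cosines(A, B)
    return np.sqrt(np.sum(1.0 - cos_t**2))      # sum of sin^2(theta_j)

def sam(B_X, B_A, B_Y):
    """Frobenius norm of the 3x3 pairwise chordal-distance matrix D."""
    d_xa = chordal_distance(B_X, B_A)
    d_xy = chordal_distance(B_X, B_Y)
    d_ay = chordal_distance(B_A, B_Y)
    # each off-diagonal entry of D appears twice, hence the factor of 2
    return np.sqrt(2.0 * (d_xa**2 + d_xy**2 + d_ay**2))
\end{verbatim}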
\subsubsection{Determining the Dimension of the Subspaces\label{sec:generation_subspaces}}
The feature, graph and ground truth matrices $(X,\widehat{A},Y)$ are associated with subspaces of the ambient space $\mathbb{R}^N$, where $N$ is the number of nodes (or samples). These subspaces are spanned by the principal components of the feature matrix $X$, the eigenvectors of $\widehat{A}$, and the principal components of the ground truth matrix $Y$, respectively~\cite{von2007tutorial}.
The dimension of the graph subspace is $N$; the dimension of the feature subspace is the number of features $C^{0}<N$ (in our examples); and the dimension of the ground truth subspace is the number of classes~$F<C^{0}<N$.
The pairwise chordal distances $D_{ij}$ in~\eqref{eq:matrix_distance} are computed from the minimal principal angles, whose number corresponds to the smaller of the two dimensions of the subspaces being compared.
Hence the dimensions of the subspaces $(k_X, k_{\widehat{A}}, k_Y)$ need to be defined to compute the distance matrix $D$.
Here, we are interested in finding low dimensional subspaces of features, graph and ground truth with dimensions $(k^*_X, k^*_{\widehat{A}}, k^*_Y)$ such that they provide maximum discriminatory power between the original problem and the fully randomized (null) model.
To do this, we propose the following criterion:
\begin{align}
k^{*}_{Y}&=F \label{eq:dimension_subspaces} \\
(k_{X}^{*},k_{\widehat{A}}^{*})&=
\underset{k_{X},k_{\widehat{A}}}{\arg\max}
\left(\|D(X_{100},\widehat{A}_{100},Y)\|_{\text{F}}-
\|D(X,\widehat{A},Y)\|_{\text{F}}\right). \nonumber
\end{align}
We choose $k^{*}_{Y}$ equal to the number of ground truth classes since they are non-overlapping~\cite{von2007tutorial}.
Our optimization selects $k_{X}^{*}$ and $k_{\widehat{A}}^{*}$ such that the difference in alignment between the original problem with no randomization ($p_X=p_{\widehat{A}}=0$)
and an ensemble of 100 fully randomized (feature and graph, $p_X=p_{\widehat{A}}=100$) problems is maximized (see SI for details on the optimization scheme).
This criterion maximizes the range of values that $||D||_{\text{F}}$ can take, thus augmenting the discriminatory power of the alignment measure when finding the alignment between \textit{both} data sources and the ground truth, beyond what is expected purely at random.
Importantly, the reduced dimensions of features and graph are found simultaneously, since our objective is to quantify the alignment (or amount of shared information) contained in the three subspaces. Our criterion effectively amounts to finding the dimensions of the subspaces that maximize the difference between the areas of the larger (red) and smaller (blue) shaded triangles in Fig.~\ref{fig:pyramid}.
We provide the code to compute our proposed alignment measure at \url{https://github.com/haczqyf/gcn-data-alignment}.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.5\textwidth]{pyramid_MBv4.pdf}
\caption{\textbf{Method to determine relevant subspaces~\eqref{eq:dimension_subspaces}}. Using the constructive example, we illustrate the subspaces representing features, graph and ground truth. The feature and ground truth matrices are decomposed via PCA %
and the graph matrix is similarly eigendecomposed.
Fixing $k_{Y}^{*}=F$, we optimize~\eqref{eq:dimension_subspaces} to find the dimensions $k_{X}^{*}$ and $k_{\widehat{A}}^{*}$ that maximize the difference between the area of the blue triangle, which reflects the alignment of the three subspaces $(X,\widehat{A},Y)$ of the original data, and the area of the red triangle, which corresponds to the alignment of the subspaces $(X_{100},\widehat{A}_{100},Y)$ of the fully randomized data.
The edges of the triangles correspond to the pairwise chordal distances (e.g., the base of the blue triangle corresponds to $d(X,\widehat{A})$).
}
\label{fig:pyramid}
\end{figure}
\section{EXPERIMENTS\label{sec:experiments}}
\subsection{Data Sets\label{sec:dataset}}
Relevant statistics of the data sets, including number of nodes and edges, dimension of feature vectors, and number of ground truth classes, are reported in Table~\ref{table:dataset_statistics}.
\begin{table}[htbp!]
\centering
\resizebox{0.5\textwidth}{!}{
\begin{tabular}{ ccccccc }
\specialrule{.1em}{.05em}{.05em}
\textbf{Data sets} & \textbf{Nodes ($N$)} & \textbf{Edges} & \textbf{Features ($C^0$)} & \textbf{Classes ($F$)}\\
\hline
Constructive & $1,000$ & $6,541$ & $500$ & $10$\\
CORA & $2,485$ & $5,069$ & $1,433$ & $7$\\
AMiner & $2,072$ & $4,299$ & $500$ & $7$\\
Wikipedia & $20,525$ & $215,056$ & $100$ & $12$\\
Wikipedia~\RNum{1} & $2,414$ & $8,163$ & $100$ & $5$\\
Wikipedia~\RNum{2} & $1,858$ & $8,444$ & $100$ & $5$\\
\specialrule{.1em}{.05em}{.05em}
\end{tabular}}
\caption{\textbf{Some statistics of the data sets in our study.}}
\label{table:dataset_statistics}
\end{table}
\subsubsection{Constructive Example}\label{sec:constructive_example}
To illustrate the alignment measure in a controlled setting, we build a constructive example, consisting of $1,000$ nodes assigned to $10$ planted communities $C_1,...,C_{10}$ of equal size.
We then generate both a feature matrix and a graph matrix whose structures are aligned with the ground truth assignment matrix. The graph structure is generated using a stochastic block model that reproduces the ground truth structure with some noise: two nodes are connected with a probability $p_{in}=0.07$ if they belong to the same community $C_i$ and $p_{out}=0.007$ otherwise. The feature matrix is constructed in a similar way. The feature vectors are $500$ dimensional and binary, i.e., a node either possesses a feature or it does not. Each ground truth cluster is associated with $50$ features that are present with a probability of $p_{in}=0.07$. Each node also has a probability $p_{out}=0.007$ of possessing each feature characterizing other clusters. Using the same stochastic block structure for both features and graph ensures that they are maximally aligned with the ground truth. This constructive example is then randomized in a controlled way to detect the loss of alignment and the impact this loss of alignment has on the classification performance.
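The following \texttt{numpy} sketch reproduces this construction (the random seed and variable names are our own).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, C0, F = 1000, 500, 10
p_in, p_out = 0.07, 0.007
labels = np.repeat(np.arange(F), N // F)        # 10 equal communities

# Stochastic block model graph (undirected, no self-loops)
P = np.where(labels[:, None] == labels[None, :], p_in, p_out)
A = (rng.random((N, N)) < P).astype(int)
A = np.triu(A, 1); A = A + A.T

# Binary features with the same block structure: 50 features per cluster
feat_labels = np.repeat(np.arange(F), C0 // F)
Q = np.where(labels[:, None] == feat_labels[None, :], p_in, p_out)
X = (rng.random((N, C0)) < Q).astype(int)

Y = np.eye(F, dtype=int)[labels]                # one-hot ground truth
\end{verbatim}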
\subsubsection{CORA}
The CORA data set is a benchmark for classification algorithms using text and citation data\footnote{https://linqs.soe.ucsc.edu/data}.
Each paper is labeled as belonging to one of $7$ categories (Case\_Based, Genetic\_Algorithms, Neural\_Networks, Probabilistic\_Methods, Reinforcement\_Learning, Rule\_Learning, and Theory), which gives the ground truth $Y$.
The text of each paper is described by a $0/1$ vector indicating the absence/presence of words in a dictionary of $1,433$ unique words, the dimension of the feature space.
The feature matrix $X$ is made from these word vectors.
We extracted the largest connected component of this citation graph (undirected) to form the graph adjacency matrix~$A$.
\subsubsection{AMiner}
For additional comparisons, we produced a new data set with similar characteristics to CORA from the academic citation site AMiner.
AMiner is a popular scholarly social network service for research purposes only~\cite{tang2008arnetminer},
which provides an open database\footnote{https://aminer.org/data} with more than $10$ data sets encompassing researchers, conferences, and publication data.
Among these, the academic social network\footnote{https://aminer.org/aminernetwork} is the largest one and includes information on papers, citations, authors, and scientific collaborations.
In 2012 the China Computer Federation (CCF) released a catalog including $10$ subfields of computer science. Using the AMiner academic social network, Qian~\textit{et al.}~\cite{qian2017citation} extracted $102,887$ papers published from 2010 to 2012,
and mapped each paper to a unique subfield of computer science according to the publication venue.
Here, we use these assigned categories as the ground truth for a classification task.
Using all the papers in~\cite{qian2017citation} that have both abstract and references, we created a data set of similar size to CORA. We extracted the largest connected component from the citation network of all papers in $7$ subfields (Computer systems/high performance computing, Computer networks, Network/information security, Software engineering/software/programming language, Databases/data mining/information retrieval, Theoretical computer science, and Computer graphics/multimedia) from 2010 to 2011. The resulting AMiner citation network consists of $2,072$ papers with $4,299$ edges. Just as with CORA, we treat the citations as undirected edges, and obtain an adjacency matrix $A$. We further extracted the most frequent $500$ stemmed terms from the corpus of abstracts of papers and constructed the feature matrix $X$ for AMiner using bag-of-words.
\subsubsection{Wikipedia}
As a contrasting example, we produced three data sets from the English Wikipedia.
Wikipedia provides an interlinked corpus of documents (articles) in different fields, which `cite' each other via hyperlinks. We first constructed a large corpus of articles, consisting of a mixture of popular and random pages so as to obtain a balanced data set. We retrieved the $5,000$ most accessed articles during the week before the construction of the data set (July 2017), and an additional $20,000$ documents at random using the Wikipedia built-in random function\footnote{https://en.wikipedia.org/wiki/Wikipedia:Random}.
The text and subcategories of each document, together with the names of documents connected to it, were obtained using the Python library~\textit{Wikipedia}\footnote{https://github.com/goldsmith/Wikipedia}.
A few documents (e.g., those with no subcategories)
were filtered out during this process.
We constructed the citation network of the documents retrieved and extracted the largest connected component. The resulting citation network contained $20,525$ nodes and $215,056$ edges. The text content of each document was converted into a bag-of-words representation based on the $100$ most frequent words. To establish the ground truth,
we used $12$ categories from the application programming interface (API) (People, Geography, Culture, Society, History, Nature, Sports, Technology, Health, Religion, Mathematics, Philosophy) and assigned each document to one of them.
As part of our investigation, we split this large Wikipedia data set into two smaller subsets of non-overlapping categories: Wikipedia~\RNum{1}, consisting of Health, Mathematics, Nature, Sports, and Technology; and Wikipedia~\RNum{2}, with the categories Culture, Geography, History, Society, and People.
All six data sets used here can be found at \url{https://github.com/haczqyf/gcn-data-alignment/tree/master/alignment/data}.
\subsection{GCN Architecture, Hyperparameters and Implementation}\label{sec:experiment_setup}
We used the GCN architecture~\cite{kipf2017semi} and implementation\footnote{https://github.com/tkipf/gcn} provided by Kipf and Welling~\cite{kipf2017semi}, and followed closely their experimental setup to train and test the GCN on our data sets.
We used a two-layer GCN as described in Section~\ref{sec:methodGCNs}, trained with Adam~\cite{kingma2014adam} with a maximum number of training iterations (epochs) of $400$, a learning rate of $0.01$, and early stopping with a window size of $100$, i.e., training stops if the validation loss does not decrease for $100$ consecutive epochs.
Other hyperparameters used were: 1) dropout rate: $0.5$; 2) L2 regularization: $5 \times 10^{-4}$; and 3) number of hidden units: $16$. We initialized the weights as described in~\cite{glorot2010understanding}, and accordingly row-normalized the input feature vectors.
For the training, validation and test of the GCN, we used the following split: 1) $5$\% of instances as training set; 2) $10$\% as validation set; and 3) the remaining $85$\% as test set. We used this split for all data sets with the exception of the full Wikipedia data set, where we used: 1) $3.5$\% of instances as training set; 2) $11.5$\% as validation set; and 3) the remaining $85$\% as test set. This modification of the split was necessary to ensure the instances in the training set were evenly distributed across categories.
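As a point of reference, a minimal PyTorch sketch of this two-layer architecture and optimizer configuration is shown below. Initialization, early stopping and the computation of the renormalized adjacency $\widehat{A}$ are omitted, and note that PyTorch's \texttt{weight\_decay} applies the L2 penalty to all parameters, a slight deviation from the original implementation.
\begin{verbatim}
import torch
import torch.nn.functional as Fnn

class TwoLayerGCN(torch.nn.Module):
    """H1 = ReLU(A_hat X W0); logits = A_hat H1 W1."""
    def __init__(self, in_dim, hidden_dim, out_dim, dropout=0.5):
        super().__init__()
        self.W0 = torch.nn.Linear(in_dim, hidden_dim, bias=False)
        self.W1 = torch.nn.Linear(hidden_dim, out_dim, bias=False)
        self.dropout = dropout

    def forward(self, A_hat, X):
        H = torch.relu(A_hat @ self.W0(X))
        H = Fnn.dropout(H, p=self.dropout, training=self.training)
        return A_hat @ self.W1(H)   # softmax is folded into the loss

model = TwoLayerGCN(in_dim=1433, hidden_dim=16, out_dim=7)  # CORA sizes
optimizer = torch.optim.Adam(model.parameters(), lr=0.01,
                             weight_decay=5e-4)
\end{verbatim}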
\section{RESULTS\label{sec:results}}
The GCN performance is evaluated using the standard \textit{classification accuracy} defined as the proportion of nodes correctly classified in the test set.
\subsection{GCN: Original Graph Versus Limiting Cases\label{sec:results_gcn_limitCases}}
For each data set in Table~\ref{table:dataset_statistics}, we trained and tested a GCN with the original graph and features matrices, and GCN models under the three limiting cases described in Section~\ref{sec:limit}.
We computed the average accuracy of $100$ runs with random weight initializations (Table~\ref{table:results_gcn_and_limitCases}).
\begin{table}[htbp!]
\centering
\resizebox{0.75\textwidth}{!}{
\begin{threeparttable}
\begin{tabular}{lcccc}
\specialrule{.1em}{.05em}{.05em}
& \textbf{GCN (original)} & \multicolumn{3}{c}{\textbf{GCN (limiting cases)}}\\
\cmidrule(lr){2-2}\cmidrule(l){3-5}
& & No graph = MLP & No features & Complete graph\\
& & \textit{(Only features)} & \textit{(Only graph)} &
\textit{(Mean field)}\\
& & $A=\mathbf{0}\mathbf{0}^{T}$ & $X=I_N$ & $A=\mathbf{1}\mathbf{1}^{T} - I_{N}$ \\
& \raisebox{-\totalheight}{\includegraphics[width=0.11\textwidth,height=0.12\textwidth]{gcn_ingredients_v4.pdf}} & \raisebox{-\totalheight}{\includegraphics[width=0.11\textwidth,height=0.12\textwidth]{gcn_limit_cases_nograph_v4.pdf}} & \raisebox{-\totalheight}{\includegraphics[width=0.11\textwidth,height=0.12\textwidth]{gcn_limit_cases_nofeatures_v4.pdf}} & \raisebox{-\totalheight}{\includegraphics[width=0.11\textwidth,height=0.12\textwidth]{gcn_limit_cases_completegraph_v4.pdf}}\\
\textbf{Data sets} & & & & \\ %
\hline
Constructive & \textbf{0.932 $\pm$ 0.006} & 0.416 $\pm$ 0.010 & 0.764 $\pm$ 0.009 & 0.100 $\pm$ 0.003 \\
CORA & \textbf{0.811 $\pm$ 0.005} & 0.548 $\pm$ 0.014 & 0.691 $\pm$ 0.006 & 0.121 $\pm$ 0.066 \\
AMiner & \textbf{0.748 $\pm$ 0.005} & 0.547 $\pm$ 0.013 & 0.591 $\pm$ 0.006 & 0.123 $\pm$ 0.045 \\
Wikipedia & 0.392 $\pm$ 0.010 & \textbf{0.450 $\pm$ 0.007} & 0.254 $\pm$ 0.037 & O.O.M. \\
Wikipedia~\RNum{1} & \textbf{0.861 $\pm$ 0.006} & 0.796 $\pm$ 0.005 & 0.824 $\pm$ 0.003 & 0.163 $\pm$ 0.135 \\
Wikipedia~\RNum{2} & 0.566 $\pm$ 0.021 & \textbf{0.659 $\pm$ 0.011} & 0.347 $\pm$ 0.012 & 0.155 $\pm$ 0.176 \\
\specialrule{.1em}{.05em}{.05em}
\end{tabular}
\end{threeparttable}
}
\caption{\textbf{Classification accuracy of GCN and limiting cases for our data sets.}
The best performance is indicated in bold. Error bars are evaluated over $100$ runs. The GCN with original data performs best in most cases, but is outperformed by MLP in the full Wikipedia data set and its subset Wikipedia II. `O.O.M.' denotes out of memory.
}
\label{table:results_gcn_and_limitCases}
\end{table}
The GCN using all the information available in the features and the graph outperforms MLP (the no graph limit) except in the case of the large Wikipedia set.
Hence using the additional information contained in the graph does not necessarily increase the performance of GCN.
To investigate this issue further,
we split the Wikipedia data set into two subsets: Wikipedia~\RNum{1}, with articles in topics that tend to be more self-referential (e.g., Mathematics or Technology) and Wikipedia~\RNum{2}, containing pages in areas that are less self-contained (e.g., Culture or Society). We observed that GCN outperforms MLP for Wikipedia~\RNum{1} but the opposite is still true for Wikipedia~\RNum{2}.
Finally, we also observe that the performance of ``No features'' is always lower than the performance of GCN, and, as expected, the performance of ``Complete graph'' (i.e., mean field) is very low and close to pure chance (i.e., $\sim 1/F$).
\subsection{Performance of GCN Under Randomization\label{sec:results_randomization}}
The results above
lead us to pose the hypothesis that a degree of synergy between features, graph and ground truth is needed for GCN to perform well.
To investigate this hypothesis, we use the randomization schemes described in Section~\ref{sec:randomization_strategy} to degrade
systematically the information content of the graph and/or the features in our data sets.
Fig.~\ref{fig:summary_randomization} presents the performance of the GCN as a function of the percent of randomization of the graph structure, the features, or both.
As expected, the accuracy decreases for all data sets as the information contained in the graph, features or both is scrambled, yet with differences in the decay rate of each of the ingredients for the different examples.
Note that the chance-level performance of the ``Complete graph'' (mean field) limiting case is achieved only when \textit{both} graph and features are fully randomized, whereas the accuracy of the two other limiting cases (``No graph---MLP'', ``No features'') is reached around the half-point ($\sim 50\%$) of randomization of the graph or of the features,
respectively.
This indicates that using the scrambled information above a certain degree of randomization becomes more detrimental to the classification performance than simply ignoring it.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.75\textwidth]{accuracy_vs_percentRandomization_all_US_v2.pdf}
\caption{\textbf{Degradation of classification performance as a function of randomization.} Each panel shows the degradation of the classification accuracy as a function of the randomization of graph, features and both, for a different data set. Error bars are evaluated over $100$ realizations:
for zero percent randomization, we report $100$ runs with random weight initializations; for the rest, we report $1$ run with random weight initializations for $100$ random realizations.
The horizontal lines correspond to the limiting cases in Table~\ref{table:results_gcn_and_limitCases}.
The full Wikipedia data set was not analyzed here
since the eigendecomposition of $\widehat{A}$ needed to obtain $k^{*}_{X}, k^{*}_{\widehat{A}}$ is computationally intensive.
}
\label{fig:summary_randomization}
\end{figure}
\subsection{Relating GCN Performance and Subspace Alignment}
We tested whether the degradation of GCN performance is linked to the increased misalignment of features, graph and ground truth given by the SAM
\begin{equation}
\label{eq:SAM}
\mathcal{S}^*(X,\widehat{A},Y) = \|D(X,\widehat{A},Y;k^{*}_{X}, k^{*}_{\widehat{A}}, k^*_Y) \|_{\text{F}}
\end{equation}
which corresponds to~\eqref{eq:norm_matrix_distance} computed with
the dimensions $(k^{*}_{X}, k^{*}_{\widehat{A}}, k^*_Y)$ obtained using~\eqref{eq:dimension_subspaces} (Table~\ref{table:k_X_k_A}, and see SI for the optimization scheme used).
Fig.~\ref{fig:summary_Frobenius_norm} shows that the GCN accuracy is clearly (anti)correlated with the subspace alignment distance~\eqref{eq:SAM} in all our examples (mean correlation $= -0.92$).
As we randomize the graph and/or features, the subspace misalignment increases and the GCN performance decreases.
In addition to the Chordal distance,~\cite{ye2016schubert} studies other subspace distances. While all the distances can be expressed in terms of the principal angles $\theta_j$, some rely on all the angles whereas others only use the maximum principal angle. We obtain similar results for distances that use all the principal angles (e.g., Chordal, Grassmann), but we find that extremal distances based on the maximum principal angle (e.g., the Projection distance) do not correlate as well with GCN performance. This highlights the importance of the information captured by all principal angles to quantify the alignment between subspaces. For results based on the Grassmann and Projection distances, see Appendix (Section~\RNum{3}) in the Supplementary Material.
\begin{table}[htbp!]
\centering
\resizebox{0.5\textwidth}{!}{
\begin{threeparttable}
\begin{tabular}{cccc}
\specialrule{.1em}{.05em}{.05em}
\textbf{Data sets} & $k^{*}_{X}$ & $~k^{*}_{\widehat{A}}$ & $~k^{*}_{Y}$
\\
\hline
Constructive example & 287 & 10 & 10\\
CORA & 1,291 & 190 & 7\\
AMiner & 500 & 57 & 7\\
Wikipedia~\RNum{1} & 68 & 1,699 & 5\\
Wikipedia~\RNum{2} & 100 & 1,125 & 5\\
\specialrule{.1em}{.05em}{.05em}
\end{tabular}
\end{threeparttable}}
\caption{\textbf{Dimensions of the three subspaces obtained according to~\eqref{eq:dimension_subspaces} for our data sets.}}
\label{table:k_X_k_A}
\end{table}
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.75\textwidth]{accuracy_vs_distance_all_US_v2.pdf}
\caption{\textbf{Classification performance versus the subspace alignment measure (SAM).} Each panel shows the accuracy of GCN versus the SAM~\eqref{eq:SAM} for all the runs presented in Fig.~\ref{fig:summary_randomization}.
Error bars are evaluated over $100$ randomizations.}
\label{fig:summary_Frobenius_norm}
\end{figure}
\section{DISCUSSION\label{sec:discussion}}
Our first set of experiments (see Table~\ref{table:results_gcn_and_limitCases}) reflects the varying amount of information that GCN can extract from features, graph and their combination, for the purpose of classification.
For a classifier to perform well, it is necessary to find (possibly nonlinear) combinations of features that map differentially and distinctively onto the categories of the ground truth.
The larger the difference (or distance on the projected space) between the samples of each category, the easier it is to ``separate'' them, and the better the classifier.
In the MLP setting, for instance, the weights between layers ($W^\ell$) are trained to maximize this separation. As seen from the different accuracies in the ``No graph'' column (Table~\ref{table:results_gcn_and_limitCases}), the features of each example contain a variable amount of information that is mappable onto its ground truth.
A similar reasoning applies to classification based on graph information alone, but in this case, it is the eigenvectors of $\widehat{A}$ that need to be combined to produce distinguishing features between the categories in the ground truth
(e.g., if the graph substructures across scales~\cite{lambiotte2015random} do not map onto the separation lines of the ground truth categories, then the classification performance based on the graph will deteriorate).
The accuracy in the ``No features'' column indicates that some of the graphs contain more congruent information with the ground truth than others.
Therefore, the ``No graph'' and ``No features'' limiting cases inform about the relative congruence of each type of information with respect to the ground truth. One can then conjecture that if the performance of the ``No features'' case is higher than the ``No graph'' case, GCN will yield better results than MLP.
In addition, our numerics show that although combining both sources of information generally leads to improved classification performance (``GCN original'' column in Table~\ref{table:results_gcn_and_limitCases}),
this is \textit{not} always necessarily the case.
Indeed, for the Wikipedia and Wikipedia~\RNum{2} examples, the classification performance of the MLP (``No graph''), which is agnostic to relationships between samples, is better than when the additional layer of relational information about the samples (i.e., the graph) is incorporated via the GCN architecture.
This suggests that, for improved GCN classification, the information contained in features and graph needs to be constructively aligned with the ground truth.
This phenomenon can be intuitively understood as follows.
In the absence of a graph (i.e., the MLP setting), the training of the layer weights is done independently over the samples, without assuming any relationship between them.
In GCN, on the other hand, the role of the graph is to guide the training of the weights %
by averaging the features of a node with those of its graph neighbors.
The underlying assumption is that the relationships represented by the graph should be consistent with the information of the features, i.e., the features of nodes that are graph neighbors are expected to be more similar than otherwise; hence the training process is biased towards convolving the diffusing information on the graph to extract improved feature descriptions for the classifier.
However, if feature similarities and graph neighborhoods (or more generally, graph communities~\cite{lambiotte2015random}) are not congruent, this graph-based averaging during the training is not beneficial.
To explore this issue in a controlled fashion,
our second set of experiments (Fig.~\ref{fig:summary_randomization}) studied the degradation of the classification performance induced by the systematic randomization of graph structure and/or features.
The erosion of information is not uniform across our examples, reflecting the relative salience of each of the components (features and graph) for classification.
Note that the GCN is able to leverage the information present in any of the two components, and is only degraded to chance-level performance when \textit{both} graph and features are fully randomized.
Interestingly, this fully randomized (chance-level) performance coincides with that of the ``Complete graph'' (or mean field) limiting case, where the classifier is trained on features averaged over all the samples, thus leading to a uniform representation that has zero discriminating power when it comes to category assignment.
These results suggest that a degree of constructive alignment between the matrices of features, graph and ground truth
$(X, \widehat{A},Y)$
is necessary for GCN to operate successfully beyond standard classifiers. To capture this idea, we proposed a simple SAM~\eqref{eq:SAM} that uses the minimal principal angles to capture the consistency of pairwise projections between subspaces. Fig.~\ref{fig:summary_Frobenius_norm} shows that SAM correlates well with the classification performance and captures the monotonic dependence remarkably well, given that SAM is a simple linear measure applied to the outcome of a highly non-linear, optimized system.
The results are consistent for other versions of GCN. In particular, in the Supplementary Material (Section~\RNum{2}) we show that the alignment measure correlates well with the performance of the recently proposed Simple Graph Convolution (SGC)~\cite{wu2019simplifying}.
The alignment measure can be used to evaluate the relative importance of features and graph for classification without explicitly running the GCN, by comparing the SAM under full randomization of features against the SAM under full randomization of the graph. If $\mathcal{S}^*(X_{100},\widehat{A},Y) > \mathcal{S}^*(X,\widehat{A}_{100},Y)$, the features play a more important role in GCN classification.
Conversely, if $\mathcal{S}^*(X_{100},\widehat{A},Y) < \mathcal{S}^*(X,\widehat{A}_{100},Y)$, the graph is more important in GCN classification. While we have focused here on node classification, it would be interesting in future work to extend our measure to other tasks such as graph classification, link prediction, and regression.
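Schematically, this comparison can be carried out with the \texttt{sam} sketch above together with two hypothetical helpers, \texttt{basis} (returning the leading principal components or eigenvectors of a matrix) and \texttt{randomize\_graph} (the graph analogue of \texttt{randomize\_features}); none of these names are part of the released code.
\begin{verbatim}
B_X = basis(X, k_X_star)            # truncated subspace bases
B_A = basis(A_hat, k_A_star)
B_Y = basis(Y, k_Y_star)

S_x100 = sam(basis(randomize_features(X, 100), k_X_star), B_A, B_Y)
S_a100 = sam(B_X, basis(randomize_graph(A_hat, 100), k_A_star), B_Y)

if S_x100 > S_a100:
    print("features matter more for GCN classification")
else:
    print("the graph matters more for GCN classification")
\end{verbatim}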
\section{CONCLUSION\label{sec:conclusion}}
Here, we have introduced SAM~\eqref{eq:SAM}, a measure that quantifies the consistency between the feature and graph ingredients of data sets, and we showed that it correlates well with the classification performance of GCNs. Our experiments show that a degree of alignment is needed for a GCN approach to be beneficial, and that using a GCN can actually be detrimental to the classification performance if the feature and graph subspaces associated with the data are not constructively aligned (e.g., Wikipedia and Wikipedia II). More generally, the SAM has potentially a wider range of applications in the quantification of data alignment including, among others: quantifying the alignment of different graphs associated with, or obtained from, particular data sets; evaluating the quality of classifications found using unsupervised methods; and aiding in choosing the classifier architecture most advantageous computationally for a particular data set.
Our approach has a number of limitations that could be addressed in future work. First, it contains two parameters (i.e., the dimensions of the subspaces, $k_{X}^{*}$ and $k_{\widehat{A}}^{*}$) which need to be tuned through a computational search. Second, the alignment is not directly comparable across data sets since the subspace dimensions are adjusted for each data set. To facilitate comparisons across data sets, normalized versions of the alignment measure will be the object of future work. Third, the current measure is not suitable for very large data sets as the eigendecomposition of large matrices is computationally demanding. For very large data sets, approximations (e.g., using the Lanczos algorithm to explore only leading eigenvectors) might be necessary to optimize the subspace dimensions.
\section*{ACKNOWLEDGEMENT}
The work of Yifan Qian was supported by the China Scholarship Council Program under Grant 201706020176. The work of Paul Expert was supported in part by the National Institute for Health Research (NIHR) Imperial Biomedical Research Centre (BRC) under Grant NIHR-BRC-P68711 and in part by the Engineering and Physical Sciences Research Council (EPSRC) Centre for Mathematics of Precision Healthcare under Grant EP/N014529/1. The work of Mauricio Barahona was supported by the EPSRC Centre for Mathematics of Precision Healthcare under Grant EP/N014529/1.
\section*{BIOGRAPHY}
Yifan Qian received the B.Sc. degree in information and computing science and the M.Sc. degree in computer science from Beihang University, Beijing, China, in 2014 and 2017, respectively. He is currently pursuing the Ph.D. degree with the Queen Mary University of London, London, U.K. His research interest is broadly concerned with computational social science and combines theories and methods from network science, sociology, machine learning, and data science.
Paul Expert received the Ph.D. degree in physics from Imperial College London, London, U.K., in 2012. He was in Neuroimaging at King's College London, London, and Mathematics at Imperial College London. He is currently a Research Associate with the Global Digital Health Unit, Imperial College London, and a Visiting Associate Professor with the Tokyo Institute of Technology, Tokyo, Japan. His research interest is concerned with understanding the interaction between the structure and function of complex systems, with applications ranging from neuroscience to public health.
Tom Rieu received the M.Sc. degree in Engineering from the Engineering School CentraleSupélec Paris, France, and the M.Sc. degree in applied mathematics from the Imperial College London, London, U.K., both in 2017. He is currently a Data Scientist with Facebook, London, U.K. His academic research was focused on machine learning and particularly the application of deep models to networks. Currently, his work as part of the Facebook Product team is concerned with producing data science insights to drive improvements on the core and business products of Facebook’s online advertising platform and retargeting technology.
Pietro Panzarasa received the Ph.D. degree from Bocconi University Milan, Italy, in 2000.
He is a Professor of Networks and Innovation with the School of Business and Management, Queen Mary University of London, London, U.K. He became a Research Fellow with the University of Southampton, Southampton, U.K. He also held visiting positions at Columbia University, New York, NY, USA, and Carnegie Mellon University, Pittsburgh, PA, USA. He draws on network science, computational social science, and
big data analytics to study social capital and dynamics of social interaction in complex large-scale networks.
Mauricio Barahona received the Ph.D. degree from the Massachusetts Institute of Technology, Cambridge, MA, USA, in 1996. He is a Professor with the Department of Mathematics and the Director of the EPSRC Centre for Mathematics of Precision Healthcare, Imperial College London, London, U.K. He held Fellowships with Stanford University, Stanford, CA, USA, and the California Institute of Technology, Pasadena, CA. He is broadly interested in applied mathematics in engineering, physical, social, and biological systems using methods from graph theory, stochastic processes, dynamical systems, and machine learning.
\section{Finding optimal dimensions}
A key element of the subspace alignment measure described in the main paper is to find lower dimensional representations of the graph, features and ground truth.
To determine the dimension of the representative subspaces, we propose the following heuristic:
\begin{align}
(k_{X}^{*},k_{\widehat{A}}^{*})&=
\underset{k_{X},k_{\widehat{A}}}{\arg\max}
\left(\|D(X_{100},\widehat{A}_{100},Y)\|_{\text{F}}-
\|D(X,\widehat{A},Y)\|_{\text{F}}\right).
\label{eq:dimension_subspaces_SI} %
\end{align}
We choose $k_{Y}^{*}$ to be equal to the number of categories in the ground truth as they are non-overlapping. Thus, $k_{X}$ and $k_{\widehat{A}}$ range from $k_{Y}^{*}$ to their maximum values: $C^{0}$, the dimension of the feature vectors, and $N$, the number of nodes in the graph, respectively.
To find the values of $k_{X}^{*}$ and $k_{\widehat{A}}^{*}$, we scan different combinations of $k_{X}$ and $k_{\widehat{A}}$ over two rounds. In the first scanning round, we picked $10$ equally spaced values in each interval, including the minimum and maximum possible values of $k_{X}$ and $k_{\widehat{A}}$. For example, in CORA, $k_{Y}^{*}$ equals $7$ because the number of categories in the ground truth is $7$; thus $k_{X}$ ranges from $7$ to $1,433$.
At the end of the first round, the optimal values of $k_{X}^{*}$ and $k_{\widehat{A}}^{*}$ are $1,433$ and $282$, respectively (see Fig.~\ref{CORA: round 1}).
In the second scanning round, we applied a similar process. We set the scanning intervals of $k_{X}$ and $k_{\widehat{A}}$ as the neighborhoods of the $k_{X}^{*}$ and $k_{\widehat{A}}^{*}$ found in the first round. For example, in CORA, for the second round we set the intervals of $k_{X}$ and $k_{\widehat{A}}$ as $[1{,}274,\, 1{,}433]$ and $[7,\, 557]$, respectively. Again, we split the new intervals into $10$ equally spaced values.
The scanning results for the other data sets are shown in Fig.~\ref{fig:summary_scanning}.
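A sketch of one scanning round is given below; the callable \texttt{objective} is assumed to evaluate the criterion in~\eqref{eq:dimension_subspaces_SI} for given candidate dimensions, and is not part of the released code.
\begin{verbatim}
import numpy as np

def scan_round(k_x_grid, k_a_grid, objective):
    """Return the (k_X, k_A) pair maximizing the criterion on a grid."""
    best, best_val = None, -np.inf
    for k_x in k_x_grid:
        for k_a in k_a_grid:
            val = objective(k_x, k_a)
            if val > best_val:
                best, best_val = (int(k_x), int(k_a)), val
    return best

# Round 1 on CORA: 10 equally spaced values spanning the full ranges
k_x_grid = np.linspace(7, 1433, 10, dtype=int)
k_a_grid = np.linspace(7, 2485, 10, dtype=int)
# Round 2 repeats the scan on a refined grid around the round-1 optimum.
\end{verbatim}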
\begin{figure}[H]
\centering
\begin{subfigure}[c]{.27\textwidth}
\includegraphics[width=1.0\textwidth]{heatmap_simulation_1.pdf}
\caption{Constructive example: round 1}
\end{subfigure}
~
\begin{subfigure}[c]{.27\textwidth}
\includegraphics[width=1.0\textwidth]{heatmap_simulation_2.pdf}
\caption{Constructive example: round 2}
\end{subfigure}
\begin{subfigure}[c]{.27\textwidth}
\includegraphics[width=1.0\textwidth]{heatmap_cora_1.pdf}
\caption{CORA: round 1}
\label{CORA: round 1}
\end{subfigure}
~
\begin{subfigure}[c]{.27\textwidth}
\includegraphics[width=1.0\textwidth]{heatmap_cora_2.pdf}
\caption{CORA: round 2}
\label{CORA: round 2}
\end{subfigure}
\begin{subfigure}[c]{.27\textwidth}
\includegraphics[width=1.0\textwidth]{heatmap_aminer_1.pdf}
\caption{AMiner: round 1}
\end{subfigure}
~
\begin{subfigure}[c]{.27\textwidth}
\includegraphics[width=1.0\textwidth]{heatmap_aminer_2.pdf}
\caption{AMiner: round 2}
\end{subfigure}
\begin{subfigure}[c]{.27\textwidth}
\includegraphics[width=1.0\textwidth]{heatmap_wikipedia6_1.pdf}
\caption{Wikipedia~\RNum{1}: round 1}
\end{subfigure}
~
\begin{subfigure}[c]{.27\textwidth}
\includegraphics[width=1.0\textwidth]{heatmap_wikipedia6_2.pdf}
\caption{Wikipedia~\RNum{1}: round 2}
\end{subfigure}
\begin{subfigure}[c]{.27\textwidth}
\includegraphics[width=1.0\textwidth]{heatmap_wikipedia4_1.pdf}
\caption{Wikipedia~\RNum{2}: round 1}
\end{subfigure}
~
\begin{subfigure}[c]{.27\textwidth}
\includegraphics[width=1.0\textwidth]{heatmap_wikipedia4_2.pdf}
\caption{Wikipedia~\RNum{2}: round 2}
\end{subfigure}
\caption{Summary of results on scanning subspaces.}
\label{fig:summary_scanning}
\end{figure}
\section{Replicating experiments on a variant of Graph Convolutional Networks}
First, we highlight that the alignment metric is independent of the architecture and relies only on the data. The convolution operation in the GCN of Kipf and Welling can be seen as a neighborhood aggregation or message passing scheme, and the many variants that build on it can ultimately be expressed as such schemes as well. We therefore expect our hypothesis, namely that a certain degree of alignment between the ingredients is needed for good performance, to hold for these variants, since their working principles are similar to those of the original version we consider. To substantiate this claim, we have replicated our experiments using a recently proposed variant of GCN: the Simple Graph Convolution (SGC) of Wu~\textit{et al.}~\cite{wu2019simplifying}. SGC is a simplified version of the original GCN of Kipf and Welling that removes nonlinearities and collapses weight matrices between consecutive layers. It has been shown that SGC can achieve competitive performance on node classification tasks while yielding up to several orders of magnitude speedup.
We use the implementation provided by Pytorch Geometric\footnote{https://github.com/rusty1s/pytorch\_geometric/blob/master/examples/sgc.py}, which is a popular geometric deep learning extension library for PyTorch. Our results on SGC are shown below in Fig.~\ref{fig:summary_randomization_sgc_SI} and Fig.~\ref{fig:summary_Frobenius_norm_sgc_SI}. The figure suggests that results based on SGC are consistent with those produced using GCN (Kipf and Welling).
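For completeness, a minimal usage sketch of PyTorch Geometric's \texttt{SGConv} layer is shown below; the hyperparameters are illustrative, not those of our runs.
\begin{verbatim}
import torch
from torch_geometric.nn import SGConv

# SGC collapses K propagation steps and a single linear layer
model = SGConv(in_channels=1433, out_channels=7, K=2, cached=True)
logits = model(x, edge_index)  # x: node features, edge_index: graph edges
\end{verbatim}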
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.75\textwidth]{accuracy_vs_percentRandomization_all_US_sgc.pdf}
\caption{\textbf{Degradation of the classification performance as a function of randomization with SGC.} Each panel shows the degradation of the classification accuracy as a function of the randomization of graph, features and both, for a different data set. Error bars are evaluated over $100$ realizations:
for zero percent randomization, we report $100$ runs with random seeds; for the rest, we report $1$ run with random seed for $100$ random realizations. The horizontal lines correspond to the limiting cases.
}
\label{fig:summary_randomization_sgc_SI}
\end{figure}
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.75\textwidth]{accuracy_vs_distance_all_US_sgc.pdf}
\caption{\textbf{Classification performance versus the subspace alignment measure (SAM) with SGC.} Each panel shows the accuracy of SGC versus the SAM for all the runs presented in Fig.~\ref{fig:summary_randomization_sgc_SI}. Error bars are evaluated over $100$ randomizations.}
\label{fig:summary_Frobenius_norm_sgc_SI}
\end{figure}
\section{Choices of distance measures}
Within the distances discussed by Ye and Lim~\cite{ye2016schubert}, there are two `families':
\begin{enumerate}
\item average distances that use \textit{all} the principal angles, e.g., the Chordal distance, $\left(\sum_{j=1}^{\alpha} \sin ^{2} \theta_{j}\right)^{1 / 2}$, and the Grassmann distance, $\left(\sum_{j=1}^{\alpha} \theta^{2}_{j}\right)^{1 / 2}$.
\item extremal distances that use only the maximum principal angle between two subspaces, e.g., the Projection distance, $\sin \theta_{\alpha}$, where $\theta_\alpha$ is the maximum angle.
\end{enumerate}
Our numerics show that average distances (the first family) display similar performance, as they all leverage information from the full set of principal angles, and hence produce results similar to those obtained with the chordal distance.
To show this, we have replicated our experiments using the Grassmann distance (see Fig.~\ref{fig:summary_Frobenius_norm_grassmann_SI} below). The results are consistent with those produced with the chordal distance.
On the other hand, we expect that extremal distances (the second family) will have less expressive power to capture the alignment between subspaces, since they use solely the maximum principal angle and do not consider the information contained in the other principal angles. To demonstrate this point, we replicated our experiments with the Projection distance (see Fig.~\ref{fig:summary_Frobenius_norm_projection_SI} below). Our results show that the Projection distance is indeed less effective than the Chordal distance in representing the alignment between subspaces.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.75\textwidth]{accuracy_vs_distance_all_US_grassmann_v2.pdf}
\caption{\textbf{Classification performance versus the subspace alignment measure (SAM) with Grassmann distance.} Each panel shows the accuracy of GCN versus the SAM. Error bars are evaluated over $100$ randomizations.}
\label{fig:summary_Frobenius_norm_grassmann_SI}
\end{figure}
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.75\textwidth]{accuracy_vs_distance_all_US_projection_v2.pdf}
\caption{\textbf{Classification performance versus the subspace alignment measure (SAM) with Projection distance.} Each panel shows the accuracy of GCN versus the SAM. Error bars are evaluated over $100$ randomizations.}
\label{fig:summary_Frobenius_norm_projection_SI}
\end{figure}
\end{document}
\label{Introduction}
In a series of influential papers, \cite{Abadie2003}, \cite{Abadie2010}, and \cite{Abadie2015} proposed the Synthetic Control (SC) method as an alternative to estimate treatment effects in comparative case studies when there is only one treated unit. The main idea of the SC method is to use the pre-treatment periods to estimate weights such that a weighted average of the control units reconstructs the pre-treatment outcomes of the treated unit, and then use these weights to compute the counterfactual of the treated unit in case it were not treated.
According to \cite{Athey_Imbens}, \textit{``the simplicity of the idea, and the obvious improvement over the standard methods, have made this a widely used method in the short period of time since its inception''}, making it ``\emph{arguably the most important innovation in the policy evaluation literature in the last 15 years}''. As one of the main advantages that helped popularize the method, \cite{Abadie2010} derive conditions under which the SC estimator would allow confounding unobserved characteristics with time-varying effects, as long as there exist weights such that a weighted average of the control units fits the outcomes of the treated unit for a long set of pre-intervention periods.
In this paper, we analyze, in a linear factor model setting, the properties of the SC and other related estimators when the pre-treatment fit is imperfect.\footnote{We refer to ``imperfect pre-treatment fit'' as a setting in which the existence of weights such that a weighted average of the outcomes of the control units perfectly fits the outcome of the treated unit for all pre-treatment periods is not assumed. The perfect pre-treatment fit condition is presented in equation 2 of \cite{Abadie2010}. } In a model with ``stationary'' common factors and a fixed number of control units ($J$), we show that the SC weights converge in probability to weights that do \textit{not}, in general, reconstruct the factor loadings of the treated unit when the number of pre-treatment periods ($T_0$) goes to infinity.\footnote{We focus on the SC specification that uses the outcomes of all pre-treatment periods as predictors. Specifications that use the average of the pre-treatment outcomes and other covariates as predictors are also considered in Appendix \ref{A_alternatives}. }
This happens because, in this setting, the SC weights converge to weights that simultaneously attempt to match the factor loadings of the treated unit \textit{and} to minimize the variance of a linear combination of the idiosyncratic shocks. Therefore, weights that reconstruct the factor loadings of the treated unit are not generally the solution to this problem, even if such weights exist. While in many applications $T_0$ may not be large enough to justify large-$T_0$ asymptotics (e.g. \cite{Doudchenko}), our results can also be interpreted as the SC weights not converging to weights that reconstruct the factor loadings of the treated unit, when the pre-treatment fit is imperfect, \emph{even when $T_0$ is large}.
As a consequence, the SC estimator is, in this setting with an imperfect pre-treatment fit, biased if treatment assignment is correlated with the unobserved heterogeneity, even when the number of pre-treatment periods goes to infinity. The intuition is the following: if treatment assignment is correlated with common factors in the post-treatment periods, then we would need a SC unit that is affected in exactly the same way by these common factors as the treated unit, but did not receive the treatment. This would be attained with weights that reconstruct the factor loadings of the treated unit. However, since the SC weights do not converge to weights that satisfy this condition when the pre-treatment fit is imperfect, the distribution of the SC estimator will still depend on the common factors, implying a biased estimator when selection depends on the unobserved heterogeneity.\footnote{ \cite{Rothstein} derive finite-sample bounds on the bias of the SC estimator, and show that the bounds they derive do not converge to zero when $J$ is fixed and $T_0 \rightarrow \infty$. This is consistent with our results, but does not directly imply that the SC estimator is asymptotically biased when $J$ is fixed and $T_0 \rightarrow \infty$. In contrast, our result on the asymptotic bias of the SC estimator implies that it would be impossible to derive bounds that converge to zero in this case. Moreover, we show the conditions under which the estimator is asymptotically biased. } Our results are not as conflicting with the results from \cite{Abadie2010} as they might appear at first glance. The asymptotic bias of the SC estimator, in our framework, goes to zero when the variance of the idiosyncratic shocks is small. This is the case in which one should expect to have a close-to-perfect pre-treatment match when $T_0$ is large, which is the setting the SC estimator was originally designed for. Our theory complements the theory developed by \cite{Abadie2010}, by considering the properties of the SC estimator when the pre-treatment fit is imperfect.
The asymptotic bias we derive for the SC estimator does not rely on the fact that the SC unit is constrained to convex combinations of control units, so it also applies to other related panel data approaches that have been studied in the context of an imperfect pre-treatment fit, such as \cite{Hsiao}, \cite{Li}, \cite{Carvalho2015}, \cite{Carvalho2016b}, and \cite{Masini}. We show that these papers rely on assumptions that essentially exclude the possibility of selection on unobservables.\footnote{ \cite{2018arXiv181210820C} and \cite{Zhou} suggest alternative estimators and analyze their properties in a setting with both large $J$ and $T$. As we explain in more detail in Section \ref{Alternatives}, they also rely on assumptions that essentially exclude the possibility of selection on unobservables. Since they consider a setting with both large $J$ and $T$, however, it is possible that their estimators are consistent when there is selection on unobservables under conditions similar to the ones considered by \cite{Ferman}. } Therefore, an important contribution of our paper is to clarify what selection on unobservables means in this setting, and to show that these estimators are generally biased if treatment assignment is correlated with the unobserved heterogeneity.
One important implication of the SC restriction to convex combinations of the control units is that the SC estimator, in this setting with an imperfect pre-treatment fit, may be biased even if treatment assignment is only correlated with time-invariant unobserved variables, which is essentially the identification assumption of the difference-in-differences (DID) estimator. We therefore consider a modified SC estimator, where we demean the data using information from the pre-intervention period, and then construct the SC estimator using the demeaned data.\footnote{Demeaning the data before applying the SC estimator is equivalent to relaxing the non-intercept constraint, as suggested, in parallel to our paper, by \cite{Doudchenko}. We formally analyze the implications of this modification for the bias of the SC estimator. The estimator proposed by \cite{Hsiao} relaxes not only the non-intercept but also the adding-up and non-negativity constraints. } An advantage of demeaning is that it is possible to, under some conditions, show that the SC estimator dominates the DID estimator in terms of variance and bias in this setting.\footnote{We also provide in Appendix \ref{IV} an instrumental variables estimator for the SC weights that generates an asymptotically unbiased SC estimator under additional assumptions on the error structure, which would be valid if, for example, the idiosyncratic error is serially uncorrelated \textit{and} all the common factors are serially correlated. The idea behind this strategy is similar to the strategy outlined by \cite{heckman}. }
Finally, we consider the properties of the SC and related estimators in a model with a combination of $I(1)$ common factors and/or deterministic polynomial trends, in addition to $I(0)$ common factors. We show that, in this setting, the demeaned SC weights converge to weights that reconstruct the factor loadings associated with the non-stationary common trends of the treated unit, but generally fail to reconstruct the factor loadings associated with the $I(0)$ common factors.\footnote{We assume the existence of weights that perfectly reconstruct the factor loadings of the treated unit associated with the non-stationary trends. In a setting with $I(1)$ common factors, this is equivalent to assuming that the vector of outcomes is cointegrated. If there were no set of weights satisfying this condition, then the asymptotic distribution of the SC estimator would depend on the non-stationary common trends.
} Therefore, non-stationary common trends will not generate asymptotic bias in the demeaned SC estimator, but we need that treatment assignment is uncorrelated with the $I(0)$ common factors to guarantee asymptotic unbiasedness. Given that, we recommend that researchers applying the SC method should \textit{also} assess the pre-treatment fit of the SC estimator after de-trending the data.
If potential outcomes follow a linear factor model structure, then it would be possible to construct a counterfactual for the treated unit if we could consistently estimate the factor loadings.\footnote{Assuming that it is possible to construct a linear combination of the factor loadings of the control units that reconstructs the factor loadings of the treated unit, then this linear combination of the control units' outcomes would provide an unbiased counterfactual for the treated unit. } However, with fixed $J$, it is only possible to estimate factor loadings consistently under strong assumptions on the idiosyncratic shocks (e.g., \cite{Bai2003} and \cite{anderson1984introduction}). Therefore, the asymptotic bias we find for the SC estimator is consistent with the results from a large literature on factor models. We also revisit the conditions for validity of alternative estimators, such as the ones proposed by \cite{Hsiao}, \cite{Li}, \cite{Carvalho2015} and \cite{Carvalho2016b}. We show that these papers rely on assumptions that implicitly imply no selection on unobservables, which clarifies why their consistency/unbiasedness results when $J$ is fixed do not contradict the literature on factor models. Also consistent with the literature on factor models, if we impose restrictions on the idiosyncratic shocks, then there are asymptotically unbiased alternatives. For example, the de-noising algorithm suggested by \cite{Robust_SC} and the IV-like SC estimator we present in Appendix \ref{IV} would be valid if the transitory shocks are independent across units and time. However, this may not be an appealing assumption for the idiosyncratic shocks in common applications. Finally, \cite{Powell2} proposes a 2-step estimation in a setting with fixed $J$ in which the SC unit is constructed based on the fitted values of the outcomes on unit-specific time trends. However, we show that the demeaned SC method is already very efficient in controlling for polynomial time trends, so the possibility of asymptotic bias in the SC estimator would come from correlation between treatment assignment and common factors beyond such time trends, which would not generally be captured in this strategy.
When both $J$ and $T_0$ diverge, \cite{Magnac}, \cite{GSC}, \cite{Imbens_matrix}, and \cite{SDID} provide alternative estimation methods that are asymptotically valid when the number of both pre-treatment periods and controls increase. This is also consistent with the literature on linear factor models, which shows that these models can be consistently estimated in large panels (e.g., \cite{Bai2003}, \cite{baing}, \cite{Bai}, and \cite{Martin}). \cite{Ferman} provides conditions under which the original SC estimator is also asymptotically unbiased in this setting with large $J$/large $T_0$. He shows that the main requirement for this result is that, as the number of control units increases, there are weights diluted among an increasing number of control units that recover the factor loadings of the treated unit. In contrast, our results on the bias of the SC estimator provides a better approximation for the properties of the SC estimator for cases in which this condition is not valid, and/or when $J$ and $T_0$ are roughly of the same size, but they are not large enough, so that a large $T_0$/large $J$ asymptotics does not provide a good approximation.
The remainder of this paper proceeds as follows. We start Section \ref{SC_model} with a brief review of the SC estimator. We highlight in this section that we rely on different assumptions relative to \cite{Abadie2010}.
In Section \ref{Setting1}, we show that, in a model in which pre-treatment averages of the first and second moments of the common factors converge, the SC estimator is, in our framework, generally asymptotically biased if treatment assignment is correlated with the unobserved heterogeneity.
In Section \ref{DID}, we contrast the SC estimator with the DID estimator, and consider the demeaned SC estimator. In Section \ref{Alternatives}, we show that our main results also apply to other related panel data approaches that have been considered in the literature. In Section \ref{Setting2}, we consider a setting in which pre-treatment averages of the common factor diverge. In Section \ref{particular_model}, we present a particular class of linear factor models in which we consider the asymptotic properties of the SC estimator, and MC simulations with finite $T_0$. We conclude in Section \ref{Conclusion}.
\section{Base Model} \label{SC_model}
Suppose we have a balanced panel of $J+1$ units indexed by $j = 0,...,J$ observed over a total of $T$ periods. We want to estimate the treatment effect of a policy change that affected only unit $j=0$, and we have information before and after the policy change. Let $T_0$ be the number of pre-intervention periods. Since we want to consider the asymptotic behavior of the SC estimator when $T_0 \rightarrow \infty$, we label the periods as $t \in \{ -T_0+1,...,-1,0,1,...,T_1 \}$, where $T_1 = T-T_0$ is the total number of post-treatment periods. Let $\mathcal{T}_0$ ($\mathcal{T}_1$) be the set of time indices in the pre-treatment (post-treatment) periods. We assume that potential outcomes follow a linear factor model.
\begin{assumption}[potential outcomes] \label{assumption_LFM}
\normalfont
Potential outcomes when unit $j$ at time $t$ is treated ($y_{jt}^I$) and non-treated ($y_{jt}^N$) are given by
\begin{eqnarray} \label{model}
\begin{cases} y_{jt}^N = \delta_t + \lambda_t \mu_j + \epsilon_{jt} \\
y_{jt}^I = \alpha_{jt} + y_{jt}^N, \end{cases}
\end{eqnarray}
where $\delta_t$ is an unknown common factor with constant factor loadings across units, $\lambda_t$ is a $(1 \times F)$ vector of common factors, $\mu_j$ is a $(F \times 1)$ vector of unknown factor loadings, and the error terms $\epsilon_{jt}$ are unobserved idiosyncratic shocks.\footnote{In principle, the term $\delta_t$ could be included in $\lambda_t$ as a common factor with constant factor loading across units. We include that separately because we want to consider $\lambda_t$ as a vector of common factors that do not have constant effects across units. }
\end{assumption}
The treatment effect on unit $j$ at time $t$ is given by $\alpha_{jt}$. We only observe $y_{jt} = d_{jt} y_{jt}^I + (1-d_{jt}) y_{jt}^N$, where $d_{jt}=1$ if unit $j$ is treated at time $t$.
Since we hold the number of units ($J+1$) fixed and look at asymptotics when the number of pre-treatment periods goes to infinity, we treat the vector of unknown factor loadings ($\mu_j$) as fixed and the common factors ($\lambda_t$) as random variables. Alternatively, we can think that all results are conditional on $\{ \mu_j \}_{j=0}^J$. In order to simplify the exposition of our main results, we consider the model without observed covariates $Z_j$. In Appendix Section \ref{theta} we consider the model with covariates. The main goal of the SC method is to estimate the effect of the treatment for unit 0 for each post-treatment period $t$, that is, $\{ \alpha_{01},...,\alpha_{0T_1} \}$.
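To fix ideas, the following is a minimal simulation sketch of a data-generating process consistent with Assumption \ref{assumption_LFM}; all dimensions, parameter values, and the constant treatment effect for unit 0 are hypothetical choices for illustration only:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
J, T0, T1, F = 10, 50, 10, 2             # hypothetical dimensions
T = T0 + T1
mu = rng.normal(size=(J + 1, F))         # fixed factor loadings; row 0 is unit 0
delta = rng.normal(size=T)               # common time effects delta_t
lam = rng.normal(size=(T, F))            # common factors lambda_t
eps = 0.5 * rng.normal(size=(T, J + 1))  # idiosyncratic shocks epsilon_jt
alpha = 1.0                              # hypothetical constant treatment effect

yN = delta[:, None] + lam @ mu.T + eps   # untreated potential outcomes y^N
y = yN.copy()
y[T0:, 0] += alpha                       # unit 0 treated in the last T1 periods
\end{verbatim}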
Since the SC estimator is only well defined if one unit actually received treatment in a given period, all results of the paper are conditional on this event. Let $D(j,t)$ be a dummy variable equal to $1$ if unit $j$ starts to be treated after period $t$, while all other units do not receive treatment. Without loss of generality, we consider a realization of the data in which unit 0 is treated and treatment starts after $t=0$, so $D(0,0)=1$. We consider a repeated sampling framework over the distributions of $\epsilon_{jt}$ and $\lambda_t$, conditional on $D(0,0)=1$.
Assumption \ref{assumption_sample} defines the sample a researcher observes in a SC application.
\begin{assumption}[conditional sample] \label{assumption_sample}
\normalfont We observe a realization of $\{ y_{0t} ,..., y_{Jt} \}_{t=-T_0 + 1}^{T_1}$ conditional on $D(0,0)=1$.
\end{assumption}
We also impose that the treatment assignment is not informative about the first moment of the idiosyncratic shocks.
\begin{assumption}[idiosyncratic shocks] \label{assumption_exogeneity}
\normalfont $\mathbb{E}[\epsilon_{jt} | D(0,0)=1]=\mathbb{E}[\epsilon_{jt} ]=0$ for all $j$ and $t$.
\end{assumption}
Assumption \ref{assumption_exogeneity} implies that idiosyncratic shocks are mean-independent from the treatment assignment. However, we still allow for the possibility that treatment assignment to unit 0 is correlated with the factor structure. More specifically, we allow for $\mathbb{E}[\lambda_t | D(0,0)=1] \neq \mathbb{E}[\lambda_t]$ for any $t$. While $\lambda_t$ is a common shock, the fact that unit 0 is treated can still be informative about $\lambda_t$, because we are fixing (or conditioning on) $\mu_0$. Suppose that the treatment is more likely to happen for unit $j$ at time $t$ if $\lambda_t\mu_j$ is high. In this case, the fact that unit 0 is treated after $t=0$ is informative that $\lambda_t \mu_0$ should be high for $t \geq 0$ if $\lambda_t$ is positively serially correlated. Since we are conditioning on $\mu_0$, this in turn implies that the common factors that strongly affect unit 0 are expected to be particularly high given that unit 0 is the treated one. As an illustration, consider a simple example in which there are two common factors $\lambda_t = [\lambda_t^1 ~ \lambda_t^2]$, with $\mu_j = (1,0)'$ for $j = 0,...,\frac{J}{2}$ and $\mu_j = (0,1)'$ for $j = \frac{J}{2}+1,...,J$. Under these conditions, the fact that unit 0 is treated after $t=0$ is informative about the common factor $\lambda_t^1$, because unit 0 is only affected by the first common factor. In this case, one should expect $\mathbb{E}[\lambda^1_t | D(0,0)=1] > \mathbb{E}[\lambda^1_t]$ for $t>0$. The assumptions we make are essentially the same as the ones considered by, for example, \cite{Magnac} and \cite{Rothstein} (in their Section 4.1), where they assume unconfoundedness conditional on the unobserved factor loadings. The difference is that we condition on $\mu_j$, while they condition on $\lambda_t$. However, the essence of the assumptions in both cases is the same, in that we allow treatment assignment to be informative about the structure $\lambda_t \mu_j$, while the idiosyncratic shocks $\epsilon_{jt}$ are uncorrelated with treatment assignment. Note also that Assumption D from \cite{Bai} implies our Assumption \ref{assumption_exogeneity}.
Let $\boldsymbol{\mu} \equiv [\mu_1 \hdots \mu_J]'$ be the $J \times F$ matrix that contains the information on the factor loadings of all control units, and $\mathbf{y}_t \equiv (y_{1t}, \hdots, y_{Jt})$ and $\boldsymbol{\epsilon}_t \equiv (\epsilon_{1t}, \hdots, \epsilon_{Jt})$ be $J \times 1$ vectors with information on the control units' outcomes and idiosyncratic shocks at period $t$. We define $\Phi$ as the set of weights such that a weighted average of the control units absorbs all time-correlated shocks of unit 0, $\lambda_t \mu_0$. Following the original SC papers, we start by restricting attention to convex combinations of the control units. Therefore, $\Phi = \{ \textbf{w} \in \Delta^{J-1} ~ | ~ \mu_0 = \boldsymbol{\mu}' \mathbf{w} \}$, where $\Delta^{J-1} \equiv \{ (w_1,...,w_J) \in \mathbb{R}^{J} | w_j \geq 0 \mbox{ and } \sum_{j=1}^J w_j = 1\}$. Assuming $\Phi \neq \varnothing$, if we knew $\textbf{w}^\ast \in \Phi$, then we could consider an \emph{infeasible} SC estimator using these weights, $\hat \alpha_{0t}^\ast = y_{0t} - \mathbf{y}_t ' {\mathbf{w}^\ast}$. For a given $t>0$, we would have
\begin{eqnarray} \label{infeasible}
\hat \alpha^\ast_{0t} = y_{0t} - \mathbf{y}_t ' \mathbf{w}^\ast = \alpha_{0t} + \left( \epsilon_{0t} - \boldsymbol{\epsilon}_t ' \mathbf{w}^\ast \right).
\end{eqnarray}
We consider the expected value of $\hat \alpha^\ast_{0t}$ conditional on $D(0,0)=1$ (Assumption \ref{assumption_sample}). Therefore, under Assumption \ref{assumption_exogeneity}, $\mathbb{E}[\hat \alpha^\ast_{0t} | D(0,0)=1] = \alpha_{0t}$, which implies that this infeasible SC estimator is unbiased. Intuitively, the infeasible SC estimator constructs a SC unit for the counterfactual of $y_{0t}$ that is affected in the same way as unit 0 by each of the common factors (that is, $\mu_0 = \boldsymbol{\mu} ' \mathbf{w}$), but did not receive treatment. Therefore, the only difference between unit 0 and this SC unit, beyond the treatment effect, would be given by the idiosyncratic shocks, which are assumed to be unrelated to the treatment assignment (Assumption \ref{assumption_exogeneity}). This guarantees that the SC estimator using these infeasible weights is unbiased. Since there might be multiple weights in $\Phi$, we define the infeasible SC estimator from equation (\ref{infeasible}) considering the $\mathbf{w}^\ast \in \Phi$ that minimizes $var(\hat \alpha^\ast_{0t})$ for cases in which $\Phi \neq \varnothing$. Note that $\alpha_{0t}$ is identified given knowledge about the joint distribution of $\{ y_{0t} ,..., y_{Jt} \}$ conditional on $D(0,0)=1$, if the factor loadings $\mu_0$ and $\boldsymbol{\mu}$ were known and $\Phi \neq \varnothing$. In this case, under Assumptions \ref{assumption_LFM} to \ref{assumption_exogeneity}, $\alpha_{0t}$ is uniquely determined by $\alpha_{0t} = \mathbb{E}[y_{0t} | D(0,0)=1] - \mathbb{E}[\mathbf{y}_t | D(0,0)=1] ' \mathbf{w}^\ast$ for any $\mathbf{w}^\ast \in \Phi$.\footnote{\cite{Magnac} discuss identification assuming that the common factors are known. This differs from our argument assuming that factor loadings are known because they consider factor loadings as random and common factors as fixed, while we do the opposite. The main intuition in the two models, however, is the same. Note that $\Phi$ depends only on the factor loadings, and that $\mathbb{E}[\mathbf{y}_t | D(0,0)=1] ' \mathbf{w}^\ast = \mathbb{E}[\lambda_t | D(0,0)=1]\mu_0$ for any $\mathbf{w}^\ast \in \Phi$.} All of our results, however, remain valid whether we consider $\Phi \neq \varnothing$ or $\Phi = \varnothing$.
It is important to note that \cite{Abadie2010} do not make any assumption on $\Phi \neq \varnothing$. Instead, they consider that there is a set of weights $\widetilde{\mathbf{w}}^\ast \in \Delta^{J-1}$ that satisfies $y_{0t} = \mathbf{y}_t' \widetilde{\mathbf{w}}^\ast$ for all $t \in \mathcal{T}_0$. We refer to the existence of such weights $\widetilde{\mathbf{w}}^\ast \in \Delta^{J-1}$ as a ``perfect pre-treatment fit'' condition. While subtle, this reflects a crucial difference between our setting and the setting considered in the original SC papers. \cite{Abadie2010} and \cite{Abadie2015} consider the properties of the SC estimator conditional on having a perfect pre-intervention fit. As stated by \cite{Abadie2015}, they \textit{``do not recommend using this method when the pretreatment fit is poor or the number of pretreatment periods is small''}. \cite{Abadie2010} provide conditions under which the existence of $\widetilde{\mathbf{w}}^\ast \in \Delta^{J-1}$ such that $y_{0t} = \mathbf{y}_t' \widetilde{\mathbf{w}}^\ast$ for all $t \in \mathcal{T}_0$ (for large $T_0$) implies that $\mu_{0} \approx \boldsymbol{\mu} ' \widetilde{\mathbf{w}}^\ast$. In this case, the bias of the SC estimator would be bounded by a function that goes to zero when $T_0$ increases. We depart from the original SC setting in that we consider a setting with imperfect pre-treatment fit, meaning that we do not assume the existence of $\widetilde{\mathbf{w}}^\ast \in \Delta^{J-1}$ such that $y_{0t} = \mathbf{y}_t' \widetilde{\mathbf{w}}^\ast$ for all $t \in \mathcal{T}_0$.\footnote{\cite{Abadie2010} assume that such weights also provide perfect balance in terms of observed covariates. \cite{FB} analyze the case in which the perfect balance on covariates assumption is dropped, but there is still perfect balance on pre-treatment outcomes. In Appendix \ref{A_alternatives} we consider the case in which covariates are used in a setting with imperfect pre-treatment fit on both pre-treatment outcomes and covariates.} The motivation to analyze the SC method in our setting is that the SC method has been widely used even when the pre-treatment fit is poor. Therefore, it is important to understand the properties of the estimator in this setting.
In order to implement their method, \cite{Abadie2010} recommend a minimization problem using the pre-intervention data to estimate the SC weights. They define a set of $K$ predictors, where $X_0$ is a $(K \times 1)$ vector containing the predictors for the treated unit, and $X_C$ is a $(K \times J)$ matrix of economic predictors for the control units. Predictors can be, for example, linear combinations of the pre-intervention values of the outcome variable or other covariates not affected by the treatment. The SC weights are estimated by minimizing $|| X_0 - X_C {\textbf{w}} ||_V$ subject to $\mathbf{w} \in \Delta^{J-1}$, where $V$ is a $(K \times K)$ positive semidefinite matrix. They discuss different possibilities for choosing the matrix $V$, including an iterative process where $V$ is chosen such that the solution to the $|| X_0 - X_C {\textbf{w}} ||_V$ optimization problem minimizes the pre-intervention prediction error. In other words, let $\textbf{Y}_0$ be a $(T_0 \times 1)$ vector of pre-intervention outcomes for the treated unit, while $\textbf{Y}_C$ is a $(T_0 \times J)$ matrix of pre-intervention outcomes for the control units. Then the SC weights would be chosen as $\widehat{\textbf{w}}(V^\ast)$ such that $V^\ast$ minimizes $|| \textbf{Y}_0 - \textbf{Y}_C \widehat{\textbf{w}}(V) ||$.
We focus on the case where one includes all pre-intervention outcome values as predictors. In this case, the matrix $V$ that minimizes the second step of the nested optimization problem would be the identity matrix (see \cite{Kaul2015} and \cite{Doudchenko}), so the optimization problem suggested by \cite{Abadie2010} to estimate the weights simplifies to
\begin{eqnarray} \label{objective_function}
\widehat{\textbf{w}} &=& \underset{{\textbf{w} \in \Delta^{J-1}}}{\mbox{argmin}} \frac{1}{T_0} \sum_{t \in \mathcal{T}_0} \left[ y_{0t} - \mathbf{y}_{t}' \mathbf{w} \right]^2.
\end{eqnarray}
For a given $t>0$, the SC estimator is then defined by
\begin{eqnarray}
\hat \alpha_{0t} = y_{0t} - \mathbf{y}_t ' \widehat{\mathbf{w}}.
\end{eqnarray}
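Computationally, the minimization in equation (\ref{objective_function}) is a least-squares problem over the simplex $\Delta^{J-1}$, which any quadratic-programming routine can solve. The following sketch (using \texttt{scipy}; the function name and array layout are ours, for illustration only) estimates the weights and the post-treatment effect estimates:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def sc_weights(y0_pre, Y_pre):
    """Minimize the pre-treatment MSE over the simplex Delta^{J-1}."""
    J = Y_pre.shape[1]
    cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]
    res = minimize(lambda w: np.mean((y0_pre - Y_pre @ w) ** 2),
                   np.full(J, 1.0 / J), method="SLSQP",
                   bounds=[(0.0, 1.0)] * J, constraints=cons)
    return res.x

# With y the (T x J+1) array from the previous sketch:
# w_hat = sc_weights(y[:T0, 0], y[:T0, 1:])
# alpha_hat = y[T0:, 0] - y[T0:, 1:] @ w_hat  # one estimate per post period
\end{verbatim}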
\cite{FPP} provide conditions under which the SC estimator using all pre-treatment outcomes as predictors will be asymptotically equivalent, when $T_0 \rightarrow \infty$, to any alternative SC estimator such that the number of pre-treatment outcomes used as predictors goes to infinity with $T_0$, even for specifications that include other covariates. Therefore, our results are also valid for these SC specifications under these conditions. In Appendix \ref{A_alternatives} we also consider SC estimators using (1) the average of the pre-intervention outcomes as predictor, and (2) other time-invariant covariates in addition to the average of the pre-intervention outcomes as predictors.
\section{Model with ``stationary'' common factors } \label{Setting1}
We start by assuming that pre-treatment averages of the first and second moments of the common factors and the idiosyncratic shocks converge. Let $\varepsilon_t = (\epsilon_{0t},...,\epsilon_{Jt})$.
\begin{assumption}[convergence of pre-treatment averages] \label{assumptions_lambda}
\normalfont Conditional on $D(0,0)=1$, \\ $\frac{1}{T_0} \sum_{t \in \mathcal{T}_0} \lambda_t \buildrel p \over \rightarrow \omega_0$, $\frac{1}{T_0} \sum_{t \in \mathcal{T}_0} \varepsilon_{t} \buildrel p \over \rightarrow 0$, $\frac{1}{T_0} \sum_{t \in \mathcal{T}_0} \lambda_t'\lambda_t \buildrel p \over \rightarrow \Omega_0$ positive semi-definite, \\ $\frac{1}{T_0} \sum_{t \in \mathcal{T}_0} \varepsilon_{t}\varepsilon_{t}' \buildrel p \over \rightarrow \sigma^2_\epsilon I_{J+1}$, and $\frac{1}{T_0} \sum_{t \in \mathcal{T}_0} \varepsilon_{t}\lambda_{t} \buildrel p \over \rightarrow 0$ when $T_0 \rightarrow \infty$.
\end{assumption}
Assumption \ref{assumptions_lambda} allows for serial correlation in both the idiosyncratic shocks and the common factors. We assume $\frac{1}{T_0} \sum_{t \in \mathcal{T}_0} \varepsilon_{t}\varepsilon_{t}' \buildrel p \over \rightarrow \sigma^2_\epsilon I_{J+1}$ in order to simplify the exposition of our results. However, this can be easily replaced by $\frac{1}{T_0} \sum_{t \in \mathcal{T}_0} \varepsilon_{t}\varepsilon_{t}' \buildrel p \over \rightarrow \Sigma$ for any symmetric positive definite $(J+1) \times (J+1)$ matrix $\Sigma$, so that idiosyncratic shocks may be heteroskedastic and correlated across $j$. Assumption \ref{assumptions_lambda} would be satisfied if, for example, conditional on $D(0,0)=1$, $(\varepsilon_{t}',\lambda_t)$ is $\alpha$-mixing with exponential speed, with uniformly bounded fourth moments in the pre-treatment period, and $\varepsilon_{t}$ and $\lambda_t$ are independent. Note that this would allow the distribution of $\lambda_t$, conditional on $D(0,0)=1$, to be different when we consider pre-treatment periods closer to the assignment of the treatment. That is, we allow for $\mathbb{E}[\lambda_t | D(0,0)=1] \neq \mathbb{E}[\lambda_{t'}|D(0,0)=1]$, for $t<0$ closer to zero and $t'<0$ further away from zero, which would happen if treatment assignment to unit 0 is correlated with common factors a few periods before treatment starts. In this case, conditional on $D(0,0)=1$, $\lambda_t$ would not be stationary, but Assumption \ref{assumptions_lambda} would still hold.
We show first that, when the number of control units is fixed, $\mathbf{\widehat w}$ converges in probability to
\begin{eqnarray} \label{objective_function_limit}
\mathbf{\bar w} = \underset{{\textbf{w} \in \Delta^{J-1}}}{\mbox{argmin}} \left\{ \sigma_\epsilon^2 \left( 1+ \mathbf{w}' \mathbf{w} \right) + \left( \mu_0 - \boldsymbol{\mu}'\mathbf{w} \right)' \Omega_0 \left( \mu_0 - \boldsymbol{\mu}'\mathbf{w} \right) \right \},
\end{eqnarray}
where, in general, $\mu_0 \neq \boldsymbol{\mu}'\mathbf{ \bar w}$.
\begin{proposition} \label{main_result}
\normalfont Under Assumptions \ref{assumption_LFM}, \ref{assumption_sample}, \ref{assumption_exogeneity} and \ref{assumptions_lambda}, { $\mathbf{\widehat w} \buildrel p \over \rightarrow \mathbf{\bar w}$} when $T_0 \rightarrow \infty$, where $\mu_0 \neq \boldsymbol{\mu}'\mathbf{ \bar w}$, unless $ \sigma_\epsilon^2=0 $ or $\exists \textbf{w} \in \Phi | \textbf{w} \in \underset{{\textbf{w} \in \Delta^{J-1}}}{\mbox{argmin}} \left\{ \mathbf{w}'\mathbf{w} \right\}$. Moreover, for $t>0$,
\begin{eqnarray} \label{SC_asymptotic_distribution}
\hat \alpha_{0t} = y_{0t} - \mathbf{y}_t ' \widehat{\mathbf{w}} \buildrel p \over \rightarrow \alpha_{0t} + \left( \epsilon_{0t} - \boldsymbol{\epsilon}_t ' \mathbf{\bar w} \right) + \lambda_t \left(\mu_0 - \boldsymbol{\mu}' \mathbf{\bar w} \right) \mbox{ when } T_0 \rightarrow \infty.
\end{eqnarray}
\end{proposition}
\begin{proof}
Details in Appendix \ref{Prop1}.
\end{proof}
The intuition of Proposition \ref{main_result} is that we can treat the SC weights as an M-estimator, so $\mathbf{\widehat w}$ converges in probability to $\mathbf{\bar w}$, defined in (\ref{objective_function_limit}). This objective function has two parts. The first one reflects that different choices of weights will generate different weighted averages of the idiosyncratic shocks $\epsilon_{jt}$. In this simpler case, if we consider the specification that restricts weights to sum to one, then this part would be minimized when we set all weights equal to $\frac{1}{J}$. The second part reflects the presence of common factors $\lambda_t$ that would remain after we choose the weights to construct the SC unit. If $\Phi \neq \varnothing$, then we can set this part equal to zero by choosing $\textbf{w}^\ast \in \Phi$. Now start from $\textbf{w}^\ast \in \Phi$ and move in the direction of weights that minimize the first part of this expression. Since $\textbf{w}^\ast \in \Phi$ minimizes the second part, there is only a second-order loss in doing so. In contrast, since we are moving in the direction of weights that minimize the first part, there is a first-order gain. This will always be true, unless $\sigma_\epsilon^2=0$ or $\exists \textbf{w} \in \Phi \mbox{ such that } \textbf{w} \in \mbox{argmin}_{\textbf{w} \in \Delta^{J-1}} \left\{ \mathbf{w}'\mathbf{w} \right\}$. Therefore, the SC weights will not generally converge to weights that reconstruct the factor loadings of the treated unit. If $\Phi = \varnothing$, then Proposition \ref{main_result} trivially holds. Another intuition for this result is that the outcomes of the controls work as proxy variables for the factor loadings of the treated unit, but they are measured with error. We present this interpretation in more detail in Appendix \ref{finite_T}.
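This trade-off can be made concrete numerically. The sketch below minimizes the limit objective in (\ref{objective_function_limit}) directly, for hypothetical loadings in which $\Phi$ is non-empty (the loadings of controls 1 and 2 average exactly to $\mu_0$), and shows that the limit weights $\mathbf{\bar w}$ leave a non-zero gap $\mu_0 - \boldsymbol{\mu}'\mathbf{\bar w}$ when $\sigma^2_\epsilon > 0$:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

M = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 1.0], [1.0, 2.0]])  # controls' mu_j
mu0 = 0.5 * M[0] + 0.5 * M[1]        # so w* = (.5, .5, 0, 0) lies in Phi
Omega, sigma2 = np.eye(2), 1.0       # hypothetical limits Omega_0, sigma_eps^2

def limit_obj(w):
    gap = mu0 - M.T @ w
    return sigma2 * (1.0 + w @ w) + gap @ Omega @ gap

cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]
w_bar = minimize(limit_obj, np.full(4, 0.25), method="SLSQP",
                 bounds=[(0.0, 1.0)] * 4, constraints=cons).x
print(w_bar)                 # not equal to w*
print(mu0 - M.T @ w_bar)     # non-zero gap: mu_0 != mu' w_bar
\end{verbatim}
Starting the optimizer at $\textbf{w}^\ast = (0.5, 0.5, 0, 0) \in \Phi$ would not change the result: moving weights toward the uniform allocation yields the first-order gain described above.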
From equation (\ref{SC_asymptotic_distribution}), note that $\hat \alpha_{0t}$ converges in probability to the parameter we want to estimate ($ \alpha_{0t}$) plus linear combinations of contemporaneous idiosyncratic shocks and common factors.\footnote{For simplicity, we consider the case in which $\alpha_{0t}$ is a fixed parameter. More generally, we could consider $\alpha_{0t}$ stochastic, so that $\hat \alpha_{0t} \buildrel p \over \rightarrow \mathbb{E}[\alpha_{0t}|D(0,0)=1] + (\alpha_{0t} - \mathbb{E}[\alpha_{0t}|D(0,0)=1]) + \left( \epsilon_{0t} - \boldsymbol{\epsilon}_t ' \mathbf{\bar w} \right) + \lambda_t \left(\mu_0 - \boldsymbol{\mu}' \mathbf{\bar w} \right) $. In this case, the parameter of interest would be $\mathbb{E}[\alpha_{0t}|D(0,0)=1]$ instead of $\alpha_{0t}.$} Therefore, the SC estimator will be asymptotically unbiased if, conditional on $D(0,0)=1$, the expected value of these linear combinations of idiosyncratic shocks and common factors are equal to zero.\footnote{We consider the definition of asymptotic unbiasedness as the expected value of the asymptotic distribution of $\hat \alpha_{0t} - \alpha_{0t}$ equal to zero. An alternative definition is that $\mathbb{E}[\hat \alpha_{0t} - \alpha_{0t}] \rightarrow 0$. We show in Appendix \ref{definition_bias} that these two definitions are equivalent in this setting under standard assumptions. } More specifically, we need that $\mathbb{E} [ \epsilon_{0t} - \boldsymbol{\epsilon}_{t} ' \mathbf{\bar w} | D(0,0)=1 ] + \mathbb{E} [ \lambda_t | D(0,0)=1 ] (\mu_0 - \boldsymbol{\mu}' \mathbf{\bar w}) =0$. While the first term is equal to zero by Assumption \ref{assumption_exogeneity}, the second one may be different from zero if treatment assignment is correlated with the unobserved heterogeneity.
Therefore, in general, the SC estimator will only be asymptotically unbiased if $\mathbb{E} \left[ \lambda^k_t | D(0,0)=1 \right]=0$ for all common factors $k$ such that $\mu^k_0 \neq \sum_{j \neq 0} \bar w_j \mu^k_j$.\footnote{In principle, it could also be that $\mathbb{E} \left[ \lambda^k_t | D(0,0)=1 \right] \neq 0$ for some $k$ such that $\mu^k_0 \neq \sum_{j \neq 0} \bar w_j \mu^k_j$, but the linear combination $\mathbb{E} [ \lambda_t | D(0,0)=1 ] (\mu_0 - \boldsymbol{\mu}' \mathbf{\bar w}) =0$. However, this would only happen in ``knife-edge'' cases in which the biases arising from different common factors end up cancelling out.} In order to better understand the intuition behind this result, we consider a special case in which, conditional on $D(0,0)=1$, $\lambda_t$ is stationary for $t \leq 0$. In this case, we can assume, without loss of generality, that $\omega_0^1 = \mathbb{E}[\lambda_t^1]=1$ and $\omega_0^k=\mathbb{E}[\lambda_t^k]=0$ for $k>1$. Therefore, the SC estimator will only be asymptotically unbiased if the weights turn out to recover unit 0's fixed effect (that is, $\mu_0^1=\sum_{j\neq 0}\bar w_j \mu_j^1$) \textit{and} treatment assignment is uncorrelated with time-varying unobserved common factors whose associated factor loadings are not reconstructed by the weights $\mathbf{\bar w}$ (that is, for $t>0$, $\mathbb{E}[\lambda_t^k | D(0,0)=1] =0$ for all $k>1$ such that $\mu_0^k \neq \sum_{j\neq 0}\bar w_j \mu_j^k$). Importantly, once we relax the assumption of a perfect pre-treatment fit, this implies that the SC estimator may be asymptotically biased even in settings in which the DID estimator is unbiased, as the DID estimator takes into account unobserved characteristics that are fixed over time, while the SC estimator would not necessarily do so. We discuss this issue in more detail in Section \ref{DID}.
In the derivation of equation (\ref{SC_asymptotic_distribution}), we treat $\{ \mu_j \}_{j=0}^J$ as fixed. An alternative way to think about this result is that we have the asymptotic distribution of $\hat \alpha_{0t}$ conditional on $\{ \mu_j \}_{j=0}^J$, so we derive conditions in which $\hat \alpha_{0t}$ is asymptotically unbiased conditional on $\{ \mu_j \}_{j=0}^J$. To check whether $\hat \alpha_{0t}$ is asymptotically unbiased unconditionally, we would have to integrate the conditional distribution of $\hat \alpha_{0t}$ over the distribution of $\{\mu_j \}_{j=0}^J$. Therefore, unless we are willing to impose restrictions on the distribution of $\{\mu_j \}_{j=0}^J$, we can only guarantee that $\hat \alpha_{0t}$ is asymptotically unbiased unconditionally if $\hat \alpha_{0t}$ is asymptotically unbiased conditional on every $\{\mu_j \}_{j=0}^J$. We show that this will generally not be the case if $\mathbb{E} [ \lambda_t | D(0,0)=1 ] \neq 0$.
Note that, if we impose additional --- and arguably strong --- assumptions on the common factors and on the idiosyncratic shocks, then factor loadings would be identified and could be consistently estimated in a setting with fixed $J$ and $T_0 \rightarrow \infty$. For example, this would be the case if we consider a classical factor analysis (e.g., \cite{anderson1984introduction}).\footnote{In this case, we would assume that $\epsilon_{jt}$ is i.i.d. over time and independent across $j$. Moreover, we would assume that $\lambda_t$ is i.i.d. and independent of $\epsilon_{jt}$.} Consequently, under additional assumptions, $\alpha_{0t}$ would be identified, and it would be possible to derive an asymptotically unbiased estimator based on consistent estimators of the factor loadings. Importantly, Proposition \ref{main_result} and our conclusion that the SC estimator is generally biased under selection on unobservables remain valid even when we consider the assumptions of the classical factor analysis. Therefore, the asymptotic bias we report remains valid whether or not the parameter of interest is identified.
The discrepancy between our results and the results from \cite{Abadie2010} arises because we consider different frameworks. \cite{Abadie2010} consider the properties of the SC estimator conditional on having a perfect fit in the pre-treatment period in the data at hand. They do not consider the asymptotic properties of the SC estimator when $T_0$ goes to infinity. Instead, they provide conditions under which the bias of the SC estimator is bounded by a term that goes to zero when $T_0$ increases, \textit{if there exists a set of weights that provides a perfect pre-treatment fit}.
Our results are not as conflicting with the results from \cite{Abadie2010} as they may appear at first glance. In a model with ``stationary'' common factors, the probability that one would actually have a dataset at hand such that the SC weights provide a close-to-perfect pre-intervention fit with a moderate $T_0$ is close to zero, unless the variance of the idiosyncratic shocks is small. Therefore, our results agree with the theoretical results from \cite{Abadie2010} in that the asymptotic bias of the SC estimator should be small in situations where one would expect to have a close-to-perfect fit for a large $T_0$.
While many SC applications do not have a large number of pre-treatment periods to justify large-$T_0$ asymptotics (see, for example, \cite{Doudchenko}), our results can also be interpreted as the SC weights not converging to weights that reconstruct the factor loadings of the treated unit when $J$ is fixed \emph{even when $T_0$ is large}. In Appendix \ref{finite_T}, we show that the problem we present remains if we consider a setting with finite $T_0$. The intuition for this result is that the SC method uses the vector of control outcomes as a proxy for the vector of common factors. That is, assuming $\Phi \neq \varnothing$, we can write the potential outcome of the treated unit as a linear combination of the control units using a set of weights $\mathbf{w}^\ast \in \Phi$. However, in this case the control outcomes will be, by construction, correlated with the error in this model. The intuition is that the idiosyncratic shocks behave as measurement error in these proxy variables, which leads to bias. Without the non-negativity and adding-up constraints, and assuming $\lambda_t$ and $\boldsymbol{\epsilon}_t$ are i.i.d.\ normal, the bias of the SC weights would be exactly the same, irrespective of $T_0$. More generally, the expected value of the SC weights might depend on $T_0$, but they should still be biased whether we consider a fixed $T_0$ or the asymptotic distribution with $T_0 \rightarrow \infty$ (see details in Appendix \ref{finite_T}). In Section \ref{particular_model}, we show that, in our MC simulations, the SC weights are on average even further from weights that reconstruct the factor loadings of the treated unit when $T_0$ is finite.
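The following Monte Carlo sketch (with the same hypothetical loadings as in the previous snippet, so that $\Phi$ contains the weights $\textbf{w}^\ast = (0.5, 0.5, 0, 0)$) illustrates this point: the average distance between $\mathbf{\widehat w}$ and $\textbf{w}^\ast$ does not vanish as $T_0$ grows:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
M = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 1.0], [1.0, 2.0]])
mu0 = 0.5 * M[0] + 0.5 * M[1]
w_star = np.array([0.5, 0.5, 0.0, 0.0])       # a member of Phi

def sc_weights(y0, Y):
    cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]
    return minimize(lambda w: np.mean((y0 - Y @ w) ** 2),
                    np.full(4, 0.25), method="SLSQP",
                    bounds=[(0.0, 1.0)] * 4, constraints=cons).x

for T0 in (25, 100, 400):
    dist = []
    for _ in range(200):
        lam = rng.normal(size=(T0, 2))        # stationary common factors
        eps = rng.normal(size=(T0, 5))        # idiosyncratic shocks
        y0 = lam @ mu0 + eps[:, 0]
        Y = lam @ M.T + eps[:, 1:]
        dist.append(np.linalg.norm(sc_weights(y0, Y) - w_star))
    print(T0, np.mean(dist))   # distance does not shrink toward zero
\end{verbatim}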
One case in which having a larger $T_0$ could make the problem worse is if there were ``structural breaks'' in the factor structure in the pre-treatment periods. For example, we could have $y^N_{jt} = \lambda_t \mu_j + \epsilon_{jt}$ for $t \geq -M$, and $y^N_{jt} = \lambda_t \tilde \mu_j + \epsilon_{jt}$ for $t < -M$, for some $M \in \mathbb{N}$, with $\mu_j$ potentially different from $\tilde \mu_j$. This would be a case in which units are affected by the common shocks in the same way in the post-treatment and in the last $M$ pre-treatment periods, but are affected differently in periods further away in the past. In this case, including more pre-treatment periods would likely induce more bias in the SC estimator, because the second part of the objective function in (\ref{objective_function_limit}) would be minimized by weights $\mathbf{w}$ such that $\mu_0 \neq \boldsymbol{\mu}'\mathbf{w}$. However, all our results consider the same factor structure for all periods. Therefore, our results should be interpreted as the SC estimator generally being asymptotically biased if there is selection on unobservables \emph{even when} $T_0 \rightarrow \infty$ \emph{and} the factor loadings are stable for all periods.
Importantly, while increasing $T_0$ reduces the chances of satisfying (or being close to satisfying) the perfect pre-treatment fit condition considered by \cite{Abadie2010},
our results do not contradict the results from \cite{Abadie2010}. The bounds on the bias of the SC estimator they derive depend on the ratio of $J$ to $T_0$. If a perfect (or close-to-perfect) pre-treatment fit is achieved because $T_0$ is small relative to $J$, the bounds derived by \cite{Abadie2010} would not necessarily be close to zero, so there is no guarantee that the bias of the SC estimator should be small based on their results. This is consistent with our results that the bias of the SC estimator would remain even when $T_0$ is small relative to $J$. Overall, the bias we derive for the SC estimator does not come from the difficulty in having a perfect pre-treatment fit when $T_0$ is large and $J$ is fixed. Rather, this would remain a problem when $J$ is fixed, even if $T_0$ is small enough so that a perfect pre-treatment fit is achieved due to over-fitting.
In Appendix \ref{A_alternatives} we consider alternative specifications used in the SC method to estimate the weights. In particular, we consider the specification that uses the pre-treatment average of the outcome variable as predictor, and the specification that uses the pre-treatment average of the outcome variable and other time-invariant covariates as predictors. In both cases, we show that the objective function used to calculate the weights converges in probability to a function that can, in general, have multiple minima. If $\Phi$ is non-empty, then any $\textbf{w} \in \Phi$ will be one solution. However, there might be $\textbf{w} \notin \Phi$ that also minimizes this function, so there is no guarantee that the SC weights in these specifications will converge in probability to weights in $\Phi$.
\section{Comparison to DID \& the demeaned SC estimator } \label{DID}
We show in Section \ref{Setting1} that the SC estimator can be asymptotically biased even in situations where the DID estimator is unbiased. In contrast to the SC estimator, the DID estimator for the treatment effect in a given post-intervention period $t >0$, under Assumption \ref{assumptions_lambda}, would be given by
\begin{eqnarray}
\hat \alpha^{\tiny DID}_{0t} &=& y_{0t} - \frac{1}{J} \mathbf{y}_t ' \mathbf{i} - \frac{1}{T_0}\sum_{\tau \in \mathcal{T}_0} \left[ y_{0\tau} - \frac{1}{J} \mathbf{y}_\tau ' \mathbf{i} \right],
\end{eqnarray}
where $\mathbf{i}$ is a $J \times 1$ vector of ones.\footnote{Since the goal in the SC literature is to estimate the effect of the treatment for unit 0 at a specific date $t$, this circumvents the problem of aggregating heterogeneous effects, as considered by \cite{Pedro}, \cite{Imbens_DID}, and \cite{Bacon} in the DID setting.} Note that the DID estimator in this case with one treated unit is numerically the same as the two-way fixed effects (TWFE) estimator using unit and time fixed effects. This will be the case, in general, when treatment starts at the same period for all treated units. Since we consider a setting with only one treated unit, this condition is satisfied.
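For completeness, a sketch of this DID estimator, written against the same $(T \times (J+1))$ outcome array used in the earlier snippets (the function name and indexing convention are ours, for illustration only):
\begin{verbatim}
import numpy as np

def did_estimate(y, T0, t):
    """DID estimate of alpha_{0t}: the post-period gap between unit 0 and
    the simple control average, minus the pre-treatment mean of that gap."""
    post_gap = y[t, 0] - y[t, 1:].mean()
    pre_gap = (y[:T0, 0] - y[:T0, 1:].mean(axis=1)).mean()
    return post_gap - pre_gap
\end{verbatim}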
Under Assumptions \ref{assumption_LFM}, \ref{assumption_sample}, and \ref{assumptions_lambda}, the asymptotic distribution of the DID estimator is given by:
\begin{eqnarray}
\hat \alpha^{\tiny DID}_{0t} &\buildrel p \over \rightarrow & \alpha_{0t} + \left( \epsilon_{0t} - \frac{1}{J} \boldsymbol{\epsilon}_{t} ' \mathbf{i} \right) + \left( \lambda_t - \omega_0 \right) \left( \mu_{0} - \frac{1}{J} \boldsymbol{\mu} ' \mathbf{i} \right) \mbox{ when } T_0 \rightarrow \infty.
\end{eqnarray}
Therefore, the DID estimator will be asymptotically unbiased in this setting if $\mathbb{E}[\lambda_t | D(0,0)=1] = \omega_0$ for the factors such that $\mu_{0} \neq \frac{1}{J} \boldsymbol{\mu} ' \mathbf{i}$, which means that the fact that unit 0 is treated after period $t=0$ is not informative about the first moment of the common factors relative to their pre-treatment averages. Intuitively, the unit fixed effects control for any difference in unobserved variables that remains constant (in expectation) before and after the treatment. Moreover, the DID allows for arbitrary correlation between treatment assignment and $\delta_t$ (which is captured by the time effects). However, the DID estimator will be biased if the fact that unit 0 is treated after period $t=0$ is informative about variations in the common factors relative to their pre-treatment mean, and it turns out that the (simple) average of the factor loadings associated with such common factors is different from the factor loadings of the treated unit.
As an alternative to the standard SC estimator, we suggest a modification in which we calculate the pre-treatment average for all units and demean the data. This is equivalent to a generalization of the SC method suggested, in parallel to our paper, by \cite{Doudchenko}, which includes an intercept parameter in the minimization problem to estimate the SC weights and construct the counterfactual. Here we formally consider the implications of this alternative for the bias and MSE of the SC estimator. Relaxing the no-intercept constraint was already a feature of \cite{Hsiao}. The difference here is that we relax this constraint while maintaining the adding-up and non-negativity constraints, which allows us to rank the demeaned SC and the DID estimators under some conditions.
The demeaned SC estimator is given by $\hat \alpha^{SC'}_{0t} = y_{0t} - \mathbf{y}_t ' \widehat{\mathbf{w}}^{\mbox{\tiny SC$'$}} - (\bar y_{0} - \mathbf{\bar y}' \widehat{\mathbf{w}}^{\mbox{\tiny SC$'$} } )$, where $\bar y_0$ is the pre-treatment average of unit $0$, and $\mathbf{\bar y}$ is a $J \times 1$ vector with the pre-treatment averages of the controls. The weights $\widehat{\textbf{w}}^{\mbox{\tiny SC$'$}}$ are given by
\begin{eqnarray} \label{Q_demeaned}
\widehat{\textbf{w}}^{\mbox{\tiny SC$'$}} = \underset{{\textbf{w} \in \Delta^{J-1}}}{\mbox{argmin}} \frac{1}{T_0} \sum_{t \in \mathcal{T}_0} \left[ y_{0t} - \mathbf{y}_t'\mathbf{w} - \left(\bar y_{0} -\mathbf{\bar y} ' \mathbf{w} \right) \right]^2. \end{eqnarray}
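Computationally, the demeaned SC estimator only requires demeaning the pre-treatment data before solving the same simplex-constrained least-squares problem, and then adding the level correction back. A sketch, with illustrative names as before:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def demeaned_sc(y0_pre, Y_pre, y0_t, Y_t):
    """Demeaned SC estimate of alpha_{0t} for one post-treatment period t."""
    y0_bar, Y_bar = y0_pre.mean(), Y_pre.mean(axis=0)
    J = Y_pre.shape[1]
    cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]
    w = minimize(lambda w: np.mean(((y0_pre - y0_bar)
                                    - (Y_pre - Y_bar) @ w) ** 2),
                 np.full(J, 1.0 / J), method="SLSQP",
                 bounds=[(0.0, 1.0)] * J, constraints=cons).x
    return (y0_t - Y_t @ w) - (y0_bar - Y_bar @ w)
\end{verbatim}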
\begin{proposition} \label{Prop_demeaned}
\normalfont Under Assumptions \ref{assumption_LFM}, \ref{assumption_sample}, \ref{assumption_exogeneity} and \ref{assumptions_lambda}, { $\widehat{\textbf{w}}^{\mbox{\tiny SC$'$}} \buildrel p \over \rightarrow \mathbf{\bar w}^{\mbox{\tiny SC$'$}}$} when $T_0 \rightarrow \infty$, where $\mu_0 \neq \boldsymbol{\mu} ' \mathbf{\bar w}^{\mbox{\tiny SC$'$}} $, unless $ \sigma_\epsilon^2=0 $ or $\exists \textbf{w} \in \Phi | \textbf{w} \in \underset{{\textbf{w} \in \Delta^{J-1}}}{\mbox{argmin}} \left\{ \mathbf{w}'\mathbf{w} \right\}$. Moreover, for $t>0$,
\begin{eqnarray}
\hat \alpha^{\mbox{\tiny SC$'$}}_{0t} \buildrel p \over \rightarrow \alpha_{0t} + \left( \epsilon_{0t} - \boldsymbol{\epsilon}_t'\mathbf{\bar w}^{\mbox{\tiny SC$'$}} \right) + (\lambda_t-\omega_0) \left(\mu_0 - \boldsymbol{\mu} ' \mathbf{\bar w}^{\mbox{\tiny SC$'$}} \right) \mbox{ when } T_0 \rightarrow \infty.
\end{eqnarray}
\end{proposition}
\begin{proof}
See details in Appendix \ref{Proof_demeaned}.
\end{proof}
Therefore, both the demeaned SC and the DID estimators are asymptotically unbiased when $\mathbb{E}[\lambda_t | D(0,0)=1] = \omega_0$ for $t>0$.\footnote{This is a sufficient condition. More generally, the demeaned SC estimator would be asymptotically unbiased if $\mathbb{E}[\lambda^k_t | D(0,0)=1] = \omega_0^k$ for $t>0$ for any common factor $k$ such that $\mu^k_0 \neq \sum_{j \neq 0} \bar w^{\mbox{\tiny SC$'$}}_j \mu^k_j$. However, as we show in Proposition \ref{Prop_demeaned}, if $\sigma^2_\epsilon>0$, then we would only have $\mu^k_0 = \sum_{j \neq 0} \bar w^{\mbox{\tiny SC$'$}}_j \mu^k_j$ in knife-edge cases. Therefore, we focus on the sufficient condition $\mathbb{E}[\lambda_t | D(0,0)=1] = \omega_0$ for $t>0$.} Moreover, under this assumption, both estimators are unbiased even for finite $T_0$.
With additional assumptions on $(\epsilon_{0t},...,\epsilon_{Jt},\lambda_t')$ in the post-treatment periods, we can also ensure that the demeaned SC estimator is asymptotically more efficient than DID.
\begin{assumption}[Stability in the pre- and post-treatment periods]
\label{A5}
\normalfont For $t>0$, $\mathbb{E}[\lambda_t | D(0,0)=1]= \omega_0$, $\mathbb{E}[\varepsilon_{t} | D(0,0)=1]= 0$, $\mathbb{E}[\lambda_t'\lambda_t | D(0,0)=1] = \Omega_0$, $\mathbb{E}[\varepsilon_t\varepsilon_t' | D(0,0)=1] = \sigma^2_\epsilon I_{J+1}$, and $cov(\varepsilon_t,\lambda_t | D(0,0)=1)=0$.
\end{assumption}
Assumptions \ref{assumption_LFM}, \ref{assumptions_lambda} and \ref{A5} imply that idiosyncratic shocks and common factors have the same first and second moments in the pre- and post-treatment periods. From Proposition \ref{Prop_demeaned}, Assumption \ref{A5} implies that the demeaned SC estimator is asymptotically unbiased. We now show that this assumption also implies that, in the setting we consider, the demeaned SC estimator has lower asymptotic MSE than both the DID estimator and the infeasible SC estimator.\footnote{This dominance of the SC estimator may not hold in different settings. See a related discussion by \cite{ding_li_2019}.}
\begin{proposition} \label{Prop_demeaned_efficient}
\normalfont Under Assumptions \ref{assumption_LFM}, \ref{assumption_sample}, \ref{assumption_exogeneity}, \ref{assumptions_lambda}, and \ref{A5}, the demeaned SC estimator ($\hat \alpha^{\mbox{\tiny SC$'$}}_{0t}$) dominates both the DID estimator ($\hat \alpha^{\tiny DID}_{0t} $) and the infeasible SC estimator ($\hat \alpha^\ast_{0t} $) in terms of asymptotic MSE when $T_0 \rightarrow \infty$.
\end{proposition}
\begin{proof}
See details in Appendix \ref{Proof_demeaned_efficient}.
\end{proof}
The intuition of this result is that, under Assumption \ref{A5}, the demeaned SC weights converge to weights that minimize a function $\Gamma(\mathbf{w})$ such that $\Gamma(\mathbf{\bar w}^{\mbox{\tiny SC$'$}}) = a.var(\hat \alpha^{\mbox{\tiny SC$'$}}_{0t} | D(0,0)=1 )$, $\Gamma(\mathbf{w}^\ast) = a.var(\hat \alpha^\ast_{0t} | D(0,0)=1 )$, and $\Gamma(\{\frac{1}{J},...,\frac{1}{J} \}) = a.var(\hat \alpha^{\mbox{\tiny DID}}_{0t} | D(0,0)=1 )$. Therefore, it must be that the asymptotic variance of $\hat \alpha^{\mbox{\tiny SC$'$}}_{0t}$ is weakly lower than the variance of both $\hat \alpha^\ast_{0t}$ and $\hat \alpha^{\mbox{\tiny DID}}_{0t}$. Moreover, these three estimators are unbiased under these assumptions.
The demeaned SC estimator dominates the infeasible one, in terms of MSE, because the infeasible SC estimator focuses on eliminating the common factors, even if this means using a linear combination of the idiosyncratic shocks with higher variance. In contrast, the demeaned SC estimator provides a better balance in terms of the variance of the common factors and idiosyncratic shocks.
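To illustrate this ranking numerically, the sketch below evaluates $\Gamma(\mathbf{w})$ at the three sets of weights, using hypothetical loadings and taking $\omega_0 = 0$ so that $\Gamma$ coincides with the limit objective in (\ref{objective_function_limit}):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

M = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 1.0], [1.0, 2.0]])
mu0 = 0.5 * M[0] + 0.5 * M[1]
Omega, sigma2, J = np.eye(2), 1.0, 4

def Gamma(w):   # asymptotic variance of an estimator with fixed weights w
    gap = mu0 - M.T @ w
    return sigma2 * (1.0 + w @ w) + gap @ Omega @ gap

cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]
w_sc = minimize(Gamma, np.full(J, 1.0 / J), method="SLSQP",
                bounds=[(0.0, 1.0)] * J, constraints=cons).x
w_star = np.array([0.5, 0.5, 0.0, 0.0])    # infeasible SC weights (in Phi)
w_did = np.full(J, 1.0 / J)                # DID weights
print(Gamma(w_sc), Gamma(w_star), Gamma(w_did))  # Gamma(w_sc) is smallest
\end{verbatim}
By construction, the demeaned SC weights minimize $\Gamma$, so the first printed value is weakly below the other two.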
This dominance of the demeaned SC estimator, however, relies crucially on the assumption that the first and second moments of the common factors and idiosyncratic shocks remain stable before and after the treatment. If we had that $\mathbb{E}[\lambda_t'\lambda_t | D(0,0)=1] \neq \Omega_0$ for $t>0$, then $\Gamma(\mathbf{w})$ would not provide the variance of the estimators with weights $\mathbf{w}$. Therefore, it would not be possible to guarantee that the demeaned SC estimator has lower variance, even if the three estimators are unbiased.
If we had that $\mathbb{E}[\lambda_t | D(0,0)=1] \neq \omega_0$ for $t>0$, then both the demeaned SC and the DID estimators would be asymptotically biased, while the infeasible SC estimator would remain unbiased. The asymptotic bias of $\hat \alpha^{\mbox{\tiny SC$'$}}_{0t}$ would be given by $(\mathbb{E}[\lambda_t | D(0,0)=1]-\omega_0) (\mu_0 - \boldsymbol{\mu} ' \mathbf{\bar w}^{\mbox{\tiny SC$'$}} )$. Therefore, provided $\mu_0 \neq \boldsymbol{\mu} ' \mathbf{\bar w}^{\mbox{\tiny SC$'$}}$ (which will generally be the case), the infeasible SC estimator will dominate the demeaned SC estimator in terms of asymptotic MSE if $(\mathbb{E}[\lambda_t | D(0,0)=1]-\omega_0)$ is large enough. In other words, once we relax Assumption \ref{A5}, we cannot guarantee that the demeaned SC estimator provides a better prediction in terms of MSE relative to the infeasible one.
In general, it is not possible to rank the demeaned SC and the DID estimators in terms of bias and MSE if treatment assignment is correlated with time-varying common factors. We provide in Appendix \ref{example} a specific example in which the DID can have a smaller bias and MSE relative to the demeaned SC estimator. This might happen when selection into treatment depends on common factors with low variance, and it happens that a simple average of the controls provides a good match for the factor loadings associated with these common factors. In general, however, we should expect a lower bias for the demeaned SC estimator, given that the demeaned SC weights are \emph{partially} chosen to minimize the distance between $\mu_0$ and $\boldsymbol{\mu}' \widehat{\textbf{w}}^{\mbox{\tiny SC$'$}}$, while the DID estimator uses weights that are not data driven. For the particular class of linear factor models we present in Section \ref{particular_model}, however, the asymptotic bias and the MSE of the demeaned SC estimator will always be lower relative to the DID estimator, provided that there is stability in the variance of common factors and idiosyncratic shocks before and after the treatment.
Note that the restriction that weights must sum to one, combined with this demeaning process, implies that the demeaned SC estimator also enjoys the double bias reduction property of the synthetic differences-in-differences method proposed by \cite{SDID}. In Appendix \ref{Appendix_SDID}, we replicate the placebo exercise from \cite{SDID}, but contrasting the demeaned SC estimator with the original SC and the DID estimators. We construct placebo estimates based on the empirical application from \cite{Abadie2010}. We see that there are states for which the original SC estimator performs poorly, while the DID estimator provides a good counterfactual. This happens when the treated state is outside the convex hull of the outcomes of the other states. There are also states for which the DID estimator performs poorly, while the original SC estimator provides a good counterfactual. This happens when the DID fails to match the temporal pattern of the treated state, while the SC estimator does a better job in this dimension. Interestingly, the demeaned SC estimator performs as well as the DID estimator when the DID estimator works better, and as well as the original SC estimator when the original SC estimator works better.
Importantly, it is not possible, in general, to compare the original and the demeaned SC estimators in terms of bias and variance. For example, the original SC estimator may lead to lower bias if we believe it is only possible to reproduce the trend of a series if we also reproduce its level. In this case, matching also on the levels would help provide a better approximation to the factor loadings of the treated unit associated with time-varying common trends. Moreover, the demeaning process may increase the variance of the estimator for a finite $T_0$. Therefore, it is not clear whether demeaning is the best option in all applications. Still, this demeaning process allows us to provide conditions under which the SC method dominates the DID estimator, which would not be the case if we considered the original SC estimator.
\section{Other related estimators} \label{Alternatives}
We show in Appendix \ref{relaxing_constraints} that our main result that the original and the demeaned SC estimators are generally asymptotically biased if there are unobserved time-varying confounders (Propositions \ref{main_result} and \ref{Prop_demeaned}) still applies if we also relax the non-negativity and the adding-up constraints, which essentially leads to the panel data approach suggested by \cite{Hsiao}, and further explored by \cite{Li}.\footnote{In this case, since we do not constrain the weights to sum to 1, we need to adjust Assumption \ref{assumptions_lambda} so that it also includes convergence of the pre-treatment averages of the first and second moments of $\delta_t$.} Our conditions for unbiasedness of the SC estimator also apply to the estimators proposed by \cite{Carvalho2015} and \cite{Carvalho2016b} when $J$ is fixed.
These papers rely on assumptions that essentially imply no selection on unobservables to derive consistency results, which reconciles our results with theirs. \cite{Hsiao} and \cite{Li} implicitly rely on stability of the linear projection of the potential outcomes of the treated unit on the outcomes of the control units, before and after the intervention, to show that their proposed estimators are unbiased and consistent. See, for example, equation A.4 from \cite{Li}. The linear projection of $y_{0t}^N$ on $\mathbf{y}_t$ for any given $t$ is given by $\delta_1(t) + \mathbf{y}_t ' \delta(t)$, where
\begin{eqnarray} \label{linear_projection}
\begin{cases}
\delta(t) = \left[ \boldsymbol{\mu} var(\lambda_t | D(0,0)=1) \boldsymbol{\mu}' \right]^{-1} \boldsymbol{\mu} var(\lambda_t | D(0,0)=1) \mu_0 \mbox{, and} \\
\delta_1(t) = \mathbb{E}[\lambda_t | D(0,0)=1] (\mu_0 - \boldsymbol{\mu}' \delta(t)).
\end{cases}
\end{eqnarray}
Therefore, in general, we will only have $(\delta_1(t),\delta(t))$ constant for all $t$ if the distribution of $\lambda_t$ conditional on $D(0,0)=1$ is stable over time. However, the idea that treatment assignment is correlated with the factor model structure essentially means that the distribution of $\lambda_t$ conditional on $D(0,0)=1$ is different before and after the treatment assignment. In this case, it would not be reasonable to assume that the parameters of the linear projection of $y_{0t}^N$ on $\mathbf{y}_t$ are the same for $t \in \mathcal{T}_0$ and $t \in \mathcal{T}_1$ if we consider that treatment assignment is correlated with the factor model structure. \cite{2018arXiv181210820C} assume that $y_{0t}^N$ and $\mathbf{y}_t$ are covariance-stationary for all periods (see their Assumption 6), which implies that $(\delta_1(t),\delta(t))$ is constant for all $t$. Therefore, they also implicitly assume that there is no selection on unobservables. Since they consider a setting with both large $J$ and $T$, however, it is possible that their estimator is consistent when there is selection on unobservables under conditions similar to the ones considered by \cite{Ferman}.
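The instability of the projection coefficients under selection on unobservables is easy to see numerically. The sketch below computes the population projection of $y^N_{0t}$ on $\mathbf{y}_t$ from the model-implied moments (a variant of equation (\ref{linear_projection}) that also includes the idiosyncratic variance in $var(\mathbf{y}_t)$, with hypothetical loadings), for a pre-treatment regime and a post-treatment regime in which $\mathbb{E}[\lambda_t | D(0,0)=1]$ shifts:
\begin{verbatim}
import numpy as np

M = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 1.0], [1.0, 2.0]])
mu0 = np.array([1.0, 0.5])
sigma2 = 0.5

def projection(mean_lam, var_lam):
    """Population projection of y0^N on the control outcomes implied by
    the factor model, using model moments (idiosyncratic variance included)."""
    V = M @ var_lam @ M.T + sigma2 * np.eye(M.shape[0])  # Var(y_t)
    c = M @ var_lam @ mu0                                # Cov(y_t, y0_t^N)
    delta = np.linalg.solve(V, c)
    delta1 = mean_lam @ (mu0 - M.T @ delta)
    return delta1, delta

pre = projection(np.zeros(2), np.eye(2))            # E[lam | D] = 0 pre-treatment
post = projection(np.array([1.0, 0.0]), np.eye(2))  # selection shifts factor 1
print(pre)
print(post)
\end{verbatim}
When only the conditional mean of $\lambda_t$ shifts, the slope $\delta(t)$ is unchanged but the intercept $\delta_1(t)$ is not, so a projection fitted on pre-treatment data extrapolates with a bias.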
\cite{Carvalho2015}, \cite{Carvalho2016b}, and \cite{Masini} assume that the outcomes of the control units are independent of treatment assignment. If we consider the linear factor model structure from Assumption \ref{assumption_LFM}, then this essentially means that there is no selection on unobservables. Given Assumption \ref{assumption_exogeneity}, if treatment assignment is correlated with the potential outcomes of the treated unit, then it must be correlated with $\lambda_t \mu_0$. However, if this is the case, then treatment assignment must also be correlated with the outcomes of at least some control units, implying that their assumption that the outcomes of the control units are independent of treatment assignment would be violated. \cite{Zhou} also consider this independence assumption. Since they also consider a setting with both large $J$ and $T$, however, it should be possible to consider a different set of assumptions, such as the ones considered by \cite{Ferman}, so that their estimator is asymptotically unbiased.
Overall, our results clarify what selection on unobservables means in this setting, and the conditions under which these estimators are asymptotically unbiased when $J$ is fixed.
\section{Model with ``explosive'' common factors} \label{Setting2}
Many SC applications present time-series patterns that are not consistent with Assumption \ref{assumptions_lambda}, including the applications considered by \cite{Abadie2003}, \cite{Abadie2010}, and \cite{Abadie2015}. This will be the case whenever we consider outcome variables that exhibit non-stationarities, such as GDP and average wages.
We consider now the case in which the first and second moments of a subset of the common factors diverge. We modify Assumption \ref{assumption_LFM}.
\begin{assumption_b}{\ref{assumption_LFM}$'$}[potential outcomes]
\normalfont
Potential outcomes are given by
\begin{eqnarray} \label{explosive_model}
\begin{cases} y_{jt}^N = \lambda_t \mu_j + \gamma_t \theta_j + \epsilon_{jt} \\
y_{jt}^I = \alpha_{jt} + y_{jt}^N, \end{cases}
\end{eqnarray}
where $\lambda_t = (\lambda_t^1,...,\lambda_t^{F_0})$ is a $(1 \times F_0)$ vector of $I(0)$ common factors, and $\gamma_t = (\gamma_t^1,...,\gamma_t^{F_1})$ is a $(1 \times F_1)$ vector of common factors that are $I(1)$ and/or polynomial time trends $t^f$, while $\mu_j$ and $\theta_j$ are the vectors of factor loadings associated with these common factors. The time effect $\delta_t$ can be included in either vector $\lambda_t$ or $\gamma_t$.
\end{assumption_b}
Differently from the previous sections, in order to consider the possibility that treatment starts after a large number of periods in which some common factors may be $I(1)$ and/or polynomial time trends, we label the periods as $t=1,...,T_0,T_0+1,...,T$. We modify Assumption \ref{assumptions_lambda} to determine the behavior of the common factors and the idiosyncratic shocks in the pre-treatment periods.
\begin{assumption_b}{\ref{assumptions_lambda}$'$}[stochastic processes]
\normalfont Conditional on $D(0,T_0)=1$, the process $z_t = (\epsilon_{0t},...,\epsilon_{Jt},\lambda_t)$ is $I(0)$ and weakly stationary with finite fourth moments, while the components of $\gamma_t$ are $I(1)$ and/or polynomial time trends $t^f$ for $t=1,...,T_0$.
\end{assumption_b}
Assumption \ref{assumptions_lambda}$'$ restricts the behavior of the common factors in the pre-treatment periods.
However, this assumption allows for correlation between treatment assignment and common factors in the post-intervention periods. For example, if $\gamma^k_t = \gamma^k_{t-1} + \eta_t$, then Assumption \ref{assumptions_lambda}$'$ implies that $\eta_t$ has mean zero for all $t \leq T_0$. However, it may be that $\mathbb{E}[\eta_t | D(0,T_0)=1] \neq 0$ for $t>T_0$. This assumption could be easily relaxed to allow for $\mathbb{E}[\eta_t | D(0,T_0)=1] \neq 0$ for a fixed number of periods prior to the start of the treatment.
We also consider an additional assumption on the existence of weights that reconstruct the factor loadings of unit 0 associated with the non-stationary common trends.
\begin{assumption}[existence of weights]
\label{non_stationary_assumption}
\normalfont
\begin{eqnarray*}
\exists ~ \textbf{w}^\ast \in W ~ | ~ \theta_0 = \sum_{j \neq 0} {w_j^\ast} \theta_j
\end{eqnarray*}
\end{assumption}
where $W$ is the set of possible weights given the constraints on the weights the researcher is willing to consider. For example, \cite{Abadie2010} suggest $W = \Delta^{J-1}$, while \cite{Hsiao} allows for $W=\mathbb{R}^J$. Let $\Phi_1$ be the set of weights in $W$ that reconstruct the factor loadings of unit 0 associated with the $I(1)$ common factors. Assumption \ref{non_stationary_assumption} implies that $\Phi_1 \neq \varnothing$. In a setting in which $\gamma_t$ is a vector of $I(1)$ common factors, Assumption \ref{non_stationary_assumption} implies that the vector of outcomes $(y_{0t},...,y_{Jt})'$ is co-integrated. The converse, however, is not true. Even if $(y_{0t},...,y_{Jt})'$ is co-integrated, we would still need a co-integrating vector $(1,-\mathbf{w})$ that satisfies $\mathbf{w} \in W$, so that Assumption \ref{non_stationary_assumption} holds. Importantly, we do \textit{not} need to assume the existence of weights in $\Phi_1$ that also reconstruct the factor loadings of unit 0 associated with the $I(0)$ common factors, so it may be that $\Phi = \varnothing$, where $\Phi$ is the set of weights that reconstruct \textit{all} factor loadings.
We consider an asymptotic exercise where $T_0 \rightarrow \infty$ with ``explosive'' common factors, so it is not possible to fix the label of the post-treatment periods, as we do in Sections \ref{Setting1} and \ref{DID}. Instead, we consider the asymptotic distribution of the estimator for the treatment effect $\tau$ periods after the start of the treatment. As in Section \ref{Setting1}, we define $\boldsymbol{\mu} \equiv [\mu_1 \hdots \mu_J]'$. In this case, this $J \times F_0$ matrix contains the information only on the factor loadings associated with the stationary common factors.
\begin{proposition} \label{I1_result}
\normalfont
Under Assumptions \ref{assumption_LFM}$'$, \ref{assumption_sample}, \ref{assumption_exogeneity}, \ref{assumptions_lambda}$'$, and \ref{non_stationary_assumption}, for $t = T_0 + \tau$, $\tau>0$,
\begin{eqnarray}
\hat \alpha_{0t}^{\mbox{\tiny SC$'$}} \buildrel p \over \rightarrow \alpha_{0t} + \left( \epsilon_{0t} - \mathbf{\bar w} '\boldsymbol{\epsilon}_t \right) + (\lambda_t-\omega_0) \left(\mu_0 - \boldsymbol{\mu}' \mathbf{ \bar w} \right) \mbox{ when } T_0 \rightarrow \infty
\end{eqnarray}
where $\mu_0 \neq \boldsymbol{\mu}' \mathbf{ \bar w}$, unless $ \sigma_\epsilon^2=0 $ or $\exists \textbf{w} \in \Phi | \textbf{w} \in \underset{{\textbf{w} \in W}}{\mbox{argmin}} \left\{ \mathbf{w}'\mathbf{w} \right\}$.
\end{proposition}
\begin{proof}
Details in Appendix \ref{Prop2}.
\end{proof}
Proposition \ref{I1_result} has two important implications. First, if Assumption \ref{non_stationary_assumption} is valid, then the asymptotic distribution of the demeaned SC estimator does not depend on the non-stationary common trends. The intuition for this result is the following. The demeaned SC weights will converge to weights that reconstruct the factor loadings of the treated unit associated with the non-stationary common trends. Interestingly, while $\mathbf{\widehat w}$ will generally be only $\sqrt{T_0}$-consistent when $\Phi_1$ is not a singleton, we show in Appendix \ref{Prop2} that there are linear combinations of $\mathbf{\widehat w}$ that will converge at a faster rate, implying that $\gamma_t ( \theta_0 - \sum_{j \neq 0}\hat w_j \theta_j ) \buildrel p \over \rightarrow 0$, despite the fact that $\gamma_t$ explodes when $T_0 \rightarrow \infty$. Therefore, such non-stationary common trends will not lead to asymptotic bias in the SC estimator. Second, the demeaned SC estimator will be biased if there is correlation between treatment assignment and the $I(0)$ common factors. The intuition is that the demeaned SC weights will converge in probability to weights in $\Phi_1$ that minimize the variance of the $I(0)$ process $u_t = y_{0t} - \mathbf{w}'\mathbf{y}_t= \lambda_t (\mu_0 - \boldsymbol{\mu}'\mathbf{w}) + (\epsilon_{0t} - \mathbf{w}'\boldsymbol{\epsilon}_t)$. Following the same arguments as in Proposition \ref{main_result}, $\mathbf{\widehat w}$ will not eliminate the $I(0)$ common factors, unless $ \sigma_\epsilon^2=0 $ or it happens that there is a $\textbf{w} \in \Phi$ that also minimizes the linear combination of idiosyncratic shocks.
The result that the asymptotic distribution of the SC estimator does not depend on the non-stationary common trends depends crucially on Assumption \ref{non_stationary_assumption}. If there were no linear combination of the control units that reconstructs the factor loadings of the treated unit associated with the non-stationary common trends, then the asymptotic distribution of the SC estimator would trivially depend on these common trends, which might lead to bias in the SC estimator if treatment assignment is correlated with such non-stationary trends. Testing whether treated and control units' outcomes are co-integrated can potentially inform us about whether Assumption \ref{non_stationary_assumption} is valid. For example, we could consider tests like the ones proposed by \cite{Phillips} and \cite{Johansen}. However, such tests do not provide a definitive answer on whether Assumption \ref{non_stationary_assumption} is valid. It may be that we reject the null that the series are not co-integrated, but there is no co-integrating vector $(1,-\mathbf{w})$ that satisfies $\mathbf{w} \in W$.
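As a practical illustration of such pre-tests, the sketch below (ours, with lag order and deterministic terms chosen arbitrarily) runs the Johansen trace test and, as a stand-in for the residual-based test of \cite{Phillips}, an Engle--Granger-type test as implemented in \texttt{statsmodels}:
\begin{verbatim}
import numpy as np
from statsmodels.tsa.stattools import coint
from statsmodels.tsa.vector_ar.vecm import coint_johansen

def cointegration_checks(y0, Y_controls):
    """y0: (T,) treated series; Y_controls: (T, J) controls."""
    data = np.column_stack([y0, Y_controls])
    joh = coint_johansen(data, det_order=0, k_ar_diff=1)
    # Trace statistic for rank 0 against its 5% critical value:
    rejects_no_coint = joh.lr1[0] > joh.cvt[0, 1]
    # Residual-based (Engle-Granger-type) test of y0 on controls:
    t_stat, p_value, _ = coint(y0, Y_controls)
    return rejects_no_coint, p_value
\end{verbatim}
Neither outcome settles whether a co-integrating vector with $\mathbf{w} \in W$ exists, for the reason given above.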
Proposition \ref{I1_result} remains valid when we relax the adding-up and/or the non-negativity constraints, with minor variations in the conditions for unbiasedness.\footnote{Relaxing the adding-up constraint makes the estimator biased if $\delta_t$ is correlated with treatment assignment and is $I(0)$. If $\delta_t$ is $I(1)$, then the weights will converge to sum to one even when such a restriction is not imposed, so this would not generate bias. Whether or not the non-negativity constraint is included does not alter the conditions for unbiasedness, although it may be that Assumption \ref{non_stationary_assumption} is valid in a model without the non-negativity constraints, but not valid in a model with these constraints.} However, these results are not valid when we consider the no-intercept constraint, as is the case for the original SC estimator. When the intercept is not included, it remains true that $\mathbf{\widehat w} \buildrel p \over \rightarrow \mathbf{\bar w} \in \Phi_1$. However, in this case, the weights will not converge fast enough to compensate for the fact that $\gamma_t$ explodes, implying that the result from Proposition \ref{I1_result} that the asymptotic distribution of the estimator does not depend on the non-stationary common factor does not hold if we consider the estimator with no intercept. We present a counter-example in Appendix \ref{Example_non_stationary}.
The results from Proposition \ref{I1_result} suggest that correlation between treatment assignment and stationary common factors, beyond such non-stationary trends, may lead to bias in the SC estimator. Therefore, we recommend that researchers \textit{also} present the pre-treatment fit after eliminating non-stationary trends as an additional diagnostic for the SC estimator, as this should be more indicative of potential bias from possible correlation between treatment assignment and stationary common factors. To illustrate this point, we consider the application presented by \cite{Abadie2003}.
We present in Figure \ref{basque}.A the per capita GDP time series for the Basque Country and for other Spanish regions, while in Figure \ref{basque}.B we replicate Figure 1 from \cite{Abadie2003}, which displays the per capita GDP of the Basque Country contrasted with the per capita GDP of a SC unit constructed to provide a counterfactual for the Basque Country without terrorism. We construct three different SC units: one with the original SC estimator using all pre-treatment outcome lags as predictors, one with the demeaned SC estimator using all pre-treatment outcome lags as predictors, and one with the specification considered by \cite{Abadie2003}. In this application, the SC units are similar for all those specifications. Figure \ref{basque}.B displays a remarkably good pre-treatment fit, regardless of the specification. However, the per capita GDP series is clearly non-stationary, with all regions displaying similar trends before the intervention. Therefore, in light of Proposition \ref{I1_result}, it may still be that correlation between treatment assignment and common factors beyond this non-stationary trend leads to bias. In order to assess this possibility, we de-trend the data, so that we can better assess whether factor loadings associated with stationary common factors are also well matched. We subtract from the outcomes of the treated and control units the average of the control units at time $t$ ($a_t=\frac{1}{J} \sum_{j \neq 0}y_{jt}$). {Note that, under the adding-up constraint ($\sum_{j \neq 0} w_j=1$), the SC weights with this de-trended data will be numerically the same as the SC weights using the original data.} If the non-stationarity comes from a common factor $\delta_t$ that affects every unit in the same way, then the series $\tilde y_{jt} = y_{jt} - \frac{1}{J}\sum_{j' \neq 0}y_{j't}$ would not display non-stationary trends.\footnote{If there are other sources of non-stationarity, then the series would remain non-stationary even after such a transformation. In this case, other strategies to de-trend the series could be used, such as, for example, considering parametric trends.}
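The de-trending step itself is a one-line transformation; a minimal sketch (ours) is:
\begin{verbatim}
import numpy as np

def detrend_by_control_mean(y0, Y_controls):
    """Subtract the cross-sectional mean of the controls at each t.
    Under the adding-up constraint the SC weights are unchanged."""
    a_t = Y_controls.mean(axis=1)       # a_t = (1/J) sum_j y_jt
    return y0 - a_t, Y_controls - a_t[:, None]
\end{verbatim}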
As shown in Figure \ref{basque}.C, in this case, the treated and SC units do not display a non-stationary trend. The pre-treatment fit is still good for this de-trended series, but not as good as in the previous case, providing a better assessment of possible mismatches in factor loadings associated with stationary trends. In the presence of non-stationary common factors, a possible bias due to a correlation between treatment assignment and stationary common factors should become small relative to the scale of the outcome variable when $T_0 \rightarrow \infty$. However, this empirical illustration suggests that, for a finite $T_0$, a mismatch in factor loadings associated with stationary common factors might still be relevant, even when non-stationary common factors lead to graphs with seemingly perfect pre-treatment fit when we consider the variables in level. Finally, we consider the tests proposed by \cite{Phillips} and \cite{Johansen} in this application. The test proposed by \cite{Johansen} rejects the null that the series are not co-integrated, while the test proposed by \cite{Phillips} does not reject. This is consistent with the results from \cite{PESAVENTO2004349}, who shows that the test proposed by \cite{Johansen} is more powerful in small samples. Therefore, we take that as supporting evidence that Assumption \ref{non_stationary_assumption} is valid in this application.\footnote{We apply these tests considering the treated unit and the control units that received the three largest SC weights. Since, based on the test proposed by \cite{Johansen}, we find evidence that there is a co-integrating vector for this subset of series, then a vector including zeros for the series not included in the test would also be a co-integrating vector for all series. }
Importantly, our results do not imply that one should not use the SC method when the data is non-stationary. On the contrary, we show that the SC method is very efficient in dealing with non-stationary trends. Indeed, the seemingly perfect pre-treatment fit when we consider the outcomes in level suggests that the method is highly successful in taking non-stationary trends into account, which is an important advantage of the method relative to alternatives such as DID. Our only suggestion is to \textit{also} present graphs with the de-trended series to have a better assessment of possible imbalances in the factor loadings associated with stationary common factors, beyond those non-stationary trends. Another possibility would be to apply the SC method to other transformations that make the data stationary. For example, one could look at first differences or at growth rates instead of applying the method to the data in level. In this case, however, the estimator would not be numerically the same as the estimator using the original data.
\section{Particular Class of Linear Factor Models \& Monte Carlo Simulations} \label{particular_model}
We consider now in detail a particular class of linear factor models in which all units are divided into groups that follow different time trends. We present both theoretical results and MC simulations for these models. In Section \ref{MC1} we consider the case with stationary common factors, while in Section \ref{MC2} we consider a case in which there are both $I(1)$ and $I(0)$ common factors.
\subsection{Model with stationary common factors} \label{MC1}
We consider first a model in which the $J$ control units are divided into $K$ groups, where for each $j$ we have that
\begin{eqnarray} \label{dgp}
y_{jt}(0) = \delta_{t} + \lambda^k_{t} + \epsilon_{jt}
\end{eqnarray}
for some $k=1,...,K$. The potential outcome of the treated unit is given by $y_{0t}(0) = \delta_{t} + \lambda^1_{t} + \epsilon_{0t}$. As in Section \ref{Setting1}, let $t=-T_0+1,...,0,1,...,T_1$. We assume that $\frac{1}{T_0} \sum_{t=-T_0+1}^{0} \lambda_t^k \buildrel p \over \rightarrow 0 $, $\frac{1}{T_0} \sum_{t=-T_0+1}^{0} (\lambda_t^k)^2 \buildrel p \over \rightarrow 1 $, $\frac{1}{T_0} \sum_{t=-T_0+1}^{0} \epsilon_{jt} \buildrel p \over \rightarrow 0 $, $\frac{1}{T_0} \sum_{t=-T_0+1}^{0} \epsilon_{jt}^2 \buildrel p \over \rightarrow \sigma_\epsilon^2$ and $\frac{1}{T_0} \sum_{t=-T_0+1}^{0} \lambda_t^k \epsilon_{jt} \buildrel p \over \rightarrow 0 $. {As explained in Section \ref{Setting1}, these conditions would be satisfied if, for example, conditional on $D(0,0)=1$, $(\epsilon'_{t},\lambda_t)$ is $\alpha$-mixing with exponential speed, with uniformly bounded fourth moments in the pre-treatment period, and $\epsilon_{t}$ and $\lambda_t$ are independent.}
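For concreteness, the following sketch (ours) simulates this DGP with $\delta_t$ set to zero and, anticipating the simulations below, shifts the post-treatment mean of $\lambda^1_t$ by $\gamma$ to induce correlation between treatment assignment and the common factor:
\begin{verbatim}
import numpy as np

def simulate_dgp(T0, T1, J=20, K=10, sigma_eps=1.0, rho=0.5,
                 gamma=1.0, seed=0):
    """y_jt = lambda^k_t + eps_jt; treated unit follows group 0."""
    rng = np.random.default_rng(seed)
    T = T0 + T1
    lam = np.zeros((T, K))
    for t in range(1, T):   # AR(1) factors with unit variance
        lam[t] = rho * lam[t-1] + rng.normal(0, np.sqrt(1-rho**2), K)
    lam[T0:, 0] += gamma    # E[lambda^1_t | D] = gamma for t > 0
    group = np.repeat(np.arange(K), J // K)
    eps = rng.normal(0, sigma_eps, (T, J + 1))
    y0 = lam[:, 0] + eps[:, 0]          # treated unit
    Y = lam[:, group] + eps[:, 1:]      # J control units
    return y0, Y
\end{verbatim}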
\subsubsection{Asymptotic Results} \label{Asymptotic}
Consider first an extreme case in which $K=2$, so the first half of the $J$ control units follows the parallel trend given by $\lambda^1_t$ (which is the parallel trend followed by the treated unit), while the other half of the control units follows the parallel trend given by $\lambda_t^2$. In this case, an infeasible SC estimator would only assign positive weights to the control units in the first group.
We calculate, for this particular class of linear factor models, the asymptotic proportion of misallocated weights of the SC estimator using all pre-treatment lags as predictors. Since we assume $\frac{1}{T_0} \sum_{t=-T_0+1}^{0} \lambda_t^k \buildrel p \over \rightarrow 0 $, in this case the original and the demeaned SC estimator will have the same asymptotic distribution. We consider a more general setting in which the asymptotic distribution of these estimators may differ in Appendix \ref{alternative_DGP}. From the minimization problem \ref{objective_function_limit}, we have that, when $T_0 \rightarrow \infty$, the proportion of misallocated weights converges to
\begin{eqnarray}
\gamma_2(\sigma_\epsilon^2,J) = \sum_{j=\frac{J}{2}+1}^{J} \bar w_j = \frac{ \sigma_\epsilon^2}{2 \sigma^2_\epsilon+J}
\end{eqnarray}
where $\gamma_K(\sigma_\epsilon^2,J)$ is the proportion of misallocated weights when the $J$ control units are divided in $K$ groups.
We present in Figure \ref{Miss_bias}.A the relationship between asymptotic misallocation of weights, variance of the idiosyncratic shocks, and number of control units. For a fixed $J$, the proportion of misallocated weights converges to zero when $\sigma_\epsilon^2 \rightarrow 0$, while this proportion converges to $\frac{1}{2}$ (the proportion of misallocated weights of DID) when $\sigma_\epsilon^2 \rightarrow \infty$. This is consistent with the results we have in Section \ref{Setting1}. Moreover, for a given $\sigma_\epsilon^2$, the proportion of misallocated weights converges to zero when the number of control units goes to infinity, which is consistent with the results from \cite{Ferman}.
In this example, the SC estimator, for $t>0$, converges to
\begin{eqnarray} \label{eq_example}
\hat \alpha_{0t} \buildrel p \over \rightarrow \alpha_{0t} + \left( \epsilon_{0t} - \mathbf{\bar w}' \boldsymbol{\epsilon}_{t} \right) + \lambda^1_t \times \gamma_2(\sigma_\epsilon^2,J) - \lambda^2_t \times \gamma_2(\sigma_\epsilon^2,J),
\end{eqnarray}
so the potential bias due to correlation between treatment assignment and common factors (for example, $\mathbb{E}[\lambda_t^1|D(0,0)=1]\neq 0$ for $t>0$) will directly depend on the proportion of misallocated weights.
We consider now another extreme case in which the $J$ control units are divided into $K=\frac{J}{2}$ groups of two units each, where the units in each group follow the same parallel trend. In this case, there are two control units that follow the same trend as the treated unit, but the other units follow different trends. Importantly, as $J$ increases, the number of control units that could be used to reconstruct the factor loadings of the treated unit does not increase. In this case, the proportion of misallocated weights converges to
\begin{eqnarray}
\gamma_{\frac{J}{2}}(\sigma_\epsilon^2,J)=\sum_{j=3}^{J} \bar w_j = \frac{J-2}{J} \frac{\sigma^2_\epsilon}{\sigma^2_\epsilon+2}
\end{eqnarray}
We present the relationship between misallocation of weights, variance of the idiosyncratic shocks, and number of control units in Figure \ref{Miss_bias}.B. Again, the proportion of misallocated weights converges to zero when $\sigma_\epsilon^2\rightarrow 0$ and to the proportion of misallocated weights of DID when $\sigma_\epsilon^2 \rightarrow \infty$ (in this case, $\frac{J-2}{J}$). In contrast to the previous case, however, for a given $\sigma_\epsilon^2$, the proportion of misallocated weights is increasing in $J$, and converges to $\frac{\sigma_\epsilon^2}{\sigma_\epsilon^2+2}$ when $J \rightarrow \infty$. Therefore, the SC estimator would remain asymptotically biased even when the number of control units is large. This happens because, in this case, we are adding new units that are less correlated with the treated unit as we increase $J$, which is not consistent with the conditions derived by \cite{Ferman}. This highlights that our results are also relevant as a good approximation to the properties of the SC estimator when $J$ and $T_0$ are large, but control units become less correlated with the treated unit when $J \rightarrow \infty$.
In both cases, the proportion of misallocated weights is always lower than the proportion of misallocated weights of DID. Therefore, in this particular class of linear factor models, the asymptotic bias of the SC estimator will always be lower than the asymptotic bias of DID. If we further assume that the variance of common factors and idiosyncratic shocks remain constant in the pre- and post-intervention periods, then we also have that the SC estimator will have lower variance and, therefore, lower MSE relative to the DID estimator. However, this is not a general result, as we show in Appendix \ref{example}.
\subsubsection{Monte Carlo Simulations}
The results presented in Section \ref{Asymptotic} are based on large-$T_0$ asymptotics. We now consider, in MC simulations, the finite $T_0$ properties of the SC estimator. We present MC simulations using a data generating process (DGP) based on equation \ref{dgp}, with $K=10$ (that is, 10 groups of 2). We consider in our MC simulations $J=20$, $\lambda^k_t$ normally distributed following an AR(1) process with 0.5 serial correlation parameter in the pre-treatment periods, $\epsilon_{jt} \buildrel \mbox{\tiny iid} \over \sim N(0,\sigma^2_\epsilon)$, and $T_1=1$. We consider the case in which $\mathbb{E}[\lambda_t^1 | D(0,0)=1]=1$ for $t>0$. Therefore, an SC estimator that assigns positive weights only to control units 1 and 2 would be unbiased. However, if $\hat w_1 + \hat w_2<1$, then the estimator would be biased. We also impose that there is no treatment effect, i.e., $y_{jt}= y_{jt}(0) = y_{jt}(1)$ for each time period $t \in \left\lbrace -T_0+1,...,0,1,...,T_1 \right\rbrace$.\footnote{The SC weights are estimated using only the pre-treatment data, so the estimated weights would not differ whether we consider a DGP with zero or non-zero treatment effects. For this reason, if we have an estimated effect of $\hat \alpha_{0t}$ when the true effect is zero, then the estimated effect when the true effect is $a$ would simply be $\hat \alpha_{0t}+a$. Moreover, the bias, variance and MSE of $\hat \alpha_{0t}$ would remain exactly the same whether the true effect is zero or not.} We consider variations in the DGP in the following dimensions:
\begin{itemize}
\item The number of pre-intervention periods: $T_0 \in \{20, 50, 100 \}$.
\item The variance of the idiosyncratic shocks: $\sigma_\epsilon^2 \in \{ 0.5,1 \}$.
\end{itemize}
For each simulation, we calculate the original and the demeaned SC estimators using all pre-treatment outcome lags as predictors. We also calculate the DID estimator. For each estimator, we calculate the proportion of misallocated weights, the bias, and the standard deviation. For each scenario, we generate 5,000 simulations.
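A compact version of this exercise, reusing \texttt{simulate\_dgp} and \texttt{sc\_weights} from the sketches above (again our illustration, not the authors' code), is:
\begin{verbatim}
import numpy as np

def mc_experiment(n_sims=5000, T0=100, J=20, K=10, sigma_eps=1.0):
    """Mean misallocated weight and bias of the demeaned SC
    estimator at t = T0 + 1 (true effect is zero)."""
    misalloc, est = [], []
    for s in range(n_sims):
        y0, Y = simulate_dgp(T0, T1=1, J=J, K=K,
                             sigma_eps=sigma_eps, seed=s)
        mu0, mu = y0[:T0].mean(), Y[:T0].mean(axis=0)
        w = sc_weights(y0[:T0] - mu0, Y[:T0] - mu, simplex=True)
        misalloc.append(1.0 - w[0] - w[1])   # weight outside group 1
        est.append((y0[T0] - mu0) - (Y[T0] - mu) @ w)
    return np.mean(misalloc), np.mean(est), np.std(est)
\end{verbatim}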
In column 1 of Table \ref{Table_stationary}, we present the estimated $\hat \mu_0^1$ when $K=10$ for different values of $T_0$. Since $\mu_0^1=1$, note that the proportion of misallocated weights is equal to $1-\hat \mu_0^1$. Panel A considers the case with $\sigma^2_\epsilon=1$, while Panel B sets $\sigma^2_\epsilon=0.5$. Consistent with our analytical results from Section \ref{Asymptotic}, the misallocation of weights is increasing with the variance of the idiosyncratic shocks. With $T_0=100$, the proportion of misallocated weights is close to the asymptotic values, while the proportion of misallocated weights is substantially higher when $T_0$ is small. {This happens because the asymptotic values for the weights of each control unit $j=3,...,J$ --- which should be set to zero to generate an unbiased estimator --- are equal to $0.0167$. When $T_0 \rightarrow \infty$, the weights would be precisely estimated around this value, so the non-negativity constraints would not be (asymptotically) binding. In contrast, with fixed $T_0$ the estimator for these weights, if we ignore the non-negativity constraints, would be centered around $0.0167$, but there would be a positive probability that these weights are negative. Therefore, when we impose the non-negativity constraints, the expected value for the estimator of the weights for these control units would be greater than $0.0167$, generating more misallocation of weights. }
When $\mathbb{E}[\lambda_t^1 | D(0,0)=1]=\gamma$ for $t>0$, the bias of the SC estimator is equal to $\gamma \times (1-\hat \mu_0^1)$. Column 2 reports the case with $\gamma=1$. The asymptotic bias is roughly equal to one fourth of the asymptotic standard error of the SC estimator, and is relatively larger when $T_0$ is finite (column 3). If $T_1>1$ and we consider the average treatment effect across post-treatment periods, then the bias would remain the same, while the standard error would shrink at a rate $1/\sqrt{T_1}$, making the bias even more relevant relative to the standard error of the estimator.
Columns 4 to 6 present the simulation results for the demeaned SC estimator. As expected from the discussion in Section \ref{Asymptotic}, the original and the demeaned SC estimators behave very similarly when $T_0$ is large. When $T_0$ is small, however, demeaning implies a slightly larger standard error for the demeaned SC estimator. This happens because it estimates a constant term that is actually zero in this DGP. For the simulations considered in Table \ref{Table_stationary}, both the original and the demeaned SC estimators dominate the DID estimator in terms of bias and standard error.
In Appendix \ref{alternative_DGP}, we present alternative DGPs in which different units have different fixed effects. The demeaned SC estimator generally dominates the original SC estimator in terms of bias, at the expense of a slight increase in standard errors when $T_0$ is small. The original SC estimator would only have a lower bias than the demeaned SC estimator in extreme cases, in which matching on the fixed effects also helps matching on the factor loadings associated with the time-varying unobservables, and treatment assignment is strongly correlated with the time-varying unobservables. We also present settings in which the DID estimator may have a lower bias than the original SC estimator. In contrast, the demeaned SC estimator dominates the DID estimator in terms of bias and standard error in all scenarios. Overall, this suggests that the demeaned SC estimator can improve relative to DID even when the number of pre-treatment periods is not large and when the pre-treatment fit is imperfect, situations in which \cite{Abadie2015} suggest the method should not be used. However, a very important qualification is that, in these cases, the estimator requires stronger identification assumptions than stated in the original SC papers. More specifically, it is generally asymptotically biased if treatment assignment is correlated with time-varying confounders.
We also consider in Appendix \ref{Appendix_bai} the estimator proposed by \cite{Bai}, which fully exploits the factor model structure from Assumption \ref{assumption_LFM}. We first consider the same DGP presented in Table \ref{Table_stationary}, with $\sigma_\epsilon^2=1$, and assume that the number of factors is known. Note that, in this case, the DGP satisfies the conditions stated in Theorem 4 of \cite{Bai2003}, so that factor loadings can be consistently estimated with fixed $J$. While we find that the bias is close to zero when $T_0=500$, the standard error for this estimator is larger than the standard error of the SC estimator, implying a larger mean square error (MSE). When $T_0=50$, the estimator proposed by \cite{Bai} is biased, and the standard error is more than five times larger than the standard error of the SC estimator. These results are consistent with the findings from \cite{Zhou}, in that estimators that fully exploit the factor model structure may have worse finite sample properties given the larger number of parameters that are estimated. This problem should be aggravated once we take into account that the number of factors also generally has to be estimated. We also consider a setting in which the idiosyncratic shocks are heteroskedastic, so that the conditions stated in Theorem 4 of \cite{Bai2003} are not satisfied. In this case, the estimator proposed by \cite{Bai} is biased even when $T_0=500$. This is consistent with the literature on factor models, in that factor loadings cannot be consistently estimated unless we impose strong assumptions on the idiosyncratic shocks.
\subsection{Model with ``explosive'' common factors} \label{MC2}
We consider now a model in which a subset of the common factors is $I(1)$. We consider the following DGP:
\begin{eqnarray} \label{dgp_non}
y_{jt}(0) = \delta_{t} + \lambda^k_{t} +\gamma_t^r+ \epsilon_{jt}
\end{eqnarray}
for some $k=1,...,K$ and $r=1,...,R$. We maintain that $\lambda^k_t$ is stationary, while $\gamma_t^r$ follows a random walk.
\subsubsection{Asymptotic results}
Based on our results from Section \ref{Setting2}, the SC weights will converge to weights in $\Phi_1$ that minimize the second moment of the $I(0)$ process that remains after we eliminate the $I(1)$ common factor. Consider the case $K=\frac{J}{2}$ and $R=2$. Therefore, units $j=1,...,\frac{J}{2}$ follow the same non-stationary path $\gamma_t^1$ as the treated unit, although only control units 1 and 2 also follow the same stationary path $\lambda_t^1$ as the treated unit. In this case, asymptotically, all weights would be allocated among units 1 to $\frac{J}{2}$, eliminating the relevance of the $I(1)$ common factor. However, the allocation of weights within these units will not assign all weights to units 1 and 2, so the $I(0)$ common factor will remain relevant.
\subsubsection{Monte Carlo simulations}
In our MC simulations, we maintain that $\lambda^k_t$ is normally distributed following an AR(1) process with 0.5 serial correlation parameter, while $\gamma_t^r$ follows a random walk. We consider the case $K=10$ and $R=2$. For both the original and the demeaned SC estimators, the estimators for the factor loadings associated with the non-stationary common factor ($\hat \theta_0^1$) converge relatively fast to one. For example, they are $0.98$ when $T_0=50$. The reason is that, even with a moderate $T_0$, the $I(1)$ common factors dominate the idiosyncratic shocks, so the SC method is extremely efficient in selecting control units that follow the same non-stationary trend as the treated unit. In contrast, for both the original and the demeaned SC estimators, the estimators for the factor loading of the treated unit associated with the stationary common factor ($\hat \mu_0^1$, presented in columns 1 and 5 of Table \ref{Table_nonstationary}) are smaller than one, which generates bias when treatment assignment is correlated with the stationary common factors. In the non-stationary DGP, the proportion of misallocated weights ($1-\hat \mu_0^1$) is slightly lower than in the stationary DGP (presented in Section \ref{MC1}), because in the non-stationary DGP the weights are concentrated only in the 10 control units that follow the same non-stationary trend as the treated unit. As in Section \ref{MC1}, both the original and the demeaned SC estimators have a lower bias than the DID estimator.
We present the standard error for these estimators in columns 4, 8, and 10. For both the original and the demeaned SC estimators, the standard error converges to a finite value when $T_0 \rightarrow \infty $. This happens because the non-stationary trends are asymptotically eliminated (Proposition \ref{I1_result}). In contrast, the DID estimator does not eliminate the non-stationary trends, so its standard error diverges when $T_0 \rightarrow \infty$. Overall, these results suggest that the SC method provides substantial improvement relative to DID in this scenario, as the SC estimators are extremely efficient in capturing the $I(1)$ factors.
\section{Conclusion } \label{Conclusion}
We consider the properties of the SC and related estimators, in a linear factor model setting, when the pre-treatment fit is imperfect. We show that, in this framework, the SC estimator is generally biased if treatment assignment is correlated with the unobserved heterogeneity, and that such bias does not converge to zero even when the number of pre-treatment periods is large. Still, we also show that a modified version of the SC method can substantially improve relative to currently available methods, even if the pre-treatment fit is not close to perfect and if $T_0$ is not large. Moreover, we suggest that, in addition to the standard graph comparing treated and SC units, researchers should also present a graph comparing the treated and SC units after de-trending the data, so that it is possible to better assess whether there might be relevant possibilities for bias arising due to a correlation between treatment assignment and common factors beyond non-stationary trends. Overall, we show that the SC method can provide substantial improvement relative to alternative methods, even in settings where the method was not originally designed to work. However, researchers should be more careful in the evaluation of the identification assumptions in those cases. {Importantly, our results clarify the conditions in which the SC and related estimators are reliable, providing guidance on how applied researchers should justify the use of such estimators in empirical applications. }
\singlespacing
\bibliographystyle{aer}
\section{Supplemental Material}
\section{Principles of Floquet theory}
\label{suppl:floquet}
Here we formally derive the properties of Floquet theory
used in the main text. We consider the unitary time evolution operator
$U(t_1,t_2)$ and in particular $U(T,0)$.
Any unitary operator such as $U(T,0)$ has an orthonormal eigenbasis
$\{ \ket{\psi_m} \}$ satisfying
\be
\label{eq:eigenbasis}
U(T,0) \ket{\psi_m} = \lambda_m \ket{\psi_m}.
\ee
Since $U(T,0)$ is unitary, the absolute value of $\lambda_m$
is unity, so it can be written as
\be
\lambda_m = \exp(-i\epsilon_m T)
\ee
where the quasi-energy $\epsilon_m$ is uniquely defined only
if it is restricted to the interval $\epsilon_m\in (-\pi/T,\pi/T]$.
This is the temporal equivalent of a Brillouin zone.
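As a concrete numerical illustration (our toy example; the circularly driven two-level Hamiltonian below is an assumption made for the sketch, not the model of the main text), one can assemble $U(T,0)$ from a time-ordered product of short steps, diagonalize it, and fold the eigenphases into this temporal Brillouin zone:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def quasi_energies(T=1.0, omega0=1.0, drive=0.5, n_steps=400):
    """eps_m in (-pi/T, pi/T] for H(t) = (omega0/2) sz
    + drive*(cos(wt) sx + sin(wt) sy), with w = 2*pi/T."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    w, dt = 2*np.pi/T, T/n_steps
    U = np.eye(2, dtype=complex)
    for k in range(n_steps):   # time-ordered product for U(T,0)
        t = (k + 0.5)*dt
        H = 0.5*omega0*sz + drive*(np.cos(w*t)*sx + np.sin(w*t)*sy)
        U = expm(-1j*H*dt) @ U
    lam = np.linalg.eigvals(U)          # lam_m = exp(-i eps_m T)
    return np.sort(np.angle(np.conj(lam))/T)
\end{verbatim}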
Next, we take the states $\ket{\psi_m}$ as initial states, i.e.,
$\ket{\psi_m (t=0)}=\ket{\psi_m}$ for
a time-evolution according to the Schr\"odinger equation
\be
\label{eq:schrodinger}
i\partial_t \ket{\psi(t)} = \mathcal{H}(t) \ket{\psi(t)}.
\ee
We emphasize that the orthonormality and the completeness
persist in the course of the time evolution because it is unitary
\bes
\label{eq:on-psim}
\begin{align}
\bra{\psi_m(t)}\psi_n(t)\rangle &= \delta_{mn}
\\
\label{eq:on-psimb}
\sum_m \ket{\psi_m(t)} \bra{\psi_m(t)} &= \mathbbm{1}.
\end{align}
\ees
But these relations only hold if the time arguments in bra and ket
are the same. Since the states $\ket{\psi_m(t)}$ are solutions of the Schr\"odinger
equation
\be
U(t_1,t_2) \ket{\psi_m(t_2)} = \ket{\psi_m(t_1)}
\ee
holds by definition for all times $t_1$ and $t_2$.
Thus, the unitary time evolution is given by
\be
\label{eq:time-evolution}
U(t_1,t_2) = \sum_m \ket{\psi_m(t_1)} \bra{\psi_m(t_2)}.
\ee
One can verify that this solves the Schr\"odinger equation
\bes
\begin{align}
i\partial_{t_1} U(t_1,t_2) &= i\partial_{t_1} U(t_1,t_2)
\sum_m \ket{\psi_m(t_2)} \bra{\psi_m(t_2)}
\\
&= \sum_m i\partial_{t_1} \ket{\psi_m(t_1)} \bra{\psi_m(t_2)}
\\
&= \sum_m \mathcal{H}(t_1) \ket{\psi_m(t_1)} \bra{\psi_m(t_2)}
\\
&= \mathcal{H}(t_1) U(t_1,t_2).
\end{align}
\ees
where we used that the states $\ket{\psi_m(t_1)}$ fulfill
the Schr\"odinger equation in Eq.~\eqref{eq:schrodinger}. The initial condition
\be
U(t_2,t_2) = \mathbbm{1}
\ee
is fulfilled due to the completeness in Eq.~\eqref{eq:on-psimb}
of the states $\{ \ket{\psi_m(t)}\}$.
By construction [see Eq.~\eqref{eq:eigenbasis}], the property
\be
\ket{\psi_m(T)} = U(T,0)\ket{\psi_m} = \exp(-i\epsilon_m T) \ket{\psi_m}
\ee
holds. More generally, quasi-periodicity holds
\bes
\begin{align}
\ket{\psi_m(t+T)} &=U(t+T,T) \ket{\psi_m(T)}
\\
\label{eq:2nd_line}
&= U(t,0) \exp(-i\epsilon_m T) \ket{\psi_m(0)}
\end{align}
\ees
resulting from the periodicity of the unitary time evolution,
which in turn is implied by the periodicity of the Hamiltonian,
and from Eq.~\eqref{eq:eigenbasis}. Combining the unitary operator
and the ket in Eq.~\eqref{eq:2nd_line} yields
\be
\label{eq:quasi-period}
\ket{\psi_m(t+T)} = \exp(-i\epsilon_m T) \ket{\psi_m(t)}
\ee
which confirms that $\ket{\psi_m(t)}$ is periodic
\emph{up to the factor} $\exp(-i\epsilon_m T)$.
This is what is conventionally regarded as the Floquet theorem.
Finally, we define the states used in Eq.~(7) via
\be
\label{eq:define-m}
\ket{m,t} := \exp(i\epsilon_m t) \ket{\psi_m(t)}.
\ee
Clearly, these states are periodic, inheriting this property from
the quasi-periodicity in Eq.~\eqref{eq:quasi-period} of $\ket{\psi_m(t)}$.
In addition, they form an orthonormal basis
\bes
\label{eq:on-m}
\begin{align}
\bra{m,t} n,t\rangle &= \delta_{mn}
\\
\sum_m \ket{m,t} \bra{m,t} &= \mathbbm{1}
\end{align}
\ees
which results again from the orthonormality in Eq.~\eqref{eq:on-psim} of the states $\ket{\psi_m(t)}$.
The representation of the time evolution operator in Eq.~\eqref{eq:time-evolution}
can be expressed in terms of the states $\ket{m,t}$
as well
\be
U(t_1,t_2) = \sum_m \exp(-i\epsilon_m(t_1-t_2))\, \ket{m,t_1} \bra{m,t_2}
\ee
which confirms Eq.~(8). Thereby, all properties used in the
main text are derived.
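A direct numerical check of the periodicity of the states in Eq.~\eqref{eq:define-m}, using the same toy driven two-level model as in the sketch above (again our illustrative assumption, not the system of the main text), reads:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def H_toy(t, T=1.0, omega0=1.0, drive=0.5):
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    w = 2*np.pi/T
    return 0.5*omega0*sz + drive*(np.cos(w*t)*sx + np.sin(w*t)*sy)

def propagator(t1, t2, n_steps=400):
    """U(t1,t2), t1 > t2, from a time-ordered product."""
    ts = np.linspace(t2, t1, n_steps + 1)
    U = np.eye(2, dtype=complex)
    for a, b in zip(ts[:-1], ts[1:]):
        U = expm(-1j*H_toy(0.5*(a + b))*(b - a)) @ U
    return U

T = 1.0
lam, V = np.linalg.eig(propagator(T, 0.0))
eps = np.angle(np.conj(lam))/T     # quasi-energies
t = 0.37                           # arbitrary time
m_t = np.exp(1j*eps[0]*t) * (propagator(t, 0.0) @ V[:, 0])
m_tT = np.exp(1j*eps[0]*(t + T)) * (propagator(t + T, 0.0) @ V[:, 0])
print(np.max(np.abs(m_t - m_tT)))  # ~0 up to time-step error
\end{verbatim}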
\section{Sum rules for higher moments of the spectral densities in the Hubbard model}
We already discussed the zeroth moment sum rule in Eq.~(19), which is valid for any given Hamiltonian.
To analyze higher spectral moment sum rules, we have to specify the underlying model, as the sum rules depend on the particular form of the
Hamiltonian. Here we will present results for the Hubbard Hamiltonian, which is one of the simplest models to describe
electron-electron interactions. Furthermore, it is a model for which the sum rules are well-known
\cite{turko08}.
The Hubbard Hamiltonian is given by
\begin{align}
\mathcal{H}_\mathrm{H}\left(t\right)
=&
-\sum_{ij\sigma}
t_{ij}\left(t\right)c_{i\sigma}^\dagger c_{j\sigma}^{\phantom\dagger}
+
\sum_i
U_i\left(t\right)
n_{i\downarrow}n_{i\uparrow}
\\&
-
\sum_i
\mu_i\left(t\right)
\left(
n_{i\downarrow}+n_{i\uparrow}
\right)\,,\nonumber
\end{align}
where $t_{ij}\left(t\right)$ is the time-dependent Hermitian electron hopping matrix, $U_i\left(t\right)$ is the time-dependent on-site Hubbard repulsion,
and $\mu_i\left(t\right)$ is a time-dependent local site energy. To simplify the formulas, we introduce the notation $\left[\tilde{O}=\hat O\left(t\ave\right)\right]$
to indicate that the operator (or function) is evaluated at the average time $t\ave$ after taking the limit $t\rel\rightarrow 0$. We assume that $\tilde{t}_{ij}$,
$\tilde{U}_i$ and $\tilde{\mu}_i$ are $T$-periodic in $t\ave$ and can therefore be written as a Fourier series
\begin{align}
\tilde{t}_{ij}=&
\sum_n t_{ij}^{n} \exp\left[in\frac{2\pi}{T}t\ave\right]
\end{align}
(analogous for $\tilde{U}_i$ and $\tilde{\mu}_i$). The zeroth moment sum rule is given by
$\mu_{ij\sigma}^{R0}\left(t\ave\right)=\delta_{ij}$, so integrating over one period
\begin{equation}
\frac{1}{T}\int_x^{x+T}
\mu_{ij\sigma}^{R0}\left(t\ave\right)
\,\mathrm{d}t\ave
=
\delta_{ij}
\end{equation}
does not change the result. This is different for the first moment, which is given by
\begin{equation}
\mu_{ij\sigma}^{R1}\left(t\ave\right)=
-\tilde{t}_{ij}-\delta_{ij}\tilde{\mu}_i+\delta_{ij}\tilde{U}_i\left\langle\tilde{n}_{i\bar{\sigma}}\right\rangle\,,
\end{equation}
so the integration yields
\begin{align}
\frac{1}{T}\int_x^{x+T}
\mu_{ij\sigma}^{R1}\left(t\ave\right)
\,\mathrm{d}t\ave
=&
-t_{ij}^0
-
\delta_{ij}
\mu_i^0
\\&+
\delta_{ij}
\sum_m
U_i^m
\left\langle n_{i\bar{\sigma}}\right\rangle^{-m}\,.\nonumber
\end{align}
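When evaluating such period averages numerically, the Fourier coefficients can be read off from uniform samples of one period; a minimal sketch (ours; the sign convention matches the series above) is:
\begin{verbatim}
import numpy as np

def fourier_coeffs(f_samples):
    """c_n with f(t) = sum_n c_n exp(i n 2 pi t / T), from N
    uniform samples over one period."""
    N = len(f_samples)
    c = np.fft.fft(f_samples) / N
    n = np.fft.fftfreq(N, d=1.0/N).astype(int)
    return dict(zip(n, c))
\end{verbatim}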
The second moment sum rule is given by
\begin{eqnarray}
\mu_{ij\sigma}^{R2}\left(t\ave\right)
&=&
\sum_k
\tilde{t}_{ik}
\tilde{t}_{kj}
\\\nonumber
&&+
\tilde{t}_{ij}
\tilde{\mu}_i
+
\tilde{t}_{ij}
\tilde{\mu}_j
-
\tilde{t}_{ij}
\tilde{U}_i
\left\langle\tilde{n}_{i\bar{\sigma}}\right\rangle
-
\tilde{t}_{ij}
\tilde{U}_j
\left\langle\tilde{n}_{j\bar{\sigma}}\right\rangle
\\\nonumber
&&+
\delta_{ij}\left(
\tilde{\mu}_i^2
+
\tilde{U}_i^2
\left\langle\tilde{n}_{i\bar{\sigma}}\right\rangle^2
-
2\tilde{\mu}_i\tilde{U}_i
\left\langle\tilde{n}_{i\bar{\sigma}}\right\rangle
\right)
\\\nonumber
&&+
\delta_{ij}\left(
\tilde{U}_i^2
\left\langle\tilde{n}_{i\bar{\sigma}}\right\rangle
-
\tilde{U}_i^2
\left\langle\tilde{n}_{i\bar{\sigma}}\right\rangle^2
\right)
\end{eqnarray}
which, when integrated over one period becomes
\begin{eqnarray}
&&\frac{1}{T}\int_x^{x+T}
\mu_{ij\sigma}^{R2}\left(t\ave\right)
\,\mathrm{d}t\ave
=
\sum_{k,n}
t_{ik}^n
t_{kj}^{-n}
\\\nonumber
&&\quad+
\sum_n\left(
t_{ij}^n
\mu_i^{-n}
+
t_{ij}^n
\mu_j^{-n}
\right)
\\\nonumber
&&\quad-
\sum_{nm}
\left(
t_{ij}^{n+m}
U_i^{-n}
\left\langle n_{i\bar{\sigma}}\right\rangle^{-m}
+
t_{ij}^{n+m}
U_j^{-n}
\left\langle n_{j\bar{\sigma}}\right\rangle^{-m}
\right)
\\\nonumber
&&\quad+
\delta_{ij}
\sum_n
\left|\mu_i^n\right|^2
\\\nonumber
&&\quad
-
2\delta_{ij}
\sum_{mn}
\mu_i^{n+m}
{U}_i^{-n}
\left\langle n_{i\bar{\sigma}}\right\rangle^{-m}
\\\nonumber
&&\quad+
\delta_{ij}
\sum_{mn}
U_i^{n+m}
U_i^{-n}
\left\langle n_{i\bar{\sigma}}\right\rangle^{-m}\,.
\end{eqnarray}
It is obvious that the mixing of Fourier coefficients increases as we go to higher moments.
Finally, we would like to discuss the zeroth moment of the
self-energy, given by
\begin{eqnarray}
C_{ij\sigma}^{R0}\left(t\ave\right)
&=&
\delta_{ij}\left(
\tilde{U}_i^2
\left\langle\tilde{n}_{i\bar{\sigma}}\right\rangle
-
\tilde{U}_i^2
\left\langle\tilde{n}_{i\bar{\sigma}}\right\rangle^2
\right)\,.
\end{eqnarray}
Here the integration over one period yields
\begin{eqnarray}
&&\frac{1}{T}\int_x^{x+T}\nonumber
C_{ij\sigma}^{R0}\left(t\ave\right)
\,\mathrm{d}t\ave
=
\delta_{ij}
\sum_{mn}
U_i^{n+m}
U_i^{-n}
\left\langle n_{i\bar{\sigma}}\right\rangle^{-m}\\
&&\qquad\qquad-
\delta_{ij}
\sum_{lmn}
U_i^{l+m+n}
U_i^{-l}
\left\langle n_{i\bar{\sigma}}\right\rangle^{-m}
\left\langle n_{i\bar{\sigma}}\right\rangle^{-n}\,,
\end{eqnarray}
so even for the lowest moment of the self-energy, the Fourier coefficients of $\tilde{U}_i$ and $\tilde{n}_i$ mix.
\end{document}
\section{The two solutions of YM theory within stochastic quantization}
\begin{figure}[htb]
\includegraphics*[width=0.47\textwidth]{FIGS.DIR/gluonLdse.eps}
\hspace{0.3cm}
\includegraphics*[width=0.47\textwidth]{FIGS.DIR/gluonTdse.eps}\\
\includegraphics*[width=0.5\textwidth]{FIGS.DIR/gluonL.eps}
\includegraphics*[width=0.5\textwidth]{FIGS.DIR/gluonT.eps}\\
\includegraphics*[width=0.5\textwidth]{FIGS.DIR/gluonLa.eps}
\includegraphics*[width=0.5\textwidth]{FIGS.DIR/gluonTa.eps}
\caption{Longitudinal (left panels) and transverse (right panels) gluon propagator dressing functions $Z_L$ and $Z_T$ in stochastic quantization.
The top row displays the respective Dyson-Schwinger equations in rainbow approximation.
The middle row shows both the scaling and the massive (also called decoupling) solutions in Landau gauge $a=0$. The bottom row is obtained with a small but finite $a$ parameter and shows the massive solution only; the scaling solution ceases to exist for $a\ne 0$.}
\label{fig:stochastic}
\end{figure}
Two families of solutions to the wave equations of Yang-Mills theory have been widely studied. The first is called ``massive'' or ``decoupling''~\cite{Dudal:2007cw} and characterized by a dynamically generated gluon mass scale, with Euclidean gluon propagator $Z(k^2)/k^2\propto 1/(k^2+m^2)$. The second is a ghost dominated, gluon suppressed ``scaling'' solution~\cite{Alkofer:2003jr} with respective power-law behaviors $Z(k^2)/k^2\propto (k^2)^{2\kappa-1}$ and $G(k^2)/k^2 \propto (k^2)^{-\kappa-1}$ (with $\kappa>0.5$). Lattice gauge theory finds the massive solution in Landau-gauge fixed simulations~\cite{Cucchieri:2009zt}. Much discussion has focused on the ability to fix the gauge on large lattices and the presence of Gribov copies.
To continue studying the impact of the formation of a Gribov horizon and the gauge dependence of the solutions~\cite{Zwanziger:1988jt} we have performed~\cite{LlanesEstrada:2012my} a numerical analysis within stochastic quantization~\cite{Parisi:1980ys}.
In this approach to Yang-Mills theory the weight employed to compute correlators
$
\la A(x) A(y) \ra = \int DA A(x) A(y) e^{-S_{\rm (YM)}[A]}\ ,
$
akin to a Boltzmann equilibrium distribution $e^{-\beta E}$,
is seen as the end-point $e^{-S_{\rm (YM)}[A]}=\lim_{\tau\to\infty}
P(\tau)$ of a stochastic random walk in a fictitious time parameter $\tau$. $P$ satisfies a Fokker-Planck equation with force $K^a_\mu(x)\equiv-\frac{\delta S_{\rm (YM)}}{\delta A^a_\mu(x)}$,
$$
\frac{\partial P}{\partial \tau} = \int d^4x \frac{\delta}{\delta A^{a \mu}(x)}
\left(
\frac{\delta P}{\delta A^a_\mu(x)} - K^a_\mu(x) P
\right) \ .
$$
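The mechanism is easiest to see in a zero-dimensional toy model. The sketch below (our illustration; the quartic ``action'' is an arbitrary stand-in for $S_{\rm (YM)}$) integrates the Langevin equation $dA = -S'(A)\,d\tau + \sqrt{2\,d\tau}\,\xi$, whose stationary distribution is $P\propto e^{-S}$, and estimates $\langle A^2\rangle$ from the long-time average:
\begin{verbatim}
import numpy as np

def langevin_moment(m2=1.0, g=0.5, dtau=1e-3,
                    n_steps=500_000, seed=1):
    """Toy model S(A) = m2*A^2/2 + g*A^4/4; returns <A^2>."""
    rng = np.random.default_rng(seed)
    A, acc, n_meas = 0.0, 0.0, 0
    for k in range(n_steps):
        A += -(m2*A + g*A**3)*dtau \
             + np.sqrt(2*dtau)*rng.normal()
        if k > n_steps // 10:       # discard thermalization
            acc += A*A
            n_meas += 1
    return acc / n_meas
\end{verbatim}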
To avoid the stochastic evolution running away along a gauge orbit (a line of constant action), Zwanziger added a gauge-restoring force tangent to the gauge orbit, which leaves gauge-invariant dynamics unaffected,
$
K^a_\mu(x)\to -\frac{\delta S_{\rm (YM)}}{\delta A^a_\mu(x)} + a^{-1} D^{ac}_{\mu}\partial\cdot A^{c}(x).
$
The real constant $a$ controls the relative intensity of the stochastic Yang-Mills and the gauge-restoring forces. The gauge is not strictly fixed, rather, gauge-equivalent configurations are weighted in a smooth manner, with much less probability for those farther from $A=0$, except in the limit $a\to 0$ that fixes the Landau gauge.
The Gribov problem is thus bypassed.
While $P(\infty)$ is not known, its uniqueness and positivity, as well as the Dyson-Schwinger equations have been derived. Since this ``soft'' gauge fixing method uses no Faddeev-Popov ghosts, one has instead both transverse and longitudinal
dressing functions of the gluon propagator,
$$
\int d^4 x \la A^a_\mu(0) A^b_\nu(x)\ra e^{ik\cdot x}
=\delta^{ab} \left( \frac{Z_T(k^2)}{k^2} \left(\delta^{\mu\nu} -\frac{k^\mu k^\nu}{k^2} \right)
+ a \frac{Z_L(k^2)}{k^2} \frac{k^\mu k^\nu}{k^2} \right) \ .
$$
Solving the rainbow DSEs for $Z_T$ and $Z_L$ (see figure~\ref{fig:stochastic}) we find that the scaling solution can be found only in Landau gauge, not for finite $a$, suggesting indeed a connection to the Gribov horizon, while massive solutions can be found both for $a=0$ and for finite $a$.
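Numerically, coupled equations of this type are commonly solved by under-relaxed fixed-point iteration on a momentum grid. The scaffold below (ours; the actual integral kernels for $Z_T$ and $Z_L$ are those of the rainbow truncation in the figure and must be supplied by the user) shows the generic loop:
\begin{verbatim}
import numpy as np

def solve_dse(kernel, k2_grid, z0=None, mix=0.3,
              tol=1e-8, max_iter=500):
    """Iterate Z <- (1-mix)*Z + mix*kernel(Z, k2_grid) until
    converged; kernel returns the DSE right-hand side."""
    Z = np.ones_like(k2_grid) if z0 is None else z0.copy()
    for _ in range(max_iter):
        Z_new = (1 - mix)*Z + mix*kernel(Z, k2_grid)
        if np.max(np.abs(Z_new - Z)) < tol:
            return Z_new
        Z = Z_new
    raise RuntimeError("DSE iteration did not converge")
\end{verbatim}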
\section{Effective action in Faddeev-Popov formalism}
\begin{figure}[htb]
\centering
\begin{minipage}{0.15\textwidth}
\includegraphics*[width=0.85\textwidth]{FIGS.DIR/eff_DDD.eps}\\
\includegraphics*[width=0.85\textwidth]{FIGS.DIR/eff_DGG.eps}
\end{minipage}
\begin{minipage}{0.5\textwidth}
\includegraphics*[width=\textwidth]{FIGS.DIR/action.eps}
\end{minipage}
\caption{(left) Interaction terms of the Faddeev-Popov effective action generating the rainbow-DSE for the Yang-Mills propagators, and (right) evaluation of the action; $\alpha=0$ corresponds to a bare propagator, $\alpha=1/2$ to the massive/decoupling DSE solution, and $\alpha=1$ to the scaling propagators. The solid line interpolates between them and has a minimum at the massive solution. The dashed and dash-dotted lines show separately the free and the interacting parts of the action.}
\label{fig:action}
\end{figure}
To understand from the continuum DSE perspective why lattice data favors the massive-like solutions we have examined the effective action~\cite{Berges:2004pu} $\Gamma[D,G]$ that generates the DSEs via $\delta \Gamma/\delta D =0$, $\delta \Gamma/\delta G=0$. We have evaluated the effective action for the bare (perturbative) propagators, for the massive propagators, and for the scaling ones. For simplicity we have limited ourselves to the rainbow DSEs in the Faddeev-Popov formalism.
The outcome, reported in figure~\ref{fig:action}, clearly shows that the massive propagator has the least action in an unconstrained minimization, with the scaling solution disfavored. A natural question to ask is whether a constrained minimization can pick up the scaling solution in a DSE or lattice computation, and whether such a constraint is necessary from the point of view of Gribov horizon formation in Landau gauge or similar considerations.
\section{Empirical studies of gluon confinement}
\begin{figure}[htb]
\centerline{\includegraphics*[width=0.5\textwidth]{FIGS.DIR/dEdx.eps}}
\caption{Energy deposition per unit length for charged particles in a detector. Particles with different $q/m$ are identified as bands. Free quarks are excluded, there being no band directly to the left of the kaon's. Figure courtesy of P. Ladron de Guevara.}
\label{fig:dEdx}
\end{figure}
Searching for empirical evidence for either Yang-Mills scenario made us observe that not even gluon confinement stands on solid empirical footing~\cite{HidalgoDuque:2011je}.
What has been experimentally established is \emph{quark} confinement, by modern reassessments of Millikan's experiments against fractional charges at rest~\cite{Perl:2009zz} as well as energy deposition in high energy reactions~\cite{Bergsma:1984yn} that exclude energetic quarks (see figure~\ref{fig:dEdx}).
Possible evidence for gluon confinement could come from meson decays,
since $\pi_0\to \gamma \gamma$ accounts for the $\pi_0$ width, not leaving room for
the gluon reaction $\pi_0\to g g$.
The decay is however kinematically closed for both infrared solutions, which are gapped. $\Upsilon\to ggg$ has sufficient phase space, but 40 keV of its width is currently unaccounted for, so only a loose bound $\sigma(b\bar{b}\to ggg)< 0.1\,\mu$barn can be obtained~\cite{HidalgoDuque:2011je}.
\begin{figure}[htb]
\centerline{\includegraphics*[width=0.5\textwidth]{FIGS.DIR/Seccioneficazgluonproton2.eps}}
\caption{Gluon-proton cross section (top band) for high energy gluons obtained from Regge theory, with input the total proton-proton cross section (bottom band) and a rescaling of the pomeron coupling based on color counting alone.}
\label{fig:secgluonproton}
\end{figure}
We suggest further constraining free gluon production at hadron colliders such as the LHC. If a liberated gluon reached the beampipe or vertex detector then, because of its color charge, it would have a very short mean free path of about 0.6~cm (see figure~\ref{fig:secgluonproton} for an estimate of the cross section based on Regge physics and large $N_c$~\cite{Brodsky:1973hm}). A sketch of our experimental proposal is given in figure~\ref{fig:gdetection}.
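The order of magnitude can be checked by inverting $\lambda = 1/(n\sigma)$; the numbers below (our back-of-the-envelope estimate for a silicon detector, not figures from the text) show that a 0.6~cm mean free path corresponds to an effective per-nucleon cross section of order a barn:
\begin{verbatim}
rho, N_A = 2.33, 6.022e23     # silicon g/cm^3, Avogadro
n_nucleons = rho * N_A        # nucleons per cm^3
lam = 0.6                     # mean free path in cm
sigma = 1.0 / (n_nucleons * lam)
print(f"sigma ~ {sigma/1e-24:.1f} barn per nucleon")  # ~1.2
\end{verbatim}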
\begin{figure}[htb]
\includegraphics[width=0.5\textwidth]{FIGS.DIR/Sketchgluondet.eps}
\includegraphics[width=0.5\textwidth]{FIGS.DIR/exppenetration.eps}
\caption{Proposal to obtain collider bounds on gluon confinement. If the would-be liberated gluon reached the detector material, it would interact very strongly with atomic nuclei and eject a secondary proton (left), identified by its energy deposition and time-of-flight detector signal. The neutron background at the mbarn level can be ameliorated by noting that secondary protons would be spalled much earlier by gluons than they are by neutrons (right). Gluon liberation bounds at the $\mu$barn to nbarn level, depending on gluon energy, should be expected in pp collisions.}
\label{fig:gdetection}
\end{figure}
\section{Conclusion}
We have recalled the two widely studied behaviors for the infrared gluon. A massive-like propagator seems favored by lattice data in Landau gauge and we have shown that the effective action in the DSE formalism also suggests that this massive solution appears under unconstrained minimization. \\
The scaling solution where the Green's functions are all power laws remains nevertheless of theoretical interest, and we have shown that in stochastic quantization it appears only in Landau gauge, so perhaps it is a feature of Gribov horizon formation.\\
Although exploring alternative behaviors of IR glue in experiment seems appealing, we have noticed that gluon confinement itself is not on solid empirical footing, and have suggested to set bounds on it at hadron colliders.
\section{Introduction}
A long tradition of inquiry seeks sufficient sets of conditions on a linear map $U$ between Banach spaces in order that $U$ preserves the distance of elements in the spaces. The most prominent result along these lines is the Banach--Stone theorem on a linear map on the space $C(Y)$ (resp. $C_{\mathbb R}(Y)$) of complex-valued (resp. real-valued) continuous functions on a compact Hausdorff space $Y$.
Researchers have derived extensions of the Banach--Stone theorem for several different settings.
We refer the reader to \cite{fj1,fj2} for a survey of the topic. In this paper an isometry means a complex-linear isometry.
de Leeuw \cite{dl} probably initiated the study of isometries on the algebra of Lipschitz functions on the real line. Roy \cite{roy} studied isometries on the Banach space $\Lip(X)$ of Lipschitz functions on a compact metric space $X$, equipped with the norm $\|f\|=\max\{\|f\|_\infty, L(f)\}$, where $L(f)$ denotes the Lipschitz constant.
Cambern \cite{c} has considered isometries on spaces of scalar-valued continuously differentiable functions $C^1([0,1])$ with norm given by $\|f\|=\max_{x\in [0,1]}\{|f(x)|+|f'(x)|\}$ for $f\in C^1([0,1])$ and determined a representation for the surjective isometries supported by such spaces. Jim\'enez-Vargas and Villegas-Vallecillos in \cite{amPAMS} have considered isometries of spaces of vector-valued Lipschitz maps on a compact metric space taking values in a strictly convex Banach space, equipped with the norm $\|f\|=\max\{\|f\|_\infty, L(f)\}$, see also \cite{amHouston}. Botelho and Jamison \cite{bjStudia2009} studied isometries on $C^1([0,1],E)$ with $\max_{x\in [0,1]}\{\|f(x)\|_E+\|f'(x)\|_E\}$.
See also \cite{rr,mw,amy,araduba,bfj,kos,bjz,rm,bjPositivity17,mt,kawar,kc1,kc12,lcmw,kkm,lpww,jlp}.
From now on, and unless otherwise mentioned, $\alpha$ will be a real scalar in $(0,1)$.
Jarosz and Pathak \cite{jp} studied the problem of when an isometry on a space of continuous functions is a weighted composition operator. They provided a unified approach for certain function spaces including $C^1(X)$, $\Lip(X)$, $\lip(X)$ and $AC[0,1]$. On the other hand,
isometries on algebras of Lipschitz maps and continuously differentiable maps have often been studied independently.
We propose a unified approach to the study of isometries on algebras $\Lip(X,C(Y))$, $\lip(X,C(Y))$ and $C^1(K,C(Y))$, where $X$ is a compact metric space, $K=[0,1]$ or $\mathbb{T}$ (in this paper $\mathbb{T}$ denotes the unit circle on the complex plane), and $Y$ is a compact Hausdorff space. We define an admissible quadruple of type L (see Definition \ref{aqL}) as a common abstraction of Lipschitz algebras and algebras of continuously differentiable maps. We prove that a surjective isometry between admissible quadruple of type L is canonical (Theorem \ref{main}), in the sense that it is represented as a weighted composition operator. As corollaries we describe isometries on $\Lip(X,C(Y))$, $\lip(X,C(Y))$ and $C^1(K,C(Y))$ respectively (Corollaries \ref{isoLip}, \ref{c101}, \ref{c1t}). There is a variety of norms on $\Lip(X,C(Y))$, $\lip(X,C(Y))$ and $C^1(K,C(Y))$. In this paper we consider the norm of $\ell^1$-type; $\|F\|_{\infty(X\times Y)}+L(F)$ for $F\in \Lip(X,C(Y))$, $\|F\|_{\infty(X\times Y)}+L_\alpha(F)$ for $F\in \lip(X,C(Y))$ and $\|F\|_{\infty(K\times Y)}+\|F'\|_{\infty(K \times Y)}$ for $F\in C^1(K,C(Y))$. With these norms $\Lip(X,C(Y))$, $\lip(X,C(Y))$ and $C^1(K,C(Y))$ are commutative Banach algebras respectively.
Jarosz and Pathak exhibited in \cite[Example 8]{jp} that a surjective isometry on $\Lip(X)$ and $\lip(X)$ of a compact metric space $X$ with respect to the norm $\|\cdot\|_\infty+ L_\alpha(\cdot)$ is canonical.
There seems to be some confusion about the status of this result, and it would be appropriate to clarify the current situation. After the publication of \cite{jp} some authors expressed their suspicion about the argument there, and the validity of the statement had not been confirmed when the authors of \cite{lpww} pointed out a gap by referring to the comment of Weaver \cite[p. 243]{wea}. While Weaver in \cite{wea} pointed out that the argument of \cite{jp} failed on p.~200, where the norm $\max\{\|\cdot\|_\infty, L(\cdot)\}$ was studied, he did not seem to have stated explicitly that the argument in Example 8 contained a flaw.
The authors of the present paper find it difficult to follow the argument given in Example 8. Besides non-substantial typos, the well-definedness of the map $\Psi_\vartheta:\operatorname{ext}B^*\to \operatorname{ext}B^*$ (\cite[p. 205, line 8]{jp}), where $\operatorname{ext}B^*$ is the set of all extreme points in the closed unit ball of the dual space of $B=\Lip_{\alpha'}(Y)$, given by $\Psi_\vartheta(\gamma \delta_{(y,\omega,\beta)})=\gamma \delta_{(y,\omega, e^{i\vartheta}\beta)}$ (note that the formula on line 9 of \cite[p. 205]{jp} reads in this way), seems to require further explanation. On the other hand, Corollary \ref{JPOK} of this paper confirms the statement of \cite[Example 8]{jp}. Our proof proceeds in a similar but slightly different vein from that of Jarosz and Pathak.
The main result in this paper is Theorem \ref{main}, which gives the form of a surjective isometry $U$ between admissible quadruples of type L. The proof that an isometry in Theorem \ref{main} necessarily takes this form comprises several steps. We give an outline of the proof. The crucial part of the proof of Theorem \ref{main} is to prove that $U(1)=1\otimes h$ for an $h\in C(Y_2)$ with $|h|=1$ on $Y_2$ (Proposition \ref{absolute value 1}). To prove Proposition \ref{absolute value 1} we apply Choquet's theory with measure theoretic arguments (Lemmata \ref{1} and \ref{2}). By Proposition \ref{absolute value 1} we have that $U_0=(1\otimes \bar{h})U$ is a surjective isometry fixing the unit. Then by applying a theorem of Jarosz \cite{ja} we see that $U_0$ is also an isometry with respect to the supremum norm. By the Banach--Stone theorem $U_0$ is an algebra isomorphism, and applying \cite{hots} we see that $U_0$ is a composition operator of type BJ.
\section{Preliminaries with Definitions and Basic Results
}
\subsection{Algebras of Lipschitz maps and continuously differentiable maps}
Let $Y$ be a compact Hausdorff space.
Let $E$ be a complex Banach space. The space of all $E$-valued continuous maps on $Y$ is denoted by $C(Y,E)$. When $E={\mathbb C}$, $C(Y,E)$ is abbreviated $C(Y)$. The space of all real-valued continuous functions on $Y$ is denoted by $C_{\mathbb R}(Y)$.
For a subset $K$ of $Y$, the supremum norm on $K$ is $\|F\|_{\infty(K)}=\sup_{x\in K}\|F(x)\|_E$ for $F\in C(Y,E)$.
When no confusion will result we omit the subscript $K$ and write only $\|\cdot\|_{\infty}$.
Let $X$ be a compact metric space and $0<\alpha\le 1$. For $F\in C(X,E)$, put
\[
L_\alpha(F)=\sup_{x\ne y}\frac{\|F(x)-F(y)\|_E}{d(x,y)^\alpha},
\]
which is called an $\alpha$-Lipschitz number of $F$, or just a Lipschitz number of $F$. When $\alpha=1$ we omit the subscript $\alpha$ and write only $L(F)$. The space of all $F\in C(X,E)$ such that $L_\alpha(F)<\infty$ is denoted by $\Lip_\alpha(X,E)$. When $\alpha=1$ the subscript is omitted and it is written as $\Lip(X,E)$.
When $0<\alpha<1$ the closed subspace
\begin{multline*}
\lip(X,E)
\\
=\{F\in \Lip_\alpha(X,E):\text{$\lim_{x\to x_0}\frac{\|F(x_0)-F(x)\|_E}{d(x_0,x)^\alpha}=0$ for every $x_0\in X$}\}
\end{multline*}
of $\Lip_\alpha(X,E)$ is called a little Lipschitz space. In this paper the norm $\|\cdot\|$ of $\Lip_\alpha(X,E)$ (resp. $\lip(X,E)$) is defined by
\[
\|F\|=\|F\|_{\infty(X)}+L_\alpha(F), \quad F\in \Lip_\alpha(X,E)\,\, \text{(resp. $\lip(X,E)$)}.
\]
Note that if $d(\cdot,\cdot)$ is a metric, then so is $d(\cdot,\cdot)^\alpha$, and is denoted by $d^\alpha$ which is called a H\"older metric.
For a compact metric space $(X,d)$,
$\Lip_\alpha((X,d), E)$ is isometrically isomorphic to $\Lip((X,d^\alpha),E)$.
In this paper we are mainly concerned with $E=C(Y)$. In this case $\Lip_\alpha(X,C(Y))$ and $\lip(X,C(Y))$ are unital semisimple commutative Banach algebras with $\|\cdot\|$. When $E={\mathbb C}$, $\Lip(X,{\mathbb C})$ (resp. $\lip(X,{\mathbb C})$) is abbreviated to $\Lip(X)$ (resp. $\lip(X)$).
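A standard example separating the two spaces, which we include for the reader's convenience, is $f(t)=t^\alpha$ on $X=[0,1]$ with $0<\alpha<1$: the elementary inequality $t^\alpha-s^\alpha\le (t-s)^\alpha$ for $0\le s\le t\le 1$ gives $L_\alpha(f)\le 1$, so $f\in \Lip_\alpha(X)$, while at $x_0=0$
\[
\frac{|f(t)-f(0)|}{d(t,0)^\alpha}=\frac{t^\alpha}{t^\alpha}=1, \quad t>0,
\]
so the limit in the definition of $\lip(X)$ is $1\ne 0$ and $f\notin \lip(X)$.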
There are a variety of complete norms other than $\|\cdot\|$. For example $\|\cdot\|_{\max}=\max\{\|\cdot\|_{\infty}, L_\alpha(\cdot)\}$ is such a norm, but it fails to be submultiplicative. Hence $\Lip_\alpha(X,C(Y))$ and $\lip(X,C(Y))$ need not be Banach algebras with respect to the norm $\|\cdot\|_{\max}$.
Let $F\in C(K,C(Y))$ for $K=[0,1]$ or ${\mathbb T}$. We say that $F$ is continuously differentiable if there exists $G\in C(K,C(Y))$ such that
\[
\lim_{K\ni t\to t_0}\left\|\frac{F(t_0)-F(t)}{t_0-t}-G(t_0)\right\|_{\infty(Y)}=0
\]
for every $t_0\in K$. We denote $F'=G$.
Put $C^1(K,C(Y))=\{F\in C(K,C(Y)):\text{$F$ is continuously differentiable}\}$. Then $C^1(K,C(Y))$ with norm $\|F\|=\|F\|_\infty+\|F'\|_\infty$ is a unital semisimple commutative Banach algebra.
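The submultiplicativity of this norm is a consequence of the Leibniz rule $(FG)'=F'G+FG'$: for $F,G\in C^1(K,C(Y))$,
\[
\|FG\|=\|FG\|_\infty+\|F'G+FG'\|_\infty
\le \|F\|_\infty\|G\|_\infty+\|F'\|_\infty\|G\|_\infty+\|F\|_\infty\|G'\|_\infty
\le \|F\|\,\|G\|.
\]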
If $Y$ is a singleton we may suppose that $C(Y)$ is isometrically isomorphic to ${\mathbb C}$ and we abbreviate $C^1(K,C(Y))$ to $C^1(K)$.
By identifying $C(X, C(Y))$ with $C(X\times Y)$ we may assume that $\Lip(X,C(Y))$ is a subalgebra of $C(X\times Y)$ by the correspondence
\[
F\in \Lip(X,C(Y)) \leftrightarrow ((x,y)\mapsto (F(x))(y))\in C(X\times Y).
\]
Throughout the paper we may assume that
\begin{equation*}
\begin{split}
&\Lip(X,C(Y))\subset C(X\times Y), \\
&\lip(X,C(Y))\subset C(X\times Y), \\
&C^1(K,C(Y))\subset C(K\times Y).
\end{split}
\end{equation*}
We say that a subset $S$ of $C(Y)$ is point separating if $S$ separates the points of $Y$.
Suppose that $B$ is a unital point separating subalgebra of $C(Y)$ equipped with a Banach algebra norm. Then $B$ is semisimple, because
$\{f\in B:f(y)=0\}$ is a maximal ideal of $B$ for every $y\in Y$ and the intersection of these maximal ideals is $\{0\}$, so that the Jacobson radical of $B$ vanishes.
The unit of $B$ is denoted by $1_B$. When no confusion will result we omit the subscript $B$ and write simply as $1$. The maximal ideal space of $B$ is denoted by $M_B$.
\begin{definition}
We say that $B$ is inverse-closed if $f\in B$ with $f(y)\ne 0$ for every $y\in Y$ implies $f^{-1}\in B$.
We say that $B$ is natural if the map $e:Y\to M_B$ defined by $y\mapsto \phi_y$, where $\phi_y(f)=f(y)$ for every $f\in B$, is bijective. We say that $B$ is self-adjoint if $B$ is natural and $\bar{f}\in B$ for every $f\in B$, where $\bar{f}$ denotes the complex conjugate of $f$ on $Y=M_B$.
\end{definition}
Note that conjugate closedness of $B$ (that is, $f\in B$ implies $\bar{f}\in B$) need not imply the self-adjointness of $B$.
\begin{prop}\label{el}
Let $Y$ be a compact Hausdorff space. Suppose that $B$ is a unital point separating subalgebra of $C(Y)$ equipped with a Banach algebra norm. If $B$ is dense in $C(Y)$ and inverse-closed, then $B$ is natural.
\end{prop}
\begin{proof}
Suppose that $e:Y\to M_B$ is not surjective. Then there exists $\phi\in M_B$ with $\phi\ne\phi_y$ for every $y\in Y$; hence for every $y\in Y$ there exists $f_y\in B$ with $\phi(f_y)=0$ and $f_y(y)=1$ (take $g_y\in B$ with $\phi(g_y)\ne g_y(y)$ and put $f_y=(g_y-\phi(g_y)1_B)/(g_y(y)-\phi(g_y))$). As $Y$ is compact, there exists a finite number of $f_1,\dots, f_n\in B$ with $\phi(f_j)=0$ for $j=1,\dots, n$ such that $\sum_{j=1}^n|f_j|^2>0$ on $Y$. Since $B$ is uniformly dense in $C(Y)$, there exist $g_1,\dots, g_n\in B$, with each $g_j$ uniformly close to $\bar{f_j}$, such that $\operatorname{Re}\sum_{j=1}^nf_jg_j>0$ on $Y$; in particular $\sum_{j=1}^nf_jg_j$ vanishes nowhere on $Y$. As $B$ is inverse-closed, there exists $h\in B$ such that $h\sum_{j=1}^nf_jg_j=1_B$. As $\phi(f_j)=0$ for $j=1,\dots, n$ we have $0=\phi(h\sum_{j=1}^nf_jg_j)=\phi(1_B)=1$, which is a contradiction. Hence $e$ is surjective. It is also injective since $B$ separates the points of $Y$, so $B$ is natural.
\end{proof}
\begin{cor}\label{eell}
The unital Banach algebras $\Lip(X)$ and $\Lip(X,C(Y))$ with $\|\cdot\|_\infty+L(\cdot)$ are point separating and self-adjoint. For $0<\alpha<1$ the unital Banach algebras $\lip(X)$ with $\|\cdot\|_\infty+L_\alpha(\cdot)$ and $\lip(X,C(Y))$ with $\|\cdot\|_\infty+L_\alpha(\cdot)$ are point separating and self-adjoint. For $K=[0,1]$ and ${\mathbb T}$, the unital Banach algebras $C^1(K)$ with $\|\cdot\|_\infty+\|\cdot'\|_\infty$ and $C^1(K, C(Y))$ with $\|\cdot\|_\infty+\|\cdot'\|_\infty$ are point separating and self-adjoint.
\end{cor}
\begin{proof}
The Lipschitz algebra $\Lip(X)$ is a unital point separating subalgebra of $C(X)$ equipped with the Banach algebra norm $\|\cdot\|_\infty+L(\cdot)$. As $\Lip(X)$ is conjugate closed, the Stone-Weierstrass theorem asserts that $\Lip(X)$ is uniformly dense in $C(X)$. Thus it is natural by Proposition \ref{el}, and, being conjugate closed, it is self-adjoint.
In a similar way to that for $\Lip(X)$ we infer that $\Lip(X,C(Y))$ is self-adjoint.
Suppose that $0<\alpha<1$. Then we see that $\lip(X)$ separates the points of $X$. (Let $x,y$ be different points in $X$. Put $f:X\to {\mathbb C}$ by $f(\cdot)=d(\cdot,y)$. For $s,t\in X$ the triangle inequality gives $|f(s)-f(t)|\le d(s,t)=d(s,t)^\alpha d(s,t)^{1-\alpha}$, so that $L_\alpha(f)\le \operatorname{diam}(X)^{1-\alpha}<\infty$ and the difference quotient tends to $0$ as $s\to t$ since $1-\alpha>0$; hence $f\in \lip(X)$ and $f(x)\ne f(y)$.) In the same way as above we see that $\lip(X)$ and $\lip(X,C(Y))$ are natural, hence self-adjoint.
Let $K=[0,1]$ or $K={\mathbb T}$.
In the same way as above we see that $C^1(K)$ and $C^1(K,C(Y))$ are self-adjoint.
\end{proof}
\subsection{Admissible quadruples of type L}
An admissible quadruple was defined by Nikou and O'Farrell in \cite{no} (see also a comment just after Definition 2.2 in \cite{hots}).
The definition is a little complicated and we adopt a simpler definition that is sufficient for our purpose. For a detailed account of admissible quadruples see
\cite{no} and \cite{hots}.
Let $X$ and $Y$ be compact Hausdorff spaces.
For functions $f\in C(X)$ and $g\in C(Y)$, let $f\otimes g\in C(X\times Y)$ be the function defined by $f\otimes g(x,y)=f(x)g(y)$, and for a subspace $E_X$ of $C(X)$ and a subspace $E_Y$ of $C(Y)$, let
\[
E_X\otimes E_Y=\left\{\sum_{j=1}^nf_j\otimes g_j: n\in {\mathbb N},\,\,f_j\in E_X,\,\,g_j\in E_Y\right\}.
\]
An admissible quadruple $(X, C(Y), B, \widetilde{B})$ in this paper is defined as follows.
\begin{definition}\label{aqL}
Let $X$ and $Y$ be compact Hausdorff spaces.
Let $B$ and $\wb$ be unital point separating subalgebras of $C(X)$ and $C(X\times Y)$, respectively, equipped with Banach algebra norms, which satisfy
\[
B\otimes C(Y)\subset \widetilde{B},\,\,\{F(\cdot, y):F\in \widetilde{B},\,\,y\in Y\}\subset B.
\]
We say that $(X,C(Y),B,\widetilde{B})$ is an admissible quadruple of type L if the following conditions are satisfied.
\begin{itemize}
\item[$\cno$]
The algebras $B$ and $\widetilde{B}$ are self-adjoint.
\item[$\cnt$]
There exist a compact Hausdorff space $\mathfrak{M}$ and a complex-linear operator $D:\wb\to C(\mathfrak{M})$ such that
\[D(\widetilde{B}\cap C_{\mathbb R}(X\times Y))\subset C_{\mathbb R}(\mathfrak{M})
\]
and also
\begin{itemize}
\item[(1)]
the norm $\|\cdot\|$ on $\widetilde{B}$ satisfies
\[
\|F\|=\|F\|_{\infty(X\times Y)}+\|D(F)\|_{\infty(\mathfrak{M})},\quad F\in \widetilde{B},
\]
\item[(2)]
$\operatorname{ker}D=1_B\otimes C(Y)$,
\item[(3)]
$\|D((1_B\otimes g)F)\|_{\infty(\mathfrak{M})}=\|D(F)\|_{\infty(\mathfrak{M})}$ for every $F\in \wb$ and $g\in C(Y)$ such that $|g|=1$ on $Y$.
\end{itemize}
\end{itemize}
\end{definition}
It will be appropriate to make a few comments on the above definition.
First we do not assume that $D(\widetilde{B})$ is point separating.
Next $B$ and $\widetilde{B}$ are semisimple since they are point separating.
For a point $x\in X$ define $e_x:\widetilde{B}\to C(Y)$ by $e_x(F)=F(x,\cdot)$ for every $F\in \widetilde{B}$.
A theorem of \v Silov (see \cite[Theorem 3.1.11]{pal}) states that the map $e_x:\widetilde{B}\to C(Y)$ is automatically continuous for every $x\in X$ since $C(Y)$ is semisimple.
Hence it is straightforward to check that an admissible quadruple of type L is in fact an admissible quadruple defined by Nikou and O'Farrell in \cite{no} (see also \cite{hots}). In particular if $X$ is a compact metric space, then $(X, C(Y), \Lip(X), \Lip(X,C(Y)))$, $(X,C(Y), \lip(X), \lip(X,C(Y)))$ and $(K, C(Y), C^1(K), C^1(K,C(Y)))$ for $K=[0,1],{\mathbb T}$ are admissible quadruples of type L. See Section \ref{example}.
We define a seminorm $\nn\cdot\nn$ on $\wb$ by $\nn F\nn=\|D(F)\|_{\infty(\mathfrak{M})}$ for $F\in \wb$.
Note that $\nn\cdot\nn$ is one-invariant in the sense of Jarosz \cite{ja} ($\nn F\nn=\nn F+1_{\wb}\nn$ for every $F\in \wb$) since $1_{\wb}=1_B\otimes 1_{C(Y)}$ and $D(1_{\wb})=0$.
The norm $\|\cdot\|=\|\cdot\|_\infty+\nn\cdot\nn$ is a $p$-norm (see \cite[p.67]{ja}).
\subsection{Preliminaries on measures}
We recall some basic properties of regular Borel measures for the convenience of the readers. As the authors could not find appropriate references, we exhibit the properties in Lemmata \ref{0.1}, \ref{0.2} and \ref{0.3}.
In Lemmata \ref{0.1} and \ref{0.2}, $X$ is a compact Hausdorff space and $\mu$ is a Borel probability measure (a positive measure on the $\sigma$-algebra of Borel sets whose total measure is $1$). For a non-empty Borel subset $S$ of $X$, $\mu|S$ denotes the measure on $S$ which is the restriction of $\mu$; $\mu|S(E)=\mu(E)$ for a Borel set $E\subset S$. Recall that the support of $\mu$ is the set defined by
\[
\suppm \mu=\{x\in X: \text{$\mu(U)>0$ for every open neighborhood $U$ of $x$}\}.
\]
\begin{lemma}\label{0.1}
Let $K$ be a non-empty compact subset of $X$ and let $f\in C(X)$ be real-valued on $K$. Assume that $f\le c$ on $K$ for a constant $c>0$. If
\[
\int_K f d\mu=c\mu(K),
\]
then $\suppm(\mu|K)\subset f^{-1}(c)\cap K$.
\end{lemma}
\begin{proof}
Let $x\in \suppm(\mu|K)$. Then $x\in K$ by the definition of the support of $\mu|K$. Suppose that $f(x)\ne c$. As $f\le c$ on $K$, we have $f(x)<c$. Since $f|K$ is continuous on $K$, there exists an open neighborhood $U$ of $x$ relative to $K$ such that $f<(f(x)+c)/2$ on $U$. As $x\in \suppm(\mu|K)$ we have that $\mu(U)>0$. Then
\begin{equation*}
\begin{split}
\int_Kfd\mu
&=\int_Ufd\mu + \int_{K\setminus U}fd\mu \\
&\le \frac{f(x)+c}{2}\mu(U) +c\mu(K\setminus U) \\
&= c\mu(K)-\frac{c-f(x)}{2}\mu(U)<c\mu(K),
\end{split}
\end{equation*}
which is a contradiction proving that $f(x)=c$. Thus we conclude that $\suppm(\mu|K)\subset f^{-1}(c)\cap K$.
\end{proof}
\begin{lemma}\label{0.2}
Suppose that $K_1$ and $K_2$ are non-empty compact subsets of $X$. Then
\[
\suppm (\mu|K_1) \cup \suppm (\mu|K_2) =\suppm( \mu|(K_1\cup K_2)).
\]
\end{lemma}
\begin{proof}
Suppose that $x\in \suppm(\mu|K_1)$. Let $G$ be an arbitrary open neighborhood of $x$ relative to $K_1\cup K_2$. Then there is an open set $\tilde G$ in $X$ with $\tilde{G}\cap (K_1\cup K_2)=G$. Then $\tilde{G}\cap K_1$ is an open neighborhood of $x$ relative to $K_1$ and $G=\tilde{G}\cap(K_1\cup K_2)\supset \tilde{G}\cap K_1$. As $x\in \suppm(\mu|K_1)$ we have $0<\mu(\tilde{G}\cap K_1)\le \mu(G)$. Since $G$ is arbitrary we conclude that $x\in \suppm(\mu|(K_1\cup K_2))$; that is $\suppm(\mu|K_1)\subset\suppm(\mu|(K_1\cup K_2))$. In the same way we have $\suppm(\mu|K_2)\subset\suppm(\mu|(K_1\cup K_2))$. Thus we have $\suppm(\mu|K_1) \cup \suppm(\mu|K_2)\subset\suppm(\mu|(K_1\cup K_2))$.
Suppose conversely that $x\in \suppm(\mu|(K_1\cup K_2))$. Then $x\in K_1\cup K_2$. Suppose that $x\not\in \suppm(\mu|K_1)\cup \suppm(\mu|K_2)$. First we consider the case that $x\in K_1\cap K_2$. Then there is an open neighborhood $G_1$ of $x$ relative to $K_1$ and an open neighborhood $G_2$ of $x$ relative to $K_2$ such that $\mu(G_1)=\mu(G_2)=0$ since we have assumed that $x\not\in \suppm(\mu|K_1)\cup \suppm(\mu|K_2)$. There exist open sets $\tilde{G_1}$ and $\tilde{G_2}$ in $X$ such that $\tilde{G_1}\cap K_1=G_1$ and $\tilde{G_2}\cap K_2=G_2$. Put $\tilde{G}=\tilde{G_1}\cap \tilde{G_2}$. Then $\tilde{G}$ is an open set in $X$ and $x\in \tilde{G}$. Then $\tilde{G}\cap(K_1\cup K_2)$ is an open neighborhood of $x$ relative to $K_1\cup K_2$ and
\[
\tilde{G}\cap (K_1\cup K_2)=(\tilde{G}\cap K_1)\cup(\tilde{G}\cap K_2)
\subset
(\tilde{G_1}\cap K_1)\cup (\tilde{G_2}\cap K_2)=G_1\cup G_2.
\]
Then
\[
0\le \mu(\tilde{G}\cap (K_1\cup K_2))\le \mu(G_1\cup G_2)
\le \mu(G_1)+\mu(G_2)=0,
\]
so that $\mu(\tilde{G}\cap (K_1\cup K_2))=0$, which is a contradiction since $x\in \suppm(\mu|(K_1\cup K_2))$. Next we consider the case where $x\in K_1$ and $x\not\in K_2$. Then there exists an open neighborhood $G_1$ of $x$ relative to $K_1$ with $\mu(G_1)=0$ since we have assumed that $x\not\in\suppm(\mu|K_1)$. There exists an open set $\tilde{G_1}$ in $X$ such that $\tilde{G_1}\cap K_1=G_1$. Since $x\not\in K_2$ we infer that $\tilde{G_1}\cap K_2^c$ is an open neighborhood of $x$ in $X$. Then $(\tilde{G_1}\cap K_2^c)\cap(K_1\cup K_2)$ is an open neighborhood of $x$ relative to $K_1\cup K_2$ and
\[
(\tilde{G_1}\cap K_2^c)\cap(K_1\cup K_2)=\tilde{G_1}\cap K_2^c\cap K_1\subset \tilde{G_1}\cap K_1=G_1.
\]
As $(\tilde{G_1}\cap K_2^c)\cap(K_1\cup K_2)$ is an open neighborhood of $x$ relative to $K_1\cup K_2$, we infer that
$0<\mu((\tilde{G_1}\cap K_2^c)\cap(K_1\cup K_2))$ since $x\in \suppm(\mu|(K_1\cup K_2))$. On the other hand $(\tilde{G_1}\cap K_2^c)\cap(K_1\cup K_2)\subset G_1$ assures that
\[
0< \mu((\tilde{G_1}\cap K_2^c)\cap(K_1\cup K_2))\le \mu(G_1)=0,
\]
which is a contradiction. In the same way we derive a contradiction for the case where $x\not\in K_1$ and $x\in K_2$. Therefore we conclude that $x\in \suppm(\mu|K_1)\cup \suppm(\mu|K_2)$.
\end{proof}
We assume the regularity for the measure $\mu$ in Lemma \ref{0.3}. If $\mu$ is a regular Borel probability measure on a compact Hausdorff space $Y$, then for any Borel set $S$ in $Y\setminus \suppm(\mu)$ we have $\mu(S)=0$. Indeed the regularity of $\mu$ assures that $\mu(S)$ is approximated arbitrarily closely by $\mu(E)$ for compact subsets $E\subset S$. Since $S\cap \suppm(\mu)=\emptyset$, compactness allows us to cover $E$ by finitely many open sets of measure zero. This implies $\mu(E)=0$ and thus $\mu(S)=0$.
\begin{lemma}\label{0.3}
Let $Y$ be a compact Hausdorff space and let $K$ be a non-empty compact subset of $Y$ and let
$\mu$ be a regular Borel probability measure on $Y\times \mathbb{T}$.
Let $g\in C_{\mathbb{R}}(Y)$ be such that $|g|\le c$ on $K$ for some $c>0$. Suppose that there exists $\gamma_0\in \mathbb{T}$ such that
\[
\int_{K\times\mathbb{T}}\gamma g(y)
d\mu(y,\gamma)=\gamma_0c \mu(K\times \mathbb{T}).
\]
Then we have the inclusion
\begin{multline*}
\suppm (\mu|K\times \mathbb{T})
\\
\subset
\left\{(g^{-1}(c)\cap K)\times \{\gamma_0\}\right\}
\cup
\left\{(g^{-1}(-c)\cap K)\times \{-\gamma_0\}\right\}.
\end{multline*}
\end{lemma}
\begin{proof}
As $|\gamma g|=|g|\le c$ on $K\times \mathbb{T}$ we have
\[
c\mu (K\times \mathbb{T})=\left|\int_{K\times\mathbb{T}}\gamma g(y)d\mu\right|\le \int_{K\times \mathbb{T}}|g(y)|d\mu\le c\mu(K\times \mathbb{T}),
\]
hence $\int_{K\times \mathbb{T}}|(g\otimes 1_{C(\mathbb{T})})(y,\gamma)|d\mu=\int_{K\times \mathbb{T}}|g(y)|d\mu= c\mu(K\times \mathbb{T})$.
Applying Lemma \ref{0.1} with $|g\otimes 1_{C(\mathbb{T})}|$ in place of the function $f$ and $K\times \mathbb{T}$ in place of the compact set $K$, we have
\[
\suppm(\mu|K\times \mathbb{T})\subset (|g\otimes 1_{C({\mathbb T})}|^{-1}(c))\cap (K\times\mathbb{T})=
(|g|^{-1}(c)\cap K)\times \mathbb{T}.
\]
As $g$ is a real-valued function we infer by a simple calculation that
\[
|g|^{-1}(c)=g^{-1}(c)\cup g^{-1}(-c).
\]
Put $K_1=g^{-1}(c)$ and $K_2=g^{-1}(-c)$. As $c>0$, we have $K_1\cap K_2=\emptyset$. Then
\begin{multline}\label{o1}
\suppm(\mu|K\times \mathbb{T})
\subset ((K_1\cup K_2)\cap K)\times\mathbb{T}\\
=(K_1\cap K)\times \mathbb{T} \cup (K_2\cap K)\times \mathbb{T}.
\end{multline}
As $\mu$ is regular, we have that
\[
\mu(K\times \mathbb{T}\setminus[(K_1\cap K)\times \mathbb{T} \cup (K_2\cap K)\times \mathbb{T}])=0.
\]
It follows that
\begin{multline*}
\gamma_0 c \mu (K\times \mathbb{T})=
\int_{K\times\mathbb{T}}\gamma g(y)d\mu \\
=\int_{(K_1\cap K)\times\mathbb{T}}\gamma g(y)d\mu +
\int_{(K_2\cap K)\times\mathbb{T}}\gamma g(y)d\mu \\
=c\int_{(K_1\cap K)\times\mathbb{T}}\gamma d\mu-c\int_{(K_2\cap K)\times\mathbb{T}}\gamma d\mu.
\end{multline*}
Thus we have
\begin{equation}\label{sub1}
\mu(K\times \mathbb{T})=\int_{(K_1\cap K)\times \mathbb{T}}\overline{\gamma_0}\gamma d\mu-\int_{(K_2\cap K)\times\mathbb{T}}\overline{\gamma_0}\gamma d\mu.
\end{equation}
Put $M_1=\int_{(K_1\cap K)\times \mathbb{T}}1d\mu$ and $M_2=\int_{(K_2\cap K)\times \mathbb{T}}1d\mu$. As $\mu$ is regular and $K_1\cap K_2=\emptyset$ we have
\begin{equation}\label{sub2}
M_1+M_2= \int_{((K_1\cup K_2)\cap K)\times \mathbb{T}}1d\mu=\int_{K\times \mathbb{T}}1d\mu=\mu(K\times \mathbb{T}).
\end{equation}
Put
\[
\int_{(K_1\cap K)\times\mathbb{T}}\overline{\gamma_0}\gamma d\mu=e^{i\delta_1}N_1, \quad
\int_{(K_2\cap K)\times\mathbb{T}}\overline{\gamma_0}\gamma d\mu=e^{i\delta_2}N_2,
\]
where $N_1, N_2\ge 0$ and $\delta_1,\delta_2\in {\mathbb R}$. We may assume that $e^{i\delta_1}=1$ if $N_1=0$ and $e^{i\delta_2}=-1$ if $N_2=0$. Note that $N_1\le M_1$ and $N_2\le M_2$. By \eqref{sub1} and \eqref{sub2} we obtain
\[
M_1+M_2=e^{i\delta_1}N_1-e^{i\delta_2}N_2.
\]
Taking real parts and using $N_1\le M_1$ and $N_2\le M_2$ we obtain
\[
M_1+M_2=N_1\cos\delta_1-N_2\cos\delta_2\le N_1+N_2\le M_1+M_2,
\]
which forces $e^{i\delta_1}=-e^{i\delta_2}=1$, $N_1=M_1$, and $N_2=M_2$, that is,
\[
\int_{(K_1\cap K)\times\mathbb{T}}\overline{\gamma_0}\gamma d\mu =\mu((K_1\cap K)\times\mathbb{T}), \quad
\int_{(K_2\cap K)\times\mathbb{T}}-\overline{\gamma_0}\gamma d\mu =\mu((K_2\cap K)\times\mathbb{T}).
\]
Then
\begin{equation}\label{sub3}
\mu((K_1\cap K)\times\mathbb{T})=\operatorname{Re}\int_{(K_1\cap K)\times \mathbb{T}}\overline{\gamma_0}\gamma d\mu=
\int_{(K_1\cap K)\times\mathbb{T}}\operatorname{Re}\overline{\gamma_0}\gamma d\mu,
\end{equation}
\begin{equation}\label{sub4}
\mu((K_2\cap K)\times\mathbb{T})=\operatorname{Re}\int_{(K_2\cap K)\times \mathbb{T}}-\overline{\gamma_0}\gamma d\mu=
\int_{(K_2\cap K)\times\mathbb{T}}\operatorname{Re}(-\overline{\gamma_0}\gamma) d\mu.
\end{equation}
Applying Lemma \ref{0.1} to \eqref{sub3} we infer that
\[
\operatorname{supp}(\mu|((K_1\cap K)\times\mathbb{T}))\subset
(K_1\cap K)\times \{\gamma_0\}
\]
since $\operatorname{Re}\overline{\gamma_0}\gamma \le 1$ and $(\operatorname{Re}\overline{\gamma_0}\gamma)^{-1}(1)=\{\gamma_0\}$.
In the same way we have by \eqref{sub4} that
\[
\operatorname{supp}(\mu|((K_2\cap K)\times\mathbb{T}))\subset
(K_2\cap K)\times \{-\gamma_0\}.
\]
By Lemma \ref{0.2} we have that
\begin{multline}\label{o2}
\operatorname{supp}\Big(\mu|\big(((K_1\cup K_2)\cap K)\times\mathbb{T}\big)\Big)
\\
\subset
\big\{(K_1\cap K)\times\{\gamma_0\}\big\}\cup
\big\{(K_2\cap K)\times\{-\gamma_0\}\big\}.
\end{multline}
Since $\mu$ is regular, so is $\mu|(K\times\mathbb{T})$. Thus $\mu|(K\times\mathbb{T})$ is a regular Borel measure on $K\times\mathbb{T}$ such that $\operatorname{supp}(\mu|(K\times\mathbb{T}))\subset ((K_1\cup K_2)\cap K)\times\mathbb{T}$ by \eqref{o1}. Thus
\begin{multline*}
\operatorname{supp}(\mu|(K\times\mathbb{T}))=
\operatorname{supp}\Big(\big(\mu|(K\times\mathbb{T})\big)|\big(((K_1\cup K_2)\cap K)\times\mathbb{T}\big)\Big)\\
=\operatorname{supp}\Big(\mu|\big(((K_1\cup K_2)\cap K)\times\mathbb{T}\big)\Big),
\end{multline*}
hence the conclusion holds by \eqref{o2}.
\end{proof}
Lemma \ref{0.3} plays an essential role in the proof of Lemma \ref{2} which is a crucial lemma for the proof of Proposition \ref{absolute value 1}.
\section{Isometries on admissible quadruples of type L}
The main result of this paper is the following.
\begin{theorem}\label{main}
Suppose that $(X_j, C(Y_j), B_j, \wbj)$ is an admissible quadruple of type L for $j=1,2$.
Suppose that $U:\wbo\to \wbt$ is a surjective isometry. Then there exists $h\in C(Y_2)$ such that $|h|=1$ on $Y_2$, a continuous map $\varphi:X_2\times Y_2\to X_1$ such that $\varphi(\cdot,y):X_2\to X_1$ is a homeomorphism for each $y\in Y_2$, and a homeomorphism $\tau:Y_2\to Y_1$ which satisfy
\[
U(F)(x,y)=h(y)F(\varphi(x,y),\tau(y)),\qquad (x,y)\in X_2\times Y_2
\]
for every $F\in \wbo$.
\end{theorem}
In short, a surjective isometry between admissible quadruples of type L is canonical, that is, a weighted composition operator of a specific form: the homeomorphism $X_2\times Y_2 \to X_1\times Y_1$, $(x,y)\mapsto (\varphi(x,y),\tau(y))$, has a second coordinate that depends only on the second variable $y\in Y_2$.
A composition operator induced by such a homeomorphism is said to be of type BJ in \cite{ho,hots} after the study of Botelho and Jamison \cite{bjRocky}. The fact that every composition operator induced by a surjective isometry of an admissible quadruple $(X, E, B, \widetilde{B})$ onto itself is of type BJ indicates that $B$ and $E$ are totally different Banach algebras.
\section{The form of $U(1_{\wb_1})$}
Throughout this section we assume
that $U:\wbo \to \wbt$ is a surjective linear isometry satisfying all the hypotheses of Theorem \ref{main} without further mention. For simplicity of the proof of Theorem \ref{main} we assume that $X_2$ is not a singleton in this section. Our main purpose in this section is to prove Proposition \ref{absolute value 1}, which is a crucial part of the proof of Theorem \ref{main}.
\begin{prop}\label{absolute value 1}
There exists $h\in C(Y_2)$ with $|h|=1$ on $Y_2$ such that $U(1_{\wb_1})=1_{B_2}\otimes h$.
\end{prop}
Lemma \ref{2} is crucial for the proof of Proposition \ref{absolute value 1}. We prove Lemma \ref{2} by applying Choquet's theory (\cite{ph}), which studies the extreme points of the dual unit ball of a space of continuous functions with the supremum norm. To apply the theory we first define an isometry
from $\widetilde{B_j}$ into a uniformly closed space of complex-valued continuous functions.
Let $j=1, 2$. Define a map
\[
I_j:\wbj \to C(X_j\times Y_j\times \M_j\times \mathbb{T})
\]
by $I_j(F)(x,y,m,\gamma)=F(x,y)+\gamma D_j(F)(m)$ for $F\in \wbj$ and $(x,y,m,\gamma)\in X_j\times Y_j\times \M_j\times \mathbb{T}$. (Recall that $\mathbb{T}$ is the unit circle in the complex plane.) As $D_j$ is a complex-linear map, so is $I_j$. Let $S_j=X_j\times Y_j\times \M_j\times \mathbb{T}$. For simplicity we just write $I$ and $D$ instead of $I_j$ and $D_j$ when no confusion arises. For every $F\in \wbj$ the supremum norm $\|I(F)\|_{\infty}$ on $S_j$ of $I(F)$ is written as
\begin{equation*}
\begin{split}
\|I(F)\|_\infty
& =\sup\{|F(x,y)+\gamma D(F)(m)|:(x,y,m,\gamma)\in S_j\}\\
& =\sup\{|F(x,y)|:(x,y)\in X_j\times Y_j\}\\
&\qquad
+\sup\{|D(F)(m)|:m\in \M_j\}\\
&=\|F\|_{\infty(X_j\times Y_j)}+\|D(F)\|_{\infty(\M_j)}.
\end{split}
\end{equation*}
The second equality holds since, for fixed $(x,y,m)$, the variable $\gamma$ runs through the whole of $\mathbb{T}$, so that $\gamma$ may be chosen to make $\gamma D(F)(m)$ and $F(x,y)$ have the same argument.
It follows that
\[
\|I(F)\|_{\infty}=\|F\|_{\infty}
+\|D(F)\|_{\infty}=
\|F\|
\]
for every $F\in \wbj$. Since $0= \| D(1)\|_{\infty}$, we have $D(1)=0$ and $I(1)=1$. Hence $I$ is a complex-linear isometry with $I(1)=1$. In particular, $I(\wbj)$ is a complex-linear closed subspace of $C(S_j)$ which contains $1$. In general $I(\wbj)$ need not separate the points of $S_j$.
It follows from the definition in \cite{ph} of the Choquet boundary $\ch I(\wbt)$ of $I(\wbt)$ that a point $p=(x,y,m,\gamma)\in X_2\times Y_2\times\mathfrak{M}_2\times \mathbb{T}$ is in $\ch I(\wbt)$ if and only if the point evaluation $\phi_p$ at $p$ is an extreme point of the state space, or equivalently $\phi_p$ is an extreme point of the closed unit ball $(I(\wbt))^*_1$ of the dual space $(I(\wbt))^*$ of $I(\wbt)$.
\begin{lemma}\label{1}
Suppose that
$(x_0,y_0)\in X_2\times Y_2$ and ${\mathfrak U}$ is an open neighborhood of $(x_0,y_0)$. Then there exists a function $F_0=b_0\otimes f_0\in \wbt$ with $0\le b_0\le 1$ on $X_2$ and $0\le f_0\le 1$ on $Y_2$ such that $F_0(x_0,y_0)=1$ and $F_0<1/2$ on $X_2\times Y_2\setminus \mathfrak{U}$. Furthermore there exists a point $(x_c,y_c,m_c,\gamma_c)$ in the Choquet boundary for $I_2(\wbt)$ such that $(x_c,y_c)\in {\mathfrak U}\cap (b^{-1}_0(1)\times f^{-1}_0 (1))$ and $\gamma_cD(F_0)(m_c)=\|D(F_0)\|_\infty\ne 0$.
\end{lemma}
\begin{proof}
Suppose that $\mathfrak{G}$ and $\mathfrak{H}$ are open neighborhoods of $x_0$ and $y_0$ respectively such that $\mathfrak{G}\times \mathfrak{H}\subset \mathfrak{U}$. Since $B_2$ is unital, self-adjoint and separates the points of $X_2$, the Stone-Weierstrass theorem asserts that $B_2$ is uniformly dense in $C(X_2)$. By Urysohn's lemma there exists $v\in C(X_2)$ such that $0\le v\le 4/5$ on $X_2$, $v(x_0)=0$, and $v=4/5$ on $X_2\setminus \mathfrak{G}$. As $B_2$ is self-adjoint and uniformly dense in $C(X_2)$, there exists $u_1\in B_2\cap C_{\mathbb R}(X_2)$ such that $\|v-u_1\|_\infty<1/40$. Put $u=u_1-u_1(x_0)$.
By a simple calculation we infer that $u\in B_2$ with $u(x_0)=0$ and $-1\le u\le 1$ on $X_2$ and $u^2>1/2$ on $X_2\setminus \mathfrak{G}$. Then $b_0=1-u^2\in B_2$, $0\le b_0\le 1=b_0(x_0)$ on $X_2$, and $b_0< 1/2$ on $X_2\setminus \mathfrak{G}$. We may suppose that $b_0$ is not constant as we assume that $X_2$ is not a singleton. In a similar way, there exists $f_0\in C(Y_2)$ with $0\le f_0\le 1=f_0(y_0)$ and $f_0<1/2$ on $Y_2\setminus \mathfrak{H}$. Put $F_0=b_0\otimes f_0$. Hence we have that $0\le F_0\le 1=F_0(x_0,y_0)$ and $F_0<1/2$ on $X_2\times Y_2\setminus \mathfrak{U}$.
Since $B_2\otimes C(Y_2)\subset \wbt$ by Definition \ref{aqL},
we infer that $F_0\in \wbt$.
By Proposition 6.3 in \cite{ph} there exists $c=(x_c,y_c,m_c,\gamma_c)$ in the Choquet boundary for $I(\wbt)$ with
\[
\|I(F_0)\|_{\infty}=|I(F_0)(c)|.
\]
We see that
\begin{multline}\label{sub5}
\|I(F_0)\|_{\infty}=|I(F_0)(c)|=|F_0(x_c,y_c)+\gamma_cD(F_0)(m_c)| \\
\le |F_0(x_c,y_c)|+|D(F_0)(m_c)|\le \|F_0\|_\infty + \|D(F_0)\|_\infty
=\|I(F_0)\|_{\infty}.
\end{multline}
As $0\le F_0\le 1=\|F_0\|_\infty$ we have by \eqref{sub5} that $F_0(x_c,y_c)=1=\|F_0\|_\infty$. Thus $(x_c,y_c)\in {\mathfrak U}\cap (b^{-1}_0(1)\times f^{-1}_0(1))$. Applying that $F_0(x_c,y_c)=1$ and \eqref{sub5}, we also have that $\gamma_cD(F_0)(m_c)=|D(F_0)(m_c)|=\|D(F_0)\|_\infty$.
As $b_0$ is not a constant function, we have $F_0=b_0\otimes f_0\not\in 1\otimes C(Y_2)=\operatorname{ker} D$. Hence we have $\|D(F_0)\|_\infty \ne 0$, so that $\|D(F_0)\|_\infty> 0$. As $F_0$ is real-valued, so is $D(F_0)$ by the condition $\cnt$ of Definition \ref{aqL}.
Hence we see that $\gamma_cD(F_0)(m_c)>0$ and $\gamma_c=1$ or $-1$.
\end{proof}
Note that $\gamma_c=1$ if $D(F_0)(m_c)>0$ and $\gamma_c=-1$ if $D(F_0)(m_c)<0$.
\begin{lemma}\label{2}
Suppose that
$(x_0,y_0)\in X_2\times Y_2$ and ${\mathfrak U}$ is an open neighborhood of $(x_0,y_0)$. Let $F_0=b_0\otimes f_0\in \wbt$ be a function such that $0\le b_0\le 1$ on $X_2$, $0\le f_0\le 1$ on $Y_2$, $F_0(x_0,y_0)=1$, and $F_0<1/2$ on $X_2\times Y_2\setminus \mathfrak{U}$. Let $(x_c,y_c,m_c,\gamma_c)$ be a point in the Choquet boundary for $I_2(\wbt)$ such that $(x_c,y_c)\in {\mathfrak U}\cap (b^{-1}_0(1)\times f^{-1}_0 (1))$ and $\gamma_cD(F_0)(m_c)=\|D(F_0)\|_\infty\ne 0$. (Such functions and a point $(x_c,y_c,m_c,\gamma_c)$ exist by Lemma \ref{1}.)
Then for any $0<\theta<\pi/2$,
$c_\theta=(x_c,y_c,m_c,e^{i\theta}\gamma_c)$ is also in the Choquet boundary for $I(\wbt)$.
\end{lemma}
\begin{proof}
Let $\theta$ satisfy $0<\theta<\pi/2$. The point evaluation $\phi_\theta(I(F))=F(x_c,y_c)+e^{i\theta}\gamma_cD(F)(m_c)$ at $c_\theta$ is well defined for $I(F)\in I(\wbt)$ since $I$ is injective. We prove that the point evaluation $\phi_\theta$ is an extreme point of the closed unit ball $I(\wbt)^*_1$ of the dual space $I(\wbt)^*$ of $I(\wbt)$. Suppose that $\phi_\theta=\frac12(\phi_1+\phi_2)$ for $\phi_1,\phi_2\in I(\wbt)^*$ with $\|\phi_1\|=\|\phi_2\|=1$, where $\|\cdot\|$ denotes the operator norm here. Let $\check{\phi_j}$ be a Hahn-Banach extension of $\phi_j$ to $C(X_2\times Y_2\times\mathfrak{M}_2\times \mathbb{T})$ for each $j=\theta, 1,2$. By the Riesz-Markov-Kakutani representation theorem there exists a complex regular Borel measure $\mu_j$ on $X_2\times Y_2\times\mathfrak{M}_2\times \mathbb{T}$ with $\|\mu_j\|=1$ which represents $\check{\phi_j}$ for $j=\theta,1,2$ respectively. In particular, we have
\[
\int I(F)d\mu_j=\phi_j(I(F)),\qquad I(F)\in I(\wbt)
\]
for $j=\theta,1,2$. As $\int 1d\mu_\theta=\phi_\theta(1)=1$ we see that $\mu_\theta$ is a probability measure. By the equation
\[
1=\int 1d\mu_\theta=\frac12\int 1d\mu_1+\frac12\int 1d\mu_2
\]
we see that $\mu_1$ and $\mu_2$ are also probability measures.
We prove that the support $\suppmj$ of the measure $\mu_j$ satisfies
\begin{equation}\label{sppmj}
\suppmj\subset b^{-1}_0(1)\times f^{-1}_0(1)\times\left\{(K_1\times \{e^{i\theta}\gamma_c\})\cup (K_2\times \{-e^{i\theta}\gamma_c\})\right\},
\end{equation}
where $K_1=D(F_0)^{-1}(D(F_0)(m_c))$ and $K_2=D(F_0)^{-1}(-D(F_0)(m_c))$, for $j=\theta,1,2$. Note that $m_c\in K_1$ while $K_2$ may be empty. Note also that $K_1\cap K_2=\emptyset$ since $|D(F_0)(m_c)|=\|D(F_0)\|_\infty\ne 0$. We first consider the case for $j=\theta$. As $(x_c, y_c)\in b^{-1}_0(1)\times f^{-1}_0(1)$ we have
\[
\phi_\theta(I(F_0))=F_0(x_c,y_c)+e^{i\theta}\gamma_cD(F_0)(m_c)
=1+ e^{i\theta}\gamma_cD(F_0)(m_c).
\]
As $\phi_\theta(I(F_0))=\int I(F_0)d\mu_\theta$ we have
\begin{multline*}
1+ e^{i\theta}\gamma_cD(F_0)(m_c)\\
=\int F_0(x,y)d\mu_\theta(x,y,m,\gamma) + \int \gamma D(F_0)(m)d\mu_\theta(x,y,m,\gamma).
\end{multline*}
Note that $0\le \int F_0(x,y)d\mu_\theta \le 1$ since $0\le F_0\le 1$ and $\mu_\theta$ is a probability measure. As $\gamma_cD(F_0)(m_c)=\|D(F_0)\|_\infty$, we have
\[
\left|\int\gamma D(F_0)(m)d\mu_\theta\right|\le \gamma_cD(F_0)(m_c).
\]
Taking into account that $0<\theta<\pi/2$, we have by an elementary calculation (detailed below) that
\begin{equation}\label{(a)}
1=\int F_0(x,y)d\mu_\theta,
\end{equation}
\begin{equation}\label{(b)}
e^{i\theta}\gamma_cD(F_0)(m_c)
=
\int \gamma D(F_0)(m)d\mu_\theta.
\end{equation}
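For the reader's convenience we detail the elementary calculation. Put $A=\int F_0(x,y)d\mu_\theta\in[0,1]$, $B=\int \gamma D(F_0)(m)d\mu_\theta$ and $r=\gamma_cD(F_0)(m_c)>0$, so that $1+e^{i\theta}r=A+B$ with $|B|\le r$. Then $B=e^{i\theta}r+(1-A)$, hence
\[
r^2\ge |B|^2=r^2+(1-A)^2+2r(1-A)\cos\theta.
\]
Since $\cos\theta>0$ and $0\le 1-A\le 1$, this forces $A=1$ and $B=e^{i\theta}r$, which are precisely \eqref{(a)} and \eqref{(b)}.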
Since $\mu_\theta$ is a regular Borel measure, $\mu_\theta(L)=0$ for any Borel set $L$ with $L\cap \suppmthe=\emptyset$. Hence we have $\int Gd\mu_\theta=\int_{\suppmthe} Gd\mu_\theta$ for every $G\in C(X_2\times Y_2\times \mathfrak{M}_2\times \mathbb{T})$. Then by the equality \eqref{(a)} we have
\[
1=\int_{\suppmthe} F_0(x,y)d\mu_\theta.
\]
As $0\le F_0\le 1$ we have by Lemma \ref{0.1} that
\begin{equation}\label{guruguru}
\suppmthe \subset F_0^{-1}(1)\times\mathfrak{M}_2\times\mathbb{T}
=
b^{-1}_0(1)\times f^{-1}_0(1)\times\mathfrak{M}_2\times\mathbb{T}.
\end{equation}
Letting $K=X_2\times Y_2\times \mathfrak{M}_2$, $g=1_{C(X_2\times Y_2)}\otimes D(F_0)$, and
applying Lemma \ref{0.3} to the equation \eqref{(b)} we get
\begin{multline*}
\suppmthe \subset
\left\{X_2\times Y_2\times K_1\times\{e^{i\theta}\gamma_c\}\right\}\cup\left\{ X_2\times Y_2\times K_2\times\{-e^{i\theta}\gamma_c\}\right\} \\
=
X_2\times Y_2\times\left\{(K_1\times\{e^{i\theta}\gamma_c\})\cup (K_2\times \{-e^{i\theta}\gamma_c\})\right\}.
\end{multline*}
Combining this inclusion with \eqref{guruguru} we infer that the inclusion \eqref{sppmj} holds
for $\mu_\theta$. In order to prove the corresponding inclusion for $\mu_j$ for $j=1,2$, we first have
\begin{multline*}
1+e^{i\theta}\gamma_c D(F_0)(m_c)
=
\phi_\theta (I(F_0))\\
=
\int I(F_0)d\frac{\mu_1+\mu_2}{2}+\int \gamma D(F_0)d\frac{\mu_1+\mu_2}{2}
\end{multline*}
by the equation $\phi_\theta(I(F_0))=\frac12\left(\phi_1(I(F_0))+\phi_2(I(F_0))\right)$.
Using a similar argument to that for $\mu_\theta$, applied to $\frac{\mu_1+\mu_2}{2}$, we get
\[
\suppm(\frac{\mu_1+\mu_2}{2}) \subset b^{-1}_0(1)\times f^{-1}_0(1)\times\left\{(K_1\times\{e^{i\theta}\gamma_c\})\cup (K_2\times \{-e^{i\theta}\gamma_c\})\right\}.
\]
As $\mu_1$ and $\mu_2$ are positive measures we have the inclusion \eqref{sppmj} for $j=1,2$.
Next we prove equations
\begin{equation}\label{star1}
F(x_c,y_c)=\int F(x,y)d\mu_\theta
\end{equation}
and
\begin{multline}\label{star2}
D(F)(m_c)=(e^{i\theta}\gamma_c)^{-1}\int \gamma D(F)(m)d\mu_\theta
\\
=
\int_{L_1}D(F)(m)d\mu_\theta
-
\int_{L_2}D(F)(m)d\mu_\theta
\end{multline}
for every $F\in \wbt$, where $L_j=b^{-1}_0(1)\times f^{-1}_0(1)\times K_j\times \{(-1)^{j+1}e^{i\theta}\gamma_c\}$ for $j=1,2$.
We first show \eqref{star1} and \eqref{star2} for
a real-valued function $F\in \wbt$.
Suppose that $F\in \wbt\cap C_{\mathbb R}(X_2\times Y_2)$. Then we have
\begin{equation}\label{grgr}
\begin{split}
F(x_c,y_c)&+e^{i\theta}\gamma_c D(F)(m_c)
= \phi_\theta(I(F)) \\
& = \int F(x,y)d\mu_\theta + \int \gamma D(F)(m)d\mu_\theta \\
& = \int F(x,y)d\mu_\theta + \int_{L_1} \gamma D(F)(m)d\mu_\theta +\int_{L_2} \gamma D(F)(m)d\mu_\theta \\
& = \int F(x,y)d\mu_\theta \\
& \qquad
+e^{i\theta}\gamma_c \left(
\int_{L_1}D(F)(m)d\mu_\theta -
\int_{L_2}D(F)(m)d\mu_\theta \right).
\end{split}
\end{equation}
Note that $F(x_c,y_c)$, $D(F)(m_c)$, $\int F(x,y)d\mu_\theta$, $\int_{L_j}D(F)(m)d\mu_\theta$
for $j=1,2$ are all real numbers since $F$ and $D(F)$ are real-valued functions (see Definition \ref{aqL}). We also note that $e^{i\theta}\gamma_c\not\in \mathbb{R}$ since $0<\theta<\pi/2$ and $\gamma_c=1$ or $-1$.
Then comparing the real and the imaginary parts of the equation \eqref{grgr} we have \eqref{star1} and \eqref{star2}
for every $F\in \wbt \cap C_{\mathbb{R}}(X_2\times Y_2)$.
Take a general function
$F\in \wbt$. We have assumed that $\wbt$ is self-adjoint by the condition $\cno$ in Definition \ref{aqL}, therefore the real part $\operatorname{Re}F$ and the imaginary part $\operatorname{Im}F$ of $F$ both are in $\wbt \cap C_{\mathbb{R}}(X_2\times Y_2)$. Then
by \eqref{star1} for real-valued maps,
we have
\[
\operatorname{Re}F(x_c,y_c)=\int\operatorname{Re}F(x,y)d\mu_\theta,
\]
\[
\operatorname{Im}F(x_c,y_c)=\int\operatorname{Im}F(x,y)d\mu_\theta.
\]
Hence we have
\[
F(x_c,y_c)=\int\operatorname{Re}F(x,y)d\mu_\theta+i\int\operatorname{Im}F(x,y)d\mu_\theta=\int F(x,y)d\mu_\theta.
\]
Thus \eqref{star1} is proved for every $F\in \wbt$. As $D$ is complex-linear we have by \eqref{star2} for real-valued functions that
\begin{multline*}
D(F)(m_c)=D(\operatorname{Re}F)(m_c)+iD(\operatorname{Im}F)(m_c) \\
=(e^{i\theta}\gamma_c)^{-1}\int \gamma D(\operatorname{Re}F)(m)d\mu_\theta
+i(e^{i\theta}\gamma_c)^{-1}\int\gamma D(\operatorname{Im}F)(m)d\mu_\theta \\
= (e^{i\theta}\gamma_c)^{-1}\int \gamma D(F)d\mu_\theta \\
=
\int_{L_1}D(F)(m)d\mu_\theta
-
\int_{L_2}D(F)(m)d\mu_\theta.
\end{multline*}
Thus we have just proved \eqref{star2} for every $F\in \wbt$.
For every $F\in \wbt$ we have
\begin{equation*}
\begin{split}
\phi_\theta (I(F)) &=\frac12 \left(\phi_1(I(F))+\phi_2(I(F))\right) \\
&=
\int F(x,y)d\frac{\mu_1+\mu_2}{2}+\int \gamma D(F)(m)d\frac{\mu_1+\mu_2}{2}.
\end{split}
\end{equation*}
In the same way as in the proof of \eqref{star1} and \eqref{star2} we have
\begin{equation}\label{star3}
F(x_c,y_c)=\int F(x,y)d\frac{\mu_1+\mu_2}{2}
\end{equation}
and
\begin{multline}\label{star4}
D(F)(m_c)=(e^{i\theta}\gamma_c)^{-1}\int \gamma D(F)(m)d\frac{\mu_1+\mu_2}{2}
\\
=
\int_{L_1}D(F)(m)d\frac{\mu_1+\mu_2}{2}
-
\int_{L_2}D(F)(m)d\frac{\mu_1+\mu_2}{2}
\end{multline}
for every $F\in \wbt$.
Next define a regular Borel probability measure $\nu_j$ on $X_2\times Y_2\times \mathfrak{M}_2\times \mathbb{T}$ for $j=\theta,1,2$ by
\[
\nu_j(E)=\mu_j(\{(x,y,m,e^{i\theta}\gamma):(x,y,m,\gamma)\in E\})
\]
for a Borel set $E\subset X_2\times Y_2\times\mathfrak{M}_2\times\mathbb{T}$. Then we have
\begin{equation}\label{A1}
\int F(x,y)d\nu_j=\int F(x,y) d\mu_j
\end{equation}
for every $F\in \wbt$ and $j=\theta, 1,2$. By \eqref{sppmj}
we have
\begin{equation}\label{sppnj}
\operatorname{supp}(\nu_j)\subset b^{-1}_0(1)\times f^{-1}_0(1)\times \left[(K_1\times \{\gamma_c\})\cup (K_2\times \{-\gamma_c\})\right]
\end{equation}
for $j=\theta,1,2$. Put $T_j=b^{-1}_0(1)\times f^{-1}_0(1)\times K_j\times\{(-1)^{j+1}\gamma_c\}$. As $\nu_\theta$ and $\frac{\nu_1+\nu_2}{2}$ are regular and $K_1\cap K_2=\emptyset$, we have by \eqref{sppmj} and \eqref{sppnj} that
\begin{equation}\label{A2}
\begin{split}
\int\gamma D(F)(m)d\nu_j
& = \int_{T_1}\gamma D(F)(m)d\nu_j+\int_{T_2}\gamma D(F)(m)d\nu_j \\
& = \gamma_c\int_{T_1}D(F)(m)d\nu_j-\gamma_c\int_{T_2}D(F)(m)d\nu_j \\
& = \gamma_c\int_{L_1}D(F)(m)d\mu_j-\gamma_c\int_{L_2}D(F)(m)d\mu_j \\
& = e^{-i\theta}\int \gamma D(F)(m)d\mu_j
\end{split}
\end{equation}
for every $F\in \wbt$ and $j=\theta, 1,2$.
For $j=\theta, 1,2$, define $\psi_j:I(\wbt)\to \mathbb{C}$ by
\[
\psi_j(I(F))=\int I(F)d\nu_j, \quad I(F)\in I(\wbt).
\]
As $\nu_j$ is a probability measure we see that $\psi_j\in I(\wbt)^*_1$. Let $I(F)\in I(\wbt)$. Then by \eqref{A1} and \eqref{A2} we have
\begin{equation*}
\begin{split}
\psi_\theta (I(F))
& = \int I(F)d\nu_\theta \\
& = \int F(x,y)d\nu_\theta +\int \gamma D(F)(m)d\nu_\theta \\
& = \int F(x,y)d\mu_\theta +e^{-i\theta}\int \gamma D(F)(m)d\mu_\theta.
\end{split}
\end{equation*}
Then by \eqref{star1} and \eqref{star2} we have
\[
\psi_\theta(I(F))=F(x_c,y_c)+\gamma_cD(F)(m_c)=I(F)(x_c,y_c,m_c,\gamma_c).
\]
That is, $\psi_\theta$ is the point evaluation for $I(\wbt)$ at $(x_c,y_c,m_c,\gamma_c)$.
By \eqref{A1}, \eqref{A2}, \eqref{star3} and \eqref{star4} we have
\begin{multline*}
\frac12(\psi_1(I(F))+\psi_2(I(F))) \\
= \int F(x,y)d\frac{\nu_1+\nu_2}{2}+\int \gamma D(F)(m)d\frac{\nu_1+\nu_2}{2} \\
= \int F(x,y)d\frac{\mu_1+\mu_2}{2}+e^{-i\theta}\int \gamma D(F)(m)d\frac{\mu_1+\mu_2}{2} \\
= F(x_c,y_c)+\gamma_c D(F)(m_c)
\end{multline*}
for every $F\in \wbt$. Hence we have
\[
\psi_\theta(I(F))=\frac12\left(\psi_1(I(F))+\psi_2(I(F))\right)
\]
for every $I(F)\in I(\wbt)$; $\psi_\theta=\frac12(\psi_1+\psi_2)$. Since $(x_c,y_c,m_c,\gamma_c)$ is in the Choquet boundary for $I(\wbt)$, $\psi_\theta$ is an extreme point for $I(\wbt)^*_1$. Thus we have that $\psi_\theta=\psi_1=\psi_2$.
Applying the equations $\psi_\theta=\psi_1=\psi_2$ we prove that $\phi_\theta=\phi_1=\phi_2$. By \eqref{A1} and \eqref{A2} we have
\begin{multline}\label{K}
\phi_j(I(F))=\int F(x,y)d\mu_j+\int \gamma D(F)(m)d\mu_j \\
=\int F(x,y)d\nu_j+ e^{i\theta}\int \gamma D(F)(m)d\nu_j,\quad F\in \wbt
\end{multline}
for every $j=\theta,1,2$.
Put
\[
P=\{G\in \wbt: 0\le G\le 1=G(x_c,y_c)\}.
\]
Then the set $P$ separates the points of $X_2\times Y_2$. Suppose that $(x_1,y_1)$ and $(x_2,y_2)$ are different points in $X_2\times Y_2$. We may assume that $(x_c,y_c)\ne(x_2,y_2)$. Let $\mathfrak{U}_c$ be an open neighborhood of $(x_c,y_c)$ such that $(x_2,y_2)\not\in \mathfrak{U}_c$. By Lemma \ref{1} there is $F_c\in \wbt$ such that $0\le F_c\le 1=F_c(x_c,y_c)$ on $X_2\times Y_2$ and $F_c<1/2$ on $X_2\times Y_2\setminus \mathfrak{U}_c$. Hence $0\le F_c(x_2,y_2)<1/2$. In the same way there exists $F_1\in \wbt$ such that $0\le F_1\le 1=F_1(x_1,y_1)$ on $X_2\times Y_2$ and $0\le F_1(x_2,y_2)<1/2$. Put $H=1-(1-F_c)(1-F_1)\in \wbt$. Then we infer that $0\le H\le 1$ on $X_2\times Y_2$, $H(x_c,y_c)=H(x_1,y_1)=1$, and $H(x_2,y_2)\ne 1$. Hence we have that $H\in P$ and $H(x_1,y_1)\ne H(x_2,y_2)$.
Let $G\in P$ be arbitrary. Since $P\subset \wbt$, we have $G\in \wbt$. Hence by the equality \eqref{A1} we have
\begin{multline*}
\frac12\left(\int G(x,y)d\nu_1+\int G(x,y)d\nu_2\right) \\
=\frac12\left(\int G(x,y)d\mu_1+\int G(x,y)d\mu_2\right)
=\int G(x,y)d\frac{\mu_1+\mu_2}{2}.
\end{multline*}
By \eqref{star3}
\[
\int G(x,y)d\frac{\mu_1+\mu_2}{2}=G(x_c,y_c)=1.
\]
Hence we have
\[
\frac12\left(\int G(x,y)d\nu_1+\int G(x,y)d\nu_2\right)=1.
\]
Since $0\le G\le 1$ we have $0\le \int G(x,y)d\nu_j\le 1$ for $j=1,2$. It follows that
\[
\int G(x,y)d\nu_1=\int G(x,y)d\nu_2 = 1.
\]
As $G\in P$ is arbitrary we have
\[
\int \sum a_nG_n(x,y)d\nu_1=\sum a_n= \int \sum a_nG_n(x,y)d\nu_2
\]
for any complex linear combination $\sum a_nG_n$ with $G_n\in P$.
Since $P$ is closed under multiplication and separates the points in $X_2\times Y_2$, we have that
\[
\left\{\sum a_nG_n:\text{$a_n\in {\mathbb C}$, $G_n\in P$}\right\}
\]
is a unital subalgebra of $\wbt$ which is conjugate-closed and separates the points of $X_2\times Y_2$. The Stone-Weierstrass theorem asserts that it is uniformly dense in $C(X_2\times Y_2)$; in particular, every $F\in \wbt$ is a uniform limit of such linear combinations. It follows that we have
\begin{equation}\label{C}
\int F(x,y)d\nu_1=\int F(x,y)d\nu_2
\end{equation}
for every $F\in \wbt$. On the other hand, since $\psi_1=\psi_2$ we have
\begin{multline}\label{nnn}
\int F(x,y)d\nu_1+\int \gamma D(F)(m)d\nu_1=\psi_1(I(F))\\
=\psi_2(I(F))=
\int F(x,y)d\nu_2+\int \gamma D(F)(m)d\nu_2
\end{multline}
for every $F\in \wbt$. By \eqref{C} and \eqref{nnn} we have
\[
\int \gamma D(F)(m)d\nu_1=\int \gamma D(F)(m) d\nu_2
\]
for every $F\in \wbt$. It follows by \eqref{K} that $\phi_1(I(F))=\phi_2(I(F))$ for every $F\in \wbt$. We infer that $\phi_\theta=\phi_1=\phi_2$. We conclude that $\phi_\theta$ is an extreme point for any $0<\theta<\pi/2$, that is, $(x_c,y_c,m_c,e^{i\theta}\gamma_c)$ is in the Choquet boundary for $I(\wbt)$ for any $0<\theta<\pi/2$.
\end{proof}
\begin{proof}[Proof of Proposition \ref{absolute value 1}]
Define a map $\tilde{U}:I_1(\wbo)\to I_2(\wbt)$ by $\tilde{U}(I_1(H))=I_2(U(H))$ for $I_1(H)\in I_1(\wbo)$. The map $\tilde{U}$ is well defined since $I_1$ is injective. Due to the definition of $I_j$, we see that $\tilde{U}$ is a surjective isometry.
Then the dual map $\tilde{U}^*:I_2(\wbt)^*\to I_1(\wbo)^*$ is an isometry and it preserves the extreme points of the closed unit ball $I_2(\wbt)^*_1$ of $I_2(\wbt)^*$.
Let $(x_0,y_0)$ be an arbitrary point in $X_2\times Y_2$ and ${\mathfrak U}$ an arbitrary open neighborhood of $(x_0,y_0)$. Then by Lemmata \ref{1} and \ref{2} there exists $(x_c,y_c,m_c,\gamma_c)\in {\mathfrak U}\times\mathfrak{M}_2\times \mathbb{T}$ such that $(x_c,y_c,m_c,e^{i\theta}\gamma_c)$ is in the Choquet boundary of $I(\wbt)$ for every $0\le \theta<\pi/2$. Let $\phi_\theta$ be the point evaluation on $I(\wbt)$ at $(x_c,y_c,m_c,e^{i\theta}\gamma_c)$. Then $\phi_\theta$ is an extreme point of the closed unit ball $I(\wbt)^*_1$. As $\tilde{U}^*$ preserves the extreme points of the closed unit ball,
$\tilde{U}^*(\phi_{\theta})$ is an extreme point of the closed unit ball $I_1(\wbo)^*_1$ of $I_1(\wbo)^*$. By the Arens-Kelley theorem we see that there exists a complex number $\gamma$ with absolute value 1 and a point $d$ in the Choquet boundary for $I_1(\wbo)$ such that $\tilde{U}^*(\phi_{\theta})=\gamma \phi_d$, where $\phi_d$ denotes the point evaluation for $I_1(\wbo)$ at $d$. Thus we have that
\[
|\tilde{U}^*(\phi_{\theta})(1)|=1.
\]
As $\tilde{U}^*(\phi_\theta)(1)=\phi_\theta(I_2(U(1)))$ we have
\[
1=|U(1)(x_c,y_c)+e^{i\theta}\gamma_cD(U(1))(m_c)|
\]
for every $0\le \theta <\pi/2$. Hence, as we verify below, one of the following (i) or (ii) occurs:
\begin{itemize}
\item[(i)]
$U(1)(x_c,y_c)=0$ and $|D(U(1))(m_c)|=1$,
\item[(ii)]
$|U(1)(x_c,y_c)|=1$ and $D(U(1))(m_c)=0$.
\end{itemize}
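Indeed, put $a=U(1)(x_c,y_c)$ and $b=\gamma_cD(U(1))(m_c)$. Squaring gives
\[
1=|a|^2+|b|^2+2\operatorname{Re}\left(\bar{a}e^{i\theta}b\right)
\]
for every $0\le\theta<\pi/2$. The last term is non-constant in $\theta$ unless $\bar{a}b=0$; hence $ab=0$ and $|a|^2+|b|^2=1$, which yields (i) or (ii).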
But (i) never occurs. The reason is as follows. Since $U$ is an isometry we have
\begin{equation}\label{E}
1=\|1\|=\|U(1)\|=\|U(1)\|_\infty +\|D(U(1))\|_\infty.
\end{equation}
Suppose that (i) holds.
By the second equation of (i) we have $\|D(U(1))\|_\infty\ge 1$. Then by \eqref{E} we have $\|U(1)\|_{\infty}=0$, and $U(1)=0$, which contradicts \eqref{E}. Thus we conclude that only (ii) occurs.
By the first equation of (ii) we infer that $\|U(1)\|_\infty\ge 1$. Then by the equation \eqref{E}, we have $0=\|D(U(1))\|_\infty$. By the condition $\cnt (2)$ of Definition \ref{aqL} we have $U(1)\in 1\otimes C(Y_2)$; there exists $h\in C(Y_2)$ with $U(1)=1\otimes h$. As $|U(1)(x_c,y_c)|=1$ we have $|h(y_c)|=1$. Note that $h$ depends neither on the point $(x_0,y_0)$ nor on the neighborhood $\mathfrak{U}$. As $\mathfrak{U}$ is an arbitrary neighborhood of $(x_0,y_0)$ and $(x_c,y_c)\in \mathfrak{U}$, the continuity of $h$ implies that $|h(y_0)|=1$. Since $y_0$ is an arbitrary point in $Y_2$, we infer that $|h|=1$ on $Y_2$.
\end{proof}
\section{Proof of Theorem \ref{main}}
\begin{proof}[Proof of Theorem \ref{main}]
Suppose first $X_1=\{x_1\}$ and $X_2=\{x_2\}$ are singletons. In this case $B_j$ is isometrically isomorphic to $\mathbb{C}$ as a Banach algebra and $\wbj=1\otimes C(Y_j)$. Thus $\|D(F)\|_\infty=0$ for every $F\in \wbj$. Therefore $\wbj$ is isometrically isomorphic to $C(Y_j)$ for $j=1,2$. Thus we may suppose that $U$ is a surjective isometry from $C(Y_1)$ onto $C(Y_2)$. Then applying the Banach--Stone theorem, we see that $|U(1)|=1$ on $Y_2$ and there exists a homeomorphism $\tau:Y_2\to Y_1$ such that
\[
U(F)=U(1)F\circ \tau, \qquad F\in C(Y_1).
\]
Letting $U(1)=1\otimes h$ and defining $\varphi:X_2\times Y_2 \to X_1$ by $\varphi(x_2,y)=x_1$ for every $y\in Y_2$, we have
\[
U(F)(x,y)=h(y)F(\varphi(x,y),\tau(y)),\qquad (x,y)\in X_2\times Y_2
\]
for every $F\in \wbo$.
Suppose that $X_2$ is not a singleton. We prove the conclusion by applying Proposition \ref{absolute value 1}. By Proposition \ref{absolute value 1} there exists $h\in C(Y_2)$ with $|h|=1$ on $Y_2$ such that $U(1)=1\otimes h$. Define $U_0:\wbo\to \wbt$ by $U_0(F)=(1\otimes \bar{h})U(F)$ for $F\in \wbo$, where $\bar{h}$ denotes the complex conjugate of $h$. It is easy to see that $U_0$ is a bijection with $U_0(1)=1$. By the condition $\cnt (3)$ of Definition \ref{aqL} it is also easy to check that $U_0$ is an isometry. The algebra $\wbj$ is a unital Banach algebra contained in $C(X_j\times Y_j)$ which separates the points of $X_j\times Y_j$. As $\wbj$ is natural, by \cite[Proposition 2]{ja} it is a regular subspace of $C(X_j\times Y_j)$ in the sense of Jarosz \cite[p. 67]{ja}. As the norm $\|\cdot\|=\|\cdot\|_\infty+\nn\cdot\nn$ is a $p$-norm (see \cite[p. 67]{ja}) and $U_0(1)=1$, we infer by the Theorem in \cite{ja} that $U_0$ is also an isometry with respect to the supremum norm $\|\cdot\|_{\infty}$ on $X_j\times Y_j$. As $\wbj$ is a self-adjoint unital subalgebra of $C(X_j\times Y_j)$ which separates the points of $X_j\times Y_j$, the Stone-Weierstrass theorem asserts that $\wbj$ is uniformly dense in $C(X_j\times Y_j)$. Then the Banach--Stone theorem asserts that $U_0$ is an algebra isomorphism. Since $U_0$ is an isometry with respect to the original norm $\|\cdot\|$ on $\wbj$ we have for every $1\otimes g\in 1\otimes C(Y_1)$ that
\begin{multline*}
\|1\otimes g\|_{\infty}+\|D(1\otimes g)\|_\infty=\|1\otimes g\|=\|U_0(1\otimes g)\| \\
=\|U_0(1\otimes g)\|_\infty+\|D(U_0(1\otimes g))\|_\infty.
\end{multline*}
By the condition $\cnt (2)$ of Definition \ref{aqL} we have $\|D(1\otimes g)\|_\infty=0$. Since $U_0$ is also an isometry with respect to the supremum norm we have $\|1\otimes g\|_\infty=\|U_0(1\otimes g)\|_\infty$. Therefore we have that $\|D(U_0(1\otimes g))\|_\infty=0$. By the condition $\cnt (2)$ of Definition \ref{aqL} we have that $U_0(1\otimes g)\in 1\otimes C(Y_2)$. Hence we see that $U_0(1\otimes C(Y_1))\subset 1\otimes C(Y_2)$. By the Stone-Weierstrass theorem $B_1\otimes C(Y_1)$ is uniformly dense in $C(X_1\times Y_1)$, hence $\wbo\subset \overline{B_1\otimes C(Y_1)}$, where $\overline{\cdot}$ denotes the uniform closure on $X_1\times Y_1$. Then by Proposition 3.2 and the comments which follow that proposition in \cite{hots} there exist continuous maps $\varphi:X_2\times Y_2\to X_1$ and $\tau:Y_2\to Y_1$ such that
\begin{equation}\label{abc}
U_0(F)(x,y)=F(\varphi(x,y),\tau(y)),\qquad (x,y)\in X_2\times Y_2
\end{equation}
for every $F\in \wbo$.
As $X_2$ is not a singleton, there are two distinct points $z,w\in X_2$. Let $y\in Y_2$ be any point. As $U_0$ is a surjection and $\wbt$ separates the points of $X_2\times Y_2$, there exists a function $F\in \wbo$ such that $U_0(F)(z,y)\ne U_0(F)(w,y)$. Then by \eqref{abc} we have
\[
F(\varphi(z,y),\tau(y))=U_0(F)(z,y)\ne U_0(F)(w,y)=F(\varphi(w,y),\tau(y)).
\]
Hence $\varphi(z,y)\ne \varphi(w,y)$. As $\varphi(z,y),\varphi(w,y)\in X_1$, we have that $X_1$ is not a singleton.
Applying a similar argument to $U_0^{-1}$ instead of $U_0$ we observe that there exist continuous maps $\varphi_1:X_1\times Y_1\to X_2$ and $\tau_1:Y_1\to Y_2$ such that
\[
U_0^{-1}(G)(u,v)=G(\varphi_1(u,v), \tau_1(v)),\qquad (u,v)\in X_1\times Y_1
\]
for every $G\in \wbt$. Thus we have
\begin{multline}\label{[1]}
G(x,y)=U_0(U_0^{-1}(G))(x,y)=U_0^{-1}(G)(\varphi(x,y),\tau(y)) \\
=G(\varphi_1(\varphi(x,y),\tau(y)), \tau_1(\tau(y))), \quad (x,y)\in X_2\times Y_2
\end{multline}
for every $G\in \wbt$ and
\begin{multline}\label{[2]}
F(u,v)=U_0^{-1}(U_0(F))(u,v)=U_0(F)(\varphi_1(u,v),\tau_1(v)) \\
=F(\varphi(\varphi_1(u,v),\tau_1(v)),\tau(\tau_1(v))), \quad (u,v)\in X_1\times Y_1
\end{multline}
for every $F\in \wbo$. As $\wbo$ separates the points in $X_1\times Y_1$ and $\wbt$ separates the points in $X_2\times Y_2$, we infer that $y=\tau_1(\tau(y))$ for every $y\in Y_2$ and $v=\tau(\tau_1(v))$ for every $v\in Y_1$. Hence $\tau:Y_2\to Y_1$ and $\tau_1:Y_1\to Y_2$ are homeomorphisms and $\tau_1^{-1}=\tau$. We have by \eqref{[2]} that $u=\varphi(\varphi_1(u,v),\tau_1(v))$ for every $(u,v)\in X_1\times Y_1$. As $\tau_1$ is a homeomorphism, we infer that $u=\varphi(\varphi_1(u,\tau_1^{-1}(y)),y)$ holds for every pair $u\in X_1$ and $y\in Y_2$. It means that for every $y\in Y_2$ the map $\varphi(\cdot,y):X_2\to X_1$ is a surjection.
We prove that $\varphi(\cdot,y)$ is an injection for every $y\in Y_2$. Let $y\in Y_2$. Suppose that $\varphi(a,y)=\varphi(b,y)$ for $a,b\in X_2$. Then $\varphi_1(\varphi(a,y),\tau(y))=a$ and $\varphi_1(\varphi(b,y),\tau(y))=b$ by the equation \eqref{[1]}. Thus we have $a=b$. Hence we conclude that $\varphi(\cdot,y)$ is an injection. It follows that $\varphi(\cdot,y):X_2\to X_1$ is a bijective continuous map. As $X_2$ is compact and $X_1$ is Hausdorff, we at once see that $\varphi(\cdot,y)$ is a homeomorphism. As $U_0(F)=1\otimes\bar{h} U(F)$ for every $F\in \wbo$ we conclude that
\[
U(F)(x,y)=h(y)F(\varphi(x,y),\tau(y)),\qquad (x,y)\in X_2\times Y_2.
\]
Suppose that $X_1$ is not a singleton. By a similar argument for $U^{-1}$ instead of $U$ we see that there exists a continuous map $\varphi_1:X_1\times Y_1\to X_2$ such that $\varphi_1(\cdot,y):X_1\to X_2$ is a homeomorphism. As $X_1$ is not a singleton we infer that $X_2$ is not a singleton. Then the conclusion follows from the proof for the case where $X_2$ is not a singleton.
\end{proof}
\section{Examples of admissible quadruples of type L with applications of Theorem \ref{main}}\label{example}
\begin{example}\label{lip}
Let $(X,d)$ be a compact metric space and $Y$ a compact Hausdorff space. Let $0<\alpha\le 1$. Suppose that $B$ is a closed subalgebra of $\Lip((X,d^\alpha))$ which contains the constants and separates the points of $X$, where $d^\alpha$ is the H\"older metric induced by $d$. Suppose that $\widetilde{B}$ is a closed subalgebra of $\Lip((X,d^\alpha),C(Y))$ which contains the constants and separates the points of $X\times Y$. Suppose that $B$ and $\wb$ are self-adjoint. Suppose that
\[
B\otimes C(Y)\subset \widetilde{B}
\]
and
\[
\{F(\cdot, y):F\in \widetilde{B}, y\in Y\}\subset B.
\]
Let $\mathfrak{M}$ be the Stone-\v Cech compactification of $\{(x,x')\in X^2:x\ne x'\}\times Y$. For $F\in \wb$, let $D(F)$ be the continuous extension to $\mathfrak{M}$ of the function $(F(x,y)-F(x',y))/d^\alpha(x,x')$ on $\{(x,x')\in X^2:x\ne x'\}\times Y$; this function is continuous and bounded by $L_\alpha(F)$, hence it extends continuously to $\mathfrak{M}$, and $D:\wb\to C(\mathfrak{M})$ is well defined.
We have $\|D(F)\|_\infty=L_\alpha(F)$ for every $F\in \wb$.
It is easy to see that the condition $\cnt$ of Definition \ref{aqL} is satisfied. Hence we have that
$(X,C(Y),B,\widetilde{B})$ is an admissible quadruple of type L.
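Indeed, condition $\cnt(1)$ follows from $\|D(F)\|_\infty=L_\alpha(F)$ together with the definition of the norm on $\wb$, and conditions $\cnt(2)$ and $\cnt(3)$ can be checked directly: $D(F)=0$ means that $F(x,y)=F(x',y)$ for all $x,x'\in X$ and $y\in Y$, that is, $F\in 1_B\otimes C(Y)$; and for $g\in C(Y)$ with $|g|=1$ on $Y$ we have
\[
\frac{\big((1_B\otimes g)F\big)(x,y)-\big((1_B\otimes g)F\big)(x',y)}{d^\alpha(x,x')}
=g(y)\,\frac{F(x,y)-F(x',y)}{d^\alpha(x,x')},
\]
so that $\|D((1_B\otimes g)F)\|_{\infty}=\|D(F)\|_{\infty}$.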
There are two typical examples of $(X,C(Y),B,\widetilde{B})$ above. One is
\[
(X,C(Y), \Lip((X,d^\alpha)),\Lip((X,d^\alpha),C(Y))).
\]
By Corollary \ref{eell} $\Lip((X,d^\alpha))$ and $\Lip((X,d^\alpha),C(Y))$ are self-adjoint. The inclusions
\[
\Lip((X,d^\alpha))\otimes C(Y)\subset \Lip((X,d^\alpha),C(Y))
\]
and
\[
\{F(\cdot,y):F\in \Lip((X,d^\alpha),C(Y)), y\in Y\}\subset \Lip((X,d^\alpha))
\]
are obvious. The other example of $(X,C(Y),B,\widetilde{B})$ above is
\[
(X, C(Y), \lip(X), \lip(X,C(Y)))
\]
for $0<\alpha <1$.
In fact $\lip(X)$ (resp. $\lip(X,C(Y))$) is a closed subalgebra of $\Lip((X,d^\alpha))$ (resp. $\Lip((X,d^\alpha),C(Y))$) which contains the constants. In this case Corollary \ref{eell} asserts that $\lip(X)$ separates the points of $X$. As $\lip(X)\otimes C(Y)\subset \lip(X,C(Y))$ we see that $\wb=\lip(X,C(Y))$ separates the points of $X\times Y$. By Corollary \ref{eell} $\lip(X)$ and $\lip(X,C(Y))$ are self-adjoint. The inclusions
\[
\lip(X)\otimes C(Y)\subset \lip(X,C(Y))
\]
and
\[
\{F(\cdot,y):F\in \lip(X,C(Y)), y\in Y\}\subset \lip(X)
\]
are obvious.
\end{example}
\begin{cor}\label{g}
Let $j=1,2$.
Let $(X_j,d_j)$ be a compact metric space
and $Y_j$ a compact Hausdorff space. Let $0<\alpha\le 1$. Suppose that $B_j$ is a closed subalgebra of $\Lip((X_j,d_j^\alpha))$ which contains the constants and separates the points of $X_j$. Suppose that $\widetilde{B_j}$ is a closed subalgebra of $\Lip((X_j,d_j^\alpha),C(Y_j))$ which contains the constants and separates the points of $X_j\times Y_j$.
Suppose that $B_j$ and $\wb_j$ are self-adjoint. Suppose that
\[
B_j\otimes C(Y_j)\subset \widetilde{B_j}
\]
and
\[
\{F(\cdot, y):F\in \widetilde{B_j}, y\in Y_j\}\subset B_j.
\]
Suppose that
\[
U:\widetilde{B_1}\to \wbt
\]
is a surjective isometry. Then there exists $h\in C(Y_2)$ such that $|h|=1$ on $Y_2$, a continuous map $\varphi:X_2\times Y_2\to X_1$ such that $\varphi(\cdot,y):X_2\to X_1$ is a homeomorphism for each $y\in Y_2$, and a homeomorphism $\tau:Y_2\to Y_1$ which satisfy
\[
U(F)(x,y)=h(y)F(\varphi(x,y),\tau(y)),\qquad (x,y)\in X_2\times Y_2
\]
for every $F\in \wbo$.
\end{cor}
\begin{proof}
In a similar way to the argument in Example \ref{lip} we see that $(X_j,C(Y_j), B_j, \widetilde{B_j})$ is an admissible quadruple of type L. Then applying Theorem \ref{main} the conclusion holds.
\end{proof}
Note that Corollary \ref{g} holds for $\wbj=\Lip(X_j,C(Y_j))$ and, for $0<\alpha<1$, for $\wbj=\lip(X_j,C(Y_j))$; in these cases we obtain a complete description of the surjective isometries. Note that $\Lip_\alpha((X_j,d_j),C(Y_j))$ for $0<\alpha< 1$ is isometrically isomorphic to $\Lip((X_j,d_j^\alpha),C(Y_j))$ by considering the H\"older metric $d_j(\cdot,\cdot)^\alpha$ in place of the original metric $d_j(\cdot,\cdot)$ on $X_j$.
\begin{cor}\label{isoLip}
Let $(X_j,d_j)$ be a compact metric space and $Y_j$ a compact Hausdorff space for $j=1,2$.
Suppose that
$U:\Lip(X_1,C(Y_1))\to \Lip(X_2,C(Y_2))$ {\rm (}resp. $U:\lip(X_1,C(Y_1))\to \lip(X_2,C(Y_2))${\rm )}
is a map. Then $U$ is a surjective isometry with respect to the sum norm $\|\cdot\|=\|\cdot\|_\infty+L(\cdot)$ {\rm (}resp. $\|\cdot\|=\|\cdot\|_\infty+L_\alpha(\cdot)${\rm )} if and only if there exists $h\in C(Y_2)$ with $|h|=1$ on $Y_2$, a continuous map $\varphi:X_2\times Y_2\to X_1$ such that $\varphi(\cdot,y):X_2\to X_1$ is a surjective isometry for every $y\in Y_2$, and a homeomorphism $\tau:Y_2\to Y_1$ which satisfy that
\[
U(F)(x,y)=h(y)F(\varphi(x,y),\tau(y)),\qquad (x,y)\in X_2\times Y_2
\]
for every $F\in \Lip(X_1,C(Y_1))$ {\rm (}resp. $F\in \lip(X_1,C(Y_1))${\rm )}.
\end{cor}
\begin{proof}
Suppose that there exists $h\in C(Y_2)$ with $|h|=1$ on $Y_2$, a continuous map $\varphi:X_2\times Y_2\to X_1$ such that $\varphi(\cdot,y):X_2\to X_1$ is a surjective isometry for every $y\in Y_2$, and a homeomorphism $\tau:Y_2\to Y_1$ which satisfy that
\[
U(F)(x,y)=h(y)F(\varphi(x,y),\tau(y)),\qquad (x,y)\in X_2\times Y_2
\]
for every $F\in \Lip(X_1,C(Y_1))$ {\rm (}resp. $F\in \lip(X_1,C(Y_1))${\rm )}.
We prove that $U$ is a surjective isometry from $\Lip(X_1,C(Y_1))$ onto $\Lip(X_2,C(Y_2))$. A proof for the case of $\lip(X_j,C(Y_j))$ is the same and we omit it. Since $\varphi(\cdot,y)$ is an isometry for every $y\in Y_2$, we have
\begin{equation}\label{pisoLip5}
\begin{split}
&\frac{|(U(F))(x,y)-(U(F))(x',y)|}{d_2(x,x')}
\\
& = \frac{|h(y)F(\varphi(x,y),\tau(y))-h(y)F(\varphi(x',y),\tau(y))|}{d_2(x,x')}
\\
& = \frac{|F(\varphi(x,y),\tau(y))-F(\varphi(x',y),\tau(y))|}{d_1(\varphi(x,y),\varphi(x',y))},\quad x,x'\in X_2, y\in Y_2
\end{split}
\end{equation}
for $F\in \Lip(X_1,C(Y_1))$. Since $\varphi(\cdot,y)$ is bijective and the map $(x,y)\mapsto (\varphi(x,y),\tau(y))$ gives a bijection from $X_2\times Y_2$ onto $X_1\times Y_1$, we see by \eqref{pisoLip5} that $L(F)=L(U(F))$ for every $F\in \Lip(X_1,C(Y_1))$. Since $\|F\|_\infty=\|U(F)\|_\infty$, we conclude that
\[
\|F\|=\|F\|_\infty+L(F)=\|U(F)\|_\infty+L(U(F))=\|U(F)\|
\]
for every $F\in \Lip(X_1,C(Y_1))$; that is, $U$ is an isometry. We prove that $U$ is surjective. Let $G\in \Lip(X_2,C(Y_2))$ be arbitrary. Define $F$ by
$F(x,y)=\bar{h}(\tau^{-1}(y))G((\varphi(\cdot,\tau^{-1}(y)))^{-1}(x),\tau^{-1}(y))$ for $(x,y)\in X_1\times Y_1$, where $(\varphi(\cdot,\tau^{-1}(y)))^{-1}$ denotes the inverse of $\varphi(\cdot,\tau^{-1}(y)):X_2\to X_1$. Then we infer that $F\in \Lip(X_1,C(Y_1))$ and, as we check below, $U(F)=G$. As $G$ is an arbitrary element in $\Lip(X_2,C(Y_2))$, we conclude that $U$ is surjective. It follows that $U$ is a surjective isometry.
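In fact a direct substitution confirms that $U(F)=G$:
\[
U(F)(x,y)=h(y)F(\varphi(x,y),\tau(y))=h(y)\bar{h}(y)\,G\big((\varphi(\cdot,y))^{-1}(\varphi(x,y)),y\big)=G(x,y)
\]
for every $(x,y)\in X_2\times Y_2$, since $\tau^{-1}(\tau(y))=y$ and $|h|=1$ on $Y_2$.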
Next we prove the converse. First consider the case of $\Lip(X_j,C(Y_j))$. Suppose that $U:\Lip(X_1,C(Y_1))\to \Lip(X_2,C(Y_2))$ is a surjective isometry. Then by Corollary \ref{g} there exists $h\in C(Y_2)$ with $|h|=1$ on $Y_2$, a continuous map $\varphi:X_2\times Y_2\to X_1$ such that $\varphi(\cdot,y):X_2\to X_1$ is a homeomorphism for every $y\in Y_2$, and a homeomorphism $\tau:Y_2\to Y_1$ which satisfy that
\begin{equation}\label{pisoLip1}
U(F)(x,y)=h(y)F(\varphi(x,y),\tau(y)),\qquad (x,y)\in X_2\times Y_2
\end{equation}
for every $F\in \Lip(X_1,C(Y_1))$. We only need to prove that $\varphi(\cdot,y):X_2\to X_1$ is a surjective isometry for every $y\in Y_2$. Let $x_1,x_2\in X_2$ and $y\in Y_2$ be arbitrary. Set $f:X_1\to {\mathbb C}$ by $f(x)=d_1(x,\varphi(x_2,y))$ for $x\in X_1$. Then $f\otimes 1\in \Lip(X_1,C(Y_1))$ and $L(f\otimes 1)=1$. Then we have
\begin{equation}\label{pisoLip2}
\begin{split}
d_1(\varphi(x_1,y),\varphi(x_2,y))
& = f(\varphi(x_1,y))=|f(\varphi(x_1,y))-f(\varphi(x_2,y))|
\\
& = |f\otimes 1(\varphi(x_1,y),\tau(y))-f\otimes 1(\varphi(x_2,y),\tau(y))|
\\
& = |(U(f\otimes 1))(x_1,y)-(U(f\otimes 1))(x_2,y)|
\\
& \le
L(U(f\otimes 1))d_2(x_1,x_2).
\end{split}
\end{equation}
By \eqref{pisoLip1} the map $U$ is an isometry with respect to $\|\cdot\|_{\infty}$, thus $1=L(f\otimes 1)=L(U(f\otimes 1))$ since $U$ is an isometry for $\|\cdot\|=\|\cdot\|_\infty+L(\cdot)$. It follows by \eqref{pisoLip2} that $d_1(\varphi(x_1,y),\varphi(x_2,y))\le d_2(x_1,x_2)$. Since $U^{-1}$ is a surjective isometry, we have by Corollary \ref{g} that there exist $h_1$, $\varphi_1$ and $\tau_1$ such that
\[
U^{-1}(G)(x,y)=h_1(y)G(\varphi_1(x,y),\tau_1(y)),\qquad (x,y)\in X_1\times Y_1
\]
for $G\in \Lip(X_2,C(Y_2))$. By a similar argument as above we infer that $d_2(\varphi_1(x_1',y'),\varphi_1(x_2',y'))\le d_1(x_1',x_2')$ for every pair $x_1',x_2'\in X_1$ and $y'\in Y_1$. By a simple calculation we obtain that $x=\varphi_1(\varphi(x,y),\tau(y))$ for every $x\in X_2$ and $y\in Y_2$ (see a similar calculation in the proof of Theorem \ref{main} or that given on p.~386 of \cite{ho}). Thus we have
\begin{multline*}
d_2(x_1,x_2)=d_2(\varphi_1(\varphi(x_1,y),\tau(y)),\varphi_1(\varphi(x_2,y),\tau(y)))\\
\le
d_1(\varphi(x_1,y),\varphi(x_2,y)).
\end{multline*}
Therefore $d_2(x_1,x_2)= d_1(\varphi(x_1,y),\varphi(x_2,y))$ holds for every pair $x_1,x_2\in X_2$ and $y\in Y_2$, that is, $\varphi(\cdot,y)$ is an isometry for every $y\in Y_2$.
Next we consider the case of $\lip(X_j,C(Y_j))$. Suppose that $0<\alpha<1$ and $U:\lip(X_1,C(Y_1))\to \lip(X_2,C(Y_2))$ is a surjective isometry. In the same way as before there exists $h\in C(Y_2)$ with $|h|=1$ on $Y_2$, a continuous map $\varphi:X_2\times Y_2\to X_1$ such that $\varphi(\cdot,y):X_2\to X_1$ is a homeomorphism for every $y\in Y_2$, and a homeomorphism $\tau:Y_2\to Y_1$ which satisfy that
\begin{equation*}
U(F)(x,y)=h(y)F(\varphi(x,y),\tau(y)),\qquad (x,y)\in X_2\times Y_2
\end{equation*}
for every $F\in \lip(X_1,C(Y_1))$. We prove $\varphi(\cdot,y):X_2\to X_1$ is an isometry for every $y\in Y_2$. Let $x_1,x_2\in X_2$ and $y\in Y_2$ be arbitrary. Let $\beta$ with $\alpha<\beta<1$ be arbitrary. Set $f^\beta:X_1\to {\mathbb C}$ by $f^\beta(x)=d_1(x,\varphi(x_2,y))^\beta$. We have
\begin{multline}\label{pisoLip3}
\frac{|f^\beta(s)-f^\beta(t)|}{d_1(s,t)^\alpha}=
\frac{|d_1(s,\varphi(x_2,y))^\beta-d_1(t,\varphi(x_2,y))^\beta|}{d_1(s,t)^\alpha}\\
\le
\frac{d_1(s,t)^\beta}{d_1(s,t)^\alpha}=d_1(s,t)^{\beta-\alpha},\quad s,t\in X_1.
\end{multline}
Here we have used the elementary inequality $|a^\beta-b^\beta|\le |a-b|^\beta$ for $a,b\ge 0$ and $0<\beta\le 1$, together with the triangle inequality for $d_1$. Since $X_1$ is compact we have $\sup_{s,t\in X_1}d_1(s,t)<\infty$. Put $M=\sup_{s,t\in X_1}d_1(s,t)$.
Then by \eqref{pisoLip3} we infer that $L_\alpha(f^\beta\otimes 1)\le M^{\beta-\alpha}$.
We also infer by \eqref{pisoLip3} that $\lim_{s\to t}\frac{|f^\beta(s)-f^\beta(t)|}{d_1(s,t)^\alpha}=0$. Hence we have $f^\beta\otimes 1\in \lip(X_1,C(Y_1))$. We have, as before,
\begin{equation}\label{pisoLip4}
\begin{split}
d_1(\varphi(x_1,y),\varphi(x_2,y))^\beta
& = |f^\beta\otimes 1(\varphi(x_1,y),\tau(y))-f^\beta\otimes 1(\varphi(x_2,y),\tau(y))|
\\
& = |(U(f^\beta\otimes 1))(x_1,y)-(U(f^\beta\otimes 1))(x_2,y)|
\\
& \le L_\alpha(U(f^\beta\otimes 1))d_2(x_1,x_2)^\alpha
\\
& = L_\alpha(f^\beta\otimes 1)d_2(x_1,x_2)^\alpha
= M^{\beta-\alpha}d_2(x_1,x_2)^\alpha.
\end{split}
\end{equation}
Letting $\beta\to \alpha$ we have by \eqref{pisoLip4} that $d_1(\varphi(x_1,y),\varphi(x_2,y))^\alpha\le d_2(x_1,x_2)^\alpha$, hence $d_1(\varphi(x_1,y),\varphi(x_2,y))\le d_2(x_1,x_2)$. Applying the same argument for $U^{-1}$ as in the case of $\Lip(X_j,C(Y_j))$ we get
\[
d_2(x_1,x_2)^\beta\le M'^{\beta-\alpha}d_1(\varphi(x_1,y),\varphi(x_2,y))^\alpha
\]
for every $\beta$ with $\alpha<\beta<1$, where $M'=\sup_{s,t\in X_2}d_2(s,t)$. Letting $\beta\to \alpha$ we get $d_2(x_1,x_2)^\alpha \le d_1(\varphi(x_1,y),\varphi(x_2,y))^\alpha$ and $d_2(x_1,x_2) \le d_1(\varphi(x_1,y),\varphi(x_2,y))$. It follows that $d_2(x_1,x_2)=d_1(\varphi(x_1,y),\varphi(x_2,y))$ for every pair $x_1,x_2\in X_2$ and $y\in Y_2$, that is, $\varphi(\cdot,y)$ is an isometry for every $y\in Y_2$.
\end{proof}
Note that if $Y_j$ is a singleton in Corollary \ref{isoLip}, then
$\Lip(X_j,C(Y_j))$ {\rm (}resp. $\lip(X_j,C(Y_j))${\rm )} is naturally identified with $\Lip(X_j)$ {\rm (}resp. $\lip(X_j)${\rm )}.
Then Corollary \ref{isoLip} states that the statement of Example 8 of \cite{jp} is indeed correct.
\begin{cor}\cite[Example 8]{jp} \label{JPOK}
The map $U:\Lip(X_1)\to \Lip(X_2)$ {\rm (}resp. $U:\lip(X_1)\to \lip(X_2)${\rm )} is a surjective isometry with respect to the norm $\|\cdot\|=\|\cdot\|_{\infty}+L(\cdot)$ {\rm (}resp. $\|\cdot\|=\|\cdot\|_{\infty}+L_\alpha(\cdot)${\rm )} if and only if there exists a complex number $c$ of unit modulus and a surjective isometry $\varphi:X_2\to X_1$ such that
\[
U(F)(x)=cF(\varphi(x)), \qquad x\in X_2
\]
for every $F\in \Lip(X_1)$ {\rm (}resp. $F\in \lip(X_1)${\rm )}.
\end{cor}
\begin{proof}
Suppose that $U$ is a surjective isometry. Then by Corollary \ref{isoLip} there exists a complex number $c$ of unit modulus and a surjective isometry $\varphi:X_2\to X_1$ such that the desired equality holds.
Conversely, suppose that $c$ is a complex number of unit modulus and $\varphi:X_2\to X_1$ is a surjective isometry. Then the map $U:\Lip(X_1) \to \Lip(X_2)$ (resp. $U:\lip(X_1)\to \lip(X_2)$) given by $U(F)(x)=cF(\varphi(x))$, $x\in X_2$, for $F\in \Lip(X_1)$ (resp. $F\in \lip(X_1)$) is well defined. Then by Corollary \ref{isoLip} we have that $U$ is a surjective isometry.
\end{proof}
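As a quick numerical illustration of Corollary \ref{JPOK}, the following Python sketch (not part of the original argument; the grid resolution and the sample function are arbitrary choices) estimates the sum norm $\|\cdot\|_\infty+L(\cdot)$ on a grid and checks that it is preserved by $U(F)(x)=cF(\varphi(x))$ with the surjective isometry $\varphi(x)=1-x$ of $[0,1]$:
\begin{verbatim}
import numpy as np

x = np.linspace(0.0, 1.0, 2001)
F = lambda t: np.abs(t - 0.3) + 0.5j*np.sin(4.0*t)  # a sample Lipschitz function
c = np.exp(1j*0.4)                                  # |c| = 1
UF = c*F(1.0 - x)                                   # U(F)(x) = c F(phi(x))

def sum_norm(v):
    # grid estimate of ||f||_inf + L(f); L(f) is approximated by the
    # largest slope between consecutive grid points
    return np.max(np.abs(v)) + np.max(np.abs(np.diff(v))/np.diff(x))

print(sum_norm(F(x)), sum_norm(UF))  # agree up to grid and rounding error
\end{verbatim}
Since $\varphi(x)=1-x$ maps the grid onto itself while preserving the mutual distances of the nodes, the two printed values agree, in accordance with the corollary.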
\begin{example}\label{C101n}
Let $Y$ be a compact Hausdorff space. Then
\[
([0,1], C(Y), C^1([0,1]), C^1([0,1],C(Y)))
\]
is an admissible quadruple of type L, where the norm of $f\in C^1([0,1])$ is defined by $\|f\|=\|f\|_\infty+\|f'\|_\infty$ and the norm of $F\in C^{1}([0,1],C(Y))$ is defined by $\|F\|=\|F\|_\infty+\|F'\|_\infty$. It is easy to see that $C^1([0,1])\otimes C(Y)\subset C^1([0,1],C(Y))$ and
\[
\{F(\cdot, y):F\in C^1([0,1],C(Y)),\,\,y\in Y\}\subset C^1([0,1]).
\]
Let $\mathfrak{M}=[0,1]\times Y$ and $D:C^1([0,1],C(Y))\to C(\mathfrak{M})$ be defined by $D(F)(x,y)=F'(x,y)$ for $F\in C^1([0,1],C(Y))$. Then $\|F'\|_\infty=\|D(F)\|_{\infty}$ for $F\in C^1([0,1],C(Y))$. Then the conditions from $\cno$ through $\cnt (3)$ of Definition \ref{aqL} are satisfied.
\end{example}
\begin{example}\label{C1T}
Let $Y$ be a compact Hausdorff space. Then
\[
(\mathbb{T}, C(Y), C^1(\mathbb{T}), C^1(\mathbb{T},C(Y)))
\]
is an admissible quadruple of type L, where the norm of $f\in C^1(\mathbb{T})$ is defined by $\|f\|=\|f\|_\infty+\|f'\|_\infty$ and the norm of $F\in C^{1}(\mathbb{T},C(Y))$ is defined by $\|F\|=\|F\|_\infty+\|F'\|_\infty$. It is easy to see that $C^1(\mathbb{T})\otimes C(Y)\subset C^1(\mathbb{T},C(Y))$ and
\[
\{F(\cdot, y):F\in C^1(\mathbb{T},C(Y)),\,\,y\in Y\}\subset C^1(\mathbb{T}).
\]
Let $\mathfrak{M}=\mathbb{T}\times Y$ and $D:C^1(\mathbb{T},C(Y))\to C(\mathfrak{M})$ be defined by $D(F)(x,y)=F'(x,y)$ for $F\in C^1(\mathbb{T},C(Y))$. Then $\|F'\|_\infty=\|D(F)\|_{\infty}$ for $F\in C^1(\mathbb{T},C(Y))$. Then the conditions from $\cno$ through $\cnt (3)$ of Definition \ref{aqL} are satisfied for $(\mathbb{T}, C(Y), C^1(\mathbb{T}), C^1(\mathbb{T},C(Y)))$.
\end{example}
\begin{cor}\label{c101}
Let $Y_j$ be a compact Hausdorff space for $j=1,2$. The norm $\|F\|$ of $F\in C^{1}([0,1],C(Y_j))$ is defined by $\|F\|=\|F\|_\infty+\|F'\|_\infty$. Suppose that
$U:C^1([0,1], C(Y_1))\to C^1([0,1],C(Y_2))$ is a map. Then $U$ is a surjective isometry if and only if
there exists $h\in C(Y_2)$ such that $|h|=1$ on $Y_2$, a continuous map $\varphi:[0,1]\times Y_2\to [0,1]$ such that for each $y\in Y_2$ we have $\varphi(x,y)=x$ for every $x\in [0,1]$ or $\varphi(x,y)=1-x$ for every $x\in [0,1]$, and a homeomorphism $\tau:Y_2\to Y_1$ which satisfy that
\[
U(F)(x,y)=h(y)F(\varphi(x,y),\tau(y)),\qquad (x,y)\in [0,1]\times Y_2
\]
for every $F\in C^1([0,1],C(Y_1))$.
\end{cor}
\begin{proof}
Suppose that $U:C^1([0,1], C(Y_1))\to C^1([0,1],C(Y_2))$ is a surjective isometry. Then by Theorem \ref{main} there exists
$h\in C(Y_2)$ such that $|h|=1$ on $Y_2$, a continuous map $\varphi:[0,1]\times Y_2\to [0,1]$ such that $\varphi(\cdot,y):[0,1]\to [0,1]$ is a homeomorphism for each $y\in Y_2$, and a homeomorphism $\tau:Y_2\to Y_1$ which satisfy
\begin{equation}\label{c1teq1}
U(F)(x,y)=h(y)F(\varphi(x,y),\tau(y)),\qquad (x,y)\in [0,1] \times Y_2
\end{equation}
for every $F\in C^{1}([0,1],C(Y_1))$. We only need to prove that, for every $y\in Y_2$, either $\varphi(x,y)=x$ for every $x\in [0,1]$ or $\varphi(x,y)=1-x$ for every $x\in [0,1]$. Let $F_0\in C^1([0,1],C(Y_1))$ be defined by $F_0(x,y)=x$ for every $(x,y)\in [0,1]\times Y_1$. Then we have $F_0'=1$ on $[0,1]\times Y_1$ and $\|F_0\|=\|F_0\|_\infty+\|F_0'\|_\infty=2$. By \eqref{c1teq1} we have $U(F_0)(x,y)=h(y)\varphi(x,y)$ for every $(x,y)\in [0,1]\times Y_2$.
Since $U(F_0)$ is continuously differentiable we infer that $\varphi$ is continuously differentiable and that $U(F_0)'(x,y)=h(y)\varphi'(x,y)$ for every $(x,y)\in [0,1]\times Y_2$. By \eqref{c1teq1} we infer that $\|U(F_0)\|_\infty=\|F_0\|_\infty$, hence $\|U(F_0)'\|_\infty=\|F_0'\|_\infty$ since $U$ is an isometry with respect to $\|\cdot\|$. As $|h|=1$ on $Y_2$ we see that
\[
|\varphi'(x,y)|\le \|U(F_0)'\|_\infty=\|F_0'\|_\infty=1
\]
for every $(x,y)\in [0,1]\times Y_2$. We prove that $|\varphi'(x,y)|=1$ for every $(x,y)\in [0,1]\times Y_2$. Suppose to the contrary that there exists $(x_0,y_0)\in [0,1]\times Y_2$ with $|\varphi'(x_0,y_0)|<1$. As $\varphi(\cdot,y_0):[0,1]\to [0,1]$ is a homeomorphism we infer that $|\varphi(1,y_0)-\varphi(0,y_0)|=1$. As $\varphi(\cdot,y_0)$ is continuously differentiable we have
\[
1=|\varphi(1,y_0)-\varphi(0,y_0)|=|\int^1_0\varphi'(x,y_0)dx|\le
\int^1_0|\varphi'(x,y_0)|dx.
\]
Since $\varphi'$ is continuous and $|\varphi'|\le 1$ on $[0,1]\times Y_2$, and $|\varphi'(x_0,y_0)|<1$, we have
\[
\int^1_0|\varphi'(x,y_0)|dx<1,
\]
which is a contradiction. Hence we have that $|\varphi'(x,y)|=1$ for every $(x,y)\in [0,1]\times Y_2$. Let $y_1\in Y_2$ be arbitrary. As $\varphi'(\cdot,y_1)$ is continuous on $[0,1]$ and $|\varphi'(\cdot,y_1)|=1$ on $[0,1]$ we have that
$\varphi'(\cdot,y_1)=1$ on $[0,1]$ or $\varphi'(\cdot,y_1)=-1$ on $[0,1]$, since $\varphi'$ is real-valued with $|\varphi'|=1$ on the connected space $[0,1]$. It follows by a simple calculation that $\varphi(x,y_1)=x$ for every $x\in [0,1]$ or $\varphi(x,y_1)=1-x$ for every $x\in [0,1]$, since $\varphi(\cdot,y_1)$ is a bijection of $[0,1]$ onto itself.
Suppose conversely that there exists $h\in C(Y_2)$ such that $|h|=1$ on $Y_2$, a continuous map $\varphi:[0,1]\times Y_2\to [0,1]$ such that for each $y\in Y_2$, either $\varphi(x,y)=x$ for every $x\in [0,1]$ or $\varphi(x,y)=1-x$ for every $x\in [0,1]$, and a homeomorphism $\tau:Y_2\to Y_1$ which satisfy that
\[
U(F)(x,y)=h(y)F(\varphi(x,y),\tau(y)),\qquad (x,y)\in [0,1]\times Y_2
\]
for every $F\in C^1([0,1],C(Y_1))$. It is straightforward to check that $\|U(F)\|_\infty = \|F\|_\infty$. By a simple calculation we infer that, for each $y\in Y_2$ and every $F\in C^1([0,1],C(Y_1))$, either $|U(F)'(x,y)|=|F'(x,\tau(y))|$ for every $x\in [0,1]$ or $|U(F)'(x,y)|=|F'(1-x,\tau(y))|$ for every $x\in [0,1]$. As $\tau$ is a surjection, we have $\|U(F)'\|_\infty=\|F'\|_\infty$ for every $F\in C^1([0,1],C(Y_1))$.
To prove that $U$ is surjective, let $F\in C^1([0,1],C(Y_2))$ be an arbitrary map. Put $G(x',y')=\overline{h(\tau^{-1}(y'))}F(\varphi(x',\tau^{-1}(y')),\tau^{-1}(y'))$, $(x',y')\in [0,1]\times Y_1$. It is easy to see that $G\in C^1([0,1],C(Y_1))$. As $\varphi(x,y)=x$ or $1-x$ depending on $y\in Y_2$ we see by a simple calculation that $\varphi(\varphi(x,y),y)=x$ for every $(x,y)\in [0,1]\times Y_2$. Then we have
\begin{multline*}
(U(G))(x,y)=h(y)G(\varphi(x,y),\tau(y))
\\
=
h(y)\overline{h(\tau^{-1}(\tau(y)))}F(\varphi(\varphi(x,y),\tau^{-1}(\tau(y))),\tau^{-1}(\tau(y)))
\\
=F(\varphi(\varphi(x,y),y),y)=F(x,y),\quad (x,y)\in [0,1]\times Y_2.
\end{multline*}
It follows that $U$ is a surjective isometry from $C^1([0,1],C(Y_1))$ onto $C^1([0,1],C(Y_2))$.
\end{proof}
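The corollary can also be checked numerically. The sketch below (a minimal illustration with an arbitrarily chosen sample function; $Y_2$ is taken to be a singleton) verifies on a grid that $U(F)(x)=hF(1-x)$ preserves $\|F\|_\infty+\|F'\|_\infty$:
\begin{verbatim}
import numpy as np

x  = np.linspace(0.0, 1.0, 2001)
F  = lambda t: np.exp(1j*t) + t**2        # a sample C^1 function
Fp = lambda t: 1j*np.exp(1j*t) + 2.0*t    # its derivative
h  = np.exp(1j*0.9)                       # |h| = 1

# U(F)(x) = h*F(1-x), hence U(F)'(x) = -h*F'(1-x) by the chain rule
norm_F = np.max(np.abs(F(x)))     + np.max(np.abs(Fp(x)))
norm_U = np.max(np.abs(h*F(1-x))) + np.max(np.abs(-h*Fp(1-x)))
print(norm_F, norm_U)  # equal: the flip x -> 1-x permutes both sup norms
\end{verbatim}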
Note that if $Y_j$ is a singleton in Corollary \ref{c101}, then $C^1([0,1],C(Y_j))$ is $C^1([0,1],{\mathbb C})$. The corresponding result on isometries was given by Rao and Roy \cite{rr}.
\begin{cor}\label{c1t}
Let $Y_j$ be a compact Hausdorff space for $j=1,2$. The norm $\|F\|$ of $F\in C^{1}(\mathbb{T},C(Y_j))$ is defined by $\|F\|=\|F\|_\infty+\|F'\|_\infty$. Suppose that
$U:C^1(\mathbb{T}, C(Y_1))\to C^1(\mathbb{T},C(Y_2))$ is a map. Then $U$ is a surjective isometry if and only if
there exists $h\in C(Y_2)$ such that $|h|=1$ on $Y_2$, a continuous map $\varphi:\mathbb{T}\times Y_2\to \mathbb{T}$ and a continuous map $u:Y_2\to \mathbb{T}$ such that for every $y\in Y_2$, either $\varphi(z,y)=u(y)z$ for every $z\in \mathbb{T}$ or $\varphi(z,y)=u(y)\bar{z}$ for every $z\in \mathbb{T}$, and a homeomorphism $\tau:Y_2\to Y_1$ which satisfy that
\[
U(F)(z,y)=h(y)F(\varphi(z,y),\tau(y)),\qquad (z,y)\in \mathbb{T}\times Y_2
\]
for every $F\in C^1(\mathbb{T},C(Y_1))$.
\end{cor}
\begin{proof}
Suppose that $U:C^1(\mathbb{T}, C(Y_1))\to C^1(\mathbb{T},C(Y_2))$ is a surjective isometry. Then by Theorem \ref{main} there exists $h\in C(Y_2)$ such that $|h|=1$ on $Y_2$, a continuous map $\varphi:\mathbb{T}\times Y_2\to \mathbb{T}$ such that $\varphi(\cdot,y):\mathbb{T}\to \mathbb{T}$ is a homeomorphism for each $y\in Y_2$, and a homeomorphism $\tau:Y_2\to Y_1$ which satisfy
\begin{equation}\label{c1teq1.5}
U(F)(z,y)=h(y)F(\varphi(z,y),\tau(y)),\qquad (z,y)\in \mathbb{T}\times Y_2
\end{equation}
for every $F\in C^{1}(\mathbb{T},C(Y_1))$. We prove that for every $y\in Y_2$ there corresponds $u(y)\in \mathbb{T}$ such that $\varphi(z,y)=u(y)z$ for every $z\in \mathbb{T}$ or $\varphi(z,y)=u(y)\bar{z}$ for every $z\in \mathbb{T}$. Let $F_0\in C^1(\mathbb{T},C(Y_1))$ be defined as $F_0(z,y)=z$ for every $(z,y)\in \mathbb{T}\times Y_1$. Then by \eqref{c1teq1.5} we have $U(F_0)(z,y)=h(y)\varphi(z,y)$. As $|h|=1$ on $Y_2$ we have that $\varphi=\bar{h}U(F_0)\in C^1(\mathbb{T},C(Y_2))$. We also have $\|F_0\|_\infty=1$ and $\|F_0'\|_\infty=1$, hence $\|F_0\|=2$. By \eqref{c1teq1.5} we have $\|U(F_0)\|_\infty=1$. Since $\|U(F_0)\|=\|F_0\|$, we infer that $\|U(F_0)'\|_\infty=\|F_0'\|_\infty$, where
\[
U(F_0)'(z,y)
=h(y)\varphi'(z,y), \quad (z,y)\in \mathbb{T}\times Y_2
\]
as $U(F_0)=h\varphi$. Thus
\[
\|\varphi'\|_\infty=\|U(F_0)'\|_\infty=\|F_0'\|_\infty=1.
\]
It follows that $|\varphi'(z,y)|\le 1$ for every $(z,y)\in \mathbb{T}\times Y_2$. Define $u:Y_2\to {\mathbb T}$ by $u(y)=\varphi(1,y)$. Then $u$ is continuous since $\varphi$ is continuous on ${\mathbb T}\times Y_2$. We also have that $|u(y)|=|\varphi(1,y)|=1$. As $\varphi(\cdot,y)$ is a bijection from $\mathbb{T}$ onto itself, we have $\varphi(\mathbb{T}\setminus\{1\},y)=\mathbb{T}\setminus\{u(y)\}$. Hence the map
\[
t\mapsto -i\Log \overline{u(y)}\varphi(e^{it},y)
\]
is well defined from $(0,2\pi)$ onto $(0,2\pi)$, where $\Log$ denotes the principal value of the logarithm. As $\varphi(\cdot,y)$ is continuously differentiable, the above map has a natural extension $\mathcal{L}:[0,2\pi]\to [0,2\pi]$ (defined by $\mathcal{L}(0)=0$ and $\mathcal{L}(2\pi)=2\pi$, or $\mathcal{L}(0)=2\pi$ and $\mathcal{L}(2\pi)=0$, and $\mathcal{L}(t)=-i\Log \overline{u(y)}\varphi(e^{it},y)$ for $0<t<2\pi$), which is continuously differentiable. By a simple calculation we have
\[
\mathcal{L}'(t)=\frac{\varphi'(e^{it},y)e^{it}}{\varphi(e^{it},y)}, \quad t\in [0,2\pi].
\]
Hence $|\mathcal{L}'(t)|\le 1$ for every $t\in [0,2\pi]$ since $|\varphi'(z,y)|\le 1$ for every $(z,y)\in \mathbb{T}\times Y_2$. In the same way as in the proof of Corollary \ref{c101} we have that $\mathcal{L}'=1$ on $[0,2\pi]$ or $\mathcal{L}'=-1$ on $[0,2\pi]$. It follows that $\overline{u(y)}\varphi(e^{it},y)=e^{it}$ for every $t\in [0,2\pi]$ or $\overline{u(y)}\varphi(e^{it},y)=e^{-it}$ for every $t\in [0,2\pi]$. Hence $\varphi(z,y)=u(y)z$ for every $z\in \mathbb{T}$ or $\varphi(z,y)=u(y)\bar{z}$ for every $z\in \mathbb{T}$.
Suppose conversely that there exists $h\in C(Y_2)$ such that $|h|=1$ on $Y_2$, a continuous map $\varphi:\mathbb{T}\times Y_2\to \mathbb{T}$ and a continuous map $u:Y_2\to \mathbb{T}$ such that $\varphi(z,y)=u(y)z$ for every $z\in \mathbb{T}$ or $\varphi(z,y)=u(y)\bar{z}$ for every $z\in \mathbb{T}$, and a homeomorphism $\tau:Y_2\to Y_1$ which satisfy that
\begin{equation}\label{c1teq2}
U(F)(z,y)=h(y)F(\varphi(z,y),\tau(y)),\qquad (z,y)\in \mathbb{T}\times Y_2
\end{equation}
for every $F\in C^1(\mathbb{T},C(Y_1))$. By the hypotheses on $\varphi$ and $\tau$ we infer that $(z,y)\mapsto (\varphi(z,y),\tau(y))$ gives a homeomorphism from $\mathbb{T}\times Y_2$ onto $\mathbb{T}\times Y_1$. As $|h|=1$ on $Y_2$ we infer that $\|F\|_\infty=\|U(F)\|_\infty$ for every $F\in C^1(\mathbb{T},C(Y_1))$. By \eqref{c1teq2} we have
\[
U(F)'(z,y)=h(y)F'(\varphi(z,y),\tau(y))\varphi'(z,y),\quad (z,y)\in \mathbb{T}\times Y_2
\]
for every $F\in C^1(\mathbb{T},C(Y_1))$. As $\varphi'(z,y)=u(y)$ on $\mathbb{T}\times Y_2$ or $\varphi'(z,y)=-u(y)\bar{z}^2$ on $\mathbb{T}\times Y_2$ we infer that
\[
\|U(F)'\|_\infty=\|hF'(\varphi,\tau)\varphi'\|_\infty=\|F'\|_\infty.
\]
It follows that $U$ is an isometry. It is not difficult to prove that $U$ is a surjection. We conclude that $U$ is a surjective isometry.
\end{proof}
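Again, a small numerical experiment illustrates the statement. The sketch below (with an arbitrarily chosen trigonometric polynomial and $Y_2$ a singleton) parametrizes $\mathbb{T}$ by $z=e^{it}$, interprets $F'$ as the derivative along the circle, and checks that $U(F)(z)=hF(u\bar{z})$ preserves $\|F\|_\infty+\|F'\|_\infty$:
\begin{verbatim}
import numpy as np

t = np.linspace(0.0, 2.0*np.pi, 4001, endpoint=False)
z = np.exp(1j*t)
F = lambda w: w**2 + 0.3*w             # a sample function on the circle
u, h = np.exp(1j*0.7), np.exp(1j*1.1)  # |u| = |h| = 1
G = h*F(u*np.conj(z))                  # U(F) with phi(z) = u*conj(z)

def sum_norm(vals):
    # sup norm plus the sup of |d/dt| estimated on the grid
    return np.max(np.abs(vals)) + np.max(np.abs(np.gradient(vals, t)))

print(sum_norm(F(z)), sum_norm(G))     # agree up to discretisation error
\end{verbatim}
Since $z\mapsto u\bar{z}$ only rotates and reflects the circle, both sup norms are unchanged, exactly as the corollary asserts.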
\subsection*{Acknowledgements}
The authors record their sincerest appreciation to the two referees for their valuable comments and advice, which have improved the presentation of this paper substantially.
The first author was partially supported by JSPS KAKENHI Grant Numbers JP16K05172 (representative), JP15K04921 (sharer), JP15K04897 (sharer), Japan Society for the Promotion of Science.
Molecules in strong laser fields have attracted a great deal of attention in
the past few years. Indeed, strong-field phenomena, such as high-order
harmonic generation, above-threshold ionization, and nonsequential double
ionization, may be used as tools for measuring and even controlling dynamic
processes in such systems with attosecond precision \cite{Scrinzi2006}. This
is a direct consequence of the fact that the physical mechanisms behind such
phenomena take place within a fraction of the period of the laser field. For
a typical titanium-sapphire laser used in experiments, whose period is of
the order of $\tau \sim 2.7\,\mathrm{fs}$, this means hundreds of attoseconds.
Explicitly, such phenomena can be described as the laser-assisted
rescattering or recombination of an electron with its parent ion, or
molecule \cite{tstep}. At an instant $t^{\prime },$ this electron reaches
the continuum through tunneling or multiphoton ionization. Subsequently, it
propagates in the continuum, being accelerated by the external field.
Finally, it is driven back towards its parent ion, or molecule, with which
it recombines or rescatters at a later instant $t$. In the former case, the
electron kinetic energy is converted in a high-energy, XUV photon, and
high-order harmonic generation (HHG) takes place \cite{hhgsfa}. In the
latter case, one may distinguish two specific scenarios: The electron may
suffer an elastic collision, which will lead to high-order above-threshold
ionization (ATI) \cite{atisfa}, or transfer part of its kinetic energy to
the core, and release other electrons. Hence, laser-induced nonsequential
double (NSDI), or multiple ionization (NSMI) will occur.
For molecules, there exist at least two centers with which the electron may
recombine or rescatter. This leads to interference patterns which are due to
photoelectron or high-harmonic emission at spatially separated centers, and
which contain information about its specific structure. In the simplest case
of diatomic molecules, such patterns have been described as the microscopic
counterpart of a double-slit experiment \cite{doubleslit,KB2005}.
A legitimate question is what sets of electron orbits are most relevant for
the two- or many-center interference patterns. To understand this issue is a
first step towards controlling such processes by, for instance, an adequate
choice of the shape and polarization of the external field. In the specific
case of diatomic molecules, the electron may start and return to the same
center $C_{j}$, or leave from a center $C_{j}$ and return to a center $%
C_{\nu },\nu \neq j(j=1,2)$. Hence, in total, there exist four possible
processes that contribute to the yield. Recently, these processes have been
addressed in several studies, for above-threshold ionization \cite%
{Usach2006,HBF2007,DM2006,BCCM2007,Milos2008}, high-order harmonic
generation \cite{KBK98,KB2005,PRACL2006,F2007} and nonsequential double
ionization \cite{F2008}. The vast majority of these studies has been
performed using semi-analytical methods, in the context of the strong-field
approximation. In this framework, the transition amplitude can be written as
a multiple integral with a slowly varying prefactor and a semiclassical
action. The structure of the molecule may be either incorporated in the
former \cite{MBBF00,Madsen,Usachenko,Kansas,JMOCL2006,FSLY2008}, or in the
latter \cite{Usach2006,HBF2007,Milos2008,KBK98,PRACL2006,F2007,F2008}. On a
more specific level, when solving these integrals employing saddle-point
methods, it is possible to draw a space-time picture of the laser-assisted
rescattering or recombination process in question, and establish a direct
connection to the orbits of a classical electron in a strong laser field
\cite{orbitshhg}. By incorporating the structure of the molecule in the
action, one obtains modified saddle-point equations which give rise to the
one- or two-center scenarios.
In a previous publication \cite{F2007}, we have addressed this issue to a
large extent for high-order harmonic generation, within the Strong-Field
Approximation (SFA). Our results suggested that the maxima and minima
observed in the spectra were due to the quantum interference of the
processes in which the electron leaves and returns to a specific center $%
C_{j}$ in the molecule with those in which it leaves from $C_{j}$,
but returns to a different center $C_{\nu }.$ There exist, however,
a few ambiguities as far as the interpretation of our findings is
concerned. For instance, in the length-gauge formulation of the SFA,
we found additional potential energy shifts, which depend on the
field strength $E(t)$ and on the internuclear separation $R.$ These
shifts led to a strong suppression of tunnel ionization at one of
the centers. This could have led to the conclusion that the
interference between other processes was not relevant for the
patterns in the spectra.
In this work, we investigate the role of the one- and two-center
recombination scenarios in more detail. In particular, we analyze the
above-mentioned potential energy shifts and their influence on the spectra,
for smaller internuclear distances than those taken in \cite{F2007}. We
also provide an alternative interpretation of the results encountered, based
on effective prefactors and single-atom saddle-point equations.
This paper is organized as follows. In Sec. \ref{transampl}, we briefly
recall the strong-field approximation HHG transition amplitudes. Thereby, we
consider the situation for which the structure of the molecule is either
incorporated in the prefactor (Sec. \ref{prefactor}), or in the
semiclassical action (Sec. \ref{Smodified}). Subsequently (Sec. \ref{results}%
), we analyze the role of the different scenarios, involving one and two
centers, in the high-harmonic spectra, either solving the modified
saddle-point equations (Sec. \ref{orbits}), or mimicking the
quantum-interference between different sets of orbits by an adequate choice
of prefactors (Sec. \ref{singleatom}). Finally, in Sec. \ref{concl} we
outline the main conclusions of this work.
\section{Transition amplitudes}
\label{transampl}
\subsection{General expressions}
As a starting point, we will underline our main assumptions with regard to
the diatomic bound-state wave functions. We consider frozen nuclei, the
linear combination of atomic orbitals (LCAO) approximation, and homonuclear
molecules. Under these assumptions, the electronic bound-state wave function
reads
\begin{equation}
\psi _{0}(\mathbf{r})=C_{\psi }(\phi _{0}(\mathbf{r}-\mathbf{R}/2)+\epsilon
\phi _{0}(\mathbf{r}+\mathbf{R}/2)), \label{LCAO}
\end{equation}%
where $\epsilon =\pm 1,$ $C_{\psi }=1/\sqrt{2(1+\epsilon S(\mathbf{R}))},$
with
\begin{equation}
S(\mathbf{R})=\int \left[ \phi _{0}(\mathbf{r}-\mathbf{R}/2)\right] ^{\ast
}\phi _{0}(\mathbf{r}+\mathbf{R}/2)d^{3}r.
\end{equation}%
The positive and negative signs for $\epsilon $ denote symmetric and
antisymmetric orbitals, respectively. For simplicity, unless otherwise
stated we will consider parallel-aligned molecules.
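For concreteness, if the atomic orbitals $\phi _{0}$ are taken to be hydrogenic $1s$ functions $\phi _{0}(\mathbf{r})\propto e^{-\kappa r}$, the overlap integral has the well-known closed form $S(R)=e^{-\kappa R}\left[ 1+\kappa R+(\kappa R)^{2}/3\right] $. The short Python sketch below evaluates $S(\mathbf{R})$ and the normalization $C_{\psi }$ for both orbital symmetries; the choice $\kappa =\sqrt{2I_{p}}$ and the numerical parameters are merely illustrative assumptions, anticipating those used in Sec. \ref{results}:
\begin{verbatim}
import numpy as np

Ip = 0.57                    # ionization potential (a.u.)
kappa = np.sqrt(2.0*Ip)      # 1s exponent matched to Ip (an assumption)
R = 2.068                    # internuclear distance (a.u.)

S = np.exp(-kappa*R)*(1.0 + kappa*R + (kappa*R)**2/3.0)  # 1s-1s overlap
C_bonding     = 1.0/np.sqrt(2.0*(1.0 + S))   # epsilon = +1
C_antibonding = 1.0/np.sqrt(2.0*(1.0 - S))   # epsilon = -1
print(S, C_bonding, C_antibonding)
\end{verbatim}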
The SFA transition amplitude for high-order harmonic generation reads, in
the specific formulation of Ref. \cite{hhgsfa} and in atomic units,
\begin{eqnarray}
M^{(\Omega )} &\hspace{-0.1cm}=\hspace*{-0.1cm}&i\int_{-\infty }^{\infty }%
\hspace*{-0.5cm}dt\int_{-\infty }^{t}~\hspace*{-0.5cm}dt^{\prime }\int
d^{3}kd_{\mathrm{rec}}^{\ast }(\mathbf{\tilde{k}}(t))d_{\mathrm{ion}}(%
\mathbf{\tilde{k}}(t^{\prime })) \notag \\
&&\exp [iS(t,t^{\prime },\Omega ,\mathbf{k})]+c.c., \label{amplhhg}
\end{eqnarray}%
with the action
\begin{equation}
S(t,t^{\prime },\Omega ,\mathbf{k})=-\frac{1}{2}\int_{t^{\prime }}^{t}[%
\mathbf{k}+\mathbf{A}(\tau )]^{2}d\tau -I_{p}(t-t^{\prime })+\Omega t
\label{actionhhg}
\end{equation}%
and the prefactors $d_{\mathrm{rec}}(\mathbf{\tilde{k}}(t))=\left\langle
\mathbf{\tilde{k}}(t)\right\vert \mathbf{r}.\mathbf{e}_{x}\left\vert \psi
_{0}\right\rangle $ and $d_{\mathrm{ion}}(\mathbf{\tilde{k}}(t^{\prime
}))=\left\langle \mathbf{\tilde{k}}(t^{\prime })\right\vert H_{\mathrm{int}}%
\mathbf{(}t^{\prime }\mathbf{)}\left\vert \psi _{0}\right\rangle .$ Thereby $%
\mathbf{r}$, $\mathbf{e}_{x}$, $H_{\mathrm{int}}\mathbf{(}t^{\prime }\mathbf{%
),}$ $I_{p},$ and $\Omega $ give the dipole operator, the laser-polarization
vector, the interaction with the field, the ionization potential, and the
harmonic frequency, respectively. The explicit expressions for $\mathbf{%
\tilde{k}}(t)$ are gauge dependent, and will be provided below. Physically,
Eq. (\ref{amplhhg}) describes a process in which an electron, initially in a
field-free bound-state $\left\vert \psi _{0}\right\rangle $, is coupled to a
Volkov state $\left\vert \mathbf{\tilde{k}}(t^{\prime })\right\rangle $ by
the interaction $H_{\mathrm{int}}\mathbf{(}t^{\prime }\mathbf{)}$ of the
system with the field. Thereafter, it propagates in the continuum and is
driven back towards its parent ion, or molecule. At a time $t,$ it
recombines, emitting high-harmonic radiation of frequency $\Omega .$
The above-stated transition amplitude may be computed either
numerically or by employing saddle-point methods. In this work, we
employ the latter method and the specific uniform approximation
discussed in Ref. \cite{atiuni}. Explicitly, these equations are
given by the condition that the semiclassical action be stationary,
i.e., that $\partial _{t}S(t,t^{\prime
},\Omega ,\mathbf{k})=\partial _{t^{\prime }}S(t,t^{\prime },\Omega ,\mathbf{%
k})=0$ and $\partial _{\mathbf{k}}S(t,t^{\prime },\Omega ,\mathbf{k})=%
\mathbf{0.}$
For a single atom placed at the origin of the coordinate system, this leads
to
\begin{equation}
\left[ \mathbf{k}+\mathbf{A}(t^{\prime })\right] ^{2}=-2I_{p},
\label{saddle1}
\end{equation}%
\begin{equation}
\int_{t^{\prime }}^{t}d\tau \left[ \mathbf{k}+\mathbf{A}(\tau
)\right] =0, \label{saddle3}
\end{equation}%
and
\begin{equation}
2(\Omega -I_{p})=\left[ \mathbf{k}+\mathbf{A}(t)\right] ^{2}.
\label{saddle2}
\end{equation}%
Eq. (\ref{saddle1}) gives the conservation of energy at the instant $t^{\prime }$ of ionization, and has no real solution. Indeed, the time $t^{\prime }$ will possess a non-vanishing imaginary part. This is due to the
fact that tunneling is a process which has no classical counterpart. In the
limit $I_{p}\rightarrow 0,$ it corresponds to the physical situation of a
classical electron reaching the continuum with vanishing drift velocity. Eq.
(\ref{saddle3}) expresses the fact that the electron propagates in the
continuum from $t^{\prime }$ to $t,$ when it returns to the site of its
release. Eq. (\ref{saddle2}) yields the conservation of energy at the
recombination instant $t,$ when the kinetic energy of the returning electron
is converted into high-order harmonic radiation.
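In practice, Eqs. (\ref{saddle1})--(\ref{saddle2}) are solved numerically for complex $t^{\prime }$ and $t$. The following Python sketch (a minimal illustration, not the code used for the figures) does this in one dimension along the polarization axis for a monochromatic field $A(t)=2\sqrt{U_{p}}\cos \omega t$, eliminating $\mathbf{k}$ by means of the return condition (\ref{saddle3}); the field parameters and the initial guess, which targets the shortest pair of orbits, are assumptions and may require tuning:
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

w, Ip, Up = 0.057, 0.57, 0.66   # frequency, Ip, ponderomotive energy (a.u.)
A0 = 2.0*np.sqrt(Up)

def A(t):                       # vector potential (supports complex times)
    return A0*np.cos(w*t)

def intA(t1, t2):               # integral of A(s) from t1 to t2
    return A0*(np.sin(w*t2) - np.sin(w*t1))/w

def saddle(u, Omega):
    tp = u[0] + 1j*u[1]         # complex ionization time t'
    t  = u[2] + 1j*u[3]         # complex recombination time t
    k  = -intA(tp, t)/(t - tp)  # return condition fixes k
    f1 = 0.5*(k + A(tp))**2 + Ip             # tunneling condition
    f2 = 0.5*(k + A(t))**2 - (Omega - Ip)    # recombination condition
    return [f1.real, f1.imag, f2.real, f2.imag]

Omega = 31*w                    # harmonic near the expected minimum
guess = [0.6/w, 10.0, 3.6/w, 0.0]   # rough short-orbit guess (may need tuning)
sol = fsolve(saddle, guess, args=(Omega,))
print("t' =", sol[0] + 1j*sol[1], "  t =", sol[2] + 1j*sol[3])
\end{verbatim}
The imaginary part of $t^{\prime }$ obtained in this way quantifies the non-classical nature of the tunneling step discussed above.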
One should note that the transition amplitude (\ref{amplhhg}) is gauge
dependent \cite{FKS96,PRACL2006}. Firstly, the interaction Hamiltonians $H_{%
\mathrm{int}}(t^{\prime })$, which are present in $d_{\mathrm{ion}}(\mathbf{%
\tilde{k}}(t^{\prime }))$, are different in the length and velocity gauges.
Furthermore, in both velocity- and length-gauge formulations, field-free
bound states are taken, which are not gauge equivalent. Therefore, different
gauge choices will yield different interference patterns \cite%
{PRACL2006,DM2006,Madsen,Usachenko,F2007,SSY2007}. This problem has been
overcome to a large extent by considering field-dressed bound states, as a
dressed state in the length gauge is gauge-equivalent to a field-free bound
state in the velocity gauge, and vice-versa (for details see \cite%
{dressedSFA,F2007,DM2006}).
\subsection{Double-slit interference condition}
\label{prefactor}
The matrix element $d_{\mathrm{rec}}(\mathbf{\tilde{k}})=\left\langle
\mathbf{\tilde{k}}\right\vert \mathbf{r}\cdot \mathbf{e}_{x}\left\vert \psi
_{0}\right\rangle $ is then given by
\begin{equation}
d_{\mathrm{rec}}^{(b)}(\mathbf{\tilde{k}})=\frac{2iC_{\psi }}{(2\pi )^{3/2}}%
\left[ -\cos (\vartheta )\partial _{p_{x}}\phi (\mathbf{\tilde{k}})+\frac{%
R_{x}}{2}\sin (\vartheta )\phi (\mathbf{\tilde{k}})\right] , \label{prefb}
\end{equation}%
for bonding molecular orbitals (i.e., $\epsilon >0),$ or
\begin{equation}
d_{\mathrm{rec}}^{(a)}(\mathbf{\tilde{k}})=\frac{2C_{\psi }}{(2\pi )^{3/2}}%
\left[ \sin (\vartheta )\partial _{p_{x}}\phi (\mathbf{\tilde{k}})-\frac{%
R_{x}}{2}\cos (\vartheta )\phi (\mathbf{\tilde{k}})\right] , \label{prefa}
\end{equation}%
in the antibonding case (i.e., $\epsilon <0),$ with $\vartheta =\mathbf{%
\tilde{k}}\cdot \mathbf{R}/2.$ In the above-stated equations, $R_{x}$
denotes the projection of the internuclear distance along the direction of
the laser-field polarization.
In Eqs. (\ref{prefb}) and (\ref{prefa}), the terms with a purely
trigonometric dependence on the internuclear distance yield the double-slit
condition in \cite{doubleslit}. The maxima and minima in the spectra which
are caused by this condition are expected to occur for
\begin{equation}
\mathbf{\tilde{k}}\cdot \mathbf{R}=2n\pi \text{ and }\mathbf{\tilde{k}}\cdot
\mathbf{R}=(2n+1)\pi , \label{maxmin}
\end{equation}%
respectively, for bonding molecular orbitals (i.e., $\epsilon >0).$ For
antibonding orbitals, the maxima occur for the odd multiples of $\pi $ and
the minima for the even multiples. In the length and velocity gauges $\mathbf{%
\tilde{k}}(\tau)=\mathbf{k}+\mathbf{A}(\tau)$ and
$\mathbf{\tilde{k}}(\tau)=\mathbf{k} $, where $\tau=t,t^{\prime}$,
respectively.
The remaining terms grow linearly with $R_{x}$, and are an artifact of the
strong-field approximation, due to the fact that the continuum states and
the bound states are not orthogonal in the context of the strong-field
approximation \cite{JMOCL2006,F2007,SSY2007}. For that reason, they will be
neglected here (for rigorous justifications see \cite{DM2006,SSY2007}).
In the length gauge, $d_{\mathrm{rec}}(\mathbf{\tilde{k}}(t))=d_{\mathrm{ion}%
}(\mathbf{\tilde{k}}(t^{\prime }))$, with $\ \mathbf{\tilde{k}}(t)=\mathbf{k}%
+\mathbf{A}(t),$ while in the velocity gauge,
\begin{equation}
d_{\mathrm{ion}}^{(b)}(\mathbf{\tilde{k}})=\frac{C_{\psi }[\mathbf{k}+%
\mathbf{A}(t^{\prime })]^{2}}{(2\pi )^{3/2}}\cos (\vartheta )\phi (\mathbf{%
\tilde{k}}),
\end{equation}%
or%
\begin{equation}
d_{\mathrm{ion}}^{(a)}(\mathbf{\tilde{k}})=-i\frac{C_{\psi }[\mathbf{k}+%
\mathbf{A}(t^{\prime })]^{2}}{(2\pi )^{3/2}}\sin (\vartheta )\phi (\mathbf{%
\tilde{k}}),
\end{equation}%
with $\mathbf{\tilde{k}}(t)=\mathbf{k,}$ for bonding and antibonding
molecular orbitals, respectively. The simplest and most widely adopted \cite%
{MBBF00,Madsen,Usachenko,KB2005,DM2006,JMOCL2006} procedure is to employ the
prefactors $d_{\mathrm{ion}}(\mathbf{\tilde{k}})$ $\ $and $d_{\mathrm{rec}}(%
\mathbf{\tilde{k}})$ and the single-atom saddle-point equations (\ref%
{saddle1})-(\ref{saddle2}). In this case, we consider the origin, from which
the electron leaves and returns, as the geometric center of the molecule.
\subsection{Modified saddle-point equations}
\label{Smodified}
The prefactors $d_{\mathrm{ion}}^{(b)}(\mathbf{\tilde{k}})$ and $d_{\mathrm{%
rec}}^{(b)}(\mathbf{\tilde{k}})$ will now be exponentialized and
incorporated in the action (for details, see \cite{PRACL2006,F2007}). For
the recombination matrix element, we take the expression
\begin{equation}
d_{\mathrm{rec}}^{(b)}(\mathbf{\tilde{k}})=-\frac{2iC_{\psi }}{(2\pi )^{3/2}}%
\left[ \cos \left( \mathbf{\tilde{k}}\cdot \frac{\mathbf{R}}{2}\right)
\partial _{\tilde{p}_{x}}\phi (\mathbf{\tilde{k}})\right] ,
\label{modifieddip}
\end{equation}%
for which the spurious term in $R_{x}$ is absent. In the expression for the
antibonding case, the cosine term in (\ref{modifieddip}) should be replaced
by $\sin (\mathbf{\tilde{k}\cdot R}/2)$. Without loss of generality, the
same procedure can also be applied to more complex orbitals.
This leads to the sum
\begin{equation}
M=\sum_{j=1}^{2}\sum_{\nu =1}^{2}M_{j\nu } \label{sumampl}
\end{equation}%
of the transition amplitudes
\begin{eqnarray}
M_{j\nu } &=&\frac{C_{\psi }}{(2\pi )^{3/2}}\int_{-\infty }^{\infty }dt\int_{-\infty }^{t}dt^{\prime }\int d^{3}k\,\eta (\mathbf{k},t,t^{\prime }) \notag \\
&&\times \exp [iS_{j\nu }(\mathbf{k},\Omega ,t,t^{\prime })],
\label{amplitudes}
\end{eqnarray}%
with $\eta (\mathbf{k},t,t^{\prime })=\left[ \partial _{\tilde{p}_{x}}\phi (%
\mathbf{\tilde{k}}(t))\right] ^{\ast }\partial _{\tilde{p}_{x}}\phi (\mathbf{%
\tilde{k}(}t^{\prime })).$ The terms $S_{j\nu }(\mathbf{k},\Omega
,t,t^{\prime })$ correspond to a modified action, which incorporates the
structure of the molecule. Explicitly, they read
\begin{equation}
S_{j\nu }(\mathbf{k},\Omega ,t,t^{\prime })=S(\mathbf{k},\Omega ,t,t^{\prime
})+(-1)^{\nu +1}\xi (R,t,t^{\prime }) \label{ssame}
\end{equation}%
where $\xi (R,t,t^{\prime })=[\mathbf{\tilde{k}}(t)\mathbf{-}(-1)^{\nu +j}%
\mathbf{\tilde{k}}(t^{\prime })]\cdot \mathbf{R}/2$.
We will now compute the amplitudes $M_{j\nu }$ employing saddle-point
methods. For this purpose, we will seek values for $t,t^{\prime }$ and $%
\mathbf{k}$ which satisfy the conditions $\partial _{\mathbf{k}}S_{j\nu }(%
\mathbf{k},\Omega ,t,t^{\prime })=\mathbf{0},\ \partial _{t}S_{j\nu }(%
\mathbf{k},\Omega ,t,t^{\prime })=0$ and $\partial _{t^{\prime }}S_{j\nu }(%
\mathbf{k},\Omega ,t,t^{\prime })=0$. This leads to the saddle-point
equations
\begin{equation}
\frac{\lbrack \mathbf{k}+\mathbf{A}(t^{\prime })]^{2}}{2}=-I_{p}+(-1)^{2\nu
+j+1}\partial _{t^{\prime }}\mathbf{\tilde{k}}(t^{\prime })\cdot \mathbf{R}%
/2, \label{tunnel}
\end{equation}%
\begin{equation}
\int_{t^{\prime }}^{t}[\mathbf{k}+\mathbf{A}(s)]ds+(-1)^{\nu +1}\partial _{%
\mathbf{k}}\xi =0, \label{returndiff}
\end{equation}%
with $\xi =\xi (R,t,t^{\prime })$ as defined below Eq. (\ref{ssame}), and%
\begin{equation}
\frac{\lbrack \mathbf{k}+\mathbf{A}(t)]^{2}}{2}=\Omega -I_{p}+(-1)^{\nu
}\partial _{t}\mathbf{\tilde{k}}(t)\cdot \mathbf{R}/2. \label{rec}
\end{equation}%
Eq. (\ref{tunnel}) corresponds to the tunnel ionization process, Eq. (\ref{returndiff}) gives the condition that the
electron returns to its parent molecule, and Eq. (\ref{rec}) expresses the
conservation of energy at the instant of recombination, in which the kinetic
energy of the electron is converted into high-order harmonic radiation. The
above-stated saddle-point equations depend on the gauge, on the center $%
C_{j} $ from which the electron was freed and on the center $C_{\nu }$ with
which it recombines. Below we will have a closer look at specific cases. We
will start by analyzing Eqs. (\ref{tunnel}) and (\ref{rec}), which,
physically, correspond to the ionization and recombination process,
respectively.
If the length gauge is chosen, both equations are explicitly written as
\begin{equation}
\frac{\lbrack \mathbf{k}+\mathbf{A}(t^{\prime })]^{2}}{2}=-I_{p}+(-1)^{2\nu
+j}\mathbf{E}(t^{\prime })\cdot \mathbf{R}/2,
\end{equation}%
and%
\begin{equation}
\frac{\lbrack \mathbf{k}+\mathbf{A}(t)]^{2}}{2}=\Omega -I_{p}+(-1)^{\nu +1}%
\mathbf{E}(t)\cdot \mathbf{R}/2,
\end{equation}%
respectively. For this specific formulation, there exist potential-energy
shifts on the right-hand side, which depend on the external laser field $%
\mathbf{E}(\tau )(\tau =t,t^{\prime })$ and on the internuclear distance $%
\mathbf{R}.$ At the ionization or recombination times, depending on the
center, these shifts raise or lower the potential-energy barrier through which
the electron must tunnel, or shift the energy of the state with which it will
recombine. In the specific case discussed here, the barrier is lowered at
$C_{2}$ and raised at $C_{1}.$ Their meaning and existence
altogether has raised considerable debate in the literature \cite%
{PRACL2006,DM2006,SSY2007,BCCM2007}.
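The magnitude of these shifts is not negligible for typical parameters. As a rough estimate (a back-of-the-envelope sketch; the intensity and internuclear distance anticipate those of Sec. \ref{results}), the peak shift $E_{0}R/2$ amounts to a sizable fraction of $I_{p}$:
\begin{verbatim}
import numpy as np

I_Wcm2 = 3e14                  # peak intensity (W/cm^2)
E0 = np.sqrt(I_Wcm2/3.51e16)   # peak field in a.u. (atomic intensity unit)
R, Ip = 2.068, 0.57            # internuclear distance and Ip (a.u.)
shift = E0*R/2.0
print(shift, shift/Ip)         # ~0.096 a.u., i.e. roughly 17% of Ip
\end{verbatim}
This simple estimate makes it plausible that tunnel ionization from the energetically unfavorable center is strongly suppressed.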
In the velocity gauge, the saddle-point equations (\ref{tunnel}) and (\ref%
{rec}) read
\begin{equation}
\frac{\lbrack \mathbf{k}+\mathbf{A}(t^{\prime })]^{2}}{2}=-I_{p},
\label{tunnelv}
\end{equation}%
and%
\begin{equation}
\frac{\lbrack \mathbf{k}+\mathbf{A}(t)]^{2}}{2}=\Omega -I_{p}. \label{recv}
\end{equation}%
These equations do not exhibit the above-mentioned potential-energy shifts,
and resemble the saddle-point equations obtained for a single atom \cite%
{hhgsfa}. Furthermore, if the limit $I_{p}\rightarrow 0$ is taken, Eq.(\ref%
{tunnelv}) describes a classical particle reaching the continuum with
vanishing drift momentum. In contrast, in the length gauge neither the
classical limit nor the single-atom equations are obtained.
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=9cm]{Lphysfig1.EPS}
\end{center}
\caption{Schematic representation of the four possible recombination or
rescattering scenarios described by Eq. (\protect\ref{sumampl}). The centers
$C_{1}$ and $C_{2}$ in the molecule, as well as the transition amplitudes $%
M_{j\protect\nu }$ ($j,\protect\nu =1,2$) are indicated in the figure.}
\label{contour}
\end{figure}
We will now discuss the saddle-point equation (\ref{returndiff}) which gives
the return condition. For both length and velocity gauges, one may
distinguish two main scenarios: either the electron leaves and returns to
the same center, i.e., $\nu =j$, or the electron is freed at a center $C_{j}$
and recombines with the other center $C_{\nu }$, $j\neq \nu ,$ in the
molecule. In the former and latter case, the return condition reads%
\begin{equation}
\int_{t^{\prime }}^{t}[\mathbf{k}+\mathbf{A}(s)]ds=0, \label{return1C}
\end{equation}%
or
\begin{equation}
\int_{t^{\prime }}^{t}[\mathbf{k}+\mathbf{A}(s)]ds+(-1)^{\nu +1}\mathbf{R}=0.
\label{return2C}
\end{equation}%
In Eq. (\ref{return2C}), the indices $\nu =2$ and $\nu =1$ correspond to the transition
amplitudes $M_{12}$ (center $C_{1}$ to center $C_{2})$ and $M_{21}$ (center $%
C_{2}$ to center $C_{1})$, respectively. For clarity, the scenarios
described above are summarized in Fig. 1.
\section{Quantum interference and different recombination scenarios}
\label{results} In the following we will discuss high-order harmonic
spectra. For simplicity, we will consider that the electrons involved are
initially bound in $1s$ states. This gives
\begin{equation}
\phi (\mathbf{\tilde{k}})\sim \frac{1}{[\mathbf{\tilde{k}}^{2}+2I_{p}]^{2}}
\label{dip1s}
\end{equation}%
in the high-order harmonic prefactors $d_{\mathrm{ion}}^{(b)}(\mathbf{\tilde{%
k}})$ and $d_{\mathrm{rec}}^{(b)}(\mathbf{\tilde{k}})$.
In Fig. \ref{interfe1}, we will commence by displaying the overall
contributions, computed using the prefactors $d_{\mathrm{ion}}^{(b)}(\mathbf{%
\tilde{k}})$ and $d_{\mathrm{rec}}^{(b)}(\mathbf{\tilde{k}})$ and
single-atom saddle point equations, instead of the modified saddle-point
equations (\ref{tunnel})-(\ref{rec}), for the length and velocity gauges.
For comparison, we also present the contribution from all transition
amplitudes $M_{j\nu }^{(\Omega )}$. In the present computations, we
considered up to five pairs of orbits starting within the first half-cycle of
the field, i.e., $0\leq t^{\prime }\leq T/2.$
\begin{figure}[tbp]
\begin{center}
\noindent \includegraphics[width=9cm]{Lphysfig2a.EPS}
\end{center}
\caption{Spectra computed employing the single-atom orbits and two-center
prefactors, for the length and velocity gauges, compared to the length-gauge
spectrum obtained employing modified saddle-point equations. We consider
here the modified length form ({\protect\ref{modifieddip}}) of the dipole
operator, which excludes the term with a linear dependence on $R_{x}$. The
atomic system was approximated by the linear combination of $1s$ atomic
orbitals with $I_{p}=0.57$ a.u. The internuclear distance and the alignment
angle are $R=2.068$ a.u., and $\protect\theta =0,$ respectively. The driving
field intensity and frequency are given by $I=3\times 10^{14}\mathrm{W/cm^{2}%
}$, and $\protect\omega =0.057$ a.u., respectively. The interference minimum
at $n=1$ is indicated by the vertical line in the figure. The difference in
the orders of magnitude between the velocity and length gauge spectra is due
to the different prefactors $d_{\mathrm{ion}}(\tilde{\mathbf{k}})$.}
\label{interfe1}
\end{figure}
In the length gauge, the interference condition predicts interference
extrema at $\Omega =I_{p}+n^{2}\pi ^{2}/(2R^{2})$. For the parameters in the
figure, this yields a minimum near $\Omega =31\omega $, for $n=1.$ Even
though this minimum is shallower if modified saddle-point equations are
taken, it can be easily identified.
In contrast, in the velocity gauge, the above-mentioned interference
patterns are absent. This is due to the fact that the interference condition
changes. The maxima and minima are now given by (\ref{maxmin}), with $\mathbf{%
k}$ instead of $\mathbf{\tilde{k}}=\mathbf{k}+\mathbf{A}(t).$ This will lead
to interference extrema at harmonic frequency $\Omega =I_{p}+\left[ n^{2}\pi
^{2}/R^{2}+2n\pi A(t)/R+A^{2}(t)\right] /2.$ Roughly, if we assume that the
vector potential at the electron return time is $A(t)\simeq 2\sqrt{U_{p}},$
this will correspond to $\Omega \sim 97\omega .$ This frequency lies far
beyond the cutoff ( $\Omega \sim 47\omega )$, so that there will be a
breakdown in the interference patterns \cite{F2007,SSY2007}. For this
reason, in the following figures we will consider only the length-gauge
situation.
\subsection{Modified saddle-point equations}
\label{orbits}
Subsequently, in Fig. \ref{orbits1}, we present the contributions from the
different recombination scenarios. In panel (a), the contributions from the
topologically similar scenarios, involving only one or two centers, are
depicted. We observe that the interference minimum mentioned in Fig. 1 is
absent for both types of contributions. \ At first sight, this seems to
contradict the double-slit picture. In fact, for both $|M_{12}+M_{21}|^{2}$
and $|M_{11}+M_{22}|^{2},$ high-order harmonic emission at spatially
separated centers takes place. Therefore, one would expect well-defined
interference patterns to be present. One should note, however, that the
potential energy shifts $\pm \mathbf{E}(t^{\prime })\cdot \mathbf{R}/2$ lower
the potential barrier for the orbits starting at $C_{2}$ and raise the
potential barrier for those starting at $C_{1}.$ Thus, the latter
contributions are strongly suppressed and do not contribute significantly to
the two-center interference.
\begin{figure}[tbp]
\begin{center}
\noindent \includegraphics[width=9cm]{LPhysfig2.EPS}
\end{center}
\caption{Contributions to the high-harmonic yield from the quantum
interference between different types of orbits, for internuclear distance $%
R=2.068$ a.u. The remaining parameters are the same as in Fig. 2. Panel (a):
Orbits involving similar scattering scenarios, i.e., $|M_{11}+M_{22}|^{2}$
and $|M_{12}+M_{21}|^{2}.$ Panel (b): Orbits \emph{starting} at the same
center, i.e., $|M_{11}+M_{12}|^{2}$ and $|M_{21}+M_{22}|^{2}.$ Panel (c):
Orbits \emph{ending} at the same center, i.e., $|M_{11}+M_{21}|^{2}$ and $%
|M_{12}+M_{22}|^{2}.$ For comparison, the full contributions $%
|M_{21}+M_{22}+M_{11}+M_{12}|^{2}$ are displayed as the light gray circles
in the picture. The interference minimum at $n=1$ is indicated as the
vertical line in the figure.}
\label{orbits1}
\end{figure}
This is in agreement with panel (b), in which the contributions from the
processes $|M_{jj}+M_{j\nu }|^{2}(j,\nu =1,2$ and $\nu \neq j)$ starting
from the same center and ending at different centers are depicted. Therein,
the contributions of the processes starting at $C_{2}$ are roughly two
orders of magnitude larger than those starting at $C_{1}.$ This is due to
the fact that the barrier through which the electron must tunnel in order to
reach the continuum is much wider for the latter center. Furthermore, the
two-center interference minimum near $\Omega =31\omega $ is present. This is
expected, as the contributions from the centers $C_{1}$ and $C_{2}$ exhibit
the same order of magnitude for both types of orbits.
Finally, in panel (c) we display the contributions $|M_{jj}+M_{\nu j}|^{2}$
from the processes starting at different centers and ending at the same
center. In this case, the interference minimum is absent. This was expected
for two reasons. First, for these orbits, there is no high-order harmonic
emission taking place at spatially separated centers. Second, even if this
were the case, the contributions from the orbits starting at $C_{2}$ are
much stronger than those starting at $C_{1}$.
Since the potential energy shifts $\pm \mathbf{E}(t^{\prime })\cdot \mathbf{R%
}/2$ depend on the internuclear distance, it is legitimate to ask the
question of whether, for small internuclear distances, a minimum is present
in the contributions from the topologically similar scenarios.\ In Fig. \ref%
{smallR1}, we considered such a situation. From the interference condition,
we expect a minimum near $\Omega =69\omega $. This minimum is present for
the overall contributions, and also for the processes $|M_{jj}+M_{j\nu
}|^{2}(j,\nu =1,2$ and $\nu \neq j)$ starting from the same center and
ending at different centers [Fig. \ref{smallR1}.(a)]. It is however absent
for the interference of topologically similar processes [Fig. \ref{smallR1}%
.(b)]. This is due to the fact that, even for this small internuclear distance,
the orbits starting from $C_{2}$ lead to larger contributions than those
starting from $C_{1}.$ Indeed, a closer look at Fig. \ref{smallR1}.(a) shows
that the contributions $|M_{11}+M_{12}|^{2}$ are roughly one order of
magnitude smaller than $|M_{22}+M_{21}|^{2}$.
\begin{figure}[tbp]
\begin{center}
\noindent \includegraphics[width=9cm]{Lphysfig3.EPS}
\end{center}
\caption{Contributions from different types of orbits to the high-harmonic
yield, for internuclear distance $R=1.2$ a.u. and intensity $I=8\times
10^{14}\mathrm{W/cm}^{2}.$ The remaining parameters are the same as in the
previous figures. Panel (a) gives the contributions from the topologically
similar scattering scenarios, i.e., $|M_{11}+M_{22}|^{2}$ and $%
|M_{12}+M_{21}|^{2},$ and panel (b) those of the orbits starting at the same
center, i.e., $|M_{11}+M_{12}|^{2}$ and $|M_{21}+M_{22}|^{2}.$}
\label{smallR1}
\end{figure}
Possibly, in order to obtain well-defined maxima and minima for the
contributions of topologically similar scenarios, it would be necessary to
reduce the internuclear distance even more. In this case, however, none of
the assumptions adopted in this paper, such as the LCAO approximation, hold.
In this context, it is worth noticing that the parameters adopted in Fig. %
\ref{smallR1} are also somewhat unrealistic, as far as this specific
approximation is concerned. If, however, an alternative ionization pathway
is provided, so that the electron may reach the continuum without the need
of overcoming the potential-energy barriers, the contributions from the
topologically similar scenarios may lead to well-defined patterns. Indeed,
in previous work, we employed an additional attosecond-pulse train in order
to release the electron in the continuum, and obtained an interference
minimum in this case \cite{F2007}. We were, however, changing the physics of
the problem by providing a different ionization mechanism. In the following,
we will investigate the issue of the potential-energy shifts for this set of
parameters, employing an alternative method.
\subsection{Modified prefactors}
\label{singleatom}
On the other hand, the transition amplitudes $M_{j\nu }$ may also be grouped
in such a way as to obtain effective prefactors. Such prefactors may then be
related to the quantum interference of specific types of orbits. Hence, one
may mimic the influence of the above-stated scenarios even if the
single-atom saddle-point equations (\ref{saddle1})-(\ref{saddle2}) are taken
into account. For the symmetric combination of atomic orbitals considered
here, there would be four different sets of prefactors, which are explicitly
given by%
\begin{eqnarray}
d_{\mathrm{ion}}^{(j\nu )}(\mathbf{k},t,t^{\prime }) &=&2\exp [(-1)^{j}i%
\mathbf{\tilde{k}}(t^{\prime })\cdot \mathbf{R}/2] \label{samestart} \\
&&\times \cos [\mathbf{\tilde{k}}(t)\cdot \mathbf{R}/2]\eta (\mathbf{k}%
,t,t^{\prime }), \notag
\end{eqnarray}%
\begin{eqnarray}
d_{\mathrm{end}}^{(j\nu )}(\mathbf{k},t,t^{\prime }) &=&2\exp [(-1)^{j}i[%
\mathbf{\tilde{k}}(t)\mathbf{+A}(t^{\prime })]\cdot \mathbf{R}/2]
\label{sameend} \\
&&\times \cos [\mathbf{k}\cdot \mathbf{R}/2]\eta (\mathbf{k},t,t^{\prime }),
\notag
\end{eqnarray}%
\begin{equation}
d_{\mathrm{same}}(\mathbf{k},t,t^{\prime })=2\cos [\mathbf{[A}(t)-\mathbf{A}%
(t^{\prime })]\cdot \mathbf{R}/2]\eta (\mathbf{k},t,t^{\prime }),
\end{equation}%
and
\begin{eqnarray}
d_{\mathrm{diff}}(\mathbf{k},t,t^{\prime }) &=&2\cos [\mathbf{k}\cdot
\mathbf{R}+[\mathbf{A}(t)+\mathbf{A}(t^{\prime })]\cdot \mathbf{R}/2] \\
&&\times \eta (\mathbf{k},t,t^{\prime }). \notag
\end{eqnarray}
\begin{figure}[tbp]
\begin{center}
\noindent \includegraphics[width=9cm]{Lphysfig5.EPS}
\end{center}
\caption{Contributions from different types of orbits to the high-harmonic
yield, for the same parameters as in Fig. 4. Panel (a) gives the
contributions from the topologically similar scattering scenarios, i. e., $%
|M_{11}+M_{22}|^{2}$, and $|M_{12}+M_{21}|^{2},$\ and panel (b) of the
orbits starting at the same center, i.e., $|M_{11}+M_{12}|^{2}$ and $%
|M_{21}+M_{22}|^{2}.$ All results in this figure have been computed
mimicking the above-stated processes by employing the modified prefactors
(27)-(30) and single-atom saddle-point equations.}
\label{smallR2}
\end{figure}
The prefactor $d_{\mathrm{ion}}^{(j\nu )}$ corresponds to the transition
amplitudes $M_{jj}+M_{j\nu }$ in which the electron starts at the same
center and recombines with different centers in the molecule. The prefactor $%
d_{\mathrm{end}}^{(j\nu )}$ is related to the transition amplitudes $M_{\nu
j}+M_{jj}$ in which the electron starts at different centers, but ends at
the same center $C_{j}$. Finally, $d_{\mathrm{same}}$ and $d_{\mathrm{diff}}$
corresponds to the topologically similar processes, in which only one, or
two center scenarios, respectively, are involved. Interestingly, only the
prefactors $d_{\mathrm{ion}}^{(j\nu )}$ lead to the same interference
conditions as the overall double-slit prefactor (\ref{prefb}).
Furthermore, one should note that, if all parameters involved were real, for
the first two prefactors there would be the symmetry $|d_{\mathrm{ion}%
}^{(j\nu )}(\mathbf{k},t,t^{\prime })|^{2}=|d_{\mathrm{ion}}^{(\nu j)}(%
\mathbf{k},t,t^{\prime })|^{2}$ and $|d_{\mathrm{end}}^{(j\nu )}(\mathbf{k}%
,t,t^{\prime })|^{2}=|d_{\mathrm{end}}^{(\nu j)}(\mathbf{k},t,t^{\prime
})|^{2}.$ This would lead to the same transition probabilities, as one
transition amplitude is the complex conjugate of the other. This is,
however, not the case, and can be seen by inspecting Eq. (\ref{samestart}).
Specifically in the length gauge, $\mathbf{\tilde{k}}(t^{\prime })\mathbf{%
=k+A}(t^{\prime })$. Depending on the center, this will lead to
exponentially decreasing or increasing factors $\exp [\mp \mathrm{Im}%
[k+A(t^{\prime })]R]$ in the transition probability $|M_{jj}+M_{\nu j}|^{2}.$
Clearly, this procedure is less rigorous than that adopted in the previous
section, as we are not considering the influence of the potential-energy
shifts on the imaginary part of $t^{\prime }.$
In Fig. \ref{smallR2}, we display the results obtained following the
above-stated procedure, for the same parameters as in Fig.~\ref{smallR1}.
Once more, we see that the contributions of topologically similar processes,
involving either one or two centers, do not lead to a well-defined
interference minimum (Fig. \ref{smallR2}.(a)). Additionally, the quantum
interference of the two different kinds of processes starting from the same
center $C_{j}$ leads to a well-defined minimum at the expected frequency $%
\Omega =69\omega $. Furthermore, the contributions from the orbits starting
at $C_{1}$ are again roughly one order of magnitude smaller. The main
difference between the two approaches is that the interference minimum is
much deeper if modified prefactors are taken, as compared with the results
obtained with modified saddle-point equations. This discrepancy is present
throughout, and has also been observed in Ref. \cite{F2007}.
\section{Conclusions}
\label{concl} The results presented in this work indicate that the
double-slit interference maxima and minima in the high-order harmonic
spectra, which are attributed to HHG at spatially separated centers, are
mainly due to the quantum interference between the processes $%
|M_{jj}+M_{j\nu }|^{2}$ $(j=1,2)$ in which the electron is released into the
continuum at a center $C_{j}$ in the molecule and, subsequently, recombines
either at the same center or at a different center $C_{\nu }$. This can be
seen either by employing modified saddle-point equations, in which the
one- or two-center scenarios are incorporated in the action, or by utilizing
modified prefactors in which only the above-stated processes are included.
In particular, when using the latter method, the transition amplitudes
related to both processes can be grouped in such a way that the
corresponding prefactor $d_{\mathrm{ion}}^{(j\nu )}(\mathbf{k},t,t^{\prime })
$ exhibits the same interference conditions as those in the overall
prefactor (\ref{prefb}). This is in agreement with the results obtained in
\cite{F2007}.
These results are not obvious, as there are other processes which lead to
high-order harmonic emission at different centers in the molecule. They do
not lead, however, to the double-slit interference patterns. This is due to
the fact that, in the present framework, there exist potential-energy shifts
that, depending on the center, lower or raise the barrier through which
the electron must initially tunnel. Therefore, they strongly suppress the
contributions to the spectra from one of the centers in the molecule. This
will lead to an absence of the two-center interference patterns for
processes starting at different centers. We have verified that this
suppression occurs even for small internuclear separations.
Such potential-energy shifts, however, are only present in the length-gauge
strong-field approximation and have raised a great deal of controversy \cite%
{PRACL2006,F2007,BCCM2007,SSY2007}. In fact, it is not even clear whether
they are not an artifact of the SFA. On the other hand, even if single-atom
saddle-point equations are taken, we found a suppression in the yield for
one of the centers of the molecule. This seemingly counterintuitive
result is related to the fact that the electron start time $t^{\prime }$ has
a non-vanishing imaginary part, which suppresses or enhances the yield
through the corresponding prefactors.
\acknowledgments This work has been financed by the UK EPSRC (Advanced
Fellowship, Grant no. EP/D07309X/1).
Optical wireless communications (OWC), due to its potential for bandwidth-hungry applications, has become a very important area of research~\cite{o2005optical,chan2006free,das2008requirements,Kumar2010Led-based,Elgala2011review,Borah2012review, Gancarz13}. However, some challenges remain, especially in atmospheric environments, where \textit{robustness} is a key consideration. Therefore, in the design of high data rate OWC links, we need to consider the fading induced by atmospheric impairments, which can be described by the log-normal (LN) statistical model~\cite{Beaulieu2008itct,giggenbach2008fading}. To combat fading, multi-input-multi-output (MIMO) OWC (MIMO-OWC) systems distribute the transmitted symbols over transmitting apertures (space) and/or symbol periods (time). Full large-scale diversity is achieved when the total degrees of freedom (DoF) available in the MIMO-OWC system are fully utilized.
Unfortunately, unlike MIMO techniques for radio frequency (MIMO-RF) communications with Rayleigh fading, there are \textit{two} significant challenges in MIMO-OWC communications. The \textit{first} is that there does not exist any available mathematical tool that could be directly applied to the analysis of the average pair-wise error probability (PEP) when LN fading is involved. Although mathematical formulae do exist in the literature for numerically and accurately computing integrals involving the LN distribution~\cite{haas2002space,navidpour2007itwc,Beaulieu2008itct}, they cannot be used for a theoretical analysis of diversity. The \textit{second} is the \textit{nonnegative constraint} on the design of transmission for MIMO-OWC, which is a major difference between MIMO-RF communications and MIMO-OWC. It is because of this constraint that the currently available well-developed MIMO techniques for RF communications cannot be directly utilized for MIMO-OWC. The nonnegative constraint can be satisfied by properly adding direct-current (DC) components to the transmitter designs, so that existing advanced MIMO techniques~\cite{tarokh98} for RF communications, such as orthogonal space-time block codes (OSTBC)~\cite{alamouti98,tarokh99}, could be used in MIMO-OWC. However, the power loss arising from the DC components means that these modified OSTBCs~\cite{simon2005alamouti,wang2009mimo} have worse error performance in a LN fading optical channel than the repetition code (RC)~\cite{navidpour2007itwc,majid2008twc,abaza2014diversity}.
All the aforementioned factors greatly motivate us to develop a general criterion on the design of full large-scale diversity transmission for MIMO-OWC. As an initial exploration, we consider the space-alone code, and intend to uncover some unique characteristics of MIMO-OWC by establishing a general criterion for the design of full large-scale diversity space codes (FLDSC) and attaining an optimal analytical solution for a specific $2\times 2$ linear FLDSC.
\section{Channel Model And Space Code}\label{sec:model}
\subsection{Channel model with space code}
Let us consider an $M\times N$ MIMO-OWC system having $M$ receiver apertures and $N$ transmitter apertures transmitting the symbol vector $\mathbf{s}$ with entries $s_l$, $l=1,\ldots,L$, which are randomly, independently and equally likely selected from a given constellation. To facilitate the transmission of these $L$ symbols through the $N$ transmitters in one time slot (channel use), each symbol $s_l$ is mapped by a space encoder $\mathbf{F}_{l}$ to an $N\times 1$ space code vector $\mathbf{F}_l\left(s_l\right)$; these vectors are then summed together, resulting in an $N\times 1$ space codeword given by $\mathbf{x}=\sum_{l=1}^L\mathbf{F}_l\left(s_l\right)$, where the $n$-th element of $\mathbf{x}$ represents the coded symbol to be transmitted from the $n$-th transmitter aperture. These coded symbols are then transmitted to the receivers through flat-fading path coefficients, which form the elements of the $M\times N$ channel matrix $\mathbf{H}$. The received space-only symbol, denoted by the $M\times 1$ vector $\mathbf{y}$, can be written as
\begin{eqnarray}\label{eqn:system_model}
\mathbf{y}=\frac{1}{P_{op}}\mathbf{H}\mathbf{x}+\mathbf{n},
\end{eqnarray}
where $P_{op}$ is the average optical power of $\mathbf{x}$ and the entries of the channel matrix $\mathbf{H}$ are independent and LN distributed, i.e., $h_{ij}=e^{z_{ij}}$, where $z_{ij}\sim\mathcal{N}\left(\mu_{ij},\sigma_{ij}^2\right)$, $i=1,\ldots,M$, $j=1,\ldots,N$.
The probability density function (PDF) of $h_{ij}$ is
\begin{eqnarray}
f_{H}\left(h_{ij}\right)=\frac{1}{\sqrt{2\pi}h_{ij}\sigma_{ij} }\exp\left(-\frac{\left(\ln h_{ij}-\mu_{ij}\right)^2}{2\sigma _{ij}^{2}}\right)
\end{eqnarray}
The PDF of $\mathbf{H}$ is $f_{\mathbf{H}}\left(\mathbf{H}\right)=\prod_{i=1}^{M}\prod_{j=1}^{N}f_{H}\left(h_{ij}\right)$.
The signalling scheme of $\mathbf{s}$ is unipolar pulse amplitude modulation (PAM) to meet the unipolarity requirement of the intensity modulator (IM), i.e., $\mathbf{x}\in\mathbb{R}_+^{N\times 1}$. As an example, the constellation of unipolar $2^p$-ary PAM is $\mathcal{B}_{2^p}=\{0,1,\ldots,2^p-1\}$, where $p$ is a positive integer. Then, the equivalent constellation of $\mathbf{s}$ is $\mathcal{S}=\{\mathbf{s}:s_i\in \mathcal{B}_{2^p},i=1,\ldots,N\}$, i.e., ${\mathcal S}={\mathcal B}_{2^p}^N$.
Furthermore, for noise vector $\mathbf{n}$, the two primary sources at the receiver front end are due to noise from the receive electronics and shot noise from the received DC photocurrent induced by background radiation~\cite{Karp1988,Barry1994Ifrd}. By the central limit theorem, this high-intensity shot noise for the lightwave-based OWC is closely approximated as additive, signal-independent, white, Gaussian noise (AWGN)~\cite{Barry1994Ifrd} with zero mean and variance $\sigma _{n}^{2}$.
By rewriting the channel matrix as a vector and aligning the code-channel product to form a new channel vector, we have $\mathbf{Hx}=\left(\mathbf{I}_{M}\otimes\mathbf{x}^T\right)\textrm{vec}\left(\mathbf{H}\right)$, where $\otimes$ denotes the Kronecker product operation and $\textrm{vec}\left(\mathbf{H}\right)=\left[h_{11},\ldots,h_{1N},\ldots,h_{M1},\ldots,h_{MN}\right]^T$.
For discussion convenience, we call $\mathbf{I}_{M}\otimes\mathbf{x}^T$ a codeword matrix, denoted by $\mathbf{ S}\left(\mathbf{x}\right)$. Then, the correlation matrix of the corresponding error coding matrix is given by
\begin{eqnarray}\label{eqn:rank_one_equivalence}
\mathbf{ S}^T\left(\mathbf{e}\right)\mathbf{ S}\left(\mathbf{e}\right)=
\mathbf{I}_{M}\otimes\mathbf{X}\left(\mathbf{e}\right)
\end{eqnarray}
where $\mathbf{X}\left(\mathbf{e}\right)=\mathbf{e}\mathbf{e}^T$, $\mathbf{e}=\mathbf{F}\left(\mathbf{\hat{s}}\right)-\mathbf{F}\left(\mathbf{s}\right)$ is the error vector with $\mathbf{s}\neq\mathbf{\hat{s}}$ and $\mathbf{s},\mathbf{\hat{s}}\in\mathcal{S}$. All these non-zero $\mathbf{e}$ form an error set, denoted by $\mathcal{E}$.
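As a sanity check on this vectorization step, the identity $\mathbf{Hx}=\left(\mathbf{I}_{M}\otimes\mathbf{x}^T\right)\textrm{vec}\left(\mathbf{H}\right)$ with the row-major stacking of $\mathbf{H}$ used above can be verified numerically. The following minimal sketch (Python with NumPy; the dimensions and random draws are illustrative only) does exactly that:
\begin{verbatim}
import numpy as np

M, N = 4, 3
rng = np.random.default_rng(0)
H = rng.lognormal(size=(M, N))       # LN-distributed path gains
x = rng.random(N)                    # nonnegative coded symbols

S = np.kron(np.eye(M), x[None, :])   # codeword matrix S(x) = I_M kron x^T
vecH = H.reshape(-1)                 # row-major: [h_11,...,h_1N,h_21,...]
assert np.allclose(S @ vecH, H @ x)  # both sides agree
\end{verbatim}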
\subsection{Problem formulation}
To formally state our problem, we make the following assumptions throughout this paper.
\begin{enumerate}
\item \textit{Power constraint}. The average optical power is constrained, i.e., $E\left[\sum_{i=1}^{N}x_{i}\right]=P_{op}$. Although limits are placed on both the average and peak optical power transmitted, in the case of most practical modulated optical sources, it is the average optical power constraint that dominates~\cite{hranilovic2003optical}.
\item \textit{SNR definition}. The optical SNR is defined by $\rho_{op}=\frac{P_{op}}{\sqrt{N\sigma_n^2}}$, since the noise variance per dimension is assumed to be $\sigma_n^2$. Thus, in error-performance expressions involving the squared Euclidean distance, the term $\rho$ is in fact equal to
\begin{eqnarray}\label{eqn:electrical_snr}
\rho=\frac{1}{N\sigma_n^2}
\end{eqnarray}
with the optical power normalized by $\frac{1}{P_{op}}$. Unless stated otherwise, $\rho$ is hereafter referred to as the squared optical SNR.
\end{enumerate}
Under the above assumptions, our primary task in this paper is to establish a general criterion on the design of FLDSC and solve the following problem.
\begin{problem}\label{prob:design_problem} Design the space encoder $\mathbf{F}(\cdot)$ subject to the total optical power constraint such that 1) $\forall \mathbf{s}\in \mathcal{S}, \mathbf{F}\left(\mathbf{s}\right)$ meets the unipolarity requirement of IM; 2) full large-scale diversity is enabled for the maximum-likelihood (ML) receiver.~\hfill\hfill $\blacksquare$
\end{problem}
\section{Design Criteria for Space Code}
This section aims at deriving the PEP of MIMO-OWC and then establishing a general design criterion for the linear space-coded system.
\subsection{PEP of MIMO-OWC}\label{sec:performance_analysis}
Given a channel realization $\mathbf{H}\in\mathbb{R}_{+}^{M\times N}$ and a transmitted signal vector $\mathbf{s}$, the probability
of transmitting $\mathbf{s}$ and deciding in favor of $\hat{\mathbf s}$ with the ML receiver is given by~\cite{forney98}
\begin{eqnarray}\label{eqn:ml_detection_pep1}
P\left(\mathbf{s}\rightarrow\mathbf{\hat{s}}|\mathbf{H}\right)=Q\left(\frac{d\left(\mathbf{e}\right)}{2}\right) \end{eqnarray}
where $d^2\left(\mathbf{e}\right)=\frac{\rho}{NP_{op}^2}\textrm{vec}\left(\mathbf{H}\right)^T\mathbf{ S}^T\left(\mathbf{e}\right)\mathbf{ S}\left(\mathbf{e}\right)\textrm{vec}\left(\mathbf{H}\right)=\frac{\rho}{NP_{op}^2}\sum_{i=1}^M\left(\mathbf{h}_i^T\mathbf{e}\right)^2$ with $\mathbf{h}_i=\left[h_{i1},\ldots,h_{iN}\right]^T,i=1,\ldots,M$. Averaging \eqref{eqn:ml_detection_pep1} over $\mathbf{H}$ yields
\begin{eqnarray} \label{eqn:ml_detection_pep2}
P\left(\mathbf{s}\rightarrow\mathbf{\hat{s}}\right)
&=&\int P\left(\mathbf{s}\rightarrow\mathbf{\hat{s}}|\mathbf{H}\right)f_{\mathbf{H}}\left(\mathbf{H}\right)d\mathbf{H}.
\end{eqnarray}
To extract the dominant term of~\eqref{eqn:ml_detection_pep2}, we make an assumption for the time being. Later on, we will prove that this condition is actually necessary and sufficient for $\mathbf{ X}\left(\mathbf{e}\right)$ to render full diversity.
\begin{assumption} \label{assumpt:existence_of_rectangular}
Any $\mathbf{e}\in {\mathcal E}$ is unipolar without zero entry.~\hfill\hfill $\blacksquare$
\end{assumption}
\begin{theorem}\label{theorem:pep_mimo_owc}
Under Assumption \ref{assumpt:existence_of_rectangular},
$P\left(\mathbf{s}\rightarrow\mathbf{\hat{s}}\right)$ is bounded by
\begin{eqnarray}\label{eqn:pep_mimoowc}
&&\underbrace{C_{L} \left(\ln\rho\right)^{-MN}e^{-\sum_{i=1}^{M}\sum_{j=1}^{N}\frac{\left(\ln\rho +\ln \left(P_{op}^2\Omega\right)-\ln\left(M\sum_{k=1}^Ne_k^2\right)\right)^2}{8\sigma_{ij}^2}}}_{P_{L}\left(\mathbf{s}\rightarrow\mathbf{\hat{s}}\right)}
\nonumber\\
&&\le P\left(\mathbf{s}\rightarrow\mathbf{\hat{s}}\right) \le \underbrace{C_{U1}
\rho^{-\frac{MN}{2}}
e^{-\sum_{i=1}^{M}\sum_{j=1}^{N}\frac{\ln^2 \rho}{8\sigma_{ij}^2 }}}_{P_{U1}\left(\mathbf{s}\rightarrow\mathbf{\hat{s}}\right)}
\nonumber\\
&&+\underbrace{ C_{U2}\left(\ln\rho\right)^{-MN}e^{-\sum_{i=1}^{M}\sum_{j=1}^{N}\frac{\left(\ln \frac{\rho}{\ln^2 \rho} +\ln \left(P_{op}^2\Omega\right)-\ln e_j^2\right)^2}{8\sigma_{ij}^2}}}_{P_{U2}\left(\mathbf{s}\rightarrow\mathbf{\hat{s}}\right)}
\end{eqnarray}
where $\Omega=\sum_{i=1}^{M}\sum_{j=1}^{N}\sigma_{ij}^{-2}$,
$C_{L}=\frac{\prod_{i=1}^M\prod_{j=1}^N\sigma_{ij}}{\left(4\pi\right)^{MN}e^{-\frac{MN}{2}}}Q\left(\frac{1}{2}\left(\sum_{k=1}^Ne_k^2\right)^{-\frac{1}{2}}\right)$, $C_{U1}=\frac{e^{\frac{\sum_{i=1}^M\sum_{j=1}^N\sigma_{ij}^2}{2}}}{2\prod_{i=1}^M\prod_{j=1}^N\sigma_{ij}}
\left(\frac{\sum_{k=1}^{N}e_k^2}{NP_{op}^2}\right)^{-\frac{MN}{2}}$ and $C_{U2}=\frac{\left(NP_{op}^2\right)^{MN}}{2\prod_{i=1}^M\prod_{j=1}^N\sigma_{ij}}e^{-\frac{\Omega}{8}\ln^2\left(\frac{NP_{op}^2\Omega}{M}\right)}$.
~\hfill\hfill $\blacksquare$
\end{theorem}
Now, we can see that in \eqref{eqn:pep_mimoowc}, $P_{L}\left(\mathbf{s}\rightarrow\mathbf{\hat{s}}\right)$ and $P_{U1}\left(\mathbf{s}\rightarrow\mathbf{\hat{s}}\right)$ have the same exponential decay, $\exp\left(-\frac{\Omega}{8}\ln^2 \rho\right)$, in the high-SNR limit, whereas the exponential term of $P_{U2}\left(\mathbf{s}\rightarrow\mathbf{\hat{s}}\right)$ is $\exp\left(-\frac{\Omega}{8}\ln^2\frac{\rho}{\ln^2\rho}\right)$, which decays more slowly at high SNR. That being said, we have successfully attained the dominant term, $P_{U2}\left(\mathbf{s}\rightarrow\mathbf{\hat{s}}\right)$, of the upper bound of $P\left(\mathbf{s}\rightarrow\mathbf{\hat{s}}\right)$, which captures the dominant high-SNR behaviour of $P\left(\mathbf{s}\rightarrow\mathbf{\hat{s}}\right)$.
With all the aforementioned preparations, we are now able to give the general design criterion for FLDSC of MIMO-OWC in the following subsection.
\subsection{Design Criterion for FLDSC}
The discussion in Subsection~\ref{sec:performance_analysis} tells us that $P_{U2}\left(\mathbf{s}\rightarrow\mathbf{\hat{s}}\right)$ is the dominant term of the upper bound of $P\left(\mathbf{s}\rightarrow\mathbf{\hat{s}}\right)$ in \eqref{eqn:pep_mimoowc}. With this, we will provide a guideline for the space code design in this subsection. To define the performance parameters to be optimized, we rewrite $P_{U2}\left(\mathbf{s}\rightarrow\mathbf{\hat{s}}\right)$ as follows.
\begin{eqnarray}\label{eqn:dominant_term}
P_{U2}\left(\mathbf{s}\rightarrow\mathbf{\hat{s}}\right)=
C_{U2}\mathcal{G}_{c}\left(\mathbf{e}\right)
\left(\frac{\rho}{\ln^2 \rho}\right)^{\frac{\Omega}{4}\ln\left(
\frac{NP_{op}^2\Omega}{M}\right)-\frac{3}{4}\ln \mathcal{G}_{d}\left(\mathbf{e}\right)}&&\nonumber\\
\times
\left(\ln \rho\right)^{-MN}\exp\left(-\frac{\Omega}{8} \ln^2 \frac{\rho}{\ln^2 \rho}\right)&&
\end{eqnarray}
where $\mathcal{G}_{d}\left(\mathbf{e}\right)=\prod_{j=1}^{N}|e_j|^{\sum_{i=1}^M\sigma_{ij}^{-2}}$ and $\mathcal{G}_{c}\left(\mathbf{e}\right)=\exp\left( \frac{1}{2}\sum_{i=1}^{M}\sum_{j=1}^{N}
\left(\ln |e_j|^{\sigma_{ij}}\right)^2\right)\left(\frac{NP_{op}^2\Omega}{M}\right)^{
\frac{1}{2}\ln\ln \mathcal{G}_{d}\left(\mathbf{e}\right)}$.
Here, the following three factors dictate the minimization of $P_{U2}\left(\mathbf{s}\rightarrow\mathbf{\hat{s}}\right)$:
\begin{enumerate}
\item \textit{Large-scale diversity gain}. The exponent $\Omega$ with respect to $\ln \frac{\rho}{\ln^2 \rho}$ governs the behavior of $P_{U2}\left(\mathbf{s}\rightarrow\mathbf{\hat{s}}\right)$. For this reason, $\Omega$ is named the \textit{large-scale diversity gain}. Full large-scale diversity is achieved when all the $MN$ terms in $\Omega=\sum_{i=1}^{M}\sum_{j=1}^{N}\sigma_{ij}^{-2}$ offered by the $N\times M$ MIMO-OWC system are fully utilized. Thus, when we design a space code, full large-scale diversity must be assured \textit{in the first place}.
\item \textit{Small-scale diversity gain}. $\mathcal{G}_{d}\left(\mathbf{e}\right)=\prod_{j=1}^{N}|e_j|^{\sum_{i=1}^M\sigma_{ij}^{-2}}$ is called the \textit{small-scale diversity gain}, which affects the polynomial decay in terms of $\frac{\rho}{\ln^2 \rho}$. $\min_{\mathbf{e}}\mathcal{G}_{d}\left(\mathbf{e}\right)$ should be maximized to optimize the error performance of the worst error event. Since the small-scale diversity gain affects the average PEP via the polynomial decay speed of the error curve, the small-scale diversity gain of the space code is what is to be optimized \textit{in the second place}.
\item \textit{Coding gain.} $\mathcal{G}_{c}\left(\mathbf{e}\right)$ is defined as the \textit{coding gain}. On condition that both diversity gains are maximized, if there still exist DoF for further optimization of the coding gain, $\max_{\mathbf{e}\in\mathcal{E}}\mathcal{G}_{c}\left(\mathbf{e}\right)$ should be minimized as the \textit{last step} of the systematic design of the space code.
\end{enumerate}
In what follows, we show that Assumption \ref{assumpt:existence_of_rectangular} is in fact a \textit{sufficient and necessary} condition for full large-scale diversity, which is summarized as the following theorem:
\begin{theorem} \label{theorem:space_code_full_diversity}
A space code enables full large-scale diversity if and only if $\forall \mathbf{e}\in \mathcal{E}$, $\mathbf{e}$ is unipolar without zero-valued entries or, equivalently, $\forall \mathbf{e}\in \mathcal{E}$, $\mathbf{ X}\left(\mathbf{e}\right)$ is positive.~\hfill\hfill $\blacksquare$
\end{theorem}
With these results, we can proceed to design FLDSC systematically in the following section.
\section{Optimal Design of Specific Linear FLDSC }\label{sec:design_example}
In this section, we will exemplify our established criterion in~\eqref{eqn:dominant_term} by designing a specific \textit{linear} FLDSC for $2\times 2$ MIMO-OWC with unipolar PAM. For this particular design, a closed-form space code optimizing both diversity gains will be obtained by taking advantage of some available properties of Farey sequences in number theory, as well as by developing some new ones.
\subsection{Design Problem Formulation}
Consider a $2\times2$ MIMO-OWC system with $\mathbf{F}\left(\mathbf{s}\right)=\mathbf{F}\mathbf{s}$, where
$ \mathbf{F} =
\left({
\begin{array}{cc}
f_{11}& f_{12}\\
f_{21}& f_{22}\\
\end{array}
}\right)$ and $\mathbf{ X}\left(\mathbf{e}\right)=\left(
{\begin{array}{cc}
e_{1}^2&e_1e_2\\
e_1e_2&e_2^2\\
\end{array}
}\right)$.
By Theorem~\ref{theorem:space_code_full_diversity}, $\mathbf{ X}\left(\mathbf{e}\right)$ should be positive to maximize the large-scale diversity gain.
On the other hand, from the structure of $\mathbf{ X}\left(\mathbf{e}\right)$ and \eqref{eqn:dominant_term}, the small-scale diversity gain is $\mathcal{G}_{d}\left(\mathbf{e}\right)=|e_1e_2|$ under the assumption that no channel state information is available at the transmitter (CSIT).
Therefore, to optimize the worst case over $\mathcal{E}$, FLDSC design is formulated as follows:
\begin{eqnarray}\label{eqn:modulator_design}
&&\max_{f_{11},f_{12},f_{21},f_{22}} \min_{\mathbf{e}} e_1e_2\nonumber \\
&& s.t.
\left\{
\begin{array}{ll}
\left[e_1,e_2\right]^T\in \mathcal{E},f_{ij}>0,i,j\in\{1,2\},\\
e_1e_2>0,f_{11}+f_{12}+f_{21}+f_{22}=1.
\end{array}
\right.
\end{eqnarray}
Our task is to analytically solve \eqref{eqn:modulator_design}.
To do that, we first simplify \eqref{eqn:modulator_design} by finding all the possible minimum terms.
\subsection{Equivalent Simplification of Design Problem}\label{subsec:simplification}
For $2^{p}$-PAM, all the possible non-zero values of $e_1e_2$ are
\begin{eqnarray}\label{eqn:objective_function}
e_1e_2=\left(mf_{11}\pm nf_{12}\right)\left(mf_{21}\pm nf_{22}\right)\neq0,m,n\in\mathcal{B}_{2^p}.
\end{eqnarray}
\subsubsection{Preliminary simplification}
From \eqref{eqn:objective_function}, we observe the following facts.
\begin{enumerate}
\item $\forall m\neq0,m,n\in\mathcal{B}_{2^p}$, it holds that
\begin{subequations}
\begin{eqnarray}
\left(mf_{11}+ nf_{12}\right)\left(mf_{21}+nf_{22}\right)
\ge f_{11}f_{21}.
\end{eqnarray}
\item $\forall n\neq0$, $m,n\in\mathcal{B}_{2^p}$, it is true that
\begin{eqnarray}
\left(mf_{11}+ nf_{12}\right)\left(mf_{21}+nf_{22}\right)\ge f_{12}f_{22}.
\end{eqnarray}
\item $\forall k\neq0,m^2+n^2\neq0,k,m,n\in\mathcal{B}_{2^p}$, we have
\begin{eqnarray}
\frac{\left(kmf_{11}- knf_{12}\right)\left( kmf_{21}- knf_{22}\right)}{\left(mf_{11}-nf_{12}\right)\left(mf_{21}-nf_{22}\right)}
=k^2\ge 1.
\end{eqnarray}
\end{subequations}
\end{enumerate}
So, all the possible minima of $e_1e_2$ in \eqref{eqn:modulator_design} are
$f_{11}f_{21}$, $f_{12}f_{22}$ and $\left(mf_{11}-nf_{12}\right)\left(mf_{21}-nf_{22}\right)$,
where $\frac{n}{m}$ is irreducible, i.e., $m\perp n$. These terms are denoted by
$F_{10}=f_{12}f_{22}\left(\frac{f_{11}}{f_{12}}\times\frac{f_{21}}{f_{22}}\right),F_{01}=f_{12}f_{22}
$ and $F_{mn}=f_{12}f_{22}\left(m\frac{f_{11}}{f_{12}}-n\right)\left(m\frac{f_{21}}{f_{22}}-n\right)$.
After putting aside the common term $f_{12}f_{22}$, we can see that $F_{mn}$ is a piecewise linear function of $\frac {f_{11}}{f_{12}}$ and $ \frac{f_{21}}{f_{22}}$, respectively. So, \eqref{eqn:modulator_design} can be solved by fragmenting the interval $\left[0,\infty\right)$ into disjoint subintervals, delimited by the breakpoints where $F_{mn}=0$.
To characterize this sequence, there exists an elegant mathematical tool in number theory presented below.
\subsubsection{Farey sequences}
First, we observe some specific examples of the breakpoint sequences.
For OOK (i.e., unipolar $2$-PAM), the breakpoints are $\frac{0}{1}, \frac{1}{1},\infty$.
For 4-PAM, they are $\frac{0}{1},\frac{1}{3},\frac{1}{2},\frac{2}{3},\frac{1}{1},\frac{3}{2},\frac{2}{1},\frac{3}{1},\infty$.
For 8-PAM, we have the breakpoint sequence with the former part being
\begin{subequations}
\begin{eqnarray}\label{eqn:before_1}
\frac{0}{1},\frac{1}{7},\frac{1}{6},\frac{1}{5},\frac{1}{4},
\frac{2}{7},\frac{1}{3},\frac{2}{5},\frac{3}{7},\frac{1}{2},
\frac{4}{7},\frac{3}{5},\frac{2}{3},\frac{5}{7},
\frac{3}{4},\frac{4}{5},\frac{5}{6},\frac{6}{7},\frac{1}{1}&&
\end{eqnarray}
and the remaining being
\begin{eqnarray}\label{eqn:after_1}
\frac{7}{6},\frac{6}{5},\frac{5}{4},\frac{4}{3}, \frac{7}{5},\frac{3}{2},\frac{5}{3},\frac{7}{4},\frac{2}{1},
\frac{7}{3},\frac{5}{2},\frac{3}{1},\frac{7}{2},\frac{4}{1},\frac{5}{1},\frac{6}{1}
,\frac{7}{1},\infty&&
\end{eqnarray}
\end{subequations}
Through these special examples, we find that the series of breakpoints before $\frac{1}{1}$ (such as the sequence in \eqref{eqn:before_1}) is
what is known as the Farey sequence~\cite{hardy1979introduction}.
The Farey sequence $\mathfrak{F}_k$ for any positive integer $k$ is the set
of irreducible rational numbers $\frac{a}{b}$ with $0\leq a\leq b\leq k$, arranged in increasing order.
The series of breakpoints after $\frac{1}{1}$ (such as the sequence in \eqref{eqn:after_1}) is
the reciprocal version of the Farey sequence. Thus, our focus is on the sequence before $\frac{1}{1}$.
The Farey sequence has many interesting properties~\cite{hardy1979introduction}; those closely relevant to our problem are given as follows.
\begin{lemma}\label{lemma:farey_sequence}
If $\frac{n_1}{ m_1}$, $\frac{n_2}{ m_2}$ and $\frac{n_3}{ m_3}$ are three successive terms of $\mathfrak{F}_k,k>3$ and $\frac{n_1}{ m_1}<\frac{ n_2}{ m_2}<\frac{ n_3}{ m_3}$,
then,
\begin{enumerate}
\item $ m_1n_2-m_2n_1=1$ and $m_1+m_2\ge k+1$.
\item $\frac{n_1+n_2}{m_1+m_2}\in\left(\frac{n_1}{m_1},\frac{n_3}{m_3}\right)$ and $\frac{n_2}{m_2}=\frac{n_1+n_3}{m_1+m_3}$.
\end{enumerate}
~\hfill\hfill $\blacksquare$
\end{lemma}
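Before proceeding, the defining order and the neighbor identity of Lemma \ref{lemma:farey_sequence} can be checked numerically. The following short sketch (Python, using the standard \texttt{fractions} module; the function name is ours) generates $\mathfrak{F}_k$ and verifies $m_1n_2-m_2n_1=1$ for all successive pairs; for $k=7$ it reproduces the breakpoint sequence in \eqref{eqn:before_1}:
\begin{verbatim}
from fractions import Fraction

def farey(k):
    # all irreducible a/b with 0 <= a <= b <= k, in increasing order
    return sorted({Fraction(a, b) for b in range(1, k + 1)
                                  for a in range(0, b + 1)})

F7 = farey(7)
for f1, f2 in zip(F7, F7[1:]):
    # successive n1/m1 < n2/m2 satisfy m1*n2 - m2*n1 = 1
    lhs = f1.denominator * f2.numerator
    rhs = f2.denominator * f1.numerator
    assert lhs - rhs == 1
\end{verbatim}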
However, Lemma \ref{lemma:farey_sequence} alone is not enough to solve our design problem in \eqref{eqn:modulator_design}. We need to develop further new properties of Farey sequences, given in Properties~\ref{property:local_two_worst_cases}, \ref{property:the_local_solution} and \ref{th:min}.
\begin{property} \label{property:local_two_worst_cases} Given $k>3$, assume $\frac{n_0}{ m_0},\frac{n_1}{m_1},\frac{n_2}{ m_2},\frac{n_3}{ m_3}\in\mathfrak{F}_{k}$ and $\frac{n_0}{m_0}<\frac{n_1}{m_1}<\frac{n_2}{m_2}<\frac{n_3}{m_3}$.
If $\frac{n_1}{m_1}$ and $\frac{n_2}{ m_2}$ are successive,
then, $\frac{n_1+n_3}{m_1+m_3}\ge\frac{n_2}{m_2}$ and $\frac{n_0+n_2}{m_0+m_2}\le\frac{n_1}{m_1}$.
~\hfill\hfill $\blacksquare$
\end{property}
\begin{property} \label{property:the_local_solution}
Assume $\frac{n_1}{m_1},\frac{n_2}{m_2}\in \mathfrak{F}_{k}, k>3$ and $\frac{n_1}{m_1}<\frac{n_2}{m_2}$. Then,
\begin{enumerate}
\item $\frac{n_1}{m_1}<\frac{n_1+n_2}{m_1+m_2}<\frac{n_2}{m_2}$ holds.
\item If $\frac{f_{11}}{f_{12}},\frac{f_{21}}{f_{22}}\in\left(\frac{n_1}{m_1},\frac{n_1+n_2}{m_1+m_2} \right)$, then, $F_{m_1n_1}<F_{m_2n_2}$.
\item If $\frac{f_{11}}{f_{12}},\frac{f_{21}}{f_{22}}\in\left(\frac{n_1+n_2}{m_1+m_2},\frac{n_2}{m_2} \right)$, then, $F_{m_1n_1}>F_{m_2n_2}$.
\item If $\frac{f_{11}}{f_{12}}=\frac{f_{21}}{f_{22}}=\frac{n_1+n_2}{m_1+m_2}$, then, $F_{m_1n_1}=F_{m_2n_2}$.
\end{enumerate}
~\hfill\hfill $\blacksquare$
\end{property}
Using Properties~\ref{property:local_two_worst_cases} and~\ref{property:the_local_solution}, we attain the following property.
\begin{property}\label{th:min}
If $\frac{n_1}{m_1}$ and $\frac{n_2}{m_2}$ are successive in $\mathfrak{F}_{k}$ and $\frac{f_{11}}{f_{12}},\frac{f_{21}}{f_{22}}\in\left(\frac{n_1}{m_1},\frac{n_2}{m_2} \right)$,
then, $ F_{m_1n_1}$ and $F_{m_2n_2}$ are the two worst cases.~\hfill\hfill $\blacksquare$
\end{property}
\subsection{Techniques to Solve The Max-min Problem}\label{subsec:max_min}
Thanks to Farey sequences, \eqref{eqn:modulator_design} is transformed into a piecewise max-min problem with two objective functions. Solving this kind of problem yields our code construction, which can be presented as the following theorem.
\begin{theorem}\label{theorem:golbal_solution}
The solution to~\eqref{eqn:modulator_design} is determined by
\begin{eqnarray}\label{eqn:global_optimal_modulator}
\mathbf{F} =\frac{1}{2+2^{p+1}}\left(
{\begin{array}{ccc}
1&2^p\\
1&2^p\\
\end{array}}
\right),
\textrm{or}~\frac{1}{2+2^{p+1}}\left(
{\begin{array}{ccc}
2^p&1\\
2^p&1\\
\end{array}}
\right).
\end{eqnarray}
~\hfill \hfill $\blacksquare$
\end{theorem}
Theorem~\ref{theorem:golbal_solution} uncovers the fact that the optimal linear space-coded symbols are actually unipolar $2^{2p}$-ary PAM symbols,
since $\mathcal{B}_{2^{2p}}=\{s_1+2^p s_2:s_1,s_2\in\mathcal{B}_{2^p}\}$. Therefore, we have rigorously proved that RC~\cite{navidpour2007itwc} is optimal in the sense of the criterion established in this paper.
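This equivalence is easy to verify directly: with the optimal $\mathbf{F}$ in \eqref{eqn:global_optimal_modulator}, each row maps the pair $(s_1,s_2)\in\mathcal{B}_{2^p}^2$ to $(s_1+2^ps_2)/(2+2^{p+1})$, which enumerates a scaled unipolar $2^{2p}$-ary PAM constellation. A quick check (a Python sketch using exact rational arithmetic; $p=2$ is illustrative) is given below:
\begin{verbatim}
from fractions import Fraction
from itertools import product

p = 2
scale = Fraction(1, 2 + 2**(p + 1))
row = (Fraction(1) * scale, Fraction(2**p) * scale)  # one row of F
coded = {row[0]*s1 + row[1]*s2
         for s1, s2 in product(range(2**p), repeat=2)}
assert coded == {b * scale for b in range(2**(2*p))}
\end{verbatim}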
\section{Computer Simulations }\label{sec:numerical_results}
In this section, we carry out computer simulations to verify our newly developed criterion in \eqref{eqn:dominant_term}. Since our work is, to the best of our knowledge, the first of its kind, the only space-only transmission scheme available in the literature for comparison is spatial multiplexing (SM). Accordingly, we compare the performance of spatial multiplexing and the FLDSC specifically designed for $2\times2$ MIMO-OWC in Section \ref{sec:design_example}.
In addition, we suppose that $h_{ij},i,j=1,2$ are independently and identically distributed and let $\sigma_{11}=\sigma_{12}=\sigma_{21}=\sigma_{22}=\sigma$.
These schemes are as follows:
\begin{enumerate}
\item \textit{FLDSC}. The optical power is normalized such that $\sum_{i,j=1}^2f_{ij}=2$, which yields $E\left[\sum_{i,j=1}^2 f_{ij}s_j\right]=1$. From \eqref{eqn:global_optimal_modulator}, the coding matrix is
$ \mathbf{F} =
\frac{1}{3}\left({
\begin{array}{cc}
2& 1\\
2& 1\\
\end{array}
}\right)$.
\item \textit{SM}. We fix the modulation formats to be OOK and vary $\sigma^2$. So the rate is 2 bits per channel use (pcu). The transmitted symbols $s_1,s_2$ are chosen from $\{0,1\}$ equally likely. The average optical power is $E\left[s_1+s_2\right]=1$.
\end{enumerate}
\begin{figure}[!htp]
\centering
\resizebox{7cm}{!}{\includegraphics{positive_modulator.pdf}}
\centering \caption{BER comparisons of FLDSC and spatial multiplexing.}
\label{fig:modulated_unmodulated}
\end{figure}
\begin{figure}[!htp]
\centering
\resizebox{7cm}{!}{\includegraphics{traditional_multiplexing_scheme.pdf}}
\centering \caption{BER performance of spatial multiplexing.}
\label{fig:multiplexing_MIMO}
\end{figure}
We can see that both schemes have the same spectral efficiency, i.e., 2 bits pcu, and the same optical power. Through numerical results, we have the following observations.
Substantial enhancement from FLDSC is achieved, as shown in Fig. \ref{fig:modulated_unmodulated}. For $\sigma^2=0.01$, the improvement is almost 16 dB at the target bit error rate (BER) of $10^{-2}$. For $\sigma^2=0.5$, the improvement is almost 6 dB at the target BER of $10^{-3}$. Note that the small-scale gain also governs the negative slope of the error curve. The decay of the error curve of FLDSC is exponential in terms of $\ln\frac{\rho}{\ln^2\rho}$, whereas that of SM is polynomial with respect to $\rho$, even worse than single-input-single-output (SISO).
SM presents only small-scale diversity gain, as illustrated in Fig. \ref{fig:multiplexing_MIMO}. By varying the variance of $\mathbf{H}$, we find that the error curve decays as $\rho^{-1}$ as long as the SNR is high enough. From $\sigma^2=0.001$ to $\sigma^2=0.1$, the error curve has a horizontal shift, which is typical of RF MIMO~\cite{tarokh98}. The reason is given as follows. The equivalent space coding matrix is
$\mathbf{ X}\left(\mathbf{e}\right)=\left(
{\begin{array}{cc}
e_{1}^2&e_1e_2\\
e_1e_2&e_2^2\\
\end{array}
}\right),e_1,e_2\in\{0,\pm1\}$ with $e_1^2+e_2^2\neq0$. It should be noted that there exist two typical error events: $e_1e_2=-1$ and $e_1e_2=0$. From the necessity proof of Theorem \ref{theorem:space_code_full_diversity}, for $e_1e_2=-1$, the attained large-scale diversity gain is zero; at the same time, if $e_1e_2=0$ with $e_1^2+e_2^2\neq0$, then the attained large-scale diversity gain is only two for $2\times2$ MIMO-OWC. Therefore, the overall large-scale diversity gain of SM is zero, and only a small-scale diversity gain is attained.
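This classification of the SM error events can be illustrated with a tiny enumeration (plain Python; the check implements the unipolar, no-zero-entry condition of Theorem \ref{theorem:space_code_full_diversity}):
\begin{verbatim}
from itertools import product

E = [(e1, e2) for e1, e2 in product([-1, 0, 1], repeat=2)
     if (e1, e2) != (0, 0)]
full_div = [e for e in E if e[0] * e[1] > 0]  # unipolar, no zeros
print(full_div)  # only (-1,-1) and (1,1): most error events fail
\end{verbatim}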
\section{Conclusion and Discussions}
In this paper, we have established a general criterion on the full-diversity space coded transmission of MIMO-OWC for the ML receiver, which is, to the best of our knowledge, the first design criterion for the full large-scale diversity transmission of optical wireless communications with IM/DD over log-normal fading channels. Particularly, for a $2\times 2$ case, we have attained an optimal closed-form FLDSC, rigorously proving that RC is the best among all the linear space codes. Our results clearly indicate that the transmission design is indeed necessary and essential for significantly improving the overall error performance of MIMO-OWC.
\section{Acknowledgements}
This work was supported in part by Key Laboratory of Universal Wireless Communications (Beijing University of Posts and Telecommunications), Ministry of Education of P. R. China under Grant No. KFKT-2012103, in part by NNSF of China (Grant No. 61271253) and in part by NHTRDP of China (``863'' Program) (Grant No. 2013AA013603).
\bibliographystyle{ieeetr}
\section{Introduction}
Coalescing compact binaries have been pointed out as the most promising
source of gravitational waves for the LIGO/VIRGO/TAMA/GEO
interferometers\cite{kip:300,Schutz:rev}. These binaries
typically have formed a long time ago, giving them time to radiate
most of their eccentricity away. The templates (models of the radiation)
needed for matched filtering are thus constructed according to this
assumption. Gravitational waves emitted by a circular binary will be
explicitly searched for in the output of the detectors, but not
gravitational waves emitted by eccentric binaries. The scenario we
have in mind allows
for the formation of young eccentric binary systems, young
enough that they have not had time to be fully circularized by the
radiation reaction. For example, the collapse of a dense Newtonian
globular cluster can lead to the
formation of a copious number of eccentric binaries via two- and three-body
encounters\cite{ST,SQ}.
These eccentric binaries will emit strongly in the frequency band of the
LIGO interferometers. It
may seem that these eccentric binaries can be dealt with by
incorporating adequate templates in the bank of templates already
available, but this may prove to be inefficient. The addition of new
templates has two undesirable effects: It adds to the already
heavy computational burden associated with data processing
and it increases the probability of false detection (mistaking the
noise in the detector for a signal). A better solution might be to search
for these eccentric signals with the circular templates, and once a signal
is concluded to be present, to extract the information
using eccentric templates.
For this to be possible, the circular templates have to follow
the phase of the eccentric signals very well. To assess the quality of the
circular templates at modeling eccentric signals, special detection tools
are needed.
\section{Matched filtering as a detection method}
Gravitational wave signals are very weak and at best they will be of the
same order of magnitude as the noise in the detectors. This motivates
the general belief that matched filtering will be needed to extract the
signals from the noisy output of the detectors \cite{kip:300}. When the
signals are of known shape, this technique produces the highest
signal-to-noise ratio\cite{HuFla}. Suppose a gravitational wave $h(t)$
reaches the detector. The output of the detector $o(t)$ is then a
superposition of the useful signal $h(t)$ and the noise $n(t)$. In
matched filtering, the signal is extracted by using a theoretical
template (theoretical model) that mimics the signal as well as possible;
we call this template $m(t,\bvec{\Omega})$. The vector \bvec{\Omega}
denotes the
parameters that characterize the template. If the template were a perfect
copy of the signal, the parameters would represent the real parameters of
the source, such as its mass and distance from earth. If the
templates are not a perfect approximation to the real signal, the
parameters \bvec{\Omega} represent phenomenological parameters.
If instead of working in the time domain we work in frequency space, we
can introduce the natural inner product of matched filtering.
For two functions $a(t)$ and $b(t)$ with Fourier transforms
$\tilde{a}(f)$ and $\tilde{b}(f)$, the inner product is defined
as\cite{Apostolatos}
\begin{eqnarray}
(a|b)&=&2 \int_{0}^{\infty}{\mathrm d} f
\frac{\tilde{a}^{*}(f)\tilde{b}(f)+\tilde{a}(f)\tilde{b}^{*}(f)}{S_{n}(f)}\, ,
\end{eqnarray}
where a ``\,\,*\,\,'' denotes complex conjugation and $S_{n}(f)$ is the
one-sided spectral density of the detector's noise. In terms of this
inner product, the average signal to noise ratio is\cite{wz}
\begin{eqnarray}
\left<\rho\right>&=&\frac{(m(\bvec{\Omega})|h)}
{\sqrt{(m(\bvec{\Omega})|m(\bvec{\Omega}))}}\, .
\label{eqn:int_snr}
\end{eqnarray}
In practice, the set of parameters \bvec{\Omega} is varied until a maximum
of the signal-to-noise ratio is found. This maximum is the
signal-to-noise ratio achievable by using
$\tilde{m}(f,\bvec{\Omega})$ as a template. The signal-to-noise ratio of
equation (\ref{eqn:int_snr}) does not give any information about the
quality of the templates or, equivalently, how well the template models
the signal. The
Schwarz inequality provides an answer to this question\cite{wz}. The
absolute maximum the signal-to-noise ratio can take is achieved when
the template is a {\em perfect} match of the signal, and the
parameters \bvec{\Omega} correspond to the parameters of the source
($\tilde{m}(f,\bvec{\Omega})\equiv \tilde{h}(f)$). The optimal SNR
is\cite{wz}
\begin{eqnarray}
\left<\rho\right>_{max}&=&\sqrt{(h|h)}\, .\label{eqn:snr_max}
\end{eqnarray}
By dividing the signal-to-noise ratio (equation (\ref{eqn:int_snr})) by
the value achieved by optimal filtering (equation (\ref{eqn:snr_max})),
we construct the ambiguity function ${\mathcal A}(\bvec{\Omega})$:
\begin{eqnarray}
{\mathcal A}(\bvec{\Omega}) &=&\frac{(m(\bvec{\Omega})|h)}
{\sqrt{(m(\bvec{\Omega})|m(\bvec{\Omega}))(h|h)}}
\, . \label{eqn:A}
\end{eqnarray}
This function takes values between 0 and 1. It is equal to 1
when the optimal template is used.
The value of the parameters \bvec{\Omega} can be varied until
${\mathcal A}(\bvec{\Omega})$ is maximized. The maximum value of the
ambiguity function is the fitting factor:
\begin{eqnarray}
FF=\max_{\bvec{\Omega}}{\mathcal A}(\bvec{\Omega})\, . \label{eqn:FF}
\end{eqnarray}
The fitting factor is a direct measure of
the template's quality since it can be related to the loss of event rate,
i.e. the number of events missed by using an inappropriate set of
templates. This
loss is calculated according to $1-FF^{3}$\cite{Apostolatos}. For
example, if the fitting factor is
0.8, then $48.8\%$ of the events would be mistaken for noise. We adopt a
threshold of $FF=0.9$ for the present work. This corresponds to a
loss in event rate of $27\%$.
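To make these detection tools concrete, the sketch below (Python with NumPy) discretizes the inner product, the ambiguity function of equation (\ref{eqn:A}), and the grid maximization of equation (\ref{eqn:FF}). The noise curve $S_{n}(f)$ used here is a purely illustrative toy model, not a realistic LIGO noise spectrum, and the template family is left as a placeholder:
\begin{verbatim}
import numpy as np

f = np.linspace(40.0, 1000.0, 4000)                  # frequency grid [Hz]
Sn = 1e-46 * ((f / 150.0)**-4 + 1 + (f / 150.0)**2)  # toy noise curve
df = f[1] - f[0]

def inner(a, b):
    # (a|b) = 2 int df [a*(f) b(f) + a(f) b*(f)] / Sn(f)
    integrand = (np.conj(a) * b + a * np.conj(b)).real / Sn
    return 2.0 * np.sum(integrand) * df

def ambiguity(m, h):
    return inner(m, h) / np.sqrt(inner(m, m) * inner(h, h))

# fitting factor: maximize over the template parameters Omega, e.g.
# FF = max(ambiguity(template(Omega), h) for Omega in parameter_grid)
\end{verbatim}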
\section{The gravitational waveforms}
We calculate the waveforms for both circular and eccentric binaries in
the quadrupole approximation. In this approximation the waveforms are
given by\cite{mtw}
\begin{eqnarray}
h^{TT}_{i j}=\frac{2}{R}\frac{{\mathrm d}^{2}}{{\mathrm d} t^{2}}\left(I_{i
j}-\frac{1}{3}\delta_{i j}I^{k}\,_{k}\right)^{TT}\, ,
\label{eqn:quadrupole}
\end{eqnarray}
where $R$ is the distance between the source and the observer, $I_{i
j}$ is the source's quadrupole moment
and the superscript $TT$ reminds us that gravitational waves are
traceless and live in the plane transverse to the direction of
propagation.
For eccentric binary systems, the waveforms are\cite{Wahlquist}
\begin{eqnarray}
\mathrm{h_{+}}&=&h_{xx}=-h_{yy}=\frac{1}{R}
\frac{\mu}{p}\Bigg\{2
(1+\cos^{2}\theta_{o})\cos 2(\varphi-\varphi_{o}) \nonumber \\
&+& e \left[ (1+\cos^{2}
\theta_{o})\left(\frac{5}{2}\cos(\varphi-2\varphi_{o})+
\frac{1}{2}\cos(3\varphi-2\varphi_{o})\right) +\sin^{2}\theta_{o}
\cos\varphi \right] \nonumber\\
&+&e^{2}\left[(1+\cos^{2} \theta_{o})\cos
2\varphi_{o}+\sin^{2}
\theta_{o}\right]\Bigg\}\, , \label{eqn:h+}\\
\mathrm{h}_{\times}&=&h_{xy}=h_{yx}=-\frac{1}{R}\frac{\mu}{p}
\cos \theta_{o}\Bigg\{ 4\sin 2(\varphi-\varphi_{o}) \nonumber \\
&+&e\left[5\sin (\varphi-2\varphi_{o})+\sin
(3\varphi-2\varphi_{o})\right] -2e^{2}\sin 2 \varphi_{o}
\Bigg\}\, ,\label{eqn:hx}
\end{eqnarray}
where $\theta_{o}$ and $\varphi_{o}$ are the two angles defining the
location of the observer with respect to the orbital plane, $\mu$ is the
reduced mass, and $p$ and $e$ are defined in terms of the turning points
$(r_{\pm})$ of the Newtonian orbit as
\begin{eqnarray*}
r_{\pm}&=&\frac{M p}{1 \pm e}\, .
\end{eqnarray*}
\begin{figure}[!t]
\begin{center}
\epsfxsize=4.5in
\epsfysize=3.0in
\epsfbox{fig.ps}
\caption{Figure A: The ratio of the optimal signal-to-noise ratio
$\left<\rho (e_{o})\right>$ to the signal-to-noise ratio ($\left<\rho
(0)\right>$).
The figure shows that eccentric binary systems will be easier to
detect if they are explicitly searched for at the output of the
interferometers.\newline\protect
Figure B: The fitting factor as a function of $e_{o}$
for various binary systems. Two trends are apparent. The first one is
the net decrease in the fitting factor as $e_{o}$ increases, while the
total mass of the binary is held fixed. The second one is the increase
in the detection probability when the total mass of the
binary increases. The various binaries studied are labeled by the two
masses of the companions; they are given
in units of the solar mass.}\label{fig:fit}
\end{center}
\end{figure}
The eccentric waveforms oscillate at once, twice, and three times the orbital
frequency, whereas the circular waveforms oscillate only at twice the
orbital frequency. We parameterize our binaries by
specifying the two masses and the eccentricity $e_{o}$ they have when
they first enter the LIGO frequency band (at 40 Hz).
The Fourier transform of the waveforms is calculated numerically
for the eccentric signal, and obtained through the stationary phase
approximation for the circular templates\cite{FC}. Once these Fourier
transforms are known, it is
straightforward to calculate the optimal SNR (equation
(\ref{eqn:snr_max})), build the ambiguity function (equation
(\ref{eqn:A})), and maximize it over the different parameters of the
templates to get the fitting factor (equation (\ref{eqn:FF})).
The results for the signal-to-noise ratio and the fitting factor are
displayed in figure (\ref{fig:fit}). The signal-to-noise ratio for an
eccentric signal is
higher than the ratio obtained for an equivalent circular binary.
This means that if both binaries are located at the same distance $R$, the
eccentric binary will emit stronger radiation and will be easier to
detect if optimal filters are used. On the other hand, the fitting factor
decreases
as the initial eccentricity is increased. The circular templates fail to
model the eccentric signal properly. The good news is that circular
templates are still accurate enough to detect some eccentric signals.
For example, a neutron star binary system will be detected as long as its
initial eccentricity does not exceed 0.13. If the total mass of the
system is increased, the detection probability increases as well. For
example, for a system of two 8.0\,$M_{\odot}$ black holes, the initial
eccentricity can be as high
as 0.33. This trend is explained in the following way. As the total mass
of the system increases, the radiation it emits is stronger and the system
coalesces in a shorter time. The shorter the signal, the less opportunity
the circular templates have to go out of phase with it.
This is good news, because more massive systems also emit
stronger signals and are therefore easier to
detect. Thus, the LIGO interferometers
should be able to detect radiation from some eccentric binaries, those
with large masses and relatively low eccentricities\cite{us}.
\newline
\newline
\textbf{Acknowledgments}: This work was carried out with
\'E.~Poisson. It was supported by NSERC.
\section{Introduction}
Modern radio interferometric arrays deliver large volumes of data in order to reach higher sensitivities, yielding new science. To reach the full potential of such arrays, estimation of systematic errors in the data and correction for such errors (also called calibration) is essential. This is not a trivial task for an array with hundreds of receivers that collect data over many hours and at thousands of different frequencies. A case in point is the Square Kilometre Array (SKA), which is in the planning phase. Thus, there is an urgent need for computationally efficient and robust algorithms. On the other hand, there is a surge in research related to large-scale and distributed data processing algorithms (also called big data), which we can exploit to solve some of these problems.
Our recent work \cite{DCAL} introduced distributed calibration as a way of distributing the computational burden over a network of computers while at the same time improving the quality of calibration. We essentially exploited the continuity of systematic errors over frequency to enforce an additional constraint on calibration. This reduces calibration to a consensus optimization \cite{boyd2011} problem, and we used the alternating direction method of multipliers (ADMM) \cite{BT} as the underlying algorithm in the proposed distributed calibration scheme.
Consensus optimization, practically implemented with ADMM, has been extensively studied and is deployed in a wide variety of application areas (some recent examples are \cite{Chang2014,Wei2012,Erseghe12}). In addition, similar work is beginning to appear in radio astronomical imaging \cite{Ferrari2014,PURIFY,Onose}. However, compared with other users of ADMM, we observe several unique properties of the calibration problem that we face. First, the cost function used in calibration is non-linear and non-convex. The systematic errors are mainly caused by directional effects such as the ionosphere and the receiver beam shape. Although we know the general properties of such errors, building an entirely accurate model (for instance, of their variation with frequency) is not feasible. Hence, we enforce consensus only by using an approximate model, and this is clearly different from, and also more involved than, most other applications. Indeed, other applications such as consensus averaging, where consensus is enforced on a constant value, use a perfect model. Furthermore, most other applications use complicated network topologies (that in turn affect the performance of ADMM), whereas in our case, we have a much simpler (and fully connected) network with one fusion center.
Of particular interest is the convergence rate of ADMM, which depends on many factors including the penalty parameter and the network topology \cite{nishihara2015general}. In most cases, the penalty parameter is selected by trial and error, following some general guidelines \cite{BT}. However, for specific problems, better methods to select the penalty have been proposed \cite{nishihara2015general,Teix2016,Ghadimi2015}. Recent work \cite{Hong15} has suggested selecting the penalty parameter as large as possible to make the objective function strongly convex. Hence, for our problem, we study the Hessian of the cost function to select appropriate values for the penalty parameter. For calibration along multiple directions in the sky, we can select different penalty values along each direction. Intuitively, we select a large penalty along directions with a higher signal, where we have more confidence in our model. These directions are mostly close to the center of the field of view. On the other hand, for directions far away from the center, we select a smaller penalty.
The rest of the paper is organized as follows: In section \ref{sec:calib} we give an overview of radio interferometric calibration. Next, in section \ref{sec:dist}, we present distributed calibration based on consensus optimization. We also present a scheme based on the Hessian of the cost function to select the penalty parameter. Simulation results are presented in section \ref{sec:results} where we demonstrate the improved performance with a refined penalty parameter. Finally, we draw our conclusions in section \ref{sec:conclusions}.
Notation: Matrices and vectors are denoted by bold upper and lower case letters as ${\bf J}$ and ${\bf v}$, respectively. The transpose and the Hermitian transpose are given by $(.)^T$ and $(.)^H$. The matrix Frobenius norm is given by $\|.\|$. The sets of real and complex numbers are denoted by ${\mathbb R}$ and ${\mathbb C}$. The identity matrix is given by $\bf I$. The matrix trace operator is given by $\rm{trace}(.)$.
\section{Radio Interferometric Calibration}\label{sec:calib}
Consider a radio interferometric array with $N$ receivers. The sky is composed of many discrete sources and we consider calibration along $K$ directions in the sky. The observed data at a baseline formed by two receivers, $p$ and $q$, is given by
\cite{HBS}
\begin{equation} \label{ME}
{\bf V}_{pq}=\sum_{k=1}^K{\bf J}_{pk} {\bf C}_{pqk} {\bf J}_{qk}^H + {\bf N}_{pq}
\end{equation}
where ${\bf V}_{pq}$ ($\in \mathbb{C}^{2\times 2}$) is the observed {\em visibility} matrix (or the cross correlations). The systematic errors that need to be calibrated for station $p$ and $q$ are given by the Jones matrices ${\bf J}_{pk},{\bf J}_{qk}$ ($\in \mathbb{C}^{2\times 2}$), respectively. Note that since $K$ directions are calibrated, for each station, there are $K$ Jones matrices (so $KN$ in total). The sky signal (or {\em coherency}) along the $k$-th direction is given by ${\bf C}_{pqk}$ ($\in \mathbb{C}^{2\times 2}$) and is known a priori. The values of ${\bf J}_{pk},{\bf J}_{qk}$ and ${\bf C}_{pqk}$ in (\ref{ME}) are implicitly dependent on sampling time and frequency of the observation. The noise matrix ${\bf N}_{pq}$ ($\in \mathbb{C}^{2\times 2}$) is assumed to have complex, zero mean, circular Gaussian elements.
Estimating the Jones matrices in (\ref{ME}) can be further simplified by using the space alternating generalized expectation maximization (SAGE) algorithm \cite{Fess94,Kaz2}. In a nutshell, using the SAGE algorithm, we can reduce calibration along $K$ directions to $K$ single-direction calibration subproblems (see \cite{Kaz2} for details). Calibration along the $k$-th direction is done by using the effective observed data
\begin{equation} \label{ME1}
{\bf V}_{pqk} = {\bf V}_{pq} - \sum_{l=1,l\ne k}^K\widehat{\bf J}_{pl} {\bf C}_{pql} \widehat{\bf J}_{ql}^H
\end{equation}
using the current estimates $\widehat{\bf J}_{pl}$ and $\widehat{\bf J}_{ql}$. For an array with $N$ receivers, we can form at most $N(N-1)/2$ baselines that collect visibilities as in (\ref{ME1}), for any given time and frequency sample. We define our objective function (for the $k$-th direction) under a Gaussian noise model as
\begin{equation} \label{cost1}
g_{k}({\bf J}_{1k},{\bf J}_{2k},\ldots)= \sum_{p,q}\| {\bf V}_{pqk} - {\bf J}_{pk} {\bf C}_{pqk} {\bf J}_{qk}^H \|^2
\end{equation}
where the summation is over the baselines $pq$ that have data. By increasing the time and frequency interval within which data are collected, this summation can be expanded (thus improving the signal-to-noise ratio).
By defining ${\bf J}$ ($\in \mathbb{C}^{2N\times 2}$) as the augmented matrix of Jones matrices of all stations along the $k$-th direction,
\begin{equation}
{\bf J}\buildrel\triangle\over=[{\bf J}_{1k}^T,{\bf J}_{2k}^T,\ldots,{\bf J}_{Nk}^T]^T,
\end{equation}
and ${\bf A}_p$ ($\in \mathbb{R}^{2\times 2N}$) (and ${\bf A}_q$ likewise) as the canonical selection matrix
\begin{equation} \label{Ap}
{\bf A}_p \buildrel\triangle\over=[{\bf 0},{\bf 0},\ldots,{\bf I},\ldots,{\bf 0}],
\end{equation}
(only the $p$-th block of (\ref{Ap}) is an identity matrix) we can rewrite (\ref{cost1}) as
\begin{equation} \label{cost2}
g_{k}({\bf J})= \sum_{p,q}\| {\bf V}_{pqk} - {\bf A}_p{\bf J} {\bf C}_{pqk} ({\bf A}_q{\bf J})^H \|^2.
\end{equation}
Calibration along the $k$-th direction is the estimation of ${\bf J}$ by minimizing (\ref{cost2}). Note that (\ref{cost2}) has to be minimized for each direction $k=1,\ldots,K$, and updated values of (\ref{ME1}) are re-used until convergence is reached in the SAGE algorithm. We also note that (\ref{cost2}) only gives solutions for one frequency and time interval; to calibrate the full dataset, many such solutions are obtained for data observed at different time and frequency intervals.
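As an illustration of this per-direction step, the sketch below (Python with NumPy; the containers \texttt{V}, \texttt{C} and \texttt{J} are hypothetical data structures, not from any existing package) peels the current models of all other directions off the observed visibilities as in (\ref{ME1}) and evaluates the direction-$k$ cost (\ref{cost1}):
\begin{verbatim}
import numpy as np

def directional_cost(V, C, J, k, baselines):
    # V[(p,q)]: observed 2x2 visibility, C[(p,q,l)]: coherency,
    # J[l][p]: current 2x2 Jones estimate for direction l, station p
    cost = 0.0
    for (p, q) in baselines:
        Vk = V[(p, q)] - sum(J[l][p] @ C[(p, q, l)] @ J[l][q].conj().T
                             for l in range(len(J)) if l != k)
        R = Vk - J[k][p] @ C[(p, q, k)] @ J[k][q].conj().T
        cost += np.linalg.norm(R, 'fro')**2
    return cost
\end{verbatim}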
\section{Distributed Calibration}\label{sec:dist}
In section \ref{sec:calib}, we introduced calibration along $K$ directions, but only for a single frequency and time sample. In this section, we consider calibrating data observed at $P$ different frequencies, but only along one direction, because this can easily be extended to $K$ directions using the SAGE algorithm. We impose an additional constraint that tries to preserve the continuity of ${\bf J}$ in (\ref{cost2}) over frequency. To solve this, we introduced the use of consensus optimization in \cite{DCAL}, where the objective function is modified into an augmented Lagrangian
\begin{equation} \label{aug}
L_f({\bf J}_f,{\bf Z},{\bf Y}_f)=g_{f}({\bf J}_f) + \mathrm{trace}\left({\bf Y}_f^H({\bf J}_f-{\bf B}_f {\bf Z})\right) + \frac{\rho}{2} \|{\bf J}_f-{\bf B}_f {\bf Z}\|^2
\end{equation}
where the subscript $(.)_f$ denotes data (and parameters) at frequency $f$. In (\ref{aug}), $g_{f}({\bf J}_f)$ is the original cost function as in (\ref{cost2}), except that the subscripts denote frequency $f$. The Lagrange multiplier is given by ${\bf Y}_f$ ($\in \mathbb{C}^{2N\times 2}$). The calibration parameters are given by ${\bf J}_f$ ($\in \mathbb{C}^{2N\times 2}$). The continuity in frequency is enforced by the frequency model given by ${\bf B}_f$ ($\in \mathbb{R}^{2N\times 2NF}$), which is essentially a set of basis functions in frequency, evaluated at $f$. The global variable ${\bf Z}$ ($\in \mathbb{C}^{2NF\times 2}$) is shared by data at all $P$ frequencies.
The ADMM iterations for solving (\ref{aug}) are given as
\begin{eqnarray} \label{step1}
({\bf J}_f)^{n+1}= \underset{{\bf J}}{\argmin}\ \ L_f({\bf J},({\bf Z})^n,({\bf Y}_f)^n)\\ \label{step2}
({\bf Z})^{n+1}= \underset{{\bf Z}}{\argmin}\ \ \sum_f L_f(({\bf J}_f)^{n+1},{\bf Z},({\bf Y}_f)^n)\\ \label{step3}
({\bf Y}_f)^{n+1}=({\bf Y}_f)^n + \rho\left( ({\bf J}_f)^{n+1}-{\bf B}_f ({\bf Z})^{n+1} \right)
\end{eqnarray}
where we use the superscript $(.)^n$ to denote the $n$-th iteration. The steps (\ref{step1}) and (\ref{step3}) are done for each $f$ in parallel. The update of the global variable (\ref{step2}) is done at the fusion center. More details of these steps can be found in \cite{DCAL}.
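The overall structure of these iterations is compact enough to sketch in full. In the following Python/NumPy outline, \texttt{argmin\_J} is a hypothetical per-frequency solver for (\ref{step1}) (in practice a nonlinear least-squares routine); the ${\bf Z}$-update (\ref{step2}) is the closed-form least-squares solution $\left(\sum_f{\bf B}_f^T{\bf B}_f\right){\bf Z}=\sum_f{\bf B}_f^T\left({\bf J}_f+{\bf Y}_f/\rho\right)$:
\begin{verbatim}
import numpy as np

def admm_calibrate(V, B, rho, n_iter, argmin_J):
    P = len(V)                          # number of frequencies
    N2, F2 = B[0].shape                 # 2N x 2NF basis per frequency
    J = [np.tile(np.eye(2), (N2 // 2, 1)).astype(complex)
         for _ in range(P)]             # J_p = I for all stations
    Y = [np.zeros((N2, 2), complex) for _ in range(P)]
    Z = np.zeros((F2, 2), complex)
    for _ in range(n_iter):
        # J-update: local solve at each frequency (parallelizable)
        J = [argmin_J(V[f], Z, Y[f], B[f], rho) for f in range(P)]
        # Z-update: global least squares at the fusion center
        A = sum(B[f].T @ B[f] for f in range(P))
        rhs = sum(B[f].T @ (J[f] + Y[f] / rho) for f in range(P))
        Z = np.linalg.solve(A, rhs)
        # dual update on the Lagrange multipliers
        Y = [Y[f] + rho * (J[f] - B[f] @ Z) for f in range(P)]
    return J, Z
\end{verbatim}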
In this paper, we study strategies for selecting the penalty parameter $\rho$ to get faster convergence and accurate results. In order to do this, we use the Hessian operator of the cost function (\ref{cost2}), which is given as \cite{DCAL,ICASSP13},
\begin{eqnarray}\label{Hess}
\lefteqn{\mathrm{Hess}_f\left(g_{f}({\bf {J}}),{\bf {J}},{\bmath \eta}\right)}\\\nonumber
&=&\sum_{p,q}\left( {\bf {A}}_p^T \left( ({\bf {V}}_{pqf}-{\bf {A}}_p{\bf {J}}{\bf {C}}_{pqf}{\bf {J}}^H{\bf {A}}_q^T) {\bf {A}}_q {\bmath \eta}\right.\right.\\\nonumber
&& \left.\left.- {\bf {A}}_p({\bf {J}}{\bf {C}}_{pqf} {\bmath \eta}^H + {\bmath \eta}{\bf {C}}_{pqf}{\bf {J}}^H) {\bf {A}}_q^T{\bf {A}}_q{\bf {J}}\right) {\bf {C}}_{pqf}^H\right. \\\nonumber
&&\left. + {\bf {A}}_q^T \left( ({\bf {V}}_{pqf}-{\bf {A}}_p{\bf {J}}{\bf {C}}_{pqf}{\bf {J}}^H{\bf {A}}_q^T)^H {\bf {A}}_p {\bmath \eta}\right.\right.\\\nonumber
&& \left.\left.- {\bf {A}}_q({\bf {J}}{\bf {C}}_{pqf} {\bmath \eta}^H + {\bmath \eta}{\bf {C}}_{pqf}{\bf {J}}^H)^H {\bf {A}}_p^T{\bf {A}}_p{\bf {J}}\right) {\bf {C}}_{pqf}\right) \\\nonumber
\end{eqnarray}
where ${\bmath \eta}\in \mathbb{C}^{2N\times 2}$.
For convexity, we need a positive definite Hessian. Since we have a Hessian operator (instead of a matrix), we need to find the smallest eigenvalue of the Hessian, and for convexity, this should be positive. In order to find this, we define a cost function as
\begin{eqnarray} \label{hcost}
\lefteqn{h({\bmath \eta})\buildrel\triangle\over= \frac{1}{2}\mathrm{trace}\left({\bmath \eta}^H \mathrm{Hess}_f\left(g_{f}({\bf {J}}),{\bf {J}},{\bmath \eta}\right)\right.}\\\nonumber
&&+\left.\mathrm{Hess}_f^H\left(g_{f}({\bf {J}}),{\bf {J}},{\bmath \eta}\right) {\bmath \eta}\right)
\end{eqnarray}
and we find the smallest eigenvalue $\lambda$ by solving
\begin{eqnarray} \label{eig}
&&\lambda=\underset{{\bmath \eta}}{\argmin}\ \ \ \ h({\bmath \eta})\\\nonumber
&&{\mathrm{subject\ to}}\ \ {\bmath \eta}^H{\bmath \eta}={\bf I}.
\end{eqnarray}
The constraint ${\bmath \eta}^H {\bmath \eta}={\bf I}$ makes the minimization of (\ref{hcost}) restricted onto a complex Stiefel manifold \cite{AMS}, which can be easily solved by using the Riemannian trust region method \cite{RTR,manopt}. In order to do this, we require the gradient and Hessian of $h({\bmath \eta})$, which are given as
\begin{equation}
\mathrm{grad}\left(h({\bmath \eta}),{\bmath \eta}\right)= \mathrm{Hess}_f\left(g_{f}({\bf {J}}),{\bf {J}},{\bmath \eta}\right)
\end{equation}
and
\begin{equation}
\mathrm{Hess}\left(h({\bmath \eta}),{\bmath \eta},{\bmath \zeta}\right)= \mathrm{Hess}_f\left(g_{f}({\bf {J}}),{\bf {J}},{\bmath \zeta}\right),
\end{equation}
where ${\bmath \zeta} \in \mathbb{C}^{2N\times 2}$.
After obtaining $\lambda$ from (\ref{eig}), our strategy is to select $\rho$ such that $\rho+\lambda\ge 0$ so that the Hessian of the augmented Lagrangian (\ref{aug}) is positive semi-definite \cite{Hong15}. In order to do this, we need an estimate for ${\bf J}$ in (\ref{hcost}). We can find this by initial calibration with a pre-determined value of $\rho$ (say $\rho=0$). Once we obtain $\widehat{\bf J}$, we use (\ref{eig}) to find $\lambda$ and afterwards we update $\rho$. Note that $\lambda$ is dependent on $f$, but we ignore the frequency dependence of $\lambda$ and use one value of $f$ (typically the middle) to estimate it.
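For completeness, the eigenvalue computation itself can also be sketched without the manifold machinery: flattening ${\bmath \eta}$ into a real vector turns the (self-adjoint) Hessian operator into a symmetric matrix-free linear operator, whose smallest algebraic eigenvalue a standard sparse eigensolver can find. In the sketch below (Python with SciPy), \texttt{hess\_op} is a user-supplied function applying (\ref{Hess}) to a given ${\bmath \eta}$ -- a hypothetical helper built from the data model, not a library routine:
\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

def smallest_eig(hess_op, N):
    n = 4 * N                        # complex entries of a 2N x 2 eta
    def matvec(v):
        eta = (v[:n] + 1j * v[n:]).reshape(2 * N, 2)
        Heta = hess_op(eta)          # Hess_f(g_f(J), J, eta)
        return np.concatenate([Heta.real.ravel(), Heta.imag.ravel()])
    L = LinearOperator((2 * n, 2 * n), matvec=matvec, dtype=float)
    return eigsh(L, k=1, which='SA', return_eigenvectors=False)[0]

# lam = smallest_eig(hess_op, N)
# rho = max(0.0, -lam)               # ensures rho + lambda >= 0
\end{verbatim}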
So far, we have considered calibration along one direction only. The next question that we must answer is how to select $\rho$ for calibration along $K$ directions in the sky. For each direction, ${\bf {C}}_{pqf}$ in (\ref{Hess}) will influence the value of $\lambda$. If the centroid of the source (cluster) \cite{Kazemi3} is along $l,m$ direction in the sky and if its effective (unpolarized) intensity is $\alpha$, we have
\begin{equation} \label{coh}
{\bf {C}}_{pqf} \approx \exp\left(\jmath \phi(l,m,p,q)\right) \alpha {\bf I}
\end{equation}
where $\phi(l,m,p,q)$ is the phase contribution and ${\bf I}$ is a $2\times 2$ identity matrix. Hence ${\bf {C}}_{pqf}$ is a diagonal scalar matrix. If $\widehat{\bf J}$ is close to the true solution, the term ${\bf {V}}_{pqf}-{\bf {A}}_p{\bf {J}}{\bf {C}}_{pqf}{\bf {J}}^H{\bf {A}}_q^T$ becomes negligible compared with the other terms in (\ref{Hess}). The remaining terms contain the product ${\bf {C}}_{pqf} {\bf {C}}_{pqf}^H$, in which the phase term in (\ref{coh}) cancels out. Therefore, for different clusters, the value of $\lambda$ obtained by (\ref{eig}) is mainly determined by the squared effective intensity $\alpha^2$ of each source. Hence, once we have determined a suitable value of $\rho$ for one direction, the corresponding values for other directions can be determined by scaling with the squared effective intensity.
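This scaling rule is a one-liner; the values below (illustrative, taken from the simulations in the next section) show how a reference penalty tuned for the strongest source propagates to the weaker directions:
\begin{verbatim}
alpha = [5.0, 3.0, 3.0, 2.0, 1.5]     # effective intensities
rho_ref, alpha_ref = 400.0, 5.0       # penalty tuned for alpha_ref
rho = [rho_ref * (a / alpha_ref)**2 for a in alpha]
# -> [400.0, 144.0, 144.0, 64.0, 36.0]
\end{verbatim}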
\section{Simulation Results}\label{sec:results}
We simulate an array of $N=47$ receivers and calibrate along $K=5$ directions in the sky. The matrices ${\bf J}_{pk},{\bf J}_{qk}$ in (\ref{ME}) are generated with their elements having values drawn from a complex uniform distribution in $[0,1]$, multiplied by a frequency dependence given by a random $7$-th order polynomial. The intensities of the $K=5$ sources are randomly generated in the range $[1,5]$ and their positions are randomly chosen in a field of view of about $7\times 7$ square degrees. The variation of intensities with frequency is given by a power law with a randomly generated exponent in $[-1,1]$. The noise matrices ${\bf N}_{pq}$ in (\ref{ME}) are simulated to have complex circular Gaussian random variables. The variance of the noise is changed according to the signal-to-noise ratio ($\rm{SNR}=10$)
\begin{equation}
\mathrm{SNR}\buildrel\triangle\over=\frac{\sum_{p,q} \|{\bf V}_{pq}\|^2}{\sum_{p,q} \| {\bf N}_{pq}\|^2}.
\end{equation}
With this setup, we generate data for $P=8$ frequency channels in the range $115$ to $185$ MHz. For calibration, we set up a $3$-rd order polynomial model ($F=4$), using Bernstein basis functions \cite{Farouki} for the matrix ${\bf B}_f$ in (\ref{aug}). Note that we intentionally use a lower-order frequency dependence than what is actually present in the data, to create a realistic scenario in which the exact model is not known. During calibration, initial values for the parameters are always set as ${\bf J}_p={\bf I}$ for $p\in[1,N]$. Unless stated otherwise, all directions have the same value of $\rho$. We use $50$ ADMM iterations; after the first iteration, we solve (\ref{eig}) to estimate $\lambda$, and we get a typical value of $\lambda=-150$ for a source with unit amplitude. Regardless, we perform calibration with various values of $\rho$ to compare performance.
We compute the normalized (averaged over all directions) mean squared error (NMSE) between the true ${\bf J}_f$ and its estimate as
\begin{equation}\label{nmse}
\mathrm{NMSE}\buildrel\triangle\over=\frac{1}{\sqrt{2KN}}\sqrt{\sum_k \|{\bf {J}}_f-\widehat{\bf {J}}_f {\bf {U}}\|^2}
\end{equation}
to measure the accuracy of calibration. In (\ref{nmse}), ${\bf U}$ is a unitary matrix that removes the unitary ambiguity in the estimated $\widehat{\bf J}_f$ \cite{interpolation}.
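The computation of (\ref{nmse}) can be sketched as follows; here we assume ${\bf U}$ is obtained by solving an orthogonal Procrustes problem, one common way to remove the unitary ambiguity (the exact construction is given in \cite{interpolation}):
\begin{verbatim}
import numpy as np

def nmse(J_true, J_est):
    # J_true, J_est: lists of K arrays, each 2N x 2 (stacked Jones
    # matrices for one direction at a given frequency)
    K, N = len(J_true), J_true[0].shape[0] // 2
    err = 0.0
    for J, Jh in zip(J_true, J_est):
        # unitary ambiguity removal: U = argmin_U || J - Jh U ||_F
        W, _, Vh = np.linalg.svd(Jh.conj().T @ J)
        err += np.linalg.norm(J - Jh @ (W @ Vh))**2
    return np.sqrt(err) / np.sqrt(2 * K * N)
\end{verbatim}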
In Fig. \ref{fignmse}, we show the NMSE for various values of $\rho$, with increasing number of ADMM iterations. We see that for $\rho+\lambda>0$ ($\rho=200$) we get the best performance, but increasing $\rho$ much beyond this value ($\rho=1000$) yields no additional improvement. A notable feature of the NMSE is the enhancement of the error at the edges of the frequency band (especially at low ADMM iteration counts), which we attribute to Runge's phenomenon \cite{Runge} in polynomial interpolation.
\begin{figure}[htbp]
\begin{minipage}[b]{0.98\linewidth}
\begin{minipage}[b]{0.48\linewidth}
\centering \centerline{\epsfig{figure=eusipco_figures/nmse_rho5_1.eps,width=4.2cm}}
\vspace{0.2cm}\centerline{$\ \rho=5$}
\end{minipage}
\begin{minipage}[b]{0.48\linewidth}
\centering \centerline{\epsfig{figure=eusipco_figures/nmse_rho50_1.eps,width=4.2cm}}
\vspace{0.2cm}\centerline{$\rho=50$}
\end{minipage}
\begin{minipage}[b]{0.48\linewidth}
\centering \centerline{\epsfig{figure=eusipco_figures/nmse_rho200_1.eps,width=4.2cm}}
\vspace{0.2cm}\centerline{$\rho=200$}
\end{minipage}
\begin{minipage}[b]{0.48\linewidth}
\centering \centerline{\epsfig{figure=eusipco_figures/nmse_rho1000_1.eps,width=4.2cm}}
\vspace{0.2cm}\centerline{$\rho=1000$}
\end{minipage}
\end{minipage}
\caption{NMSE for various $\rho$ with increasing ADMM iterations.}
\label{fignmse}
\end{figure}
In Fig. \ref{fignmseall}, we show the final NMSE for 50 ADMM iterations, which once again shows that $\rho=200$ gives the best result.
\begin{figure}[htbp]
\begin{minipage}[b]{0.98\linewidth}
\centering
\centerline{\epsfig{figure=eusipco_figures/nmse_all_1.eps,width=5.4cm}}
\end{minipage}
\caption{NMSE for various $\rho$ after $50$ ADMM iterations.} \label{fignmseall}
\end{figure}
\begin{figure}[htbp]
\begin{minipage}[b]{0.98\linewidth}
\centering
\centerline{\epsfig{figure=eusipco_figures/nmse_rhovar_1.eps,width=5.4cm}}
\end{minipage}
\caption{NMSE after $50$ ADMM iterations with fixed $\rho$ along all directions and varying $\rho$ according to squared intensity.} \label{fignmsevar}
\end{figure}
In Fig. \ref{fignmsevar}, we show the NMSE for a simulation with mid-frequency intensities of $5,3,3,2$, and $1.5$ along the $K=5$ directions. In one calibration, we use the regularization $\rho=400$ for all directions; in the other, we use $\rho$ equal to $400,144,144,64$, and $36$, respectively. We see that varying $\rho$ in proportion to the squared intensity gives the lower NMSE.
\section{Conclusions}\label{sec:conclusions}
In this paper, we have investigated refining the performance of distributed calibration based on consensus optimization. We used the Hessian of the cost function to select the penalty parameter such that the augmented Lagrangian becomes convex. Furthermore, in a multi-directional calibration scheme, we proposed scaling the penalty parameter in proportion to the squared intensity along each direction. According to our simulations, such fine-tuning of parameters gives superior accuracy and convergence for the distributed calibration scheme.
\bibliographystyle{IEEE}
\section{Introduction} \label{sec:intro}
Many astrophysical flows are highly subsonic. In this regime,
sound waves carry sufficiently little energy that they do not
significantly affect the convective dynamics of the system. In many
of these flows, modeling long-time convective dynamics is of
interest, and numerical approaches based on compressible hydrodynamics
are intractable, even on modern supercomputers. One approach to this
problem is to use low Mach number models. In a low Mach number
approach, sound waves are eliminated from the governing equations
while retaining compressibility effects due to, e.g., nuclear energy
release, stratification, compositional changes, and thermal diffusion. When the Mach
number (the ratio of the characteristic fluid velocity over the
characteristic sound speed; Ma $= U/c$) is small, the resulting system
can be numerically integrated with much larger time steps than a
compressible model. Specifically, the time step is at least
a factor of $\sim 1/{\rm Ma}$ larger. Each time step is more
computationally expensive due to the presence of additional linear
solves, but for many problems of interest the overall gains in
efficiency can easily be an order of magnitude or more.
Low Mach number models have been developed for a variety of contexts
including combustion \citep{day2000numerical}, terrestrial atmospheric
modeling \citep{durran:1989,oneill:2014,duarte2015low}, and elastic
solids \citep{abbate2017all}. For astrophysical applications, a
number of approaches to modeling low Mach number flows have been
developed in recent years. One of the approaches most similar to ours is
that of \cite{Lin:2006}; however, it is only first-order accurate and does
not account for atmospheric expansion. There are also semi-implicit all-Mach
number solvers, where the Euler equations are split into an acoustic
part and an advective part
\citep{Kwatra2009,Degond2009,Cordier2012,Haack2012,Happenhofer2013,Chalons2016,Padioleau2019}.
The fast acoustic waves are then solved using implicit time
integration, while the slow material waves are solved explicitly.
Another approach is to use preconditioned all-Mach number solvers
\citep{Miczek2014,Barsukow2016}, where the numerical flux is
multiplied by a preconditioning matrix. This reduces the stiffness of
the system at low Mach numbers, while retaining the correct scaling
behavior. In the reduced speed of sound technique (RSST) and related
methods, the speed of sound is artificially reduced by including a
suitable scaling factor in the continuity equation, reducing the
restriction on the size of the time step
\citep{Rempel2005,Hotta2012,Wang2015,Takeyama2017,Iijima2018}.
Finally, there are fully implicit time integration codes for the
compressible Euler equations
\citep{Viallet2011,kifonidis:2012,Viallet2015,Goffrey2016}; for example,
the MUSIC code uses fully implicit time integration, which allows for
arbitrarily large time steps.
Previously, we developed the low Mach number astrophysical solver, MAESTRO.
MAESTRO is a block-structured, Cartesian grid finite-volume, adaptive mesh refinement (AMR)
code that has been successfully used for many years for a number of applications, detailed below.
Unlike several of the references above, MAESTRO is not an all-Mach solver, but is suitable for
flows where the Mach number is small ($\sim 0.1$ or smaller).
Furthermore, the low Mach number model in MAESTRO is specifically designed for, but not limited
to, astrophysical settings with significant atmospheric stratification.
This includes full spherical stars, as well as planar simulations of dynamics within localized
regions of a star.
The numerical methodology relies on an explicit Godunov approach for advection, a stiff ODE
solver for reactions (VODE, \citealt{vode}), and multigrid-based linear solvers for the
pressure-projection steps. Thus, the time step is limited by an advective CFL constraint based on the
fluid velocity, not the sound speed.
Central to the algorithm are time-varying, one-dimensional stratified background (or base) state
density and pressure fields that are held in hydrostatic equilibrium.
The base state density couples to the full state solution through buoyancy terms in the momentum equation,
and the base state pressure couples to the full state solution by constraining the evolution of the
thermodynamic variables to match this pressure.
The time-advancement strategy uses Strang splitting to integrate the thermodynamic variables, a
second-order projection method to integrate the velocity subject to a divergence constraint,
and a velocity splitting scheme that uses a radially-averaged velocity to hydrodynamically evolve the base state.
The original MAESTRO code was developed in the pure-Fortran 90 FBoxLib software framework, whereas
MAESTROeX is developed in the C++/F90 AMReX framework \citep{AMReX,AMReX_JOSS}.
The key numerical developments of the original MAESTRO algorithm are presented in a series of
papers which we refer to as Papers I-V:
\begin{itemize}
\item In Paper I \citep{MAESTRO_I}, we derive the low Mach number equation set for stratified
environments from the fully compressible equations.
\item In Paper II \citep{MAESTRO_II}, we incorporate the effects of atmospheric expansion
through the use of a time-dependent background state.
\item In Paper III \citep{MAESTRO_III}, we incorporate reactions and the associated coupling
to the hydrodynamics.
\item In Paper IV \citep{MAESTRO_IV}, we describe our treatment of spherical stars in a
three-dimensional Cartesian geometry.
\item In Paper V \citep{MAESTRO_V}, we describe the use of block-structured adaptive mesh
refinement to focus spatial resolution in regions of interest.
\end{itemize}
Since then, there have been many scientific investigations using MAESTRO, which have included additional algorithmic enhancements. Topics include:
\begin{itemize}
\item The convective phase preceding Chandrasekhar mass models for type Ia supernovae \citep{MAESTRO_convection,MAESTRO_AMR,MAESTRO_CASTRO}.
\item Convection in massive stars \citep{Gilet:2013,gilkis:2016}.
\item Sub-Chandrasekhar white dwarfs \citep{subChandra_I,subChandra_II}.
\item Type I X-ray bursts \citep{XRB_I,XRB_II,XRB_III}.
\end{itemize}
In this paper, we present new algorithmic methodology that improves upon Paper V in a number of ways.
First, the overall temporal algorithm has been greatly simplified without compromising second-order accuracy.
The key design decisions were to eliminate the splitting of the velocity into average and perturbational components,
and also to replace the hydrodynamic evolution of the base state with a predictor-corrector approach.
Not only does this greatly simplify the dynamics of the base
state, but this treatment is more amenable to higher-order multiphysics coupling strategies
based on method-of-lines integration.
In particular, schemes based on deferred corrections \citep{dutt2000spectral} have been used to generate
high-order temporal integrators for problems of reactive flow and low Mach number combustion \citep{pazner2016high,nonaka2018conservative}.
Second, we explore the effects of alternative spatial mapping routines for coupling the base state and the Cartesian grid state for spherical problems.
Finally, we examine the performance of our new MAESTROeX implementation in the new C++/F90 AMReX public software library \citep{AMReX,AMReX_JOSS}.
MAESTROeX uses MPI+OpenMP parallelism and scales well to over 10,000 MPI processes, with each MPI process supporting tens of threads.
The resulting code is publicly available on GitHub (\url{https://github.com/AMReX-Astro/MAESTROeX}),
uses the Starkiller-Astro microphysics libraries (\citealt{starkiller}, \url{https://github.com/starkiller-astro}) ,
as well as AMReX (\url{https://github.com/AMReX-Codes/amrex}).
The rest of this paper is organized as follows.
In Section \ref{sec:equations} we review our model for stratified low Mach number astrophysical flow.
In Section \ref{eq:algorithm} we present our numerical algorithm in detail, highlighting the new temporal integration scheme as well as spatial base state mapping options.
In Section \ref{sec:results} we validate our new approach and examine the performance of our algorithm on full spherical star problems used in previous scientific investigations.
We conclude in Section \ref{sec:conclusions}.
\section{Governing Equations}\label{sec:equations}
Low Mach number models for reacting flow were originally derived using asymptotic analysis
\citep{rehm1978equations,majda1985derivation} and used in terrestrial combustion applications
\citep{knio1999semi,day2000numerical}. These models have been extended to nuclear flames
in astrophysical environments using adaptive algorithms in space and time \citep{Bell:2004}.
In Papers I-III, we extended this work and the atmospheric model by \citet{durran:1989} by deriving a model and algorithm suitable for stratified astrophysical flow.
We take the standard equations of reacting, compressible flow, and recast the equation
of state (EOS) as a divergence constraint on the velocity field.
The resulting model is a series of evolution equations for mass, momentum, and energy, subject
to an additional constraint on velocity. The evolution equations are
\begin{eqnarray}
\frac{\partial(\rho X_k)}{\partial t} &=& -\nabla\cdot(\rho X_k{\bf{U}}) + \rho\dot\omega_k,\label{eq:species}\\
\frac{\partial{\bf{U}}}{\partial t} &=& -{\bf{U}}\cdot\nabla{\bf{U}} - \frac{\beta_0}{\rho}\nabla\left(\frac{\pi}{\beta_0}\right) - \frac{\rho-\rho_0}{\rho} g{\bf{e}}_r,\label{eq:momentum}\\
\frac{\partial(\rho h)}{\partial t} &=& -\nabla\cdot(\rho h{\bf{U}}) + \frac{Dp_0}{Dt} + \rho H_{\rm nuc}.\label{eq:enthalpy}
\end{eqnarray}
Here $\rho$, ${\bf{U}}$, and $h$ are the mass density,
velocity and specific enthalpy, respectively, and
$X_k$ are the mass fractions of species $k$ with associated
production rate $\dot\omega_k$ and energy release per time per unit mass $H_{\rm nuc}$.
The species are constrained such that $\sum_k X_k = 1$ giving $\rho = \sum_k (\rho X_k)$ and
\begin{equation}
\frac{\partial\rho}{\partial t} = -\nabla\cdot(\rho{\bf{U}}).
\end{equation}
The total pressure is decomposed into a one-dimensional hydrostatic base state
pressure, $p_0 = p_0(r,t)$, and a dynamic pressure, $\pi = \pi({\bf{x}},t)$, such that
$p = p_0 + \pi$ and $|\pi|/p_0 = \mathcal{O}({\rm Ma}^2)$ (we use ${\bf{x}}$ to represent the Cartesian coordinate
directions of the full state and $r$ to represent the radial coordinate direction for the base state).
One way to mathematically think of the difference between $p_0$ and $\pi$ is that $\pi$ controls the velocity evolution
in a way that forces the thermodynamic variables $(\rho,h,X_k)$ to evolve in a manner that is consistent with the EOS and $p_0$.
By comparing the momentum equation (\ref{eq:momentum}) to the momentum equation used in equation (2) in Paper V, we
note that we are using a formulation that enforces conservation of total energy in the
low Mach number system in the absence of external heating or viscous terms \citep{kleinpauluis,Vasil2013}.
We have previously validated this approach in modeling sub-Chandrasekhar white dwarfs using MAESTRO \citep{subChandra_II}.
We also define a one-dimensional base state density, $\rho_0 = \rho_0(r,t)$, that represents the lateral average (see Section \ref{Sec:Spatial}) of $\rho$ and is in hydrostatic equilibrium with $p_0$, i.e.,
\begin{equation}
\nabla p_0 = -\rho_0 g{\bf{e}}_r, \label{eq:HSE}
\end{equation}
where $g=g(r,t)$ is the magnitude of the gravitational acceleration and ${\bf{e}}_r$ is the unit vector in the outward radial direction.
Here $\beta_0$ is a density-like variable that carries background stratification, defined as
\begin{equation}
\beta_0(r,t) = \rho_0(0,t)\exp\left(\int_0^r\frac{1}{\overline{\Gamma}_1 p_0}\frac{\partial p_0}{\partial r'}dr'\right),
\end{equation}
where $\overline{\Gamma}_1$ is the lateral average of $\Gamma_1 = d(\log p)/d(\log\rho) |_s$ (evaluated at constant entropy, $s$). We explored the effect of
replacing $\Gamma_1$ with $\overline{\Gamma}_1$, as well as a correction term, in Paper III.
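Schematically, $\beta_0$ can be obtained from the base state arrays by simple quadrature of the integral above; the following is a minimal sketch (the discretization actually used is given in Appendix C of Paper III):
\begin{verbatim}
import numpy as np

def beta0_from_base_state(rho0, p0, gamma1bar):
    # integrate beta_0(r) = rho_0(0) exp( int dp0 / (Gamma1bar p0) )
    # cell by cell; face values approximated by arithmetic averages
    beta0 = np.empty(len(p0))
    beta0[0] = rho0[0]
    for j in range(1, len(p0)):
        dp0 = p0[j] - p0[j - 1]
        gp = 0.25 * (gamma1bar[j] + gamma1bar[j - 1]) * (p0[j] + p0[j - 1])
        beta0[j] = beta0[j - 1] * np.exp(dp0 / gp)
    return beta0
\end{verbatim}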
Thermal diffusion is not discussed in this paper, but we have previously described the modifications to the original algorithm required
for implicit thermal diffusion in \cite{XRB_I}; inclusion of these effects in the new algorithm presented here is straightforward.
Mathematically, equations (\ref{eq:momentum})-(\ref{eq:enthalpy}) must still be closed by the EOS.
This is done by taking the Lagrangian derivative of the EOS for pressure as a function of the thermodynamic variables,
substituting in the equations of motion for mass and energy,
and requiring that the pressure is a prescribed function of altitude and time based on the hydrostatic equilibrium condition.
See Papers I and II for details of this derivation.
The resulting equation is a divergence constraint on the velocity field,
\begin{equation}
\nabla\cdot(\beta_0{\bf{U}}) = \beta_0\left(S - \frac{1}{\overline{\Gamma}_1 p_0}\frac{\partial p_0}{\partial t}\right).\label{eq:U divergence}
\end{equation}
The expansion term, $S$, incorporates local compressibility effects due to compositional changes and heat release from reactions,
\begin{equation}
S = -\sigma\sum_k\xi_k\dot\omega_k + \frac{1}{\rho p_\rho}\sum_k p_{X_k}\dot\omega_k + \sigma H_{\rm nuc},\label{eq:S}
\end{equation}
where
$p_{X_k} \equiv \left. \partial p / \partial X_k \right|_{\rho,T,X_{j,j\ne k}}$,
$\xi_k \equiv \left. \partial h /\partial X_k \right |_{p,T,X_{j,j\ne k}}$,
$p_\rho \equiv \left.\partial p/\partial \rho \right |_{T, X_k}$, and
$\sigma \equiv p_T/(\rho c_p p_\rho)$, where $p_T \equiv \left. \partial p / \partial
T \right|_{\rho, X_k}$ and $c_p \equiv \left. \partial h / \partial T
\right|_{p,X_k}$ is the specific heat at constant pressure.
To summarize, we model evolution equations for momentum, mass, and energy, equations (\ref{eq:momentum})-(\ref{eq:enthalpy}) subject to a divergence constraint on the velocity, equation (\ref{eq:U divergence}), and the hydrostatic equilibrium condition, equation (\ref{eq:HSE}).
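For concreteness, the pointwise evaluation of the expansion term (\ref{eq:S}) is a direct translation of the formula; a minimal sketch, assuming the thermodynamic derivatives are supplied by the EOS:
\begin{verbatim}
import numpy as np

def expansion_term(omegadot, Hnuc, xi, pX, p_rho, pT, rho, cp):
    # S = -sigma sum_k xi_k wdot_k + (1/(rho p_rho)) sum_k p_Xk wdot_k
    #     + sigma Hnuc,  with  sigma = pT / (rho cp p_rho);
    # omegadot, xi, pX are arrays over species k (per-cell quantities)
    sigma = pT / (rho * cp * p_rho)
    return (-sigma * np.sum(xi * omegadot)
            + np.sum(pX * omegadot) / (rho * p_rho)
            + sigma * Hnuc)
\end{verbatim}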
\section{Numerical Algorithm}\label{eq:algorithm}
\subsection{Spatial Discretization}\label{Sec:Spatial}
The spatial discretization and adaptive mesh refinement methodology remains unchanged from Paper V.
We now summarize some of the key points here before describing the new temporal integrator in the next section.
We recommend the reader review Section 3 of Paper V for further details.
We shall refer to local atmospheric flows in two and three dimensions as problems in ``planar'' geometry, and full-star flows
in three dimensions as problems in ``spherical'' geometry.
The solution in both cases consists of the Cartesian grid solution
and the one-dimensional base state solution.
Figure \ref{Fig:BaseGrid} illustrates the relationship between the base state and the Cartesian grid state for both planar and spherical geometries in the presence of spatially adaptive grids.
\begin{figure}[tb]
\centering
\includegraphics[height=2.0in]{base_grid.eps} \hspace{0.5in}
\includegraphics[height=2.0in]{base_spherical.eps}
\caption{\label{Fig:BaseGrid}
(Left) For multi-level problems in planar geometry, we force a direct alignment
between the base state cell centers and the Cartesian grid cell centers by
allowing the radial base state spacing to change with space and time.
(Right) For multi-level problems in spherical geometry, since there is no direct alignment
between the base state cell centers and the Cartesian grid cell centers, we choose to fix
the radial base state spacing across levels. Reprinted from Paper V \citep{MAESTRO_V}. }
\end{figure}
One of the key numerical modules is the ``lateral average'', which computes the average over a
layer of constant radius of a Cartesian grid variable and stores the result in a one-dimensional base state array.
In planar geometries, this is a straightforward arithmetic average of values in cells at
a particular height since the base state cell centers are in alignment
with the Cartesian grid cell centers.
However for spherical problems, the procedure is much more complicated.
In Section 4 of Paper V, we describe how there is a finite, easily computable set of radii that any
three-dimensional Cartesian cell center can map to.
Specifically, for every three-dimensional Cartesian cell, there exists an integer $m$ such that the distance
from the cell center to the center of the star is given by
\begin{equation}
\hat{r}_m=\Delta x\sqrt{0.75+2m}.\label{eqn:radii}
\end{equation}
\begin{figure}[tb]
\centering
\includegraphics[height=2.5in]{base_spherical_new.eps}
\caption{\label{Fig:NewBaseGrid}
A direct mapping between the base state cell centers (red squares) and the Cartesian grid cell centers (blue crosses)
is enforced by computing the average of the grid cell centers that share the same radial distance from
the center of the star.}
\end{figure}
Figure \ref{Fig:NewBaseGrid} is a two-dimensional illustration
(two dimensions is chosen in the figure for ease of exposition; this mapping is only used for three-dimensional spherical stars)
of the relationship between the Cartesian grid state and the one-dimensional base state array.
We compute the lateral average by first summing the values in all the cells associated with each radius,
dividing by the number of contributing cells to obtain the arithmetic average, and then using quadratic interpolation to map these data onto the uniformly spaced one-dimensional base state.
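A minimal sketch of the summing stage follows; the radius index is obtained by inverting equation (\ref{eqn:radii}), and the subsequent interpolation onto the uniformly spaced base state is omitted (the actual implementation also handles multiple AMR levels and parallel reduction):
\begin{verbatim}
import numpy as np

def lateral_average(phi, cell_centers, dx):
    # phi: flattened cell-centered field; cell_centers: (ncell, 3)
    # coordinates relative to the star center; dx: cell size
    r2 = np.sum(cell_centers**2, axis=1)              # squared distances
    m = np.rint((r2 / dx**2 - 0.75) / 2).astype(int)  # radius index
    nbins = m.max() + 1
    sums = np.bincount(m, weights=phi, minlength=nbins)
    counts = np.bincount(m, minlength=nbins)
    # arithmetic average over all cells sharing each radius
    return np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
\end{verbatim}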
Previously, for spherical problems MAESTRO only allowed for a base state with constant $\Delta r$ (typically equal to $\Delta x/5$).
The companion ``fill'' module maps a base state array onto the full Cartesian grid state.
For planar problems, direct injection can be used due to the perfect alignment of the base state and Cartesian grid state.
For spherical problems, quadratic interpolation of the base state is used to assign values to each Cartesian cell center.
In this paper we explore a new option to retain an irregularly-spaced base state to eliminate mapping errors from the ``fill'' module.
For the lateral average, as before, we first sum the values in all the cells associated with each radius and divide
by the number of contributing cells to obtain the arithmetic average. However, we do not interpolate this onto a uniformly spaced
base state; instead, we retain the irregularly-spaced base state.
The advantage with this approach is that the fill module does not require any interpolation.
To see a potential benefit of eliminating the mapping error, consider a spherical star in hydrostatic equilibrium at rest.
In the absence of reactions, the star should remain at rest.
The buoyancy forcing term in the momentum equation contains $\rho-\rho_0$. With the original scheme, interpolation errors
in computing $\rho_0$ by averaging would cause artificial acceleration of the velocity field due to the interpolation error
between the Cartesian grid and the radial base state. By retaining the radial base state as an irregularly spaced
array, the effects of interpolation error are nearly, though not completely, eliminated; machine-precision
effects remain from averaging a large number of numerical values.
We note that $\Delta r$ decreases as the base state moves further from the center of the star,
which results in far more total cells in the irregularly-spaced array than the previous uniformly-spaced array.
\subsection{Temporal Integration Scheme}\label{Sec:Temporal Integration Scheme}
We now describe the new temporal integration scheme, noting that it can be used for the original base state mapping
(with uniform base state grid spacing) as well as the new irregularly spaced base state mapping.
Previously we adopted an approach where we split the velocity into a base state component, $w_0(r,t)$,
and a local velocity $\widetilde{\Ub}({\bf{x}},t)$, so that
\begin{equation}
{\bf{U}} = \widetilde{\Ub}({\bf{x}},t) + w_0(r,t){\bf{e}}_r, \label{eq:velsplit}
\end{equation}
where ${\bf{e}}_r$ is the unit vector in the outward radial direction.
We used $w_0$ to provide an estimate of the base state density evolution over a time step.
This resulted in some unnecessary complications to the temporal integration scheme including
base state advection modules for density, enthalpy, and velocity, as well as more
cumbersome split velocity dynamics evolution equations.
Our new temporal integration scheme uses full velocities for scalar and velocity advection,
and only uses the above splitting to satisfy the velocity divergence constraint due to boundary considerations
at the edge of the star.
This results in a much simpler numerical scheme than the one from Paper V
since we use the velocity directly rather than more complex terms involving the perturbational velocity.
Additionally, the new scheme uses a simpler predictor-corrector approach to the base state density and pressure that no
longer requires evolution equations and numerical discretizations to update the base state, greatly
simplifying the algorithm while retaining the same overall second-order accuracy in time.
At the beginning of each time step we have the cell-centered Cartesian grid state,
$({\bf{U}},\rho X_k,\rho h)^n$, and nodal Cartesian grid state, $\pi^{n-\sfrac{1}{2}}$, and base state $(\rho_0,p_0)^n$.
At any time, the associated density, composition, and enthalpy can be trivially computed using, e.g.,
\begin{equation}
\rho^n = \sum_k(\rho X_k)^n, \quad
X_k^n = (\rho X_k)^n / \rho^n, \quad
h^n = (\rho h)^n / \rho^n.
\end{equation}
Temperature is computed using the equation of state\footnote{As described in Paper V, for planar problems we compute temperature using $h$ instead of $p_0$, since we have successfully developed volume discrepancy schemes to effectively couple the enthalpy to the rest of the solution; see \cite{XRB_I}. We are still exploring this option for spherical stars.}, e.g.,
$T = T(\rho,p_0,X_k)$, where $p_0$ has been mapped to the Cartesian grid using the fill module,
and ($\overline{\Gamma}_1,\beta_0)$ are similarly computed from $(\rho,p_0,X_k)$;
see Appendix A of Paper I and Appendix C of Paper III for details on how $\beta_0$ is computed.
The overall flow of the algorithm begins with a second-order Strang splitting approach to integrate the advection-reaction system for
the thermodynamic variables $(\rho X_k, \rho h)$, followed by a second-order projection methodology to integrate the velocities subject to a divergence constraint. Within the thermodynamic variable update we use a predictor-corrector approach to achieve second-order accuracy in time.
To summarize:
\begin{itemize}
\item In {\bf Step 1} we react the thermodynamic variables over the first $\Delta t/2$ interval.
\item In {\bf Steps 2--4} we advect the thermodynamic variables over $\Delta t$. Specifically, we compute an estimate for the expansion term, $S$, compute face-centered, time-centered velocities that satisfy the divergence constraint, and then advect the thermodynamic variables.
\item In {\bf Step 5} we react the thermodynamic variables over the second $\Delta t/2$ interval\footnote{After this step we could skip to the velocity advance in {\bf Steps 10--11}, however the overall scheme would be only first-order in time, so {\bf Steps 6-9} can be thought of as a trapezoidal corrector step.}.
\item In {\bf Steps 6--8} we redo the advection in {\bf Steps 2--4} but are able to use the trapezoidal rule to time-center certain quantities such as $S$, $\rho_0$, etc.
\item In {\bf Step 9} we redo the reactions from {\bf Step 5} beginning with the improved results from {\bf Steps 6--8}.
\item In {\bf Steps 10--11} we advect the velocity, and then project these velocities so they satisfy the divergence constraint while updating $\pi$.
\end{itemize}
There are a few key numerical modules we use in each time step.
\begin{itemize}
\item {\bf Average}$[\phi]\rightarrow[\overline\phi]$ computes the lateral average of a quantity over a layer at constant radius $r$, as described above in Section \ref{Sec:Spatial}.
\item {\bf Enforce HSE}$[\rho_0]\rightarrow[p_0]$ computes the base state pressure, $p_0$, from a base state density, $\rho_0$, by integrating the hydrostatic equilibrium condition in one dimension.
This follows equation (A10) in Paper V, noting that in the irregularly spaced base state case $\Delta r$ is not constant, with $\Delta r_{j+1/2} = r_{j+1} - r_j$ for the cell face with index $j+1/2$.
The base state pressure remains equal to a constant value from the location of a prescribed cutoff density outward for the entire simulation. A minimal sketch of this integration appears after this list.
\item {\bf React State}$[(\rho X_k)^{\rm in},(\rho h)^{\rm in},p_0]\rightarrow[(\rho X_k)^{\rm out},(\rho h)^{\rm out},(\rho\dot\omega_k),(\rho H_{\rm nuc})]$
uses VODE \citep{vode} to integrate the species and enthalpy due to reactions over $\Delta t/2$ by solving
\begin{equation}
\frac{dX_k}{dt} = \dot\omega_k(\rho,X_k,T); \qquad
\frac{dT}{dt} = \frac{1}{c_p}\left(-\sum_k\xi_k\dot\omega_k + H_{\rm nuc}\right).
\end{equation}
The inputs are the species, enthalpy, and base state pressure, and the outputs are the species, enthalpy, reaction rates, and nuclear energy generation rate.
See Paper III for details.
\end{itemize}
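As referenced above, a minimal sketch of the {\bf Enforce HSE} integration on a possibly irregular radial grid (schematic only; the actual discretization follows equation (A10) of Paper V):
\begin{verbatim}
import numpy as np

def enforce_hse(rho0, g, r, j_cut, p_cut):
    # integrate dp0/dr = -rho0*g inward from the cutoff-density cell
    # j_cut, where p0 is pinned to p_cut; outward of j_cut, p0 is held
    # constant; r may be uniformly or irregularly spaced
    p0 = np.full(len(rho0), p_cut)
    for j in range(j_cut - 1, -1, -1):
        dr = r[j + 1] - r[j]                  # Delta r at face j+1/2
        p0[j] = p0[j + 1] + 0.5 * dr * (rho0[j] * g[j]
                                        + rho0[j + 1] * g[j + 1])
    return p0
\end{verbatim}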
Each time step is constrained by the standard advective CFL condition,
\begin{equation}
\Delta t = \sigma^{\rm CFL} \min_i(\Delta x / U_i),
\end{equation}
where for our simulations we typically use $\sigma^{\rm CFL}\sim 0.7$ and the minimum is taken over all cells and spatial directions.
There are additional constraints on the time step that are typically much less restrictive than the advective CFL including
the acceleration due to the buoyancy force (sometimes in effect when the velocity is approximately zero at the start of some simulations)
and the local magnitude of the divergence constraint (to prevent too much mass evacuation from a cell in a time step); see Section 3.4 in Paper III for details.
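Schematically, the advective constraint alone amounts to the following minimal sketch:
\begin{verbatim}
import numpy as np

def advective_dt(U, dx, cfl=0.7):
    # dt = cfl * min over all cells and directions of dx_i / |U_i|;
    # U holds velocity components in its last axis, dx is per-direction
    ratios = np.asarray(dx) / np.maximum(np.abs(U), 1.0e-99)
    return cfl * ratios.min()
\end{verbatim}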
In stratified low Mach number models, due to the extreme variation in density, the velocity can become very large in low density regions at the edge of the star.
These large velocities can severely affect the time step, so
throughout Papers II-V, we have employed two techniques to help control these dynamics without significantly
affecting the dynamics in the convective region.
First, we use a cutoff density technique, where we hold the density constant outside a specified radius (typically near where the density is $\sim$4 orders of magnitude smaller than the largest densities in the simulation).
Second, we employ a sponge technique where we artificially damp the velocities near and beyond the cutoff region.
For more details, refer to Paper V and the previous references cited within.
Beginning with $({\bf{U}},\rho X_k,\rho h)^n$, $\pi^{n-\sfrac{1}{2}}$, and $(\rho_0,p_0)^n$,
the temporal integration scheme contains the following steps:
\begin{description}
\item[Step 1] {\em React the thermodynamic variables over the first $\Delta t / 2$ interval.}
Call {\bf React State}$[(\rho X_k)^n, (\rho h)^n, p_0^n] \rightarrow [(\rho X_k)^{(1)}, (\rho h)^{(1)}, (\rho \dot\omega_k)^{(1)}, (\rho H_{\rm nuc})^{(1)}]$.
\item[Step 2] {\em Compute the time-centered expansion term, $S^{{n+\myhalf},\star}$.}
We compute an estimate for the time-centered expansion term in the velocity
divergence constraint. Following \citet{Bell:2004}, we extrapolate
to the half-time using $S$ at the previous and current
time steps,
\begin{equation}
S^{{n+\myhalf},\star} = S^n + \frac{\Delta t^n}{2} \left(\frac{S^n - S^{n-1}}{\Delta t^{n-1}}\right).
\end{equation}
Note that in the first time step we average $S^0$ and $S^1$ from the
initialization step.
\item[Step 3] {\em Construct a face-centered, time-centered advective velocity, $\Ub^{\mathrm{ADV},\star}$.}
The construction of face-centered time-centered states used to discretize the
advection terms for velocity, species, and enthalpy, are performed using
a standard multidimensional corner transport upwind approach
\citep{colella1990multidimensional,saltzman1994unsplit} with the piecewise-parabolic method (PPM)
one-dimensional tracing \citep{colella1984piecewise}. The full details of this
Godunov advection approach for all steps in this algorithm are described
in Appendix A of \cite{XRB_III}.
Here we use equation (\ref{eq:momentum}) to compute face-centered, time-centered velocities, $\Ub^{\mathrm{ADV},\dagger,\star}$.
The $\dagger$ superscript refers to the fact that the predicted velocity field does not satisfy the divergence constraint,
\begin{equation}
\nabla \cdot \left(\beta_0^n \Ub^{\mathrm{ADV},\star}\right) = \beta_0^n \left[S^{{n+\myhalf},\star} - \frac{1}{\overline{\Gamma}_1^np_0^n}\left(\frac{\partial p_0}{\partial t}\right)^{n-\sfrac{1}{2}} \right].\label{eq:div1}
\end{equation}
We project $\Ub^{\mathrm{ADV},\dagger,\star}$ onto the space of velocities that satisfy the constraint to obtain $\Ub^{\mathrm{ADV},\star}$.
Each projection step in the algorithm involves the solution of a variable-coefficient Poisson solve using multigrid.
Note that we still employ velocity-splitting as described by equation (\ref{eq:velsplit}) for this step
in order to enforce the appropriate behavior of the system near the edge of the star as determined by the cutoff density.
The details of this ``MAC'' projection are provided in Appendix \ref{Sec:Projection}.
\item[Step 4] {\em Advect the thermodynamic variables over a time interval of $\Delta t.$}
\begin{enumerate}
\renewcommand{\theenumi}{{\bf \Alph{enumi}}}
\item Update $(\rho X_k)$ using a discretized version of
\begin{equation}
\frac{\partial(\rho X_k)}{\partial t} = -\nabla\cdot(\rho X_k{\bf{U}}),
\end{equation}
where the reaction terms have been omitted since they were already
accounted for in {\bf React State}. The update consists of two steps:
\begin{enumerate}
\renewcommand{\labelenumii}{{\bf \roman{enumii}}.}
\item Compute the face-centered, time-centered species, $(\rho X_k)^{{n+\myhalf},{\rm pred},\star}$,
for the conservative update of $(\rho X_k)^{(1)}$ using a Godunov approach \citep{XRB_III}.
As described in Paper V, for robust numerical slope limiting we predict
$\rho'^n=\rho^n-\rho_0^n$ and $X_k^n$ to faces
and here we spatially interpolate $\rho_0^n$ to faces to assemble the fluxes.
\item Evolve $(\rho X_k)^{(1)} \rightarrow (\rho X_k)^{(2),\star}$ using
\begin{equation}
(\rho X_k)^{(2),\star} = (\rho X_k)^{(1)}
- \Delta t \left\{ \nabla \cdot \left[ \Ub^{\mathrm{ADV},\star} (\rho X_k)^{{n+\myhalf},{\rm pred},\star} \right] \right\},
\end{equation}
\end{enumerate}
\item Update $\rho_0$ by calling {\bf Average}$[\rho^{(2),\star}]\rightarrow[\rho_0^{n+1,\star}]$.
\item Update $p_0$ by calling {\bf Enforce HSE}$[\rho_0^{n+1,\star}] \rightarrow [p_0^{n+1,\star}]$.
\item Update the enthalpy using a discretized version of equation
\begin{equation}
\frac{\partial(\rho h)}{\partial t} = -\nabla\cdot(\rho h{\bf{U}}) + \frac{Dp_0}{Dt} + \rho H_{\rm nuc},
\end{equation}
again omitting the reaction terms since we already accounted for
them in {\bf React State}. This equation takes the form:
\begin{equation}
\frac{\partial (\rho h)}{\partial t} = - \nabla \cdot (\rho h{\bf{U}}) + \frac{\partial p_0}{\partial t} + ({\bf{U}} \cdot {\bf{e}}_r) \frac{\partial p_0}{\partial r}.
\end{equation}
For spherical geometry, we solve the analytically equivalent form,
\begin{equation}
\frac{\partial (\rho h)}{\partial t} = - \nabla \cdot (\rho h{\bf{U}}) + \frac{\partial p_0}{\partial t} + \nabla \cdot ({\bf{U}} p_0) - p_0 \nabla \cdot {\bf{U}}.
\end{equation}
The update consists of two steps:
\begin{enumerate}
\renewcommand{\labelenumii}{{\bf \roman{enumii}}.}
\item Compute the face-centered, time-centered enthalpy, $(\rho h)^{{n+\myhalf},{\rm pred},\star},$
for the conservative update of $(\rho h)^{(1)}$ using a Godunov approach \citep{XRB_III}.
As described in Paper V, for robust numerical slope limiting
we predict $(\rho h)'^n=(\rho h)^n-(\rho h)_0^n$ to faces,
where $(\rho h)_0^n$ is obtained by calling {\bf Average}$[(\rho h)^n]\rightarrow[(\rho h)_0^n]$,
and here we spatially interpolate $(\rho h)_0^n$ to faces to assemble the fluxes.
\item Evolve $(\rho h)^{(1)} \rightarrow (\rho h)^{(2),\star}$ using
\begin{equation}
(\rho h)^{(2),\star}
= (\rho h)^{(1)} - \Delta t \left\{ \nabla \cdot \left[ \Ub^{\mathrm{ADV},\star} (\rho h)^{{n+\myhalf},{\rm pred},\star} \right] \right\} + \Delta t\frac{Dp_0}{Dt}
\end{equation}
where here
\begin{equation}
\frac{Dp_0}{Dt} =
\begin{cases}
\frac{p_0^{n+1,*} - p_0^n}{\Delta t} + \left(\Ub^{\mathrm{ADV},\star} \cdot {\bf{e}}_r\right) \left(\frac{\partial p_0}{\partial r} \right)^{n}& {\rm (planar)} \\
\frac{p_0^{n+1,*} - p_0^n}{\Delta t} + \left[ \nabla \cdot \left (\Ub^{\mathrm{ADV},\star} p_0^{{n+\myhalf}} \right ) - p_0^{{n+\myhalf}} \nabla \cdot \Ub^{\mathrm{ADV},\star} \right]& {\rm (spherical)}
\end{cases}
,
\end{equation}
and $p_0^{n+\myhalf} = (p_0^n+p_0^{n+1,*})/2$.
\end{enumerate}
\end{enumerate}
\item[Step 5] {\em React the thermodynamic variables over the second $\Delta t / 2$ interval.}
Call {\bf React State}$[ (\rho X_k)^{(2),\star}, (\rho h)^{(2),\star}, p_0^{n+1,\star}]
\rightarrow
[ (\rho X_k)^{n+1,\star}, (\rho h)^{n+1,\star}, (\rho \dot\omega_k)^{n+1,\star}, (\rho H_{\rm nuc})^{n+1,\star} ].$
\item[Step 6] {\em Compute the time-centered expansion term, $S^{{n+\myhalf}}$.}
First, compute $S^{n+1,\star}$ with
\begin{equation}
S^{n+1,\star} = \left(-\sigma \sum_k \xi_k \dot\omega_k + \frac{1}{\rho p_\rho} \sum_k p_{X_k} {\dot\omega}_k + \sigma H_{\rm nuc}\right)^{n+1,\star}.
\end{equation}
Then, define
\begin{equation}
S^{{n+\myhalf}} = \frac{S^n + S^{n+1,\star}}{2}.
\end{equation}
\item[Step 7] {\em Construct a face-centered, time-centered advective velocity, $\Ub^{\mathrm{ADV}}$.}
The procedure to construct $\Ub^{\mathrm{ADV},\dagger}$ is identical to the Godunov procedure
for computing $\Ub^{\mathrm{ADV},\dagger,\star}$ in {\bf Step 3}, but uses
the updated value $S^{{n+\myhalf}}$ rather than $S^{{n+\myhalf},\star}$.
The $\dagger$ superscript refers to the fact that the predicted velocity field does not satisfy the divergence constraint,
\begin{equation}
\nabla \cdot \left(\beta_0^{{n+\myhalf}} \Ub^{\mathrm{ADV}}\right) =
\beta_0^{{n+\myhalf}} \left[S^{{n+\myhalf}} - \frac{1}{\overline{\Gamma}_1^{{n+\myhalf}}p_0^{{n+\myhalf}}}\left(\frac{\partial p_0}{\partial t}\right)^{{n+\myhalf}}\right],\label{eq:div2}
\end{equation}
with
\begin{equation}
\beta_0^{{n+\myhalf}} = \frac{ \beta_0^n + \beta_0^{n+1,\star} }{2}, \quad
\overline{\Gamma}_1^{{n+\myhalf}} = \frac{ \overline{\Gamma}_1^n + \overline{\Gamma}_1^{n+1,\star} }{2}.
\end{equation}
We project $\Ub^{\mathrm{ADV},\dagger}$ onto the space of velocities that satisfy the constraint to obtain $\Ub^{\mathrm{ADV}}$ using a MAC projection (see Appendix \ref{Sec:Projection}).
\item[Step 8] {\em Advect the thermodynamic variables over a time interval of $\Delta t.$}
\begin{enumerate}
\renewcommand{\theenumi}{{\bf \Alph{enumi}}}
\item Update $(\rho X_k)$. This step is identical to {\bf Step 4A} except we use
the updated values $\Ub^{\mathrm{ADV}}$ and $\rho_0^{n+1,\star}$ rather than
$\Ub^{\mathrm{ADV},\star}$ and $\rho_0^n$. In particular:
\begin{enumerate}
\renewcommand{\labelenumii}{{\bf \roman{enumii}}.}
\item Compute the face-centered, time-centered species, $(\rho X_k)^{{n+\myhalf},{\rm pred}}$,
for the conservative update of $(\rho X_k)^{(1)}$ using a Godunov approach \citep{XRB_III}.
Again, we predict $\rho'^n=\rho^n-\rho_0^n$ and $X_k^n$ to faces
but here we spatially interpolate $(\rho_0^n+\rho_0^{n+1,*})/2$ to faces to assemble the fluxes.
\item Evolve $(\rho X_k)^{(1)} \rightarrow (\rho X_k)^{(2)}$ using
\begin{equation}
(\rho X_k)^{(2)} = (\rho X_k)^{(1)}
- \Delta t \left\{ \nabla \cdot \left[\Ub^{\mathrm{ADV}} (\rho X_k)^{{n+\myhalf},{\rm pred}} \right] \right\},
\end{equation}
\end{enumerate}
\item Update $\rho_0$ by calling {\bf Average}$[\rho^{(2)}]\rightarrow[\rho_0^{n+1}]$.
\item Update $p_0$ by calling {\bf Enforce HSE}$[\rho_0^{n+1}] \rightarrow [p_0^{n+1}]$.
\item Update the enthalpy. This step is identical to {\bf Step 4D} except we use
the updated values $\Ub^{\mathrm{ADV}}$, $\rho_0^{n+1}$, $(\rho h)_0^{n+1}$, and $p_0^{n+1}$
rather than
$\Ub^{\mathrm{ADV},\star}, \rho_0^n$, $(\rho h)_0^n$, and $p_0^n$.
In particular:
\begin{enumerate}
\renewcommand{\labelenumii}{{\bf \roman{enumii}}.}
\item Compute the face-centered, time-centered enthalpy, $(\rho h)^{{n+\myhalf},{\rm pred}},$
for the conservative update of $(\rho h)^{(1)}$ using a Godunov approach \citep{XRB_III}.
Again, we predict $(\rho h)'^n=(\rho h)^n-(\rho h)_0^n$ to faces
but here we spatially interpolate $[(\rho h)_0^n+(\rho h)_0^{n+1,*}]/2$ to faces to assemble the fluxes.
\item Evolve $(\rho h)^{(1)} \rightarrow (\rho h)^{(2)}$.
\begin{equation}
(\rho h)^{(2)}
= (\rho h)^{(1)} - \Delta t \left\{ \nabla \cdot \left[ \Ub^{\mathrm{ADV}} (\rho h)^{{n+\myhalf},{\rm pred}} \right] \right\} + \Delta t\frac{Dp_0}{Dt}
\end{equation}
where here
\begin{equation}
\frac{Dp_0}{Dt} =
\begin{cases}
\frac{p_0^{n+1} - p_0^n}{\Delta t} + \left(\Ub^{\mathrm{ADV}} \cdot {\bf{e}}_r\right) \left(\frac{\partial p_0}{\partial r} \right)^{n}& {\rm (planar)} \\
\frac{p_0^{n+1} - p_0^n}{\Delta t} + \left[ \nabla \cdot \left (\Ub^{\mathrm{ADV}} p_0^{{n+\myhalf}} \right ) - p_0^{{n+\myhalf}} \nabla \cdot \Ub^{\mathrm{ADV}} \right]& {\rm (spherical)}
\end{cases}
,
\end{equation}
and $p_0^{n+\myhalf} = (p_0^n+p_0^{n+1})/2$.
\end{enumerate}
\end{enumerate}
\item[Step 9] {\em React the thermodynamic variables over the second $\Delta t / 2$ interval.}
Call {\bf React State}$[(\rho X_k)^{(2)},(\rho h)^{(2)}, p_0^{n+1}] \rightarrow [(\rho X_k)^{n+1}, (\rho h)^{n+1}, (\rho \dot\omega_k)^{n+1}, (\rho H_{\rm nuc})^{n+1}].$
\item[Step 10] {\em Define the new-time expansion term, $S^{n+1}$.}
\begin{enumerate}
\renewcommand{\theenumi}{{\bf \Alph{enumi}}}
\item Define
\begin{equation}
S^{n+1} = \left(-\sigma \sum_k \xi_k \dot\omega_k + \sigma H_{\rm nuc} +
\frac{1}{\rho p_\rho} \sum_k p_{X_k} \dot\omega_k\right)^{n+1}.
\end{equation}
\end{enumerate}
\item[Step 11] {\em Update the velocity}.
First, we compute the face-centered, time-centered velocities, ${\bf{U}}^{{n+\myhalf},{\rm pred}}$
using a Godunov approach \citep{XRB_III}. Then, we update
the velocity field ${\bf{U}}^n$ to ${\bf{U}}^{n+1,\dagger}$ by discretizing
equation (\ref{eq:momentum}) as
\begin{equation}
{\bf{U}}^{n+1,\dagger}
= {\bf{U}}^n - \Delta t \left[\Ub^{\mathrm{ADV}} \cdot \nabla {\bf{U}}^{{n+\myhalf},{\rm pred}} \right]
- \Delta t \left[ \frac{\beta_0^{n+\myhalf}}{\rho^{n+\myhalf}} \nabla \left( \frac{\pi^{n-\myhalf}}{\beta_0^{n-\myhalf}} \right) + \frac{\left(\rho^{n+\myhalf}-\rho_0^{n+\myhalf}\right)}{\rho^{n+\myhalf}} g^{{n+\myhalf}} {\bf{e}}_r \right],
\end{equation}
where
\begin{equation}
\rho^{n+\myhalf} = \frac{\rho^n + \rho^{n+1}}{2}, \qquad \rho_0^{n+\myhalf} = \frac{\rho_0^n + \rho_0^{n+1}}{2}.
\end{equation}
Again, the $\dagger$ superscript refers
to the fact that the updated velocity does not satisfy the divergence constraint,
\begin{equation}
\nabla \cdot \left(\beta_0^{n+1} {\bf{U}}^{n+1} \right) = \beta_0^{n+1} \left[ S^{n+1} - \frac{1}{\overline{\Gamma}_1^{n+1}p_0^{n+1}}\left(\frac{\partial p_0}{\partial t}\right)^{{n+\myhalf}}\right].\label{eq:div3}
\end{equation}
We use an approximate projection to project ${\bf{U}}^{n+1,\dagger}$ onto the space of velocities that satisfy the constraint to obtain ${\bf{U}}^{n+1}$ using a ``nodal'' projection.
This projection necessarily differs from the MAC projection used in
{\bf Step 3} and {\bf Step 7} because the velocities in those steps are defined
on faces and ${\bf{U}}^{n+1}$ is defined at cell centers, requiring different divergence
and gradient operators.
Furthermore, as part of the nodal projection, we also define a nodal new-time perturbational pressure, $\pi^{n+\myhalf}$.
Refer to Appendix \ref{Sec:Projection} for more details.
\end{description}
This completes one step of the algorithm.
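To make the predictor--corrector treatment of the expansion term concrete, Steps 2 and 6 above amount to the following minimal sketch (inputs may be scalars or arrays):
\begin{verbatim}
def predict_S_half(S_nm1, S_n, dt_nm1, dt_n):
    # Step 2: S^{n+1/2,*} = S^n + (dt^n/2) (S^n - S^{n-1}) / dt^{n-1}
    return S_n + 0.5 * dt_n * (S_n - S_nm1) / dt_nm1

def correct_S_half(S_n, S_np1_star):
    # Step 6: S^{n+1/2} = (S^n + S^{n+1,*}) / 2
    return 0.5 * (S_n + S_np1_star)
\end{verbatim}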
To initialize the simulation we use the same procedure described in Paper III.
At the beginning of each simulation, we set initial values for ${\bf{U}}, \rho X_k$, and $\rho h$, and perform a sequence of projections
(to ensure the velocity field satisfies the divergence constraint)
followed by a small number of steps of the temporal advancement scheme to iteratively
find initial values for $\pi^{n-\sfrac{1}{2}}$ and $S^0$ and $S^1$ for use in the first time step.
Our approach to adaptive mesh refinement is algorithmically the same as the treatment described
in Section 5 of Paper V; we refer the reader there for details.
MAESTROeX supports refinement ratios of 2 between levels.
We note that for spherical problems, AMR is only available for the case of a uniformly-spaced base state.
\section{Performance and Validation}\label{sec:results}
\subsection{Performance and Scaling}\label{sec:scaling}
We perform weak scaling tests for simulations of convection preceding ignition in a spherical, full-star Chandrasekhar mass white dwarf.
The simulation setup remains the same as reported in Section 3 of \cite{MAESTRO_AMR} and originally used in \cite{MAESTRO_convection}, and thus we emphasize that these scaling tests are performed using meaningful, scientific calculations.
Here, we perform simulations using $256^3, 512^3, 768^3, 1024^3, 1280^3$, and $1536^3$ grid cells on a spatially uniform grid (no AMR).
We divide each simulation into $64^3$ grids, so these simulations contain between 64 grids ($256^3$) and 13,824 grids ($1536^3$).
These simulations were performed using the NERSC cori system on the Intel Xeon Phi (KNL) partition.
Each node contains 68 cores, each capable of supporting up to 4 hardware threads (i.e., a maximum of 272 hardware threads per node).
For these tests, we assign 4 MPI tasks to each node, and 16 OpenMP threads per MPI process.
Each MPI task is assigned to a single grid, so our tests use between 64 and 13,824 MPI processes (i.e., between 1,024 and 221,184 total OpenMP threads).
For $64^3$ grids we discovered that using more than 16 OpenMP threads did not decrease the wallclock time due to a lack of work available per grid; in principle one could use larger grids, fewer MPI processes, and more threads per MPI process to obtain a flatter weak scaling curve, but the overall wallclock time would increase except at extremely large numbers of MPI processes (beyond the range we tested here).
Thus, the more accurate measure of weak scaling is to consider the number of MPI processes, since the scaling plot would look virtually identical for larger thread counts.
Note that the largest simulation used roughly 36\% of the entire computational system.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=3.0in]{MAESTRO_scaling1.eps} \hspace{0.5em}
\includegraphics[width=3.0in]{MAESTRO_scaling2.eps}
\caption{\label{fig:scaling} (Left) Weak scaling results for a spherical, full-star white dwarf calculation using the original MAESTRO code, MAESTROeX, and MAESTROeX with base state evolution disabled. Shown is the average wallclock time per time step.
(Right) Weak scaling results showing the average wallclock time per time step spent in the cell-centered and nodal linear solvers within a full time step of the aforementioned simulations.}
\end{center}
\end{figure}
In the left panel of Figure \ref{fig:scaling} we compare the wallclock time per time step as a function of total core count (in this case, the total number of OpenMP threads) for the original FBoxLib-based MAESTRO implementation to the AMReX MAESTROeX implementation.
These tests were performed using the original temporal integration strategy in \cite{MAESTRO_V}, noting that the new temporal integration with and without the irregular base state gives essentially the same results.
We also include a plot of MAESTROeX without base state evolution.
Comparing the original and new implementations, we see similar scaling results except for the largest simulation, where MAESTROeX performs better.
We see that the increase in wallclock time from the smallest to largest simulation is roughly 42\%.
We also note that without base state evolution, the code runs 14\% faster for small problems and scales much better, with the wallclock time increasing by only 13\% from the smallest to the largest simulation.
This is quite remarkable since there are 3 linear solves per time step (2 cell-centered Poisson solves used in the MAC projection, and a nodal Poisson solve used to compute the updated cell-centered velocities).
Contrary to our prior assumptions, the linear solves are not the primary scaling bottleneck in this code.
In the right panel of Figure \ref{fig:scaling}, we isolate the wallclock time required for these linear solves and see that (i) the linear solves only use 20-23\% of the total computational time, and (ii) the increase in the solver wallclock time from the smallest to largest simulation is only 28\%.
Further profiling reveals that the primary scaling bottleneck is the average operator.
The averaging operator requires collecting the sum of Cartesian data onto one-dimensional arrays holding every possible mapping radius.
This amounts to at least 24,384 double precision values (for the $256^3$ simulation) up to 883,584 values (for the $1536^3$ simulation).
The averaging operator requires a global sum reduction over all processors, and the communication of this data is the primary scaling bottleneck.
For the simulation without base state evolution, this averaging operator is only called once per time step (as opposed to 14 times per time step when base state evolution is included).
The difference in total wallclock times with and without base state evolution is almost entirely due to the averaging.
Note that as expected, advection, reactions, and calls to the equation of state scale almost perfectly, since there is only a single parallel communication call to fill ghost cells.
\subsection{White Dwarf Convection}\label{sec:whitedwarf}
To explore the accuracy of the new temporal algorithm, we now analyze in detail three-dimensional, full-star calculations of convection preceding ignition in a white dwarf. Again, we refer the reader to \cite{MAESTRO_AMR} and \cite{MAESTRO_convection} for setup details. We implement both the uniformly- and irregularly-spaced base states with the new temporal algorithm, while only uniform base state spacing is used with the original algorithm. As in Section 3 of \cite{MAESTRO_AMR}, we choose the peak temperature and peak Mach number as the two diagnostics to compare the simulations. Figure \ref{fig:wdconvect_256_maxvar} shows the evolution of both peak temperature and peak Mach number until the time of ignition on a single-level grid with a resolution of $256^3$. The simulation using the new temporal scheme with the uniformly-spaced base state gives the same qualitative results as the original scheme, and predicts a similar time of ignition ($t=7810$ s compared to $t=7850$ s for the original algorithm). The simulation using the new temporal scheme with the irregularly-spaced base state displays a slightly different peak temperature behavior during the initial transition period $t<150$ s, which results in the difference between the curves post transition. We strongly suspect that this is a result of using a different initial model file (the resolution near the center of the star is much coarser with the irregular spacing than with the uniform spacing). Fortunately, the simulation with irregular base state spacing still follows the same trend as with uniform spacing, and the star is shown to ignite at an earlier time, $t=6840$ s.
Figure \ref{fig:wdconvect_Tmax} shows the peak temperature evolution over the first 1000 s on two grids of differing resolutions, $256^3$ and $512^3$.
Limited allocations prevented us from running this simulation further.
As previously suspected, the simulation using the irregularly-spaced base state agrees much more closely with the results using uniform spacing as the resolution increases.
This is most likely due to the increased resolution of the initial model, which more closely matches the uniformly-spaced counterpart.
This is especially important when computing the base state pressure from base state density, which is particularly sensitive to coarse resolution near the center of the star.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=2.75in]{wdconvect_256_maxT.eps} \hspace{0.5in}
\includegraphics[width=2.75in]{wdconvect_256_maxMach.eps}
\caption{\label{fig:wdconvect_256_maxvar} (left) Peak temperature, $T_{\text{peak}}$, and (right) peak Mach number
in a white dwarf until time of ignition at resolution of $256^3$ for three different MAESTROeX algorithms.}
\end{center}
\end{figure}
\begin{figure}[hbt]
\begin{center}
\includegraphics[width=3.25in]{wdconvect_compare_Tmax.eps}
\caption{\label{fig:wdconvect_Tmax} Peak temperature, $T_{\text{peak}}$ in a white dwarf at grid resolutions of
$256^3$ (dotted line) and $512^3$ (solid line) until $t=1000$ s for uniform (d$x$) and irregular (d$r$) base state spacing.
Note that the irregularly-spaced solution agrees better
with the uniformly-spaced solution as the resolution increases.}
\end{center}
\end{figure}
In terms of efficiency, all three simulations on the $256^3$ single-level grid were run on Cori haswell with 64 processors and 8 threads per core, and their run times were compared. As a result of simplifying the algorithm by eliminating the evolution equations for the base state density and pressure, the simulation using the new temporal algorithm took only 6.75 s per time step with the uniformly-spaced base state, which is 13\% faster than the 7.77 s per time step when using the original scheme. However, we do observe a 25\% increase in run time (9.72 s per time step) when using the irregularly-spaced base state with the new algorithm. This can be explained by the irregularly-spaced base state array being much larger than its uniformly-spaced counterpart, which requires additional communication and computation time. One possible strategy to significantly reduce the run time is to truncate the base state beyond the cutoff density radius.
\subsection{AMR Performance}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=2.5in]{reacting_bubble_amr.eps} \hspace{2.5em}
\includegraphics[width=2.5in]{wdconvect_amr_3grid.eps}
\caption{\label{fig:amr_grids} Initial grid structures with two levels of refinement.
The red, green, and black lines represent grids of increasing refinement.
(Left) Profile of $T - \bar{T}$ for a hot bubble in a white dwarf environment.
(Right) Region of star where $\rho\ge 10^5 \text{ g cm}^{-3}$ in a full white dwarf star simulation. }
\end{center}
\end{figure}
We now test the performance of MAESTROeX for adaptive, three-dimensional simulations that track localized regions of interest over time. Figure \ref{fig:amr_grids} illustrates the initial grid structures with two levels of refinement for both planar and spherical geometries, where the grid is refined according to the temperature and density profiles, respectively. For each of the problems we tested, the single-level simulation was run using the original temporal scheme, and the adaptive simulations were run using both the original and new temporal algorithms. We want to show that the adaptive simulation can give similar results to the single-level simulation in a more computationally efficient manner.
In the planar case, we use the same problem setup for a hot bubble rising in a white-dwarf environment as described in Section 6 of Paper V. Here we use a domain size of $3.6\times 10^7$ cm by $3.6\times 10^7$ cm by $2.88\times 10^8$ cm, and allow the grid structure to change with time. The single-level simulation at a resolution of $128^2 \times 1024$ was run on Cori haswell with 48 processors and took approximately 33.5 s per time step (averaged over 10 time steps) using either the original or new temporal algorithm. The adaptive simulation has a resolution of $32^2 \times 256$ at the coarsest level, resulting in the same effective resolution at the finest level as the single-level simulation. We tag cells that satisfy $T-\bar{T} > 3\times 10^7$ K as well as all cells at that height. The adaptive run took only 3.7 s per time step, and this 89\% decrease in runtime is mostly due to the fact that initially only 6.25\% of the cells (1,048,576 out of $128^2 \times 1024$ cells) are refined at the finest level. Figure \ref{fig:bubble_results} shows a series of planar slices of the temperature profile at time intervals of 1.25 s, and verifies that the adaptive simulation is able to capture the same dynamics as the single-level simulation at much lower computational cost.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=2.5in]{reacting_bubble_result.eps} \hspace{2.5em}
\includegraphics[width=2.5in]{reacting_bubble_amr_result.eps}
\caption{\label{fig:bubble_results} Time-lapse cross-section of a hot bubble in a white dwarf environment at
$t = 0$, 1.25, and 2.5 s for (left) single-level simulation, and
(right) adaptive simulation at the same effective resolution. The red, green, and black boxes indicate grids of increasing resolution.}
\end{center}
\end{figure}
We continue to use the full-star white dwarf problem described in Section \ref{sec:whitedwarf} to test adaptive simulations on spherical geometry. The adaptive grid is refined twice by tagging the density at $\rho > 10^5$ g cm$^{-3}$ on the first level and $\rho > 10^8$ g cm$^{-3}$ on the second level. These tagging values have been shown to work well previously in Paper V, but we have found that the code may encounter numerical difficulties when the tagging values are too close to each other in subsequent levels of refinement.
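The nested density criterion used here can be sketched analogously (again an illustrative snippet of ours rather than MAESTROeX source):
\begin{verbatim}
import numpy as np

def tag_spherical(rho, rho1=1.0e5, rho2=1.0e8):
    # Target refinement level per cell: refine once where rho > 1e5 g/cm^3
    # and a second time where rho > 1e8 g/cm^3.
    level = np.zeros(rho.shape, dtype=int)
    level[rho > rho1] = 1
    level[rho > rho2] = 2
    return level
\end{verbatim}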
The simulation on a single-level grid of $512^3$ resolution took 12.7 s per time step (again averaged over 10 time steps). The adaptive grid has a resolution of $128^3$ at the coarsest level and an effective resolution of $512^3$ at the finest level. On this grid, 27.8\% of the cells (4,666,880 out of $256^3$ cells) are refined at the first level and 5.3\% (7,176,192 out of $512^3$ cells) at the second. The adaptive simulation took 5.61 s per time step, a speedup of more than a factor of 2. Both simulations are computed to $t=2000$ s and we choose to use the peak temperature as the diagnostic to compare the results. Figure \ref{fig:wdconvect_amr_Tmax} shows the evolution of the peak temperature for all three runs and shows that the adaptive simulation gives the same qualitative result as the single-level simulation. We do not expect the curves to match up exactly because the governing equations are highly nonlinear, and slight differences in the solution caused by solver tolerance and discretization error can change the details of the results. Each simulation was run on Cori haswell with 512 processors and 4 threads per core.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=4.0in]{wdconvect_amr_Tmax.eps}
\caption{\label{fig:wdconvect_amr_Tmax} Peak temperature, $T_{\text{peak}}$, in a white dwarf from $t=0$ to $2000$ s
for grids with effective resolution of $512^3$. We can see that the adaptive grids with two levels of
refinement give very similar solution trends compared to the single-level grid.}
\end{center}
\end{figure}
\section{Conclusions and Future Work}\label{sec:conclusions}
We have incorporated a new temporal integrator and new spatial mapping options into our low Mach number solver, MAESTROeX.
The new AMReX-enabled code scales well on large fractions of supercomputers with multicore architectures.
Future software enhancements will include GPU implementation.
In particular, the AMReX-based companion code, the compressible CASTRO code \citep{CASTRO}, has recently ported hydrodynamics and reactions to GPUs \citep{CASTRO_GPU}.
We plan to leverage the newly implemented mechanisms for offloading compute kernels to GPUs inside of the AMReX software library itself.
Our future scientific investigations include convection in massive rotating stars \citep{heger2000presupernova}, the convective Urca process in white dwarfs \citep{willcox2016type}, solar physics \citep{wood2018self}, and magnetohydrodynamics \citep{wood2015three,wood2011sun}.
Our future algorithmic enhancements include more accurate and higher-order multiphysics coupling strategies based on spectral deferred corrections \citep{dutt2000spectral,bourlioux2003high}.
This framework has been successfully used in terrestrial combustion \citep{pazner2016high,nonaka2018conservative}.
\acknowledgements
The work at LBNL was supported by the U.S. Department of Energy's
Scientific Discovery Through Advanced Computing (SciDAC) program under
contract No. DE-AC02-05CH11231. The work at Stony Brook was supported
by DOE/Office of Nuclear Physics grant DE-FG02-87ER40317 and through
the SciDAC program DOE grant DE-SC0017955. This research used
resources of the National Energy Research Scientific Computing Center
(NERSC), a U.S. Department of Energy Office of Science User Facility
operated under Contract No. DE-AC02-05CH11231.
\software{AMReX \citep{AMReX, AMReX_JOSS}, StarKiller Microphysics \citep{starkiller}}
\facilities{NERSC}
\section{The Model}
\label{S.model}
We consider a one-dimensional system of $2M$ ($M\in\mathbb{N}$)
interacting spin-$1/2$ particles in the presence of a uniform
magnetic field. The interaction is of the isotropic and planar
(XX) Heisenberg form
\begin{equation}
\label{e.hamilton0}
\hat{\cal H}_0{=}-\frac{J}{2}\sum_{n{=}-M+\frac12}^{M-\frac12}
\big(\hat \sigma^x_n \hat
\sigma^x_{n+1} {+}\hat\sigma^y_n\hat\sigma^y_{n+1}
{+}2 h \hat \sigma^z_n\big) ~,
\end{equation}
where $(\hat\sigma^x_n, \hat\sigma^y_n, \hat\sigma^z_n)$ are the
Pauli matrices for the spin at site $n$, $J$ is the homogeneous
coupling, and $h$ is the magnetic field. The $2M$ lattice sites are
labelled by the half-integer index
$n=-M+\frac{1}{2},\ldots,M-\frac{1}{2}$. Correspondingly, the lattice
bonds are labelled by the integer index $b=-M+1,\ldots,M$, with
$b=n\,{+}\,1/2$ indicating the bond between sites $n$ and $n+1$. This
notation allows the reflection symmetry with respect to the impurity
bond to emerge more clearly in many of the following equations
involving the correlation functions which, on the other hand, refer
to lattice sites. The enforcement of the periodic boundary conditions (PBC)
${\hat{\vec{\sigma}}}_{M+n}\,{=}\,{\hat{\vec{\sigma}}}_{-M+n}$ makes
Eq.~\eqref{e.hamilton0} the Hamiltonian of a ring.
We introduce a single bond impurity (BI)
by varying the exchange integral that generates the bond $b{=}0$, i.e. the interaction strength between the two spins on sites
${n{=}{-}\frac12}$ and $n{=}\frac12$ (which we will refer to as the impurity spins). This implies adding the term
\begin{equation}
\hat{\cal H}_{\rm{I}} {=} \frac{J{-}j}2\big(
\hat\sigma^x_{-\frac12} \hat\sigma^x_{\frac12}
+ \hat\sigma^y_{-\frac12} \hat\sigma^y_{\frac12}\big)
\label{e.hamilton-imp}
\end{equation}
to the translation-invariant Hamiltonian in
Eq.~\eqref{e.hamilton0}. From now on, we assume $J{=}1$ as the
energy unit. The resulting system
$\hat{\cal{H}}{=}\hat{\cal{H}}_0\,{+}\,\hat{\cal{H}}_{\rm{I}}$ is
illustrated in Fig.~\ref{f.modello}, where $j$ gives the coupling
strength and $j{=}1$ ($j{=}0$) corresponds to the well-known
$2M$-site spin chain with periodic (open) boundary conditions,
$2M$-PBC ($2M$-OBC)~\cite{LiebSM1961}. For any other value of $j$,
we diagonalize the Hamiltonian as follows.
\begin{figure}[b!]
\includegraphics[width=60mm]{modello.pdf}
\caption{(Color online) A ring of interacting spin-$1/2$ particles,
all coupled through an XX model, includes a bond defect: while all
spin pairs $(n,n+1)$ with $n{\neq}-1/2$ are mutually interacting with
strength $J$, the pair $(-1/2,1/2)$ experiences the strength $j$. The
spins are all subjected to a homogeneous magnetic field $h$.}
\label{f.modello}
\end{figure}
The total Hamiltonian $\hat{\cal H}_0+\hat{\cal H}_{\rm{I}}$ can be
mapped via the Jordan-Wigner transformation~\cite{LiebSM1961} into
\begin{equation}
\label{e.hamiltonf}
\hat{\cal H}{=}{-}\mkern-18mu \sum_{n{=}{-}M{+}\frac12}^{M{-}\frac12}
\mkern-18mu (\hat c^\dagger_{n+1}\hat c_n{+}h.c.
{+} 2 h \hat c^\dagger_n \hat c_n)
{-}(j{-}1)(\hat c^\dagger_{\frac12} \hat c_{-\frac12}{+}h.c.),
\end{equation}
where $\{c_n,c^{\dagger}_n\}$ are the fermionic destruction and
creation operators. As translation symmetry is broken for $j{\ne}1$,
a Fourier transform does not diagonalize Eq.~\eqref{e.hamiltonf}. It
is nevertheless possible to solve the model analytically by making
use of a Green function approach~\cite{Economoubook,Pury1991}. The
key steps of this procedure are outlined in Appendix~\ref{a.diag} and
the diagonalized Hamiltonian in the thermodynamic limit finally reads
\begin{equation}
\label{e.hamiltonfin}
\hat{\cal H}\,{=}\,\int_{-\pi}^{\pi}\frac{dk}{2\pi}~
E_k~\hat\zeta^{\dagger}_k\hat\zeta_k
+E_+~\hat\zeta^{\dagger}_+ \hat\zeta_+
+E_-~\hat\zeta^{\dagger}_- \hat\zeta_- ~.
\end{equation}
The first term represents the intra-band contributions and we have
introduced the operators
\begin{equation}\label{E.modes}
\hat\zeta_k=\frac1{\sqrt{2M}}
\sum_ne^{-ikn}(1{+}f_{kn})\,\hat{c}_n~,
\end{equation}
which annihilate fermions with energy $E_k=2(\cos{k}-h)$.
Here, the functions $f_{kn}$ account for the spatial distortion of
the intra-band excitations as
\begin{equation}\label{E.distorsion}
f_{kn} {=} \left\{\begin{aligned}
&\frac{i(j^2{-}1)e^{2ikn}}
{2\sin|k|{-}i(j^2{-}1)e^{i|k|}}
~~~~~~~~~~{\rm if}~~ kn>0,~~
\\
&\frac{2(j{-}1)\sin|k|{+}i(j^2{-}1)e^{i|k|}}
{2\sin|k|{-}i(j^2{-}1)e^{i|k|}}
~~~{\rm if}~~ kn<0,~~
\end{aligned}\right.
\end{equation}
Such distortion is evidently due to the BI ($f_{kn}{=}0$ for
$j{=}1$), and is responsible for the oscillations observed in the
correlations, as discussed in the following Section. The last two
terms of Eq.~\eqref{e.hamiltonfin} account for two discrete-energy
eigenstates which appear only for $j\,{>}\,1$: their energies
are $E_\pm\,{=}\,-2h\pm(j{+}1/j)$, lying above and below the band,
respectively. They correspond to excitations that, once expressed in
terms of direct lattice-site fermionic operators, take the form
\begin{equation}\label{E.locstates}
\hat \zeta_{\pm}=\sqrt{\sinh{q}}
\sum_n(\pm)^{n+\frac12}e^{-q|n|}\,\hat c_n
\end{equation}
with $q=\ln{j}$ being the reciprocal of the localization length.
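These energies are easily cross-checked numerically. The following Python sketch (ours, independent of the Green-function derivation of Appendix~\ref{a.diag}) diagonalizes the single-particle hopping matrix of Eq.~\eqref{e.hamiltonf} on a large ring and verifies that exactly two levels split off the band at $E_\pm$:
\begin{verbatim}
import numpy as np

def hopping_matrix(N, j, h):
    # Single-particle matrix of the quadratic Hamiltonian: hopping -1 on
    # every bond, -j on the impurity bond, on-site term -2h.
    H = -2.0 * h * np.eye(N)
    for n in range(N):
        t = -j if n == 0 else -1.0
        H[n, (n + 1) % N] = H[(n + 1) % N, n] = t
    return H

N, j, h = 400, 2.0, 0.3
E = np.linalg.eigvalsh(hopping_matrix(N, j, h))
# two levels detach from the band [-2h-2, -2h+2] at E_pm = -2h -+ (j+1/j):
assert np.allclose([E[0], E[-1]], [-2*h - (j + 1/j), -2*h + (j + 1/j)])
\end{verbatim}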
Let us now compare the behavior of the system in the two extreme
cases of $j\,{=}\,0$ and $j\,{\to}\,\infty$.
First, one can easily see that for $kn\,{<}\,0$ in both cases one has
$f_{kn}\,{=}\,-1$, namely the impurity acts as a purely reflective
barrier yielding complete backscattering.
On the other hand, for $kn\,{>}\,0$ the distortions of the in-band
excitations in the two limits read
$f_{kn}\,{=}\,-e^{i(2kn{\pm}|k|)}$, respectively. It follows that for
$j\,{\to}\,\infty$ the distortion at the impurity sites is
$f_{k,\frac12}=f_{k,-\frac12}=-1$, meaning that these sites
completely decouple from the rest of the system, their state being
exclusively determined by the two, now completely localized,
out-of-band states
$\ket{E_{\pm}}\,{=}\,\frac1{\sqrt2}\big({c^{\dagger}_{\frac12}\mp
c^{\dagger}_{-\frac12}}\big)\ket{0}$: as they do not take part in the
dynamics, the spins at sites $n=\pm 3/2$ take the role of head and
tail of a segment of length $2M{-}2$. Of course, for $j\,{=}\,0$ the
resulting segment has length $2M$. This argument suggests that one BI
can indeed change the boundary conditions from PBC to OBC. In other
terms, a segment can be obtained not only by actually cutting the
ring ($j=0$), but also by making the interaction between the spins
sitting at sites $n\,{=}\,\pm1/2$ strong enough with respect to the
coupling between all the other nearest-neighbor spins
($j\,{\gg}\,1$), as to effectively decouple them from the rest of the
system.
In the next Section we further explore this idea in the case
$M\to\infty$, where the availability of the analytical results
presented here allows us, through a straightforward application of
Wick's theorem~\cite{LiebSM1961}, to exactly evaluate two-point
correlation functions, concurrence, and quantum
discord~\cite{discord1,discord2,discordreview}. We focus on whether
the ring-cutting mechanism described above is already efficient for
moderately large values of $j$.
\section{Effective ring-cutting mechanism: study of the two-point functions}
\label{S.cutting-2points}
In this Section we study the effects of the BI on two-point
functions, i.e. quantities relative to spin pairs. Since we only
consider pairs of nearest-neighbor spins, such quantities can be
labelled by the integer bond-index $b$ representing the distance in
lattice spacings from the BI, according to $O_{n,n+1}=O_b$ with
$b=n{+}{1}/{2}$. We first analyze the nearest-neighbor magnetic
correlations
$g^{\alpha,\alpha}_b{\equiv}\valmed{\sigma^\alpha_n\sigma^\alpha_{n+1}}$
($\alpha=x,z$). For any $j{\neq} 1$, Friedel-like oscillations appear
and induce a spatial modulation of the correlations with periodicity
$p{=}{\pi}/{k_F}$, where $k_F{=}\cos^{-1} h$ is the Fermi momentum.
In Fig.~\ref{F.xxzzoscillh0} we consider $h=0$, corresponding to
$p=2$, and study $g^{\alpha,\alpha}_b$ against the value of $b$ for
various choices of $j$. The presence of the BI modifies the strength
of correlations and the following relations ($b$ is an integer)
clearly emerge
\begin{equation}
\begin{aligned}
&|g^{\alpha,\alpha}_{2b}(j{<}1)|<|g^{\alpha,\alpha}(j{=}1)|~{<}~|g^{\alpha,\alpha}_{2b}(j{>}1)|,\\
&|g^{\alpha,\alpha}_{2b{+}1}(j{<}1)|>|g^{\alpha,\alpha}(j{=}1)|~{>}~|g^{\alpha,\alpha}_{2b{+}1}(j{>}1)|,
\end{aligned}
\end{equation}
where the bond-index dependence is omitted for $j{=}1$, as in the
uniform case PBC guarantee translation invariance. From the above
inequalities we deduce that the results corresponding to the limit
$j{\to}\infty$ cannot possibly be related to the behavior of the
segment obtained by an actual cut, i.e. what is found by setting
$j{=}0$. Indeed, $g^{\alpha,\alpha}_b(\infty)$ is in general
different from $g^{\alpha,\alpha}_b(0)$. In fact, as already
mentioned at the end of the above Section, we expect the
$j{\to}\infty$ limit to reproduce the behavior of a segment with head
and tail at $n{=}\pm{3}/{2}$, i.e. $b{=}\pm2$. Therefore, in all
those cases for which the actual value of $M$ is not relevant, such
as in the thermodynamic limit considered here, the meaningful
comparison to be performed involves $g^{\alpha,\alpha}_b(j{=}0)$ and
$g^{\alpha,\alpha}_{b{+}1}(j{\to}\infty)$. In order to quantitatively
check to what extent a model with large $j$ can be actually
considered to behave as a segment, in Fig.~\ref{F.corr12e23} we
compare $g^{\alpha,\alpha}_2$ and $g^{\alpha,\alpha}_3$ for
increasing values of $j$. Clearly, the correlations along $x$ and $z$
almost match the values corresponding to a true segment already for
$j>8$, confirming that an effective ring-cutting mechanism takes
place.
\begin{figure}
\includegraphics[width=\columnwidth]{xxalong.pdf}
\\
\includegraphics[width=\columnwidth]{zzalong.pdf}
\caption{(Color online)
Correlators $\valmed{\hat\sigma_n^x\hat\sigma_{n+1}^x}$ (top) and
$\valmed{\hat\sigma_n^z\hat\sigma_{n+1}^z}$ (bottom) for $j{=}0, 0.5,
0.8, 1.5, 2, 11$ (corresponding to increasing absolute values for
$n{+}1/2$ even). The straight lines correspond to the correlators in
the PBC case, $j\,{=}\,1$. For $j{=}11$ the data are indistinguishable
from the OBC limit.}
\label{F.xxzzoscillh0}
\end{figure}
\begin{figure}[b!]
\includegraphics[height=80mm,angle=90]{corr12e23.pdf}
\caption{(Color online) The nearest-neighbor correlation functions
$\valmed{\hat\sigma_n^x\hat\sigma_{n+1}^x}$ and
$\valmed{\hat\sigma_n^z\hat\sigma_{n+1}^z}$ corresponding to the
second and third bond after the defect, vs $j$. The $xx$ ($zz$)
correlators take positive (negative) values; their absolute value
increases (decreases) with $j$ for $n\,{=}\,3/2$ ($n\,{=}\,5/2$). The
dashed lines show that the third-bond correlators at $j\to\infty$
behave as the second-bond correlators of the open chain, i.e. $j{=}0$.}
\label{F.corr12e23}
\end{figure}
In order to provide an all-round characterization of our proposal,
we now complement the analysis performed above by addressing the
leakage of information out of head and tail of the segment
effectively obtained by increasing $j$. We quantify the extent of
such leakage by addressing the values taken by both classical
correlations (CC) and quantum discord
(QD)~\cite{discord1,discord2,discordreview,nota} {\it across} the
impurity, i.e., between two spins sitting on opposite sides with
respect to the BI, normalized by their respective values for $j{=}1$.
The results corresponding to considering the spins at sites
$n={\pm}{3}/{2}$ are shown in Fig.~\ref{F.decaywithinset}. Both CC
and QD across the BI are non-monotonic functions of the strength
$j$. For small values of $j$, both rapidly grow. On the other
hand, the range $j\gg1$ corresponds to the monotonic decrease of
all forms of correlations, thus demonstrating that the ring is
effectively cut. Remarkably, for $j\,{\gtrsim}\,1$, CC and QD are
larger than their value at $j\,{=}\,1$. This is due to the spread
of the localized state over these sites, yielding an enhancement
similar to that reported in Refs.~\onlinecite{OsendaHK2003,
ApollaroCFPV2008,PlastinaA2007}, to which we refer for a detailed
discussion. CC and QD behave in very similar ways, decaying
asymptotically, for $j\gg1$, as $j^{-2}$ (cf. the inset of
Fig.~\ref{F.decaywithinset}). This power-law decay stems from the
behavior of the magnetic correlations. In fact, these enter both
the expression of the concurrence (cf. Eq.~\eqref{E.Conc} below)
and those of QD and CC (which are not reported here as too lengthy
to be informative). In particular, by considering
Eqs.~\eqref{E.modes} and~\eqref{E.distorsion} in the $j\gg 1$
limit, and evaluating by standard methods the magnetic correlation
functions (as done, for instance, in Ref.~\cite{LiebSM1961}), we
find that $\valmed{\hat\sigma_n^x\hat\sigma_{m}^x}={\cal
O}\left(j^{-\left(\left|n\right|+\left|m\right|\right)}\right)$,
whereas $\valmed{\hat\sigma_n^z\hat\sigma_{m}^z}={\cal
O}\left(j^{-2}\right)$, regardless of the relative distance
between the spins. As a consequence, the scaling law $j^{-2}$
reported in the inset of Fig.~\ref{F.decaywithinset} originates
from the correlations along the $z$-axis and is thus independent
of the site-separation. On the contrary, the correlation functions
along the $x$-axis shown in Fig.~\ref{F.corr12e23} do depend
on the distance, as reported above.
\begin{figure}
\includegraphics[height=80mm,angle=90]{cceqdins.pdf}
\caption{(Color online) QD and CC (normalized with respect
to the $j\,{=}\,1$ value) plotted vs $j$ for the two spins at sites
$\pm3/2$, i.e., sitting at opposite sides of the impurity. Cutting
the chain affects both quantum and classical correlations in an
essentially identical way. Inset: log-log plot of QD and CC (here
indistinguishable) vs $j$, showing that they obey a $j^{-2}$ scaling
law, which is in fact independent of the distance of the sites.}
\label{F.decaywithinset}
\end{figure}
We conclude this Section by briefly discussing how, by tuning the
impurity strength, it is possible to exploit the
Friedel oscillations in order to spatially modulate the
concurrence~\cite{wootters} between neighboring spins. In
Fig.~\ref{F.chvar} we show the nearest-neighbor concurrence for
$j{=}6$ at different values of $h$. Analytically, the concurrence
$C_{n,m}$ depends on the magnetic correlation functions
as~\cite{Fubinietal2006}
\begin{equation}\label{E.Conc}
C_{n,m}\,{=}\,\max{[0,\valmed{\hat\sigma^x_n{\otimes}
\hat\sigma^x_m}{-}\frac12\sqrt{(S^{zz}_{nm})^2{-}(s^{zz}_{nm})^2}]}
\end{equation}
with
$S^{zz}_{nm}{=}1{\pm}\valmed{\hat\sigma^z_n{\otimes}\hat\sigma^z_m}$
and
$s^{zz}_{nm}{=}\valmed{\hat\sigma^z_n}{+}\valmed{\hat\sigma^z_m}$.
The values of $C_{n,m}$ achieved in our system are the same as those
of an open-boundary spin chain in the presence of a strong magnetic
field on a single spin~\cite{SonAPV09,ApollaroCFPV2008}. Moreover, we
notice the presence of a periodic spatial modulation (with respect to
the value of concurrence achieved for PBC), determined by the
periodicity $p={\pi}/{\cos^{-1} h}$ of the Friedel oscillations, as
reported also for different impurity types in
Refs.~\onlinecite{OsendaHK2003,ApollaroCFPV2008.2}.
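For completeness, Eq.~\eqref{E.Conc} is straightforward to evaluate once the correlators are known; the Python sketch below (ours; the correlator values must be supplied, and the sign in $S^{zz}_{nm}$ is kept as an explicit parameter) also prints the Friedel periods quoted in the caption of Fig.~\ref{F.chvar}:
\begin{verbatim}
import numpy as np

def concurrence(gxx, gzz, mz_n, mz_m, sign=+1):
    # Eq. (E.Conc): C = max[0, <sx sx> - (1/2) sqrt((S^zz)^2 - (s^zz)^2)]
    # with S^zz = 1 +- <sz sz>, s^zz = <sz_n> + <sz_m> (Pauli operators).
    Szz = 1.0 + sign * gzz
    szz = mz_n + mz_m
    return max(0.0, gxx - 0.5 * np.sqrt(max(Szz**2 - szz**2, 0.0)))

for h in (0.5, 1.0 / np.sqrt(2.0)):
    print(h, np.pi / np.arccos(h))   # Friedel periods p = 3 and p = 4
\end{verbatim}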
\begin{figure}[h]
\centering
{\includegraphics[width=0.8\columnwidth]{ch1su2Mauro.pdf}}\\
{\includegraphics[width=0.8\columnwidth]{ch1sqrt2Mauro.pdf}}
\caption{Nearest-neighbor concurrence $C_{n,n+1}$ for $j\,{=}\,6$ vs
the bond index $n{+}\frac12$. Panels (a) and (b) are for
$h\,{=}\,0.5,\frac{1}{\sqrt{2}}$ respectively. The straight dashed
line shows the value of the concurrence at $j\,{=}\,1$. The magnetic
field sets the periodicity of the one- and two-point spin
correlators, which enter the concurrence, to $p\,{=}\, 3, 4$
respectively.}
\label{F.chvar}
\end{figure}
\section{Effective ring-cutting mechanism: analysis of the state fidelity}
\label{S.fidelity}
In order to further verify the efficiency of the proposed mechanism,
we now take a different point of view and consider a global figure of
merit from which we can obtain indications on the similarity between
the state of the cut ring and that of a true segment. As a
description of the state of the former, we choose the reduced density
matrix
$\rho{=}\Tr_{n{=}{\pm\frac{1}{2}}}\left[\ket{\Omega}\!\!\bra{\Omega}\right]$
of a $2(M-1)$ spin system where the impurity spins have been traced out of the ring. As for the state of a segment, which embodies our target state, we take the pure state $\ket{\Sigma}$ of a system of $2(M-1)$ spins with OBC.
As a measure of closeness between two quantum states we use the
quantum fidelity~\cite{Josza94} $\mathcal
F\!\left(\ket{\Sigma}\!,\!\rho\right) {=}
\valmed{\Sigma\vert\rho\vert\Sigma}$.
The ground state of a free-fermion model such as the one in
Eq.~\eqref{e.hamiltonf} is given by
\begin{align}
\ket{\Omega} {=}\!\! \prod_{k{:}E_k{<}0}\!\!\! \hat\zeta_k^\dagger \ket 0,
\label{e.diracsea}
\end{align}
for which all the negative-energy eigenstates up to the Fermi energy $E_{k_F}{=}0$ are occupied by a fermionic quasi-particle, whereas positive-energy levels are empty.
As a consequence, states with a different number of fermions yield zero fidelity.
As the number of fermions in the Dirac sea is determined by the magnetic field $h$, which sets the Fermi momentum, we will compare the actual state of the cut ring with a target state at the same value of the applied magnetic field.
A somewhat lengthy but otherwise straightforward calculation based on the use of Wick's theorem shows that $\mathcal F$ depends on the submatrices
of the transformation mapping the
real-space fermions $\hat c_n$ to those diagonalizing the Hamiltonian in the case of Eq.~\eqref{e.hamiltonfin} (the target model)
for $n{=}{-}M{+}1/2,\dots,M{-}1/2$ and $k<k_F$. Some details of this derivation
are sketched in Appendix~\ref{a.fid}.
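The elementary building block of this computation is the standard overlap formula for Slater determinants, $\vert\langle\Psi_1\vert\Psi_2\rangle\vert^2=\vert\det(U_1^\dagger U_2)\vert^2$, with $U_{1,2}$ the matrices of occupied orbitals. A minimal sketch of ours follows (the full mixed-state fidelity of Appendix~\ref{a.fid} involves additional submatrices for the traced-out impurity sites):
\begin{verbatim}
import numpy as np

def slater_overlap_sq(U1, U2):
    # U1, U2: (L, Nocc) matrices whose orthonormal columns are the occupied
    # single-particle orbitals of two free-fermion ground states.
    return np.abs(np.linalg.det(U1.conj().T @ U2)) ** 2
\end{verbatim}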
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth]{fidh0Nb.pdf}\\
\includegraphics[width=0.9\columnwidth]{fidh05Nb.pdf}\\
\includegraphics[width=0.9\columnwidth]{fidhsq2Nb.pdf}\\
\includegraphics[width=0.9\columnwidth]{fidh1Nb.pdf}
\caption{(Color online) Fidelity $\mathcal F\!\left(\ket{\Sigma}\!,\!\rho\right) $ between the reduced state $\rho$ of our
model and the pure state $\ket{\Sigma}$ of a linear chain with the same number of spins by varying the coupling strength $j>1$ at different values of the magnetic field $h=0,\frac{1}{2},\frac{1}{\sqrt{2}}, 1$ (panels (a), (b), (c), and (d) respectively). We have taken $M=10,100,1000$ in all panels. The dashed line shows the behavior of the function $1-1/j^2$, which matches the thermodynamic limit of the state fidelity at large magnetic fields.}
\label{F.fidelitycomposta}
\end{figure}
In Fig.~\ref{F.fidelitycomposta} the fidelity is shown as a
function of $j>1$, for different values of $h$ and $M$. As a
perturbative analysis suggests, for $j\gg1$ the ground state of
our model tends to the factorized state
$\ket{\Psi^+}_{\pm\frac{1}{2}}\otimes\ket{\omega}_{-\frac{3}{2},\ldots\frac{3}{2}}$,
where $\ket{\Psi^+}_{\pm\frac{1}{2}}$ is a Bell state of the spins
across the BI, while
$\ket{\omega}_{-\frac{3}{2},\ldots\frac{3}{2}}$ is a pure state of
the rest of the system. Fig.~\ref{F.fidelitycomposta} shows that,
almost independently of the magnetic field value, the mixed state
of the reduced system is almost indistinguishable from the target
state for relatively small values of the impurity strength. As far
as finite-size effects are concerned, we note that the shorter the
ring, the lower the value of $j$ needed for cutting it, although
differences decrease with increasing $j$ and $h$ [see
Figs.~\ref{F.fidelitycomposta}~(a)-(c)]. On the other hand, for
$h\geq1$ finite-size effects are almost absent but for $j\lesssim2$
[cf. Fig.~\ref{F.fidelitycomposta}~(d)]. This can be easily
explained by noticing that the target state is fully polarized,
$\ket{\Sigma}=\ket{0}^{\otimes 2(M-1)}$, while the ground state of
the ring is $\ket{\Omega}{=}\hat\zeta_-^\dagger\ket{0}$. When the
localization length $q^{-1}$ is less than the length of the ring
$2M$, by taking into account Eq.~\eqref{E.locstates} we get that
the spins located at a distance $d>q^{-1}$ are, for all practical
purposes, in state $\ket{0}$. As a consequence, considering longer
chains does not substantially affect the value of the fidelity,
since the ground state of our model contains only a single
localized mode. This is at variance with the case $h<1$, where the
extended (distorted) eigenstates given by Eq.~\eqref{E.modes}
spread all over the chain. Therefore, for
$h\geq1$ the length of the ring does not play a significant role.
Moreover, the analytical expression for the fidelity in the
thermodynamic limit reads
$\mathcal{F}=1\,{-}\,e^{-2q}=1\,{-}\,1/j^2$. It is worth noticing
that, for all practical purposes, the thermodynamic limit is
already reached when the length of the chain exceeds the
localization length $q^{-1}=1/(\ln{j})$. Finally, for arbitrarily
large values of $h$, the target state does not change because the
XX-Heisenberg model enters the saturated phase. In addition, as
the localized mode is independent of $h$, the ground state of our
model is invariant for $h\geq 1$. This yields the very same
behavior of the fidelity for $h>1$ as that reported in
Fig.~\ref{F.fidelitycomposta}~(d).
\section{Conclusions}\label{S.conclusions}
We have shown that, by means of a BI, it is possible to turn a spin
chain with PBC into an open-boundary one. The XX-impurity model has been solved
analytically in the thermodynamic limit and two-point magnetic
correlation functions, as well as CC and QD, have been shown to
decay to zero for spins residing across the BI already for a
relatively modest value of the impurity strength. The analogous
figures of merit for pairs of spins residing on the same side of the
BI take values approaching those of a chain with OBC. For finite, yet
arbitrarily large, spin chains, the fidelity between the ground state
of a chain including all the spins but those coupled by the BI and an
open chain of the same size has been adopted in order to confirm the
validity of the approach discussed here. It follows that impurity
bonds can be used in otherwise translation invariant systems as a
means to achieve an effective cutting of the spin chain at the
desired point. The full analytical treatment provided here allows for
an exact quantification of the cutting quality.
This result shows the possibility, via impurity bonds, to break up physical systems with a ring topology or to cut long chains into smaller ones by different specific techniques depending on the actual physical implementation, such as chemical doping in molecular spin arrays~\cite{Timco2009,TroianiBCLA2010}, site-dependent modulation of the trapping laser in cold atom/ion systems~\cite{Lewensteinetal07}, or spatial displacement of an optical cavity in an array~\cite{GiampaoloI2010}. This could be exploited in order to make some systems more useful for quantum-state transfer, where a necessary requisite often consists in an addressable head and tail as well as in the finiteness of the quantum data bus. Finally, tuning the impurity strength within $j\in\left[0,10\right]$ is sufficient to investigate the emergence of edge effects, such as total or partial wavefunction backscattering which, for an appropriate choice of the uniform magnetic field, spatially modulates the spin correlation functions.
\acknowledgments
{TJGA is supported by the European Commission, the
European Social Fund and the Region Calabria through the program POR
Calabria FSE 2007-2013 - Asse IV Capitale Umano-Obiettivo Operativo
M2.
LB is supported by the ERC grant PACOMANEDIA.
MP thanks the Alexander von Humboldt Stiftung, the UK EPSRC for a Career Acceleration Fellowship and a grant under the ``New Directions for EPSRC Research Leaders" initiative (EP/G004759/1), and the John Templeton Foundation (grant ID 43467).}
\section{Introduction}
Quantum many-body systems with a single-particle flat band have
attracted much attention. About twenty years ago Mielke and Tasaki
\cite{mielke,tasakiPRL,tasakiCMP,maksym2012} showed that a
repulsive on-site interaction in flat-band Hubbard systems yields
ferromagnetic ground states. More recently, a very active and
still ongoing discussion of flat-band systems in the context of
topological insulators has been started, see, e.g.
Ref.~\onlinecite{bergholtz} and references therein. Frustrated
quantum antiferromagnets represent another active research field,
where flat-band physics may lead to interesting low-temperature
phenomena
\cite{schnack,Schulenburg,Richter2004,Zhitomir,derzhko,derzhko2006,tsunetsugu},
such as a macroscopic jump in the ground-state magnetization curve
and a nonzero residual ground-state entropy at the saturation
field as well as an extra low-temperature peak in the specific
heat. All these phenomena are related to the existence of a class
of exact eigenstates in a form of localized multi-magnon states
which become ground states in high magnetic fields.
An interesting and typical example of such a flat-band system is
the $s=\frac{1}{2}$ delta or sawtooth Heisenberg model consisting
of a linear chain of triangles as shown in Fig.\ 1. The
interaction $J_{1}$ acts between the apical (even) and the basal
(odd) spins, while $J_{2}$ is the interaction between the neighbor
basal sites. There is no direct exchange between apical spins. The
Hamiltonian of this model has the form
\begin{equation}
\hat{H}=J_{1}\sum (\mathbf{S}_{2n-1}\cdot
\mathbf{S}_{2n}+\mathbf{S}_{2n}\cdot
\mathbf{S}_{2n+1}-\frac{1}{2})+J_{2}\sum (\mathbf{S}_{2n-1}\cdot
\mathbf{S}_{2n+1}-\frac{1}{4})-h\sum S_{n}^{z} , \label{q}
\end{equation}
where $\mathbf{S}_{n}$ are $s=\frac{1}{2}$ operators and $h$ is
the dimensionless magnetic field.
The ground state of model (\ref{q}) with both antiferromagnetic
$J_{1}>0$ and $J_{2}>0$ (AF delta chain) has been studied as a
function of $J_2/J_1$ in Refs.\onlinecite{nakamura,sen,blundell}.
At high magnetic fields, the lower band of one-magnon excitations
above the fully polarized ferromagnetic state is dispersionless
for the special choice of coupling constants $J_2=J_1/2$
\cite{derz}. The excitations in this band are localized states,
i.e. the excitations are restricted to a finite region of the
chain. These localized one-magnon states allow one to construct a set
of multi-magnon states. Configurations where the localized
magnons are spatially separated (isolated) from each other also become
exact eigenstates of the Hamiltonian (\ref{q}). At the
saturation field $h=h_{s}=2J_{1}$ all these states have the lowest
energy and the ground state is highly degenerate
\cite{derzhko,derz,Zhitomir}. The degree of the degeneracy can be
calculated by taking into account a hard-core rule forbidding the
overlap of localized magnons with each other (hard-dimer rule).
Exact diagonalization studies \cite{derzhko2006,derz} indicate
that the ground states in this antiferromagnetic model are
separated by finite gaps from the higher-energy states. Thus the
localized multi-magnon states can dominate the low-temperature
thermodynamics in the vicinity of the saturation field and the
thermodynamic properties can be calculated by mapping the AF delta
chain onto the hard-dimer problem \cite{Zhitomir,derzhko,derz}. A
similar structure of the ground states with localized magnons is
realized in a variety of frustrated spin lattices in one, two and
three dimensions such as the kagome, the checkerboard, the
pyrochlore lattices, see e.g.
Refs.\onlinecite{Schulenburg,Richter2004,Zhitomir,derzhko,derzhko2006,tsunetsugu}.
In contrast to the AF delta chain, the model (\ref{q}) with
ferromagnetic $J_{1}<0$ and antiferromagnetic $J_{2}>0$
interactions (F-AF delta chain) is less studied, though it is
rather interesting. In particular, it is a minimal model for the
description of the quasi-one-dimensional compound
$[Cu(bpy)H_{2}O][Cu(bpy)(mal)H_{2}O](ClO_{4})$ containing magnetic
$Cu^{2+}$ ions \cite{Inagaki}.
It is known \cite{Tonegawa} that the ground state of the F-AF
delta chain is ferromagnetic for $\alpha =\frac{J_{2}}{\left\vert
J_{1}\right\vert }<\frac{1}{2}$. In Ref.~\onlinecite{Tonegawa} it
was argued that the ground state for $\alpha>\frac{1}{2}$ is a
special ferrimagnetic state. The critical point $\alpha
=\frac{1}{2}$ is the transition point between these two ground
state phases.
In this paper we will demonstrate that the behavior of the model
at this point is highly non-trivial. Similarly to the AF delta
chain also the F-AF model at the critical point supports localized
magnons which are exact eigenstates of the Hamiltonian. They are
trapped in a valley between two neighboring triangles, where the
occupation of neighboring valleys is forbidden (the so-called
non-overlapping or isolated localized-magnon states). We will show
that the ground states in the spin sector $S=S_{\max}-k$, $k<N/4$,
consist of states with $k$ isolated localized magnons ($k$-magnon
states), but in contrast to the AF case they are exact ground
states at zero magnetic field \cite{remark}. Moreover, in addition
to $k$-magnon configurations consisting of non-overlapping
localized magnons there are states with overlapping ones. Hence,
the degree of degeneracy of the ground state is even larger than
in the AF delta chain. Another difference to the localized-magnon
states in the AF delta chain concerns the gaps between the ground
state and the excited states which become very small for $k>1$. It
means that the contribution of the ground states to the
thermodynamics does not dominate even for low temperatures.
Our paper is organized as follows. In Section II we consider the
ground states of the F-AF delta chain at the critical point. Based
on the localized-states scenario we calculate analytically the
degree of the ground-state degeneracy and check our analytical
predictions by comparing them with full exact diagonalization (ED)
data for finite chains up to $N=24$ sites. In Section III we
study the low-temperature thermodynamics of the considered model.
We will show that the low-lying states are separated from the
ground states by very small gaps. These low-lying excitations give
the dominant contribution to the thermodynamics as the temperature
grows from zero and approaches these small gaps. We calculate
different thermodynamic quantities, such as magnetization,
susceptibility, entropy, and specific heat by full ED of finite
chains and discuss the low-temperature behavior of these
quantities. In Section IV we consider the magnetocaloric effect in
the critical F-AF delta chain. In the concluding section we give a
summary of our results.
\begin{figure}[tbp]
\includegraphics[width=5in,angle=0]{triangles}
\caption{The $\triangle$-chain model.} \label{fig1}
\end{figure}
\section{Ground state}
In this section we study the ground state of the F-AF delta chain
at the critical point. For this aim it is convenient to represent
the Hamiltonian (\ref{q}) at $\alpha =\frac{1}{2}$ as a sum of
local Hamiltonians
\begin{equation}
\hat{H}=\sum \hat{H}_{i} \label{q1}
\end{equation}
where $\hat{H}_{i}$ is the Hamiltonian of the $i$-th triangle,
which can be written in a form
\begin{equation}
\hat{H}_{i}=-(\mathbf{S}_{i_1}+\mathbf{S}_{i_3})\cdot
\mathbf{S}_{i_2}\mathbf{+}\frac{1}{2}\mathbf{S}_{i_1}\cdot
\mathbf{S}_{i_3}+\frac{3}{8} .\label{q2}
\end{equation}
In Eq.(\ref{q2}) we put $J_{1}=-1$.
The three eigenvalues of Eq.(\ref{q2}) are $E_{i}=0$, $E_{i}=0$
and $E_{i}=\frac{3}{2}$ for the states with spin quantum numbers $S=\frac{3}{2}$,
$S=\frac{1}{2}$ and $S=\frac{1}{2}$, respectively. Because the
local Hamiltonians $\hat{H}_{i}$ generally do not commute with
each other, for the lowest eigenvalue $E_{0}$ of $\hat{H}$ holds
\begin{equation}
E_{0}\geq \sum E_{i}=0 . \label{q3}
\end{equation}
It is evident that the energy of the ferromagnetic state with
maximal total spin $S_{\max}=\frac{N}{2}$ of model
(\ref{q1}) is
zero. Therefore, the inequality in Eq.(\ref{q3}) turns into an
equality and the ground state energy of Eq.~(\ref{q1}) is zero. The
question is: how many states with different total spin have zero
energy?
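Before addressing this question, we note that the local spectrum quoted above is easily confirmed numerically; an illustrative Python check of ours for the $8\times 8$ matrix of Eq.\ (\ref{q2}) reads
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2)

def SdotS(i, j):          # S_i . S_j on three spins, i, j in {0, 1, 2}
    total = np.zeros((8, 8), dtype=complex)
    for s in (sx, sy, sz):
        mats = [I2, I2, I2]
        mats[i] = mats[j] = s
        m = np.array([[1.0 + 0j]])
        for x in mats:
            m = np.kron(m, x)
        total += m
    return total

# Eq. (q2) with sites (i1, i2, i3) -> (0, 1, 2), the apical spin being 1:
H = -(SdotS(0, 1) + SdotS(1, 2)) + 0.5 * SdotS(0, 2) + 0.375 * np.eye(8)
print(np.round(np.linalg.eigvalsh(H), 12))  # six eigenvalues 0, two 3/2
\end{verbatim}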
At first, we consider one-magnon states with $S=S_{\max }-1$. The
spectrum $E(q)$ of these states for the F-AF delta chain with
periodic boundary conditions (PBC) has two branches. One of them
is dispersionless with $E(q)=0$ while the second branch is
dispersive and its energy is
\begin{equation}
E(q)=2-\sin ^{2}q,\quad -\frac{\pi }{2}<q<\frac{\pi }{2} . \label{one-magnon}
\end{equation}
The dispersionless one-magnon states correspond to localized
states which can be chosen as
\begin{equation}
\hat{\varphi} _{1}\left\vert F\right\rangle
=(s_{2}^{-}+s_{4}^{-}+2s_{3}^{-})\left\vert F\right\rangle ,\; \hat{\varphi}
_{2}\left\vert F\right\rangle =(s_{4}^{-}+s_{6}^{-}+2s_{5}^{-})\left\vert
F\right\rangle ,\; \ldots \; , \hat{\varphi} _{n}\left\vert F\right\rangle
=(s_{N}^{-}+s_{2}^{-}+2s_{1}^{-})\left\vert F\right\rangle \label{q4}
\end{equation}
where $n=\frac{N}{2}$ and $\left\vert F\right\rangle =\left\vert \uparrow
\uparrow \uparrow \ldots \uparrow \right\rangle $.
These functions are exact eigenfunctions of each local
$\hat{H}_{i}$ with zero energy. It can be checked directly that
$\hat{H}_{l}\hat{\varphi} _{l}\left\vert F\right\rangle =0$ and
$\hat{H}_{l+1}\hat{\varphi} _{l}\left\vert F\right\rangle =0$,
while for other $i\neq l-1,l$ the local Hamiltonian $\hat{H}_{i}$
and the operators $\hat{\varphi} _{l}$ defined by Eq.(\ref{q4})
commute giving $\hat{H}_{i}\hat{\varphi} _{l}\left\vert
F\right\rangle =\hat{\varphi} _{l}\hat{H}_{i}\left\vert
F\right\rangle =0$. The $n$ states (\ref{q4}) form a complete
nonorthogonal basis in the space of the dispersionless branch. This
follows from the fact that the relation
\begin{equation}
\sum a_{i}\hat{\varphi} _{i}=0
\end{equation}
is fulfilled only if all $a_{i}=0$. Besides, we note that there
are $(n-1)$ linear combinations of $\hat{\varphi} _{i}\left\vert
F\right\rangle $ which belong to the states with $S=S_{\max }-1$
and one combination belongs to $S=S_{\max }$. The latter is
\begin{equation}
\sum \hat{\varphi} _{i}\left\vert F\right\rangle =2S_{tot}^{-}\left\vert
F\right\rangle .
\end{equation}
For the F-AF delta chain with open boundary conditions (OBC)
and odd $N$ there are $n=\frac{N+1}{2}$ localized one-magnon
states with zero energy and their wave functions are
\begin{equation}
\hat{\varphi} _{1}\left\vert F\right\rangle
=(s_{2}^{-}+2s_{1}^{-})\left\vert F\right\rangle ,\; \hat{\varphi}
_{2}\left\vert F\right\rangle
=(s_{2}^{-}+s_{4}^{-}+2s_{3}^{-})\left\vert F\right\rangle , \;
\ldots \; , \hat{\varphi} _{n}\left\vert F\right\rangle
=(s_{N-1}^{-}+2s_{N}^{-})\left\vert F\right\rangle . \label{q5}
\end{equation}
These functions are linearly independent similarly to those for
the periodic delta chain. It is convenient to introduce another
set of linearly independent operator functions instead of
$\hat{\varphi} _{i}$ which have the form
\begin{equation}
\hat{\Phi} (m)=\sum_{i=1}^{m}\hat{\varphi} _{i},\quad m=1,2\ldots n
\end{equation}
All functions $\hat{\Phi} (m)\left\vert F\right\rangle $ are
eigenfunctions with zero energy of each local Hamiltonian
$\hat{H}_{i}$. Similarly to the periodic chain the $(n-1)$
functions $\hat{\Phi} (m)\left\vert F\right\rangle $ with
$m=1,2,..,n-1$ belong to $S=S_{\max }-1$ and $\hat{\Phi}
(n)\left\vert F\right\rangle $ is the function of the state with
$S=S_{\max }$ and $S^{z}=S_{\max }-1$ because $\hat{\Phi}
(n)=2S_{tot}^{-}$.
Let us consider two-magnon states. For simplicity we will deal
with the delta chain with OBC. It is clear that a pair of
isolated (non-overlapping) magnons is an exact ground state of
the Hamiltonian (\ref{q1}) and the wave functions of pairs, $\hat{\varphi}
_{i}\hat{\varphi}_{j}\left\vert F\right\rangle $ $(j\geq i+1)$ are
exact ground state functions of each local $\hat{H}_{l}$ with zero
energy. The number of such pairs is $C_{n-1}^{2}$, where
$C_{m}^{n}=\frac{m!}{n!(m-n)!} $ is the binomial coefficient. It
can be proved similarly to the case of the AF delta chain
\cite{Richter} that these states are linearly independent.
In fact, the exact two-magnon ground state wave functions of the
Hamiltonian (\ref{q1}) at $\alpha =\frac{1}{2}$ can be chosen in
many other ways. We define the set of two-magnon states as
follows:
\begin{equation}
\hat{\Phi} (m_{1})\hat{\Phi} (m_{2})\left\vert F\right\rangle ,\quad 1\leq
m_{1}<m_{2}\leq n-1 . \label{q6}
\end{equation}
Though Eq.\ (\ref{q6}) contains products of interpenetrating
operator functions $\hat{\varphi} _{i}$ (i.e.\ acting on commonly
involved sites), it is easy to verify that the states
defined in Eq.\ (\ref{q6}) are exact ground state wave functions
of each $\hat{H}_{l}$. For example, let us consider the function
$\hat{\Phi} (1)\hat{\Phi} (2)\left\vert F\right\rangle $. It
equals
\begin{equation}
\hat{\Phi} (1)\hat{\Phi} (2)\left\vert F\right\rangle =(\hat{\varphi} _{1}+\hat{\varphi}
_{2})\hat{\varphi} _{1}\left\vert F\right\rangle
=(2s_{1}^{-}+2s_{2}^{-}+2s_{3}^{-}+s_{4}^{-})\hat{\varphi}
_{1}\left\vert F\right\rangle =(2S^{-}(1)+s_{4}^{-})\hat{\varphi}
_{1}\left\vert F\right\rangle , \label{q7}
\end{equation}
where $S^{-}(1)$ is the lowering spin operator of the first
triangle. Then, this function is an exact ground state function of
$\hat{H}_{1}$, because $\hat{\varphi} _{1}$ creates a mixture of
the states with $S=\frac{3}{2}$ and $S=\frac{1}{2}$ of
$\hat{H}_{1}$ with zero energy. On the other hand, this function
is an exact ground state function of $\hat{H}_{2}$, because it
contains the combination $2s_{3}^{-}+s_{4}^{-}$ in the first
bracket. It is also clear that the function (\ref{q7}) is an exact
ground state function of $\hat{H}_{i}$ with $i\geq 3$ because
$\hat{H}_{i}$ for these $i$ commute with $\hat{\Phi} (1)\hat{\Phi}
(2)$ and $\hat{H}_{i}\hat{\Phi}
(1)\hat{\Phi}
(2)\left\vert F\right\rangle =\hat{\Phi} (1)\hat{\Phi} (2)\hat{H}
_{i}\left\vert F\right\rangle =0$. A similar consideration can be
extended to any function having the form (\ref{q6}). The function
$\hat{\Phi} (m_{1})\hat{\Phi} (m_{2})\left\vert F\right\rangle $
contains the lowering operators $S^{-}(1,2\ldots m_{1}-1)$ and
$S^{-}(1,2\ldots m_{2}-1)$ (where $S^{-}(1,2\ldots k) $ is the
total lowering spin operator for the first $k$ triangles). The
construction of the brackets in Eq.\ (\ref{q6}) ensures the
relation $\hat{H}_{i}\hat{\Phi} (m_{1})\hat{\Phi}
(m_{2})\left\vert F\right\rangle =0$ for $i\leq m_{2}$, while this
relation for $i>m_{2}$ is fulfilled automatically. It is easy to
check that the set of functions (\ref{q6}) can be transformed to
the set $\hat{\varphi} _{i}\hat{\varphi} _{j}\left\vert
F\right\rangle $ $(j\geq i+1)$ using the condition $\hat{\Phi}
(n)=2S_{tot}^{-}$.
Strictly speaking we should also show that the set of the states
(\ref{q6}) after a projection onto the states with
$S_{tot}=S^{z}=S_{\max }-2$ gives all linearly independent states
in this spin sector. We checked this analytically for systems with
$n=5,7$ (i.e.\ $N=11,15$) but we did not succeed with a rigorous
proof of this statement.
Since the operator function $\hat{\Phi} (n)$ leads to a state
$\hat{\Phi} (m_{1})\hat{\Phi} (n)\left\vert F\right\rangle
=2S_{tot}^{-}\hat{\Phi} (m_{1})\left\vert F\right\rangle $
belonging to the sector $S_{tot}=S_{\max }-1$, it is excluded from
Eq.\ (\ref{q6}) by the restriction $m_{2}\leq n-1$. The number of
states described by Eq.\ (\ref{q6}) amounts to $C_{n-1}^{2}$.
Now we consider the general case of the $k$-magnon subspace with
$S_{tot}=S^{z}=S_{\max }-k$. It is evident that a state consisting
of $k$ isolated localized magnons
\begin{equation}
\hat{\varphi} _{i_1}\hat{\varphi} _{i_2}\hat{\varphi} _{i_3}\ldots \hat{\varphi}
_{i_k}\left\vert F\right\rangle ,\quad i_{l}>i_{l-1}+1
\label{k0-set}
\end{equation}
is an exact ground state of Eq.\ (\ref{q1}). The number of such
states is $C_{n-k+1}^{k}$ and they are feasible if
$k<\frac{n+1}{2}$ for OBC. However, the set of states
(\ref{k0-set}) does not present the complete manifold of the
ground states in the sectors of $S_{tot}$ $=S^{z}=S_{\max }-k$ for
$k>2$. Similarly to the two-magnon case we choose the $k$-magnon
set in the form
\begin{equation}
\hat{\Phi} (m_{1})\hat{\Phi} (m_{2})\hat{\Phi} (m_{3})\ldots \hat{\Phi} (m_{k})\left\vert
F\right\rangle ,\quad 1\leq m_{1}<m_{2}<m_{3}<\ldots m_{k}\leq n-1 .
\label{k-set}
\end{equation}
The functions (\ref{k-set}) are exact ground state functions of
the Hamiltonian (\ref{q1}). This can be proved by analogy with the
two-magnon case. We assume again that after projection onto
$S_{tot}=S_{\max }-k$ the set of states (\ref{k-set}) will give a
complete set of linearly independent wave functions in this
sector. As follows from Eq.\ (\ref{k-set}) the number of these
functions is $C_{n-1}^{k}$. Again we have checked and confirmed
this by full ED for finite delta chains. We note that the
hypothesis about the number of degenerate ground states in the
sector $S_{tot}$ $=S^{z}=S_{\max }-k$ has been suggested in Ref.\
\onlinecite{suzuki} as a guess based on numerical calculations.
The number of functions in Eq.\ (\ref{k-set}) is larger than the
number of those given in Eq.\ (\ref{k0-set}). Moreover, the
functions of the type described by Eq.\ (\ref{k-set}) are feasible
for any $k$. In particular, for $S_{tot}=\frac{1}{2}$ there is a
single ground state function with zero energy.
In addition to Eq.(\ref{k-set}) we can choose the sets of the
ground state functions in the sectors $S^{z}=S_{\max }-k$ and
$S>S_{\max }-k$. They have the forms
\begin{eqnarray*}
\hat{\Phi} (m_{1})\hat{\Phi} (m_{2})\hat{\Phi} (m_{3})\ldots \hat{\Phi} (m_{k-1})\hat{\Phi} (n)\left\vert
F\right\rangle ,\quad 1 &\leq &m_{1}<m_{2}<m_{3}<\ldots m_{k-1}\leq n-1 \\
\hat{\Phi}(m_{1})\hat{\Phi} (m_{2})\hat{\Phi} (m_{3})\ldots
\hat{\Phi} (m_{k-2})\hat{\Phi} ^{2}(n)\left\vert F\right\rangle
,\quad 1 &\leq &m_{1}<m_{2}<m_{3}<\ldots
m_{k-2}\leq n-1 \\
&&\ldots \\
\hat{\Phi} (m_{1})\hat{\Phi} ^{k-1}(n)\left\vert F\right\rangle ,
&&1 \leq m_1 \leq n-1 \\
\hat{\Phi} ^{k}(n)\left\vert F\right\rangle &&\quad .
\end{eqnarray*}
This set of functions represents the ground state functions with
$S^{z}=S_{\max}-k$ but $S_{tot}=S_{\max}-k+1$,
$S_{tot}=S_{\max}-k+2$, ..., $S_{tot}=S_{\max}$.
The total number of ground states in the sector $S^{z}=S_{\max}-k$
amounts to
\begin{equation}
C_{n-1}^{0}+C_{n-1}^{1}+\ldots +C_{n-1}^{k} .
\end{equation}
Let us now consider the delta chain with PBC. It is evident
that the ground state in the sector $S^{z}=S_{\max }-k$ can be
formed by $k$ non-overlapping localized magnons
\begin{equation}
\hat{\varphi} _{i1}\hat{\varphi} _{i2}
\hat{\varphi} _{i3}\ldots \hat{\varphi} _{ik}\left\vert
F\right\rangle . \label{k periodic}
\end{equation}
The number of possibilities to place $k$ magnons on a delta chain
without overlap is
\begin{equation}
g_{n}^{k}=\frac{n}{n-k}C_{n-k}^{k},\quad n=\frac{N}{2} \label{g} .
\end{equation}
This is the number of degenerate ground states in the sector
$S^{z}=S_{\max }-k$ built by $k$ non-overlapping localized
magnons. It corresponds to the one-dimensional classical
hard-dimer problem.\cite{fisher,derzhko} The maximum number of
localized magnons for the closest possible packing is $k_{\max
}=\frac{n}{2}$ and $g_{n}^{n/2}=2$. Remarkably, the
non-overlapping localized-magnon states (\ref{k periodic}) do not
exhaust all possible ones for $k>2$. There is another way of
constructing ground states. For example, we can write the exact
ground state for $k=2$ as
\begin{equation}
\hat{\varphi} _{i}(\hat{\varphi} _{i-1}+\hat{\varphi} _{i}+\hat{\varphi} _{i+1})\left\vert
F\right\rangle . \label{p2}
\end{equation}
Carrying out computations similarly to those for the open chain it
is easy to see that the function (\ref{p2}) is an exact
eigenfunction with zero energy for the local Hamiltonians
$\hat{H}_{i}$, $\hat{H}_{i+1}$, and $\hat{H}_{i-1}$, as well as for
all the other local Hamiltonians. Formula (\ref{p2}) can be extended to $k>2$ by adding
corresponding brackets. On the basis of the analysis of possible
constructions of this type, we conjecture that the ground state
degeneracy in the sector $S_{tot}$ $=S^{z}=S_{\max }-k$ amounts
\begin{equation}
A_{n}^{k}=C_{n}^{k}-C_{n}^{k-1}+\delta _{k,n} . \label{An}
\end{equation}
According to Eq.\ (\ref{An}) $A_{n}^{k}=0$ for $n>k>\frac{n}{2}$
and $A_{n}^{n/2}=\frac{2}{2+n}C_{n}^{n/2}$. The third term in
Eq.\ (\ref{An}) corresponds to the special ground state for $S=0$
described by the famous resonating-valence-bond eigenfunction
\cite{hamada} which is not of "multi-magnon" nature. As follows
from Eq.\ (\ref{An}) the number of the ground states for fixed
$S^{z}=S_{\max }-k$ is
\begin{eqnarray}
B_{n}^{k} &=&C_{n}^{k},\quad 0\leq k\leq \frac{n}{2} \nonumber \\
B_{n}^{k} &=&C_{n}^{n/2}+\delta _{k,n},\quad \frac{n}{2}<k\leq n .
\label{number}
\end{eqnarray}
Eqs.(\ref{An}) and (\ref{number}) have been confirmed by ED
calculations of finite chains up to $N=24$.
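As an illustration of such a check, the following self-contained Python sketch (ours, not the production ED code used for the $N\leq 24$ results) counts the zero-energy states of the periodic chain with $N=8$ in every $S^z$ sector and compares them with Eq.\ (\ref{number}):
\begin{verbatim}
import numpy as np
from itertools import combinations
from math import comb

N = 8                                    # number of spins; n = N/2 triangles
n = N // 2
# Each bond enters as J (S_i.S_j - 1/4): J1 = -1 on all nearest-neighbor
# bonds, J2 = +1/2 between basal spins (even sites in this 0-based
# labeling), so all ground states lie exactly at E = 0.
bonds = [(i, (i + 1) % N, -1.0) for i in range(N)] \
      + [(i, (i + 2) % N, 0.5) for i in range(0, N, 2)]

def sector_hamiltonian(k):               # sector with k flipped spins
    states = [sum(1 << p for p in pos) for pos in combinations(range(N), k)]
    index = {s: a for a, s in enumerate(states)}
    H = np.zeros((len(states), len(states)))
    for a, s in enumerate(states):
        for i, j, J in bonds:
            parallel = ((s >> i) & 1) == ((s >> j) & 1)
            H[a, a] += J * ((0.25 if parallel else -0.25) - 0.25)
            if not parallel:             # transverse part flips the pair
                H[index[s ^ (1 << i) ^ (1 << j)], a] += 0.5 * J
    return H

for k in range(N + 1):                   # S^z = N/2 - k
    E = np.linalg.eigvalsh(sector_hamiltonian(k))
    zeros = int(np.sum(np.abs(E) < 1e-8))   # tolerance must resolve the
                                            # small gaps discussed in Sec. III
    kk = min(k, N - k)                   # S^z -> -S^z symmetry
    B = comb(n, kk) if kk <= n // 2 else comb(n, n // 2) + (kk == n)
    print(k, zeros, B)                   # last two columns should agree
\end{verbatim}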
The total number of degenerate ground states is
\begin{equation}
W=2\sum_{k=0}^{n-1}B_{n}^{k}+B_{n}^{n}=2^n+nC_n^{n/2}+1 .
\label{W}
\end{equation}
The value of the entropy per site is $s_0=\ln(W)/N$. That is
the residual entropy per site at zero magnetic field which, for
$N\to\infty$, becomes
\begin{equation}
s_0=\frac 12 \ln 2 . \label{s0}
\end{equation}
Obviously, the residual entropy of the considered $N$-site
interacting spin-$1/2$ system corresponds to the entropy of
$\frac N2$ non-interacting $s=1/2$ spins.
It is interesting to compare the residual entropy of the F-AF
delta chain at the critical point with that for the AF delta chain
at the saturation field. For the AF delta chain it amounts to
$s_0^{AF}=0.347\ln 2$ \cite{Zhitomir,derzhko,derz}, i.e.\ $s_0$ is
larger than $s_0^{AF}$ due to the existence of the additional ground
states which do not belong to the class of non-overlapping
localized magnons.
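The hard-dimer counting of Eq.\ (\ref{g}), including the closest-packing value $g_{n}^{n/2}=2$, is simple to confirm by brute force (an illustrative Python check of ours):
\begin{verbatim}
from itertools import combinations
from math import comb

def count_nonadjacent(n, k):
    # number of ways to place k pairwise non-adjacent magnons on an
    # n-valley ring
    total = 0
    for pos in combinations(range(n), k):
        ok = all(pos[i + 1] - pos[i] >= 2 for i in range(k - 1))
        ok = ok and (k < 2 or pos[0] + n - pos[-1] >= 2)
        total += ok
    return total

n = 12
for k in range(n // 2 + 1):
    assert count_nonadjacent(n, k) == n * comb(n - k, k) // (n - k)  # Eq. (g)
print("closest packing:", count_nonadjacent(n, n // 2), "states")
\end{verbatim}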
Concluding this section we point out that the considered model is
one more example of a quantum many-body system with a macroscopic
ground-state degeneracy, resulting therefore in a residual
entropy.
\section{Low-temperature thermodynamics}
The next interesting question is whether the degenerate ground
states are separated by a finite gap from all other eigenstates.
This question is important for thermodynamic properties of the
model. If a finite gap exists in all spin sectors then the
low-temperature thermodynamics is determined by the contribution
of the degenerate ground states. Such a situation takes place for
the delta chain with antiferromagnetic interactions. As will be
demonstrated below, this is not the case for the considered model.
As follows from Eq.(\ref{one-magnon}) the gap $\Delta E$ in the
one-magnon sector is $\Delta E=1$ (in $\left\vert J_{1}\right\vert
$ units). However, the minimal energy of two-magnon excitations
dramatically decreases. Numerical calculations show that it equals
$\Delta E\approx 0.022$. The exact wave function of this state
has the form
\begin{eqnarray}
\Psi
&=&0.484\sum_{n}(-1)^{n}s_{2n}^{-}(s_{2n-1}^{-}+s_{2n+1}^{-})\left\vert
F\right\rangle \nonumber \\
&&-0.321\sum_{n}\sum_{m=0}(-1)^{n}\exp (-\lambda
m)s_{2n}^{-}(s_{2n-2m-3}^{-}+s_{2n+2m+3}^{-})\left\vert
F\right\rangle
\nonumber \\
&&+0.545\sum_{n}\sum_{m=1}(-1)^{n}\exp \{-\lambda
(m-1)\}s_{2n+1}^{-}s_{2n+4m-1}^{-}\left\vert F\right\rangle \nonumber \\
&&-0.157\sum_{n}\sum_{m=0}(-1)^{n}\exp (-\lambda
m)s_{2n}^{-}s_{2n+4m}^{-}\left\vert F\right\rangle ,
\label{two-magnon gap}
\end{eqnarray}
where $\lambda \simeq 3.494$. The energy of this state is $\Delta
E=0.02177676$. It could be expected that the low-lying excited
two-magnon states are formed by scattering states of magnons from
the dispersionless one-magnon branch. However, the wave function
(\ref{two-magnon gap}) has the more complicated form of a bound
state.
The gaps for the $k$-magnon states with $k>2$ decrease rapidly
with increasing $k$, as can be seen from Table 1, where the
gaps in the sector $S=S_{\max }-k$ for chains with $N=16,20,24$
are presented. Obviously, the gaps become extremely small.
\begin{table}[tbp]
\caption{Excitation gaps in the $k$-magnon sectors (i.e. $S_z=N/2
- k$) calculated for $N=16,20,24$.}
\begin{ruledtabular}
\begin{tabular}{cccc}
& $N=16$ & $N=20$ & $N=24$ \\
\hline
$k=1$ & $1.0$ & $1.0$ & $1.0$ \\
$k=2$ & $0.021776237324972$ & $0.021776745369208$ & $0.021776760796279$ \\
$k=3$ & $0.000471848035563$ & $0.000484876324415$ & $0.000487488767250$ \\
$k=4$ & $0.000009935109570$ & $0.000013213815119$ & $0.000014315249351$ \\
$k=5$ & $0.000003034124289$ & $0.000000197371592$ & $0.000000295115215$ \\
$k=6$ & $0.000002583642491$ & $0.000000064146143$ & $0.000000004288885$ \\
\end{tabular}
\end{ruledtabular}
\end{table}
These data clearly testify that the contribution of the excited
states to the partition function cannot be neglected even for very
low temperatures. Nevertheless, to clarify this point it is proper
to calculate the contribution to the partition function from only
the degenerate ground states. Using Eq.\ (\ref{number}) we obtain
the partition function $Z$ of the model in the magnetic field in the
form (we use PBC for the calculation since $Z$ for the chains with
PBC and OBC coincide in the thermodynamic limit)
\begin{equation}
Z=2\sum_{k=0}^{n/2}C_{n}^{k}\cosh \left[ \frac{(n-k)h}{T}\right]
+2C_{n}^{n/2}\sum_{k=0}^{n/2}\cosh \left[ \frac{(\frac{n}{2}-k)h}{T}\right]
-2C_{n}^{n/2}\cosh \left( \frac{nh}{2T}\right) -C_{n}^{n/2} . \label{Z}
\end{equation}
The magnetization is given by
\begin{equation}
M=\left\langle S^{z}\right\rangle =T\frac{d\ln Z}{dh} . \label{M}
\end{equation}
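For moderate $n$, Eqs.\ (\ref{Z}) and (\ref{M}) are straightforward to evaluate numerically. The sketch below (ours; a central finite difference replaces the analytic derivative, and plain double precision restricts it to moderate $nh/T$) reproduces the curves of Fig.\ 2:
\begin{verbatim}
import numpy as np
from math import comb

def Z(h, T, n):
    # Eq. (Z) for the periodic chain, n = N/2; keep n*h/T moderate so
    # that cosh does not overflow in double precision.
    x = h / T
    z = 2.0 * sum(comb(n, k) * np.cosh((n - k) * x) for k in range(n // 2 + 1))
    z += 2.0 * comb(n, n // 2) * sum(np.cosh((n / 2.0 - k) * x)
                                     for k in range(n // 2 + 1))
    z -= 2.0 * comb(n, n // 2) * np.cosh(n * x / 2.0) + comb(n, n // 2)
    return z

def M(h, T, n, dh=1e-6):
    # Eq. (M): M = T dlnZ/dh, via a central finite difference.
    return T * (np.log(Z(h + dh, T, n)) - np.log(Z(h - dh, T, n))) / (2.0 * dh)

n, T = 10, 1.0                           # N = 20
for x in (0.1, 1.0, 5.0):
    print(x, M(x * T, T, n) / (2 * n))   # magnetization per site vs x = h/T
\end{verbatim}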
It follows from Eqs.~(\ref{Z}) and (\ref{M}) that $M$ is a
function of the universal variable $x=h/T$. The dependence $M(x)$
is shown in Fig.\ 2 for different $N$. As seen from Fig.\ 2,
for small $x$ the magnetization grows with increasing $N$.
Analyzing the magnetization curve $M(x)$ for small $x$ one needs
to distinguish the limits $x\ll 1/N$ and $x\gg 1/N$. Using Eqs.\
(\ref{Z}) and (\ref{M}) we obtain the magnetization for $x\ll 1/N$
in the form
\begin{figure}[tbp]
\includegraphics[width=5in,angle=0]{M_hgs}
\caption{Magnetization curves calculated using Eqs.~(\ref{Z}) and
(\ref{M}) for $N=20$ (long-dashed line), $N=200$ (short-dashed
line) and using Eq.(\ref{M2}) for $N\to\infty$ (thin solid line).
Thick solid line corresponds to ED for $N=20$
and $T=10^{-6}$.} \label{M_hgs}
\end{figure}
\begin{equation}
M=c_{N}\frac{N^{2}h}{T},\quad c_{N}=\frac{2^{n-2}n(n+1)+C_{n}^{n/2}\left(\frac{3}{4}n^{2}+\frac{1}{2}C_{n}^{3}\right)}{n^{2}2^{n+2}+4n^{3}C_{n}^{n/2}} .
\label{M1}
\end{equation}
For $N\gg 1$, $c_{N}\sim 1/48$ and the magnetization per site
becomes
\begin{equation}
\frac{M}{N}\simeq \frac{Nh}{48T}(1+2\sqrt{\frac{\pi }{N}}),\quad h\ll T/N .
\label{M11}
\end{equation}
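The approach of $c_N$ to $1/48$, with a slow correction consistent with the factor $(1+2\sqrt{\pi/N})$ above, can be checked directly (an illustrative snippet of ours; exact rational arithmetic avoids floating-point overflow of the large binomials):
\begin{verbatim}
from fractions import Fraction
from math import comb

def c_N(n):                              # the coefficient of Eq. (M1), n = N/2
    num = Fraction(2 ** (n - 2) * n * (n + 1)) \
        + comb(n, n // 2) * (Fraction(3, 4) * n ** 2
                             + Fraction(1, 2) * comb(n, 3))
    den = n ** 2 * 2 ** (n + 2) + 4 * n ** 3 * comb(n, n // 2)
    return num / den

for n in (10, 100, 1000):
    print(n, float(48 * c_N(n)))         # tends to 1 as n grows
\end{verbatim}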
In the opposite limit $x\gg 1/N$, the magnetization is
\begin{equation}
\frac{M}{N}\simeq \frac{1}{2(1+e^{-h/T})},\quad h\gg T/N .
\label{M2}
\end{equation}
However, it is clear that neither Eq.\ (\ref{M11}) nor
Eq.\ (\ref{M2}) gives an adequate description of the
magnetization at $x\to 0$. For $x\ll 1/N$, $M$ is proportional to
$N^{2}$ instead of to $N$. On the other hand, according to Eq.\
(\ref{M2}), the magnetization in the thermodynamic limit is finite
at $h=0$. This is an artefact because the long range order (the
magnetization) in one-dimensional systems cannot exist at $T>0$.
Therefore, the contribution of only the degenerate ground states
is not sufficient to describe the correct dependence of $M(x)$ for
small $x$ and it is necessary to take into account the
contributions of other low-lying eigenstates. Unfortunately,
analytical calculation of the corresponding contributions is
impossible. Therefore, we carried out the full ED for $N=16$ and
$N=20$.
The magnetization curves obtained by ED calculations are shown in
Fig.~3. It is seen that the curves for $N=16$ and $N=20$ are close
(especially at $h/T>1$), which testifies to small finite-size effects.
One of the most interesting points related to the magnetization
curve is its behavior at low magnetic fields. At first, we note
that $M$ obtained by ED calculations is not a function of only
$x=h/T$ in contrast with the predictions given by
Eqs.~(\ref{M11}), (\ref{M2}). That can be seen in the inset in
Fig.~3, where the magnetization for $N=20$ is presented as a
function of $x$ for two temperatures, $T=10^{-4}$ and $T=10^{-5}$,
i.e. in fact, $M=M(x,T)$.
\begin{figure}[tbp]
\includegraphics[width=5in,angle=0]{M_h}
\caption{Magnetization curves calculated by ED
for $N=16$ and $N=20$ at fixed temperature $T=10^{-6}$. The inset
shows low-field limit of the magnetization curve calculated for
$N=20$ and two temperatures $T=10^{-4}$ and $T=10^{-5}$.}
\label{M_h}
\end{figure}
In order to study the low-field limit of the magnetization curve
we have calculated the uniform susceptibility per site
\begin{equation}
\chi =\frac{1}{3NT}\sum_{ij}\left\langle \mathbf{S}_{i}\cdot \mathbf{S}_{j}\right\rangle . \label{cor}
\end{equation}
The calculated dependencies of $\chi (T)$ for $N=16$ and $N=20$
are shown in Fig.\ 4. For convenience they are plotted as
$\ln(\chi T)$ vs.\ $\ln T$. Both curves are almost
indistinguishable for $T>10^{-3}$, indicating a weak finite-size
dependence. A linear fit in this temperature range for the log-log
plot of $\chi (T)$ yields a power-law dependence
\begin{equation}
\chi =\frac{c_{\chi }}{T^{\alpha }} \label{sus}
\end{equation}
with
\begin{eqnarray}
c_{\chi } &\simeq &0.317 \nonumber \\
\alpha &\simeq &1.09 \label{alpha}
\end{eqnarray}
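The fit itself is elementary. A sketch (with synthetic data standing in for the ED points, which are not tabulated here) is:
\begin{verbatim}
import numpy as np

T = np.logspace(-3, 0, 40)
chi = 0.317 / T**1.09      # synthetic stand-in for the ED data

# chi = c_chi/T**alpha  =>  ln(chi*T) = ln(c_chi) + (1-alpha)*ln(T)
slope, intercept = np.polyfit(np.log(T), np.log(chi * T), 1)
alpha, c_chi = 1.0 - slope, np.exp(intercept)
print(alpha, c_chi)        # recovers 1.09 and 0.317
\end{verbatim}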
\begin{figure}[tbp]
\includegraphics[width=5in,angle=0]{lnchiT_T}
\caption{Log-log plot for the dependence of the susceptibility per
site on temperature calculated for $N=16$ and $N=20$. The thin
solid line corresponds to Eq.\ (\ref{sus}).} \label{lnchi_lnT}
\end{figure}
As shown in Fig.\ 4, Eq.\ (\ref{sus}) coincides with the
numerical data for $N=16$ and $N=20$ from $T\sim 10^{-3}$ up to
$T=1$; only slight deviations near $T=0.1$ and $T=1$ are observed.
However, for $T<10^{-3}$ the curves $\chi(T)$ for $N=16$ and
$N=20$ start to split, and both deviate from Eq.\ (\ref{sus}).
At $T\to 0$ the susceptibility is determined by the contribution
of the degenerate ground states and it is
\begin{equation}
\chi =c_{N}\frac{N}{T} , \label{susT0}
\end{equation}
with $c_{N}$ given by Eq.~(\ref{M1}). For $N\gg 1$ it reduces to
$\chi =N/(48T)$.
We assume that both expressions for the susceptibility (\ref{sus})
and (\ref{susT0}) are described by a single universal finite-size
scaling function. This ansatz leads to the following form for the
finite-size susceptibility:
\begin{equation}
\chi_{N}(T)=T^{-\alpha }f(c_{N}NT^{\alpha -1}) . \label{chiscal}
\end{equation}
Indeed, the behavior of the scaling function, $f(z)=z$ for $z\ll 1$,
reproduces the limit given by Eq.\ (\ref{susT0}). In the
thermodynamic limit, when $z=c_{N}NT^{\alpha -1}\to\infty$, the
scaling function $f(z)$ tends to the finite value $c_{\chi }$, in
full accord with Eq.\ (\ref{sus}). The crossover between the two
types of the susceptibility behavior occurs at $z\sim 1 $, which
defines the effective temperature of the crossover $T_{0}\sim
N^{-1/(\alpha -1)}$. At $T<T_{0}$ the susceptibility is determined
mainly by the contribution of the degenerate ground states, but
this regime vanishes in the thermodynamic limit where $T_{0}=0$.
Substituting the value $\alpha \simeq 1.09$, we obtain a very large
exponent, $1/(\alpha -1)\simeq 11$, so that $T_{0}\sim 1/N^{11}$.
This exponent defines the energy scale of the excited states which
contribute to the susceptibility.
The scaling hypothesis written in Eq.\ (\ref{chiscal}) is
confirmed numerically. In Fig.\ \ref{Fig_scale} the ED data for
$N=16$ and $N=20$ are plotted in the axes $\chi_N T^{\alpha }$
vs.\ $c_{N}NT^{\alpha -1}$. As shown in Fig.\ \ref{Fig_scale}, the
data for $N=16$ and $N=20$ lie very close to each other and define
the scaling function $f(z)$.
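The collapse requires only the mapping onto the scaling axes. The sketch below uses our reading of Eq.~(\ref{M1}) for $c_{N}$ (with $n=N/2$) and returns, for a data set $(T,\chi_{N})$, the points $(z,f(z))$ of Fig.~\ref{Fig_scale}:
\begin{verbatim}
import numpy as np
from scipy.special import comb

def c_N(N):
    # Eq. (M1) with n = N/2 (our reading of the formula)
    n = N // 2
    num = 2.0**(n - 2) * n * (n + 1) \
        + comb(n, n // 2) * (0.75 * n**2 + 0.5 * comb(n, 3))
    den = n**2 * 2.0**(n + 2) + 4.0 * n**3 * comb(n, n // 2)
    return num / den

def collapse(T, chi, N, alpha=1.09):
    z = c_N(N) * N * T**(alpha - 1.0)
    return z, chi * T**alpha   # curves for different N should coincide
\end{verbatim}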
\begin{figure}[tbp]
\includegraphics[width=5in,angle=0]{scale}
\caption{Universal scaling function for the dependence of the
finite-size susceptibility on temperature defined in
Eq.(\ref{chiscal}) calculated by ED for $N=16$ and $N=20$. Thin
dashed lines correspond to Eqs.\ (\ref{sus}) and (\ref{susT0}).}
\label{Fig_scale}
\end{figure}
The obtained temperature dependence of $\chi (T)$, Eq.~(\ref{sus}), allows
us to determine the low-field behavior of the magnetization curve
\begin{equation}
\frac{M}{N}=c_{\chi }\frac{h}{T^{\alpha }} . \label{Mlowh}
\end{equation}
This implies that the low-field magnetization is a function of the
single scaling variable $y=h/T^{\alpha }$. This statement is
confirmed by the numerical calculations presented in Fig.\ 6. As
shown in Fig.\ 6, the magnetization calculated for different (and
small) values of the field $h$ and the temperature $T$ collapses
onto a single curve when plotted against the scaling variable
$y=h/T^{\alpha }$ with $\alpha =1.09$.
\begin{figure}[tbp]
\includegraphics[width=5in,angle=0]{M_lowh109}
\caption{Dependence of the magnetization per site on the scaling
parameter $y=h/T^{1.09}$ calculated by ED ($N=20$) for different
values of the magnetic field $h$ and temperature $T$. Thin solid
line corresponds to Eq.\ (\ref{Mlowh}).} \label{M_lowh109}
\end{figure}
The temperature dependence of the spin correlation functions
$\langle {\bf S}_i\cdot{\bf S}_j\rangle $ for $N=16$ is presented
in Fig.\ 7. For low temperatures, $T\leq 10^{-3}$, the spin
correlation functions are almost constant, and the sum in Eq.\
(\ref{cor}) at $T=10^{-9}$ is equal to $c_{16}$, with $c_{16}$
given by Eq.\ (\ref{M1}). For $T>10^{-3}$ the correlations decay
with increasing $T$ and with the distance between the spins.
\begin{figure}[tbp]
\includegraphics[width=5in,angle=0]{sisj_log}
\caption{Temperature dependence of various spin correlators
$\langle {\bf S}_i\cdot{\bf S}_j\rangle $ (ED data for $N=16$).
The numbering in the legend corresponds to Fig.~\ref{fig1}
(periodic boundary conditions imposed).} \label{sisj}
\end{figure}
Let us consider now the entropy and the specific heat. We note
that the partition function (\ref{Z}) at $h=0$ does not depend on
the temperature, so the Helmholtz free energy per site is
\begin{equation}
\frac{F}{N}=-\frac{T}{N}\ln Z=-Ts_{0} .
\end{equation}
The fact that $Z$ in Eq.\ (\ref{Z}) does not depend on $T$ at $h=0$ means that
the partition function (\ref{Z}), which accounts only for the degenerate
ground states, cannot describe the thermodynamics at $T>0$.
Nevertheless, Eq.\ (\ref{Z}) gives the exact value of the residual
entropy given by Eqs.\ (\ref{W}) and (\ref{s0}).
The numerical data for the $T$-dependence of the entropy at $h=0$
obtained by ED are shown in Fig.\ 8. As can be seen there, the data
for $N=16$ and $N=20$ coincide for $T>10^{-3}$ and split for
$T<10^{-3}$. At $T\to 0$ the entropies for $N=16$ and $N=20$ tend
to the different residual values given by Eq.\ (\ref{W}).
From these facts we conclude that the finite-size effects in our
calculations become substantial for $T<10^{-3}$, while the obtained
data for $T>10^{-3}$ describe well the behavior of the
entropy at $N\to \infty $. Therefore, we used the data for
$T>10^{-3}$ only, and found that the behavior of the entropy in
the thermodynamic limit is, to a first approximation, reasonably well
described by a power-law dependence (see Fig.~8):
\begin{equation}
\frac{S(T)}{N}=\frac{1}{2}\ln 2+c_{s}T^{\lambda } \label{STfit}
\end{equation}
with $c_{s}\simeq 0.245$ and $\lambda \simeq 0.12$.
\begin{figure}[tbp]
\includegraphics[width=5in,angle=0]{entropy2}
\caption{Dependence of the entropy per site on temperature
calculated for $N=16$ and $N=20$ and presented in a logarithmic
scale. The thick solid line describes the approximate smooth
expression given by Eq.\ (\ref{STfit}). The inset shows the
low-temperature limit of $S(T)$.} \label{entropy_fig}
\end{figure}
The dependence of the specific heat on the temperature is
presented in Fig.\ 9. It has a peculiar form and is characterized by
a broad maximum at $T\simeq 0.7$ and two weak maxima at $T\leq
0.1$.
It is important to note that the data for $N=16$ and $N=20$ are
slightly different at $T<10^{-3}$ but indistinguishable
for $T>10^{-3}$, testifying that these data are already close to
those in the thermodynamic limit. Therefore, we conclude that the
prominent features of this dependence persist at $N\to
\infty$.
\begin{figure}[tbp]
\includegraphics[width=5in,angle=0]{CT}
\caption{Dependence of the specific heat on temperature calculated for
$N=16$ (dashed line) and $N=20$ (solid line).} \label{C_T}
\end{figure}
\section{Magnetocaloric effect}
It is well known \cite{Zhit} that spin systems with a
macroscopically degenerate ground state show an appreciable
magnetocaloric effect, i.e.\ the system cools under
adiabatic demagnetization. The standard materials for magnetic
cooling are paramagnetic salts. Geometrically frustrated
quantum spin systems can be considered as alternative materials
for low-temperature magnetic cooling. The macroscopic degeneracy
of the ground state at the saturation magnetic field in some of
them, including the AF delta chain, leads to an enhanced
magnetocaloric effect in the vicinity of this field
\cite{Honecker,Zhit1,derzhko2006,Schmidt,Garlatti}. However, the
saturation field is relatively high in real materials, and
practical applications of such systems for magnetic cooling are
rather questionable.
In contrast, the F-AF delta chain with $\alpha =\frac{1}{2}$ has a
finite zero-temperature entropy at zero magnetic field. Therefore,
it is interesting to consider the magnetocaloric properties of
this model. The efficiency of the magnetic cooling is
characterized by the cooling rate $(\frac{\partial T}{\partial
h})_{s}$ and so it is determined by the dependence $T(h)$ at a
fixed value of the entropy. This dependence at small $h$ and $T$
can be found using the results obtained in the previous Sections.
According to the standard thermodynamic relations the entropy
$S(T,h)$ is connected with the magnetization curve by
\begin{equation}
S(T,h)-S(T,0)=\frac{\partial }{\partial T}\int_{0}^{h}M(T,h')dh'
\label{entropy2}
\end{equation}
As was stated in the previous Section, there are two regions with
different behavior of the magnetization curve. For very low
magnetic field $h<T^{\alpha}$ the magnetization is proportional to
$h$ according to Eq.\ (\ref{Mlowh}). For higher magnetic field
$h>T^{\alpha}$ (but both $h\ll 1$ and $T\ll 1$) the magnetization
curve is described by Eq.\ (\ref{M2}). Therefore, we will consider
these two cases separately.
First we study the low-field case $h<T^{\alpha}$. Substituting
the expression (\ref{Mlowh}) into Eq.\ (\ref{entropy2}), we obtain the
entropy per site $s(T,h)=S(T,h)/N$:
\begin{equation}
s(T,h)=s(T,0)-\frac{\alpha c_{\chi }h^{2}}{2T^{\alpha +1}}
\label{entropy1}
\end{equation}
where the function $s(T,0)=S(T,0)/N$ is given by Eq.\ (\ref{STfit}).
From Eq.\ (\ref{entropy1}) we obtain the function $h(T)$ at
constant entropy, $s(T,h)=s^{\ast }$, as
\begin{equation}
h(T)=\sqrt{\frac{2(s_{0}+c_{s}T^{\lambda }-s^{\ast })}{\alpha
c_{\chi }}}T^{(\alpha +1)/2} \label{hT1}
\end{equation}
where $s_0=\ln 2/2$ as given by Eq.\ (\ref{s0}).
From Eq.\ (\ref{hT1}) we see that the cases $s^{\ast}<s_{0}$ and
$s^{\ast}>s_{0}$ are different. For the case $s^{\ast}\geq s_{0}$
the temperature tends to the finite value $T_{0}$ at $h\to 0$:
\begin{equation}
T_{0}=\left( \frac{s^{\ast }-s_{0}}{c_{s}} \right)^{1/\lambda } .
\label{T0}
\end{equation}
In other words, $T_{0}$ is the lowest temperature which can be
reached in the adiabatic demagnetization process if the entropy
exceeds $s_{0}$. For low magnetic fields, Eq.\ (\ref{hT1}) allows us
to express the dependence $T(h)$ as:
\begin{equation}
T(h)=T_{0}+\frac{\alpha c_{\chi} h^{2}}{2\lambda
c_{s}T_{0}^{\alpha +\lambda }} .
\end{equation}
In the limit $T\gg T_{0}$, the curve $T(h)$ transforms into
\begin{equation}
T(h)=\left(\frac{\alpha c_{\chi }}{2c_{s}}\right)^{1/(1+\alpha
+\lambda )}h^{2/(1+\alpha +\lambda )} .
\end{equation}
Substituting the values for $\alpha$, $c_{\chi }$, $\lambda$ and
$c_{s}$ into the latter equation, we get
\begin{equation}
T(h)\simeq 0.85 h^{0.905} \label{Th1}
\end{equation}
which gives the cooling rate
\begin{equation}
\left(\frac{\partial T}{\partial h}\right)_{s^{\ast}} \simeq 0.77
h^{-0.095} . \label{dTdh1}
\end{equation}
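For orientation, the low-field isentropes of Eq.~(\ref{hT1}) are easily evaluated numerically; the short sketch below uses the fitted constants from Eqs.~(\ref{alpha}) and (\ref{STfit}) (the temperature grid and the clipping of the square-root argument below $T_{0}$ are our own choices):
\begin{verbatim}
import numpy as np

alpha, lam = 1.09, 0.12
c_chi, c_s = 0.317, 0.245
s0 = 0.5 * np.log(2.0)

def h_of_T(T, s_star):
    # Eq. (hT1); for s* > s0 the isentrope exists only above T0 of Eq. (T0),
    # so the argument is clipped at zero there
    arg = 2.0 * (s0 + c_s * T**lam - s_star) / (alpha * c_chi)
    return np.sqrt(np.clip(arg, 0.0, None)) * T**((alpha + 1.0) / 2.0)

T = np.logspace(-6, -2, 200)
for ds in (-0.05, 0.0, 0.05):
    h = h_of_T(T, s0 + ds)    # low-field parts of the T(h) curves of Fig. 10
\end{verbatim}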
As follows from Eq.\ (\ref{T0}), for the special case
$s^{\ast}=s_{0}$ the critical temperature $T_0$ vanishes, and
Eqs.~(\ref{Th1}) and (\ref{dTdh1}) are valid in the low-temperature
limit.
In the case $s^{\ast }<s_{0}$ we can omit the term
$c_{s}T^{\lambda }$ in Eq.~(\ref{hT1}), which means that $T\to 0$
at $h\to 0$. The cooling rate for $T\ll \left[ (s_{0}-s^{\ast
})/c_{s}\right] ^{1/\lambda }$ is given by the following expression:
\begin{equation}
\left(\frac{\partial T}{\partial h}\right)_{s^{\ast}}=
\frac{0.413}{(s_{0}-s^{\ast })^{0.48}}h^{-0.043} .
\end{equation}
For the case of small $h$ and $T$ but $h/T\gg 1$ we can calculate
the integral in Eq.\ (\ref{entropy2}) using the expression for the
magnetization given by Eq.\ (\ref{M2}). Then the entropy $s^{\ast }$
is
\begin{equation}
s^{\ast }=\frac{1}{2}\ln (1+e^{-h/T})+\frac{h}{2T(e^{h/T}+1)} .
\label{entropy3}
\end{equation}
This entropy coincides with the entropy per site of an ideal
paramagnet of $\frac{N}{2}$ spins $\frac{1}{2}$. The
transcendental Eq.\ (\ref{entropy3}) does not allow us to derive an
explicit expression for $T(h)$. However, since the magnetic field
and the temperature enter Eq.\ (\ref{entropy3}) only in the
combination $h/T$, the dependence $T(h)$ is a linear function. In
the limit $h/T\gg 1$ ($s^{\ast }\ll 1$) one has $T(h)\simeq -h/\ln
(2s^{\ast })$.
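Although Eq.~(\ref{entropy3}) is transcendental, $T(h)$ follows immediately from one-dimensional root finding, since $s^{\ast}$ is a monotonically decreasing function of $x=h/T$ (the bracketing interval below is our own choice):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def s_para(x):
    # Eq. (entropy3), x = h/T; decreases from (ln 2)/2 at x = 0 to 0 at large x
    return 0.5 * np.log(1.0 + np.exp(-x)) + x / (2.0 * (np.exp(x) + 1.0))

def T_of_h(h, s_star):         # valid for 0 < s* < (ln 2)/2
    x = brentq(lambda u: s_para(u) - s_star, 1e-9, 200.0)
    return h / x               # T is linear in h, as stated above

print(T_of_h(0.01, 0.1))
\end{verbatim}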
We have calculated the function $T(h)$ by ED for $N=16$ for
several fixed values of the entropy, see Fig.\ 10. It is seen
there that the cooling rate increases when $s^{\ast}$ approaches
$s_{0}$ from below. For $s^{\ast}>s_{0}$ a nonzero $T_{0}$
appears, but for $T>T_{0}$ the cooling rate is rather high. For
small $h$ and $T$ the behavior of the curves $T(h)$ agrees with
that given by Eqs.\ (\ref{entropy2})-(\ref{entropy3}).
With real materials and applications in mind, one should be aware
that the magnetocaloric effect is expected to be somewhat reduced
due to deviations from the critical point considered here and due to
the always present residual interactions beyond those considered
in Eq.\ (1). A quantitative and systematic study of these effects is
postponed to subsequent studies.
\begin{figure}[tbp]
\includegraphics[width=5in,angle=0]{calor}
\caption{Constant entropy curves as a function of the applied
magnetic field and temperature for $N=16$.} \label{calorfig}
\end{figure}
\section{Conclusion}
We have studied the ground state and the low-temperature
thermodynamics of the delta chain with F and AF interactions at
the transition point between the ferromagnetic and the
ferrimagnetic ground states. The most spectacular feature of this
frustrated quantum many-body system is the existence of a
macroscopically degenerate set of ground states leading to a large
residual entropy per spin of $s_0=\frac{1}{2}\ln 2$. Remarkably,
for these ground states explicit exact expressions can be found.
Among the exact ground states in the spin sector $S_{tot}=S_{\max
}-k$ there are states consisting of $k$ independent
(non-overlapping) magnons each of which is localized between two
neighboring apical sites. The same class of localized ground
states exists for the sawtooth model (\ref{q}) with both AF
interactions at the saturation field \cite{Zhitomir,derzhko,derz}.
However, such states do not exhaust all ground states in the
considered model. In addition to them, there are exact ground
states of another type consisting of products of overlapping
localized magnons. Since such states do not exist for the sawtooth
chain with both AF interactions, in this respect the considered
model with F and AF interactions differs from the AF model. We
have checked our analytical predictions for the degeneracy of the
ground states in the sectors $S_{tot}=S_{\max }-k$ by comparing
them with numerical data for finite chains. The ground-state
degeneracy grows exponentially with the system size $N$ and leads
to the above-mentioned finite entropy per site at $T=0$. A
characteristic property of the excitation spectrum of the
$k$-magnon states is the sharp decrease of the gap between the
ground states and the excited ones when $k$ grows. As a result,
both the highly degenerate ground-state manifold and the
low-lying excited states contribute substantially to the partition
function, especially at small $T$. This is confirmed by the
comparison of the data for the magnetization $M$ and the
susceptibility $\chi$ obtained by ED of finite chains with those
given by the contribution of the degenerate ground states alone.
The subtle interplay of ground states and excited states leads to
unconventional low-temperature properties of the model. We have
shown that the magnetization $M$ at small $h$ and $T$ is a
function of the universal variable $h/T^{\alpha }$ with an index
$\alpha =1.09\pm 0.01$. This value of $\alpha $ agrees with the
critical index for the susceptibility. Furthermore, we have
analyzed the behavior of $\chi $ for finite chains. We have found
that this behavior can be described by one universal finite-size
scaling function. The entropy and the specific heat have also been
calculated by ED for finite chains. The entropy per site is finite
at $T=0$ and increases approximately following a power law
at $T>0$. The temperature dependence of the specific heat has a
rather interesting form characterized by a broad maximum at
$T\simeq 0.7$ and two weak maxima at $T\leq 0.1$.
Similarly to the model with both AF interactions, there is an
enhanced magnetocaloric effect. While for the AF model this enhanced
effect is observed when passing the saturation field, for the
considered model we find it when the applied magnetic field is
switched off, which is obviously more suitable for possible
applications.
In conclusion, we note that the structure of the ground state
formed by the localized magnons is realized not only at the
critical point of the spin-$1/2$ F-AF delta-chain but also in the
$(s_1,s_2)$ chain, where $s_1$ and $s_2$ are the spins on the apical
and the basal sites, respectively. The critical point for this
model is $\alpha_c=s_1/(2s_2)$, and the ground state at this critical
point has the same degeneracy as that of the $s=1/2$ chain.
The \decay{\Bd}{\Kstarz \g} and $\ensuremath{\Bbar^0}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{\Kbar^{*0}}\xspace\gamma$ invariant mass distributions are fitted simultaneously to measure a raw asymmetry defined as
\begin{equation}
\mathcal{A}_{\mathrm{RAW}}=\frac{N(\ensuremath{\kaon^-}\xspace\ensuremath{\pion^+}\xspace\ensuremath{\Pgamma}\xspace)-N(\ensuremath{\kaon^+}\xspace\ensuremath{\pion^-}\xspace\ensuremath{\Pgamma}\xspace)}{N(\ensuremath{\kaon^-}\xspace\ensuremath{\pion^+}\xspace\ensuremath{\Pgamma}\xspace)+N(\ensuremath{\kaon^+}\xspace\ensuremath{\pion^-}\xspace\ensuremath{\Pgamma}\xspace)}\,,
\end{equation}
where $N(X)$ is the signal yield measured in the final state $X$. This asymmetry must be corrected for detection and production effects to measure the physical \ensuremath{C\!P}\xspace asymmetry. The detection asymmetry arises mainly from the kaon quark content giving a different interaction rate with the detector material depending on its charge. The \ensuremath{\B^0}\xspace and \ensuremath{\Bbar^0}\xspace mesons may also not be produced with the same rate in the region covered by the \mbox{LHCb}\xspace detector, inducing the \ensuremath{\B^0}\xspace meson production asymmetry. The physical \ensuremath{C\!P}\xspace asymmetry and these two corrections are related through
\begin{equation}
{\cal A}_{\ensuremath{C\!P}\xspace}(\decay{\Bd}{\Kstarz \g}) = {\cal A}_{\mathrm{RAW}}(\decay{\Bd}{\Kstarz \g}) - {\cal A}_\mathrm{D}(K\pi) - \kappa {\cal A}_\mathrm{P}(\ensuremath{\B^0}\xspace)\,,
\label{equation:CPasym}
\end{equation}
where ${\cal A}_\mathrm{D}(K\pi)$ and ${\cal A}_\mathrm{P}(\ensuremath{\B^0}\xspace)$ represent the detection asymmetry of the kaon and pion pair and \ensuremath{\B^0}\xspace meson production asymmetry, respectively. The dilution factor $\kappa$ arises from the oscillations of neutral \ensuremath{\PB}\xspace mesons.
To determine the raw asymmetry, the fit keeps the same signal mean and width, as well as the same mass-window threshold parameters for the \ensuremath{\B^0}\xspace and \ensuremath{\Bbar^0}\xspace signal. The yields of the combinatorial background and partially reconstructed decays are allowed to vary independently. The relative amplitudes of the exclusive peaking backgrounds, $\ensuremath{\L^0_\bquark}\xspace\ensuremath{\rightarrow}\xspace \ensuremath{\PLambda}\xspace^*\gamma$, $\ensuremath{\B^0_\squark}\xspace\ensuremath{\rightarrow}\xspace K^{*0}\gamma$ and $B^0_{(s)}\ensuremath{\rightarrow}\xspace K^+\pi^-\pi^0$, are fixed to the same values for both \ensuremath{\PB}\xspace flavours.
\begin{figure}[b]
\begin{center}
\includegraphics[width=18pc]{fig3a.eps}
\includegraphics[width=18pc]{fig3b.eps}
\caption{\small Invariant-mass distributions of the (a) \ensuremath{\Bbar^0}\xspace$\rightarrow$\ensuremath{\Kbar^{*0}}\xspace\ensuremath{\Pgamma}\xspace and (b) \decay{\Bd}{\Kstarz \g} decay candidates. The black points represent the data and the fit result is represented as a solid blue line. The different background components are also shown. The Poisson \ensuremath{\chi^2}\xspace residuals~\cite{Baker:chi2:1984} are shown below the fits with the $\pm2\,\sigma$ confidence-level interval delimited by solid red lines. \label{figure:fitsCP}}
\end{center}
\end{figure}
\figf{fitsCP} shows the result of the simultaneous fit. The yields of the combinatorial background across the entire mass window are compatible within statistical uncertainty. The number of combinatorial background candidates is $2070\pm414$ and $1552\pm422$ in the full mass range for the \decay{\Bd}{\Kstarz \g} and $\ensuremath{\Bbar^0}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{\Kbar^{*0}}\xspace\gamma$ decays, respectively. The contribution from the charmless partially reconstructed decay \decay{\ensuremath{\Bu}\xspace}{\ensuremath{\kaon^{*0}}\xspace\ensuremath{\pion^+}\xspace\gamma} to \decay{\Bd}{\Kstarz \g} and $\ensuremath{\Bbar^0}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{\Kbar^{*0}}\xspace\gamma$ is $(10\pm6)\,\%$ and $(24\pm7)\,\%$ of the signal yield, respectively. Furthermore, the charmed partially reconstructed decays $B\ensuremath{\rightarrow}\xspace \ensuremath{\kaon^{*0}}\xspace\pi^0\mathrm{X}$ contribute with $(7\pm8)\,\%$ and $(9\pm8)\,\%$ of the signal yield to the \decay{\Bd}{\Kstarz \g} and $\ensuremath{\Bbar^0}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{\Kbar^{*0}}\xspace\gamma$ decays, respectively. The latter decays give contributions that are mainly located outside the signal invariant-mass region, as can be seen from \fig{fitsCP}.
The value of the raw asymmetry determined from the fit is $\mathcal{A}_{\mathrm{RAW}} = (0.3\pm1.7)\,\%$, where the uncertainty is statistical only.
The systematic uncertainty from the background modelling is determined as explained in \secr{background}. To address the systematic uncertainty from the possible \ensuremath{C\!P}\xspace asymmetry in the background, the yield of the $\ensuremath{\B^0}\xspace\ensuremath{\rightarrow}\xspace K^+\pi^-\pi^0$ decay is varied within its measured \ensuremath{C\!P}\xspace asymmetry \mbox{$\mathcal{A}_{\ensuremath{C\!P}\xspace}(\ensuremath{\B^0}\xspace\ensuremath{\rightarrow}\xspace K^{*0}\pi^0)=(-15\pm 12)\%$~\cite{hfag:2012}}. For the other decays, a measurement of the \ensuremath{C\!P}\xspace asymmetry has not been made. The variation is therefore performed over the full $\pm100\%$ range. The effect of these variations on $\mathcal{A}_{\mathrm{RAW}}$ gives rise to a Gaussian distribution centred at $-0.2\%$ with a standard deviation of 0.7\%, thus a correction of $\Delta {\cal A}_{\mathrm{bkg}}=(-0.2 \pm 0.7)\%$ is applied. The systematic uncertainty from the signal modelling is evaluated using a similar procedure and is found to be negligible. The possible double misidentification (\mbox{$K^-\pi^+\ensuremath{\rightarrow}\xspace\pi^- K^+$}) in the final state would induce a dilution of the measured raw asymmetry. This is evaluated using simulated events and is also found to be negligible.
An instrumental bias can be caused by the vertical magnetic field, which deflects oppositely-charged particles into different regions of the detector. Any non-uniformity of the instrumental performance could introduce a bias in the asymmetry measurement. This potential bias is experimentally reduced by regularly changing the polarity of the magnetic field during data taking. As the integrated luminosity is slightly different for the ``up" and ``down" polarities, a residual bias could remain. This bias is studied by comparing the \ensuremath{C\!P}\xspace asymmetry measured separately in each of the samples collected with opposite magnet polarity, up or down. Table~\ref{table:AcpPol} summarises the \ensuremath{C\!P}\xspace asymmetry and the number of signal candidates for the two magnet polarities. The asymmetries with the two different polarities are determined to be compatible within the statistical uncertainties and the luminosity-weighted average, ${\cal A}_{\mathrm{RAW}}=(0.4\pm1.7)\%$, is in good agreement with the \ensuremath{C\!P}\xspace asymmetry measured in the full data sample.
\begin{table}[ht!]
\begin{center}
\caption{\small \label{table:AcpPol} \ensuremath{C\!P}\xspace asymmetry and total number of signal candidates measured for each magnet polarity.}
\begin{tabular}{lcc}
\hline
& Magnet Up & Magnet Down \Tstrut\Bstrut\\
\hline
$\int {\cal L}dt$ (\ensuremath{\mbox{\,pb}^{-1}}\xspace) & $432\pm15$ & $588\pm21$ \Tstrut\Bstrut\\
${\cal A}_{\mathrm{RAW}}$ (\%) & $1.3\pm 2.6$ & $-0.4\pm 2.2$ \Bstrut\\
Signal candidates & $2189\pm65$ & $3103\pm71$ \Bstrut\\
\hline
\end{tabular}
\end{center}
\end{table}
The residual bias can be extracted from the polarity-split asymmetry as
\begin{equation}
\Delta{\cal A}_\mathrm{M}=\left(\frac{\mathcal{L}^{\mathrm{up}}-\mathcal{L}^{\mathrm{down}}}{\mathcal{L}^{\mathrm{up}}+\mathcal{L}^{\mathrm{down}}}\right) \left(\frac{\mathcal{A}^{\mathrm{down}}_{\mathrm{RAW}}-\mathcal{A}^{\mathrm{up}}_{\mathrm{RAW}}}{2}\right)\,,
\end{equation}
which is found to be consistent with zero, $\Delta{\cal A}_\mathrm{M} = (+0.1\pm0.2)\,\%$. The raw asymmetry obtained from the fit is corrected by $\Delta {\cal A}_{\mathrm{bkg}}$ and $\Delta{\cal A}_\mathrm{M}$.
The detection asymmetry can be defined in terms of the detection efficiencies of the charge-conjugate final states by
\begin{equation}
{\cal A}_\mathrm{D}(K\pi)=\frac{\epsilon(\ensuremath{\kaon^-}\xspace\ensuremath{\pion^+}\xspace)-\epsilon(\ensuremath{\kaon^+}\xspace\ensuremath{\pion^-}\xspace)}{\epsilon(\ensuremath{\kaon^-}\xspace\ensuremath{\pion^+}\xspace)+\epsilon(\ensuremath{\kaon^+}\xspace\ensuremath{\pion^-}\xspace)}\,.
\end{equation}
The related asymmetries have been studied at \mbox{LHCb}\xspace using control samples of charm decays \cite{Carbone:exp-analysis-charmless-2body:2011}. It has been found that for \mbox{$K\pi$} pairs in the kinematic range relevant for our analysis the detection asymmetry is ${\cal A}_\mathrm{D}(K\pi)=(-1.0\pm 0.2)\%$.
The \ensuremath{\PB}\xspace production asymmetry is defined in terms of the different production rates
\begin{equation}
{\cal A}_\mathrm{P}(B^0)=\frac{R(\ensuremath{\Bbar^0}\xspace)-R(\ensuremath{\B^0}\xspace)}{R(\ensuremath{\Bbar^0}\xspace)+R(\ensuremath{\B^0}\xspace)}
\end{equation}
and has been measured at \mbox{LHCb}\xspace to be ${\cal A}_\mathrm{P}(B^0)=(1.0\pm 1.3)\%$ using large samples of \mbox{\decay{\Bd}{\jpsi\Kstarz}} decays~\cite{Carbone:exp-analysis-charmless-2body:2011}. The contribution of the production asymmetry to the measured \ensuremath{C\!P}\xspace asymmetry is diluted by a factor $\kappa$, defined as
\begin{equation}
\kappa=\frac{\int^{\infty}_0\cos(\Delta m_d t)e^{-\Gamma_d t}\epsilon(t) dt}{\int^{\infty}_0\cosh(\frac{\Delta\Gamma_dt}{2})e^{-\Gamma_d t}\epsilon(t) dt}\,,
\end{equation}
where $\Delta m_d$ and $\Delta\Gamma_d$ are the mass difference and the decay width difference between the mass eigenstates of the $\ensuremath{\B^0}\xspace-\ensuremath{\Bbar^0}\xspace$ system, $\Gamma_d$ is the average of their decay widths and $\epsilon(t)$ is the decay-time acceptance function of the signal selection. The latter has been determined from data using the decay-time distribution of background-subtracted signal candidates, the known \ensuremath{\B^0}\xspace lifetime and assuming $\Delta\Gamma_d=0$. The dilution factor is found to be $\kappa=0.41\pm 0.04$, where the uncertainty comes from knowledge of the acceptance function parameters as well as $\Gamma_d$ and $\Delta m_d$.
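For illustration, the dilution integral can be evaluated numerically. The sketch below is not the analysis code: the decay-time acceptance $\epsilon(t)$ used here is an arbitrary stand-in (the true acceptance was determined from data), $\Delta\Gamma_d=0$ is assumed so that the $\cosh$ term reduces to unity, and the input values are approximate:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

Gd  = 1.0 / 1.519                    # ps^-1, from the known B0 lifetime
dmd = 0.51                           # ps^-1, approximate B0 mass difference
eps = lambda t: t**2 / (1.0 + t**2)  # placeholder acceptance (assumption)

num, _ = quad(lambda t: np.cos(dmd*t) * np.exp(-Gd*t) * eps(t), 0.0, 50.0)
den, _ = quad(lambda t: np.exp(-Gd*t) * eps(t), 0.0, 50.0)
print("kappa =", num / den)          # value depends strongly on eps(t)
\end{verbatim}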
\begin{table}[th!]
\begin{center}
\caption{\small Corrections to the raw asymmetry and corresponding systematic uncertainties.\label{table:breakdown}}
\begin{tabular}{lll}
\hline
Correction to ${\cal A}_{\mathrm{RAW}}$ & & Value [\%]\Tstrut\Bstrut \\
\hline
Background model &$\Delta {\cal A}_{\mathrm{bkg}}$\Tstrut & $-0.2\pm0.7$ \\
Magnet polarity &$\Delta {\cal A}_{\mathrm{M}}$\Tstrut & $+0.1\pm0.3$ \\
Detection &$-{\cal A}_{\mathrm{D}}(K\pi)$\Tstrut & $+1.0\pm0.2$ \\
\ensuremath{\B^0}\xspace production &$-\kappa {\cal A}_{\mathrm{P}}(\ensuremath{\B^0}\xspace)$\Tstrut & $-0.4\pm0.5$ \\
\hline
Total & & $+0.5\pm0.9$ \Tstrut\Bstrut \\
\hline
\end{tabular}
\end{center}
\end{table}
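Numerically, the combination amounts to a sum of the corrections with the uncertainties added in quadrature, as the short sketch below illustrates (values in \%, taken from Table \ref{table:breakdown} and the fit; the statistical uncertainty is carried through unchanged):
\begin{verbatim}
corrections = {                      # central value, uncertainty, in %
    "background": (-0.2, 0.7),
    "magnet":     (+0.1, 0.3),
    "detection":  (+1.0, 0.2),
    "production": (-0.4, 0.5),
}
a_raw, e_stat = 0.3, 1.7
a_cp = a_raw + sum(c for c, _ in corrections.values())
e_syst = sum(e**2 for _, e in corrections.values()) ** 0.5
print(f"ACP = ({a_cp:.1f} +- {e_stat} (stat.) +- {e_syst:.1f} (syst.)) %")
\end{verbatim}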
Adding the above corrections, which are summarised in Table \ref{table:breakdown}, to the raw asymmetry, the direct \ensuremath{C\!P}\xspace asymmetry in \decay{\Bd}{\Kstarz \g} decays is measured to be
\begin{equation*}
{\cal A}_{\ensuremath{C\!P}\xspace}(\decay{\Bd}{\Kstarz \g}) = (0.8\pm1.7\,\mathrm{(stat.)}\pm0.9\,\mathrm{(syst.)})\%\,.
\end{equation*}
\section*{Acknowledgements}
\noindent We express our gratitude to our colleagues in the CERN accelerator
departments for the excellent performance of the LHC. We thank the
technical and administrative staff at CERN and at the LHCb institutes,
and acknowledge support from the National Agencies: CAPES, CNPq,
FAPERJ and FINEP (Brazil); CERN; NSFC (China); CNRS/IN2P3 (France);
BMBF, DFG, HGF and MPG (Germany); SFI (Ireland); INFN (Italy); FOM and
NWO (The Netherlands); SCSR (Poland); ANCS (Romania); MinES of Russia and
Rosatom (Russia); MICINN, XuntaGal and GENCAT (Spain); SNSF and SER
(Switzerland); NAS Ukraine (Ukraine); STFC (United Kingdom); NSF
(USA). We also acknowledge the support received from the ERC under FP7
and the Region Auvergne.
\section{Signal and background description \label{section:background}}
The signal yields of the \decay{\Bd}{\Kstarz \g} and \decay{\Bs}{\phi \g} decays are determined from an extended unbinned maximum-likelihood fit performed simultaneously to the invariant-mass distributions of the \ensuremath{\B^0}\xspace and \ensuremath{\B^0_\squark}\xspace candidates. A constraint on the \ensuremath{\B^0}\xspace and \ensuremath{\B^0_\squark}\xspace masses is included in the fit which requires the difference between them to be consistent with the \mbox{LHCb}\xspace measurement of \mbox{$87.3\pm0.4$\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace~\cite{lhcb:Bmasses}}. The \ensuremath{\kaon^{*0}}\xspace and \ensuremath{\phi}\xspace resonances are described by a relativistic \mbox{$P$-wave} Breit-Wigner distribution~\cite{herab:2006} convoluted with a Gaussian distribution to take into account the detector resolution. The natural width of the resonances is fixed to the world average value~\cite{pdg2012}. A polynomial line shape is added to describe the background. The resulting distribution is fitted to the vector meson invariant-mass distribution, as shown in \fig{vector}.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=18pc]{fig1a.eps}
\includegraphics[width=18pc]{fig1b.eps}
\caption{\small Invariant-mass distributions of the (a) \ensuremath{\kaon^{*0}}\xspace and (b) \ensuremath{\phi}\xspace resonance candidates. The black points represent the data and the fit result is represented as a solid blue line. The fit is described in the text. The regions outside the vector meson invariant-mass window are shaded. The Poisson \ensuremath{\chi^2}\xspace residuals~\cite{Baker:chi2:1984} are shown below the fits with the $\pm2\,\sigma$ confidence-level interval delimited by solid red lines. \label{figure:vector}}
\end{center}
\end{figure}
The fit to the invariant mass of the vector-meson candidates yields a resonance mass of \mbox{$895.7\pm0.4$\ensuremath{\mathrm{\,Me\kern -0.1em V}}\xspace} and $1019.42\pm0.09$\ensuremath{\mathrm{\,Me\kern -0.1em V}}\xspace for the \ensuremath{\kaon^{*0}}\xspace and \ensuremath{\phi}\xspace, respectively, in agreement with the world average values~\cite{pdg2012}. The detector resolution extracted from the fit is $5\pm4$\ensuremath{\mathrm{\,Me\kern -0.1em V}}\xspace for the \ensuremath{\kaon^{*0}}\xspace resonance and $1.3\pm0.1$\ensuremath{\mathrm{\,Me\kern -0.1em V}}\xspace for the \ensuremath{\phi}\xspace. The effect of taking the value found in data or the world average as the central value of the vector meson mass window is negligible. In addition no systematic uncertainty due to the choice of the line shape of the resonances is assigned.
Both \decay{\Bd}{\Kstarz \g} and \decay{\Bs}{\phi \g} signal distributions are parametrised with a two-sided Crystal Ball distribution~\cite{Skwarnicki:cb:1986}. In the low-mass region, there can be losses in the photon energy due to the fiducial volume of the calorimeter. A tail at high masses is also observed and can be explained by the spread in the error of the reconstructed \ensuremath{\PB}\xspace mass and pile-up effects in the photon deposition. The parameters describing the tails on both sides are fixed to the values determined from simulation. The width of each signal peak is left as a free parameter in the fit.
The reconstructed mass distribution of the combinatorial background has been determined from the low-mass sideband of the \ensuremath{\kaon^{*0}}\xspace mass distribution as an exponential function with different attenuation constants for the two decay channels. Additional contamination from several exclusive background decays is studied using simulated samples. The irreducible $B^0_s\ensuremath{\rightarrow}\xspace K^*\gamma$ decays, the $\ensuremath{\L^0_\bquark}\xspace\ensuremath{\rightarrow}\xspace \ensuremath{\PLambda}\xspace^*(pK^-)\gamma$ decays\footnote{$\ensuremath{\PLambda}\xspace^*$ stands for $\ensuremath{\PLambda}\xspace(1520)$ and other b-baryon resonances promptly decaying into a $pK^-$ final state.}, and the charmless \mbox{$B^0_{(s)}\ensuremath{\rightarrow}\xspace h^+h'^-\pi^0$} decays produce peaked contributions under the invariant-mass peak of \decay{\Bd}{\Kstarz \g}. As the experimental branching fractions of the charmless \ensuremath{\B^0_\squark}\xspace and \ensuremath{\L^0_\bquark}\xspace decays are unknown, the corresponding contamination rates are estimated either using the predicted branching fraction in the case of $B_s^0\ensuremath{\rightarrow}\xspace K^{*0}\gamma$ decays, assuming SU(3) symmetry for $B^0_s\ensuremath{\rightarrow}\xspace h^+h'^-\pi^0$ decays, or by directly estimating the signal yield from an independent sample as in $\ensuremath{\L^0_\bquark}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{\PLambda}\xspace^*\gamma$ decays.
The overall contribution from these decays is estimated to represent $(2.6\pm0.4)$\% and $(0.9\pm0.6)$\% of the \decay{\Bd}{\Kstarz \g} and \decay{\Bs}{\phi \g} yields, respectively. Each of these contributions is modelled with a Crystal Ball function determined from a simulated sample and their yields are fixed in the fit.
The partial reconstruction of the charged $B\ensuremath{\rightarrow}\xspace h^+h'^-\gamma X$ or $\ensuremath{\PB}\xspace\ensuremath{\rightarrow}\xspace h^+h'^-\pi^0X$ decays gives a broad contribution at lower candidate masses, with a high-mass tail that extends into the signal region. The partially reconstructed $B^+\ensuremath{\rightarrow}\xspace K^{*0}\pi^+\gamma$ and $B^+\ensuremath{\rightarrow}\xspace \phi K^+\gamma$ radiative decays produce a peaking contribution in the low-mass sideband at around 5.0\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace for \decay{\Bd}{\Kstarz \g} and around 4.5\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace for \decay{\Bs}{\phi \g}. The corresponding contamination has been estimated to be $(3.3\pm1.1)\%$ and $(1.8\pm 0.3)$\% for the \decay{\Bd}{\Kstarz \g} and \decay{\Bs}{\phi \g} decays, respectively. The partially reconstructed neutral \ensuremath{\PB}\xspace meson decays also contribute at the same level and several other channels exhibit a similar final state topology. These contributions are described by a Crystal Ball function and the yields are left to vary in the fit. The parameters of the Crystal Ball function are determined from the simulation. Additional contributions from the partial reconstruction of multi-body charmed decays and $B\ensuremath{\rightarrow}\xspace V\pi^0\mathrm{X}$ have been added to the simultaneous fit in the same way. The shape of these contributions, again determined from the simulation, follows an ARGUS function~\cite{ARGUS} peaking around $4.0$\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace. The various background contributions included in the fit model are summarised in Table~\ref{table:background}.
\begin{table}[htb!]
\begin{center}
\caption{\small Expected contributions to the \decay{\Bd}{\Kstarz \g} and \decay{\Bs}{\phi \g} yields in the $\pm$1\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace mass window from the exclusive background channels: radiative decays, $h^+h'^-\gamma$ (top), charmless b decays involving energetic $\pi^0$, $h^+h'^-\pi^0$ (middle) and partially reconstructed decays (bottom). The average measurement (exp.) or theoretical (theo.) branching fraction is given where available. Each exclusive contribution above 0.1\% is included in the fit model, with a fixed shape determined from simulation. The amplitude of the partially reconstructed backgrounds is left to vary in the fit while the $h^+h'^-\gamma$ and $h^+h'^-\pi^0$ contributions are fixed to their expected level.}\label{table:background}
\begin{tabular}{llll}
\hline
Decay & Branching fraction & \multicolumn{2}{c}{Relative contribution to} \\
& ($\times 10^6$) & \decay{\Bd}{\Kstarz \g} & \decay{\Bs}{\phi \g}\Bstrut \\
\hline
\decay{\ensuremath{\L^0_\bquark}\xspace}{\ensuremath{\PLambda}\xspace^*\gamma}\Tstrut & estimated from data & ($ 1.0\pm 0.3$)\% & ($0.4\pm 0.3$)\% \\
\decay{\ensuremath{\B^0_\squark}\xspace}{K^{*0}\gamma}\Tstrut\Bstrut & $1.26\pm 0.31$ (theo.~\cite{Ball:QCDfac:2007}) & ($0.8\pm 0.2$)\% & $\mathcal{O}(10^{-4})$ \\
\hline
\decay{\ensuremath{\B^0}\xspace}{K^+\pi^-\pi^0}\Tstrut & $35.9^{\,+\,2.8}_{\,-\,2.4}$ (exp.~\cite{hfag:2012}) & ($0.5\pm 0.1$)\% & $\mathcal{O}(10^{-4})$ \\
\decay{\ensuremath{\B^0_\squark}\xspace}{K^+\pi^-\pi^0}\Tstrut & estimated from SU(3) symmetry & ($0.2\pm 0.2$)\% & $\mathcal{O}(10^{-4})$ \\
\decay{\ensuremath{\B^0_\squark}\xspace}{K^+K^-\pi^0}\Tstrut\Bstrut & estimated from SU(3) symmetry & $\mathcal{O}(10^{-4})$ & ($0.5\pm 0.5$)\% \\
\hline
\decay{\ensuremath{\Bu}\xspace}{K^{*0}\pi^+\gamma}\Tstrut & $20^{\,+\,7}_{\,-\,6}$ (exp.~\cite{hfag:2012}) & ($3.3\pm 1.1$)\% & $ < 6\times 10^{-4}$ \\
\decay{\ensuremath{\B^0}\xspace}{K^+\pi^-\pi^0\gamma} & $41\pm4$ (exp.~\cite{hfag:2012}) & $(4.5\pm1.7)$\% & $\mathcal{O}(10^{-4})$ \\
\decay{\ensuremath{\Bu}\xspace}{\phi K^+\gamma} & $3.5\pm0.6$ (exp.~\cite{hfag:2012}) & $3\times10^{-4}$ & $(1.8\pm 0.3)$\% \\
$B\ensuremath{\rightarrow}\xspace V\pi^0\mathrm{X}$\Tstrut\Bstrut & $\mathcal{O}(10\%)$ (exp.~\cite{hfag:2012}) & $\mathrm{a~few}\%$ & $\mathrm{a~few}\%$ \\
\hline
\end{tabular}
\end{center}
\end{table}
At the trigger level, the electromagnetic calorimeter calibration is different from that in the offline analysis. Therefore, the $\pm$1\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace mass window requirement imposed by the trigger causes a bias in the \ensuremath{\PB}\xspace meson acceptance to appear near the limits of this window. The inefficiency at the edges of the mass window is modelled by including a three-parameter threshold function in the fit model
\begin{equation}
T(m_B) = \left(1-\mathrm{erf}\left( \frac{t_{\mathrm{L}}-m_B}{\sqrt{2} \mathrm{\sigma_d}}\right)\right)\times\left(1-\mathrm{erf}\left( \frac{m_B-t_{\mathrm{U}}}{\sqrt{2} \mathrm{\sigma_d}}\right)\right)\,,
\end{equation}
where erf is the Gauss error function. The parameter $t_{\mathrm{L}}$ ($t_{\mathrm{U}}$) represents the actual lower (upper) mass threshold and $\mathrm{\sigma_d}$ is the resolution.
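A direct transcription of the threshold function, with the sign convention such that the acceptance is close to unity inside the window (the parameter values in the example are arbitrary illustrations, not the fitted ones):
\begin{verbatim}
import numpy as np
from scipy.special import erf

def T_window(mB, tL, tU, sigma_d):
    # ~1 inside [tL, tU], rolling off with resolution sigma_d at the edges
    low  = 1.0 - erf((tL - mB) / (np.sqrt(2.0) * sigma_d))
    high = 1.0 - erf((mB - tU) / (np.sqrt(2.0) * sigma_d))
    return 0.25 * low * high

mB = np.linspace(4.2, 6.4, 5)        # GeV/c^2
print(T_window(mB, 4.3, 6.3, 0.02))
\end{verbatim}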
\section{Results and conclusions}
Using an integrated luminosity of 1.0\ensuremath{\mbox{\,fb}^{-1}}\xspace of \ensuremath{\proton\proton}\xspace collision data collected by the \mbox{LHCb}\xspace experiment at a centre-of-mass energy of $\ensuremath{\protect\sqrt{s}}\xspace=7\ensuremath{\mathrm{\,Te\kern -0.1em V}}\xspace$, the ratio of branching fractions between \decay{\Bd}{\Kstarz \g} and \decay{\Bs}{\phi \g} has been measured to be
\begin{equation*}
\frac{\ensuremath{\BR(\decay{\Bd}{\Kstarz \g})}\xspace}{\ensuremath{\BR(\decay{\Bs}{\phi \g})}\xspace} = 1.23 \pm 0.06\,\mathrm{(stat.)} \pm 0.04\,\mathrm{(syst.)} \pm 0.10\,(f_s/f_d)\label{equation:final-result}\,,
\end{equation*}
which is the most precise measurement to date and is in good agreement with the SM prediction of $1.0\pm0.2$~\cite{Ali:th-b2vgamma-NNLO:2008}.
Using the world average value $\BF(\decay{\Bd}{\Kstarz \g})=(4.33\pm 0.15)~\times10^{-5}$~\cite{hfag:2012}, the \decay{\Bs}{\phi \g} branching fraction is determined to be
\begin{equation*}
\ensuremath{\BR(\decay{\Bs}{\phi \g})}\xspace = (3.5\pm0.4)\times 10^{-5}\,,
\end{equation*}
in agreement with the previous measurement~\cite{belle:exp-bs2phigamma-bs2gammagamma:2007}.
This is the most precise measurement to date and is consistent with, but supersedes, a previous \mbox{LHCb}\xspace result using an integrated luminosity of 0.37\ensuremath{\mbox{\,fb}^{-1}}\xspace~\cite{radPap}.
The direct \ensuremath{C\!P}\xspace asymmetry in \decay{\Bd}{\Kstarz \g} decays has also been measured with the same data sample and found to be
\begin{equation*}
{\cal A}_{\ensuremath{C\!P}\xspace}(\decay{\Bd}{\Kstarz \g})=(0.8\pm1.7\,\mathrm{(stat.)}\pm0.9\,\mathrm{(syst.)})\%\,,
\end{equation*}
in agreement with the SM expectation of $(-0.61\pm0.43)$\,\%\,\cite{Keum:th-b2kstgamma-pqcd:2005}.
This is consistent with previous measurements~\cite{babar:exp-b2kstgamma:2009,*belle:exp-b2kstgamma:2004,*cleo:exp-excl-radiative-decays:1999}, and is the most precise result of the direct \ensuremath{C\!P}\xspace asymmetry in \decay{\Bd}{\Kstarz \g} decays to date.
\section{Measurement of the ratio of branching fractions\label{section:extraction}}
The ratio of branching fractions is measured as
\begin{equation}
\frac{\BF(\decay{\Bd}{\Kstarz \g})}{\BF(\decay{\Bs}{\phi \g})} = \frac{N_{\decay{\Bd}{\Kstarz \g}}}{N_{\decay{\Bs}{\phi \g}}}
\times \frac{\BF(\phi\ensuremath{\rightarrow}\xspace K^+K^-)}{\BF(K^{*0}\ensuremath{\rightarrow}\xspace K^+\pi^-)}
\times \frac{f_s}{f_d}
\times \frac{\epsilon_{\decay{\Bs}{\phi \g}}}{\epsilon_{\decay{\Bd}{\Kstarz \g}}}\,,
\label{equation:BRRatio}
\end{equation}
where $N$ are the observed yields of signal candidates, \mbox{$\mathrm{\BF(\phi\ensuremath{\rightarrow}\xspace K^+ K^-)/\BF(K^{*0}\ensuremath{\rightarrow}\xspace K^+\pi^-)}=0.735\pm 0.008$~\cite{pdg2012}} is the ratio of branching fractions of the vector mesons, \mbox{$f_s/f_d=0.267^{+0.021}_{-0.020}$~\cite{lhcb:fsfd-paper:2011}} is the ratio of the $\ensuremath{\PB}\xspace^0$ and $\ensuremath{\PB}\xspace_s^0$ hadronization fractions in \ensuremath{\proton\proton}\xspace collisions at $\ensuremath{\protect\sqrt{s}}\xspace=7\ensuremath{\mathrm{\,Te\kern -0.1em V}}\xspace$ and $\epsilon_{\decay{\Bs}{\phi \g}}/\epsilon_{\decay{\Bd}{\Kstarz \g}}$ is the ratio of total reconstruction and selection efficiencies of the two decays.
The results of the fit are shown in \fig{fits}. The number of $\decay{\Bd}{\Kstarz \g}$ and $\decay{\Bs}{\phi \g}$ candidates is $5279\pm93$ and $691\pm36$, respectively, corresponding to a yield ratio of $7.63\pm0.38$. The relative contamination from partially reconstructed radiative decays is fitted to be $(15\pm5)\%$ for \decay{\Bd}{\Kstarz \g} and $(5\pm3)\%$ for \decay{\Bs}{\phi \g}, in agreement with the expected rate from \mbox{$B^{+(0)}\ensuremath{\rightarrow}\xspace K^{*0}\pi^{+(0)}\gamma$} and \mbox{$B^{+(0)}\ensuremath{\rightarrow}\xspace \phi K^{+(0)}\gamma$}, respectively. The contribution from partial reconstruction of charmed decays at low mass is fitted to be $(5\pm4)\%$ and $(0^{+9}_{-0})$\% of the \decay{\Bd}{\Kstarz \g} and \decay{\Bs}{\phi \g} yields, respectively.
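Plugging the measured inputs into \eq{BRRatio} is straightforward; the sketch below (central values only, without uncertainty propagation) reproduces the quoted ratio:
\begin{verbatim}
n_kst, n_phi = 5279.0, 691.0          # fitted signal yields
bf_vector    = 0.735                  # BF(phi->K+K-)/BF(K*0->K+pi-)
fs_fd        = 0.267                  # hadronization-fraction ratio
r_eff        = 0.906 * 0.839 * 1.080  # r_reco&sel * r_PID * r_trigger

print((n_kst / n_phi) * bf_vector * fs_fd * r_eff)  # ~1.23
\end{verbatim}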
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.49\textwidth]{fig2a.eps}
\includegraphics[width=0.49\textwidth]{fig2b.eps}
\caption{\small Invariant-mass distributions of the (a) \decay{\Bd}{\Kstarz \g} and (b) \decay{\Bs}{\phi \g} candidates. The black points represent the data and the fit result is represented as a solid blue line. The signal is fitted with a double-sided Crystal Ball function (short-dashed green line).
The combinatorial background is modelled with an exponential function (long-dashed red line). In decreasing amplitude order, the exclusive background contributions to \decay{\Bd}{\Kstarz \g} are \decay{B^{+(0)}}{K^{*0}\pi^{+(0)}\gamma} (short-dotted black), $B\ensuremath{\rightarrow}\xspace K^{*0}(\phi)\pi^0X$ (long-dashed blue), \decay{\ensuremath{\B^0_\squark}\xspace}{K^{*0}\gamma} (dotted short-dashed green), \decay{\ensuremath{\L^0_\bquark}\xspace}{\ensuremath{\PLambda}\xspace^*\gamma} (double-dotted dashed pink), \decay{\ensuremath{\B^0}\xspace}{K^+\pi^-\pi^0} (dotted long-dashed black) and \decay{\ensuremath{\B^0_\squark}\xspace}{K^+\pi^-\pi^0} (long-dotted blue). The background contributions to \decay{\Bs}{\phi \g} are \mbox{\decay{B^{+(0)}}{\phi K^{+(0)}\gamma}} (dotted black), \decay{\ensuremath{\L^0_\bquark}\xspace}{\ensuremath{\PLambda}\xspace^*\gamma} (double-dotted dashed pink) and \decay{\ensuremath{\B^0_\squark}\xspace}{K^+K^-\pi^0} (dotted-dashed black). No significant contribution to \decay{\Bs}{\phi \g} is found from partially reconstructed $B\ensuremath{\rightarrow}\xspace K^{*0}(\phi)\pi^0X$ decays.
The Poisson \ensuremath{\chi^2}\xspace residuals~\cite{Baker:chi2:1984} are shown below the fit with the $\pm 2\,\sigma$ confidence-level interval delimited by solid red lines. \label{figure:fits}}
\end{center}
\end{figure}
The systematic uncertainty from the background modelling is determined by varying the parameters that have been kept constant in the fit of the invariant-mass distribution within their uncertainty. The 95\% CL interval of the relative variation on the yield ratio is determined to be $[-1.2,+1.4]\%$ and is taken as a conservative estimate of the systematic uncertainty associated with the background modelling. The relative variation is dominated by the effect from the partially reconstructed background.
This procedure is repeated to evaluate the systematic uncertainty from the signal-shape modelling, by varying the parameters of the Crystal-Ball tails within their uncertainty. A relative variation of $[-1.3,+1.4]\%$ on the yield ratio is observed and added to the systematic uncertainty.
As a cross-check of the possible bias introduced on the ratio by the modelling of the mass window thresholds and the partially reconstructed background that populates the low mass region, the fit is repeated in a reduced mass window of $\pm 700$\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace around the world average \ensuremath{\PB}\xspace meson mass. The result is found to be statistically consistent with the nominal fit. Combining these systematic effects, an overall $({}_{-1.8}^{+2.0})$\% relative uncertainty on the yield ratio is found.
The efficiency ratio can be factorised as
\begin{equation}
\frac{\epsilon_{\decay{\Bs}{\phi \g}}}{\epsilon_{\decay{\Bd}{\Kstarz \g}}} = r_{\text{reco\&sel}}
\times r_{\text{PID}}
\times r_{\text{trigger}}\,,
\label{equation:ratioEffs}
\end{equation}
where $r_{\text{reco\&sel}}$, $r_{\text{PID}}$ and $r_{\text{trigger}}$ are the efficiency ratios due to the reconstruction and selection requirements, the particle identification (PID) requirements and the trigger requirements, respectively.
The correlated acceptance of the kaons due to the limited phase-space in the \mbox{\decay{\ensuremath{\phi}\xspace}{\ensuremath{\kaon^+}\xspace\ensuremath{\kaon^-}\xspace}} decay causes the \ensuremath{\phi}\xspace vertex to have a worse spatial resolution than the \ensuremath{\kaon^{*0}}\xspace vertex. This affects the \decay{\Bs}{\phi \g} selection efficiency through the IP \ensuremath{\chi^2}\xspace and vertex isolation cuts, while the common track cut \mbox{$p_{\rm T}$}\xspace$>500\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c}}\xspace$ is less efficient on the softer pion from the \ensuremath{\kaon^{*0}}\xspace decay. These effects partially cancel and the reconstruction and selection efficiency ratio is found to be $r_{\text{reco\&sel}}=0.906 \pm 0.007\,\text{(stat.)}\pm 0.017\,\text{(syst.)}$. The majority of the systematic uncertainties also cancel, since the kinematic selections are almost identical for both decays. The remaining systematic uncertainties include the hadron reconstruction efficiency, arising from differences in the interaction of pions and kaons with the detector and uncertainties in the description of the detector material. The reliability of the simulation in describing the $\text{IP}\,\chi^2$ of the tracks and the isolation of the \ensuremath{\PB}\xspace vertex is also included in the systematic uncertainty on the $r_{\text{reco\&sel}}$ ratio. The simulated samples are weighted for each signal and background contribution to reproduce the reconstructed mass distribution seen in data. No further systematic uncertainties are associated with the use of the simulation, since kinematic properties of the decays are observed to be well modelled.
Uncertainties associated with the photon are negligible, because the reconstruction is identical in both decays.
The PID efficiency ratio is determined from data by means of a calibration procedure using pure samples of kaons and pions from \decay{\ensuremath{\D^{*\pm}}\xspace}{\ensuremath{\D^0}\xspace(\ensuremath{\kaon^+}\xspace\ensuremath{\pion^-}\xspace)\ensuremath{\pion^\pm}\xspace} decays selected without PID information. This procedure yields $r_{\text{PID}}=0.839 \pm 0.005\,\text{(stat.)}\pm 0.010\,\text{(syst.)}$.
The trigger efficiency ratio $r_{\text{trigger}}=1.080 \pm 0.009\,\text{(stat.)}$ is obtained from the simulation. The systematic uncertainty due to any difference in the efficiency of the requirements made at the trigger level is included as part of the selection uncertainty.
Finally, the ratio of branching fractions is obtained using \eq{BRRatio},
\begin{equation*}
\frac{\BF(\decay{\Bd}{\Kstarz \g})}{\BF(\decay{\Bs}{\phi \g})}=1.23\pm0.06\,\mathrm{(stat.)} \pm0.04\,\mathrm{(syst.)} \pm0.10\,(f_s/f_d)\,,\label{equation:resultStat}
\end{equation*}
where the first uncertainty is statistical, the second is the experimental systematic uncertainty and the third is due to the uncertainty on $f_s/f_d$.
The contributions to the systematic uncertainty are summarised in Table \ref{table:effSummary}.
\begin{table}[htb!]
\center
\caption{\label{table:effSummary}\small Summary of the individual contributions to the relative systematic uncertainty on the ratio of branching fractions as defined in \eq{BRRatio}.}
\begin{tabular}{lc}
\hline
Uncertainty source & Systematic uncertainty \Tstrut\Bstrut\\
\hline
$r_{\mathrm{reco\& sel.}}$ & 2.0\%\Tstrut\\
$r_{\mathrm{PID}}$ & 1.3\% \\
$r_{\mathrm{trigger}}$ \Bstrut & 0.8\% \\
\hline
$\mathrm{\BF(\phi\ensuremath{\rightarrow}\xspace \ensuremath{\kaon^+}\xspace \ensuremath{\kaon^-}\xspace)/\BF(K^{*0}\ensuremath{\rightarrow}\xspace \ensuremath{\kaon^+}\xspace\ensuremath{\pion^-}\xspace)}$\TTstrut\BBstrut & 1.1\%\\
\hline
Signal and background modelling & ${}^{+2.0}_{-1.8}$\%\Tstrut\Bstrut \\
\hline
Total & 3.4\%\Tstrut \\
\hline
\end{tabular}
\end{table}
\section{Introduction}
In the Standard Model (SM), the decays\footnote{Unless stated otherwise, charge conjugated modes are implicitly included throughout this paper.} \decay{\Bd}{\Kstarz \g} and \decay{\Bs}{\phi \g} proceed at leading order through the electromagnetic penguin transitions, \decay{b}{s\gamma}. At one-loop level these transitions are dominated by a virtual intermediate top quark coupling to a \ensuremath{\PW}\xspace boson.
Extensions of the SM predict additional one-loop contributions that can introduce sizeable changes to the dynamics of the transition~\cite{Descotes:theo-isospin:2011,*Gershon:th-null-tests:2006,*Mahmoudi:th-msugra:2006,*Altmannshofer:2011gn}.
Radiative decays of the \ensuremath{\B^0}\xspace meson were first observed by the CLEO collaboration in 1993 in the decay mode \mbox{\decay{\Bd}{\Kstarz \g}}~\cite{cleo:exp-first-penguins:1993}.
In 2007 the Belle collaboration reported the first observation of the analogous decay in the \ensuremath{\B^0_\squark}\xspace sector, \mbox{\decay{\Bs}{\phi \g}~\cite{belle:exp-bs2phigamma-bs2gammagamma:2007}.}
The current world averages of the branching fractions of \mbox{\decay{\Bd}{\Kstarz \g} and \decay{\Bs}{\phi \g}} are \mbox{$(4.33\pm 0.15)\times10^{-5}$ and $(5.7^{+2.1}_{-1.8})\times10^{-5}$}, respectively~\cite{hfag:2012,babar:exp-b2kstgamma:2009,*belle:exp-b2kstgamma:2004,*cleo:exp-excl-radiative-decays:1999}. These results are in agreement with the latest theoretical predictions from NNLO calculations using soft-collinear effective theory~\cite{Ali:th-b2vgamma-NNLO:2008}, \mbox{$\BF(\decay{\Bd}{\Kstarz \g})=(4.3\pm1.4)\times10^{-5}$ and $\BF(\decay{\Bs}{\phi \g})=(4.3\pm1.4)\times10^{-5}$}, which suffer from large uncertainties from hadronic form factors. A better-predicted quantity is the ratio of branching fractions, as it benefits from partial cancellations of theoretical uncertainties. The two branching fraction measurements lead to a ratio \mbox{$\ensuremath{\BR(\decay{\Bd}{\Kstarz \g})}\xspace/\ensuremath{\BR(\decay{\Bs}{\phi \g})}\xspace$=$0.7\pm0.3$}, while the SM prediction is $1.0\pm0.2$~\cite{Ali:th-b2vgamma-NNLO:2008}. When comparing the experimental and theoretical branching fraction for the \decay{\Bs}{\phi \g} decay, it is necessary to account for the large decay width difference in the $\ensuremath{\B^0_\squark}\xspace-\ensuremath{\Bbar^0_\squark}\xspace$ system. This can give rise to a correction on the theoretical branching fraction as large as $9\%$ as described in~\cite{deBruyn:BRmeasurement}.
The direct \ensuremath{C\!P}\xspace asymmetry in the \mbox{\decay{\Bd}{\Kstarz \g}} decay is defined as \mbox{$\mathcal{A}_{\ensuremath{C\!P}\xspace}=[\Gamma(\ensuremath{\Bbar^0}\xspace\ensuremath{\rightarrow}\xspace\overline{f})-\Gamma(\ensuremath{\B^0}\xspace\ensuremath{\rightarrow}\xspace f)]/
[\Gamma(\ensuremath{\Bbar^0}\xspace\ensuremath{\rightarrow}\xspace\overline{f})+\Gamma(\ensuremath{\B^0}\xspace\ensuremath{\rightarrow}\xspace f)]$.} The SM prediction, \mbox{$\mathcal{A}^\mathrm{SM}_{\ensuremath{C\!P}\xspace}(\decay{\Bd}{\Kstarz \g})=(-0.61\pm0.43)\%$\,\cite{Keum:th-b2kstgamma-pqcd:2005}}, is affected by a smaller theoretical uncertainty from the hadronic form factors than the branching fraction calculation.
The precision on the current experimental value, \mbox{$\mathcal{A}_{\ensuremath{C\!P}\xspace}(\decay{\Bd}{\Kstarz \g})=(-1.6\pm2.2\pm0.7)\%$\,\cite{babar:exp-b2kstgamma:2009,pdg2012}}, is statistically limited and more precise measurements would constrain contributions from beyond the SM scenarios, some of which predict that this asymmetry could be as large as \mbox{$-15\%$ \cite{Acp:MSSM,*Aoki:SUSY,*Aoki:SUSY-CP,*Kagan:NP}.}
This paper presents a measurement of \ensuremath{\BR(\decay{\Bd}{\Kstarz \g})}\xspace/\ensuremath{\BR(\decay{\Bs}{\phi \g})}\xspace using 1.0\ensuremath{\mbox{\,fb}^{-1}}\xspace of data taken with the LHCb detector. The measured ratio and the world average value of \ensuremath{\BR(\decay{\Bd}{\Kstarz \g})}\xspace are then used to determine \ensuremath{\BR(\decay{\Bs}{\phi \g})}\xspace. This result supersedes a previous \mbox{LHCb}\xspace measurement based on an integrated luminosity of 0.37\ensuremath{\mbox{\,fb}^{-1}}\xspace of data at $\ensuremath{\protect\sqrt{s}}\xspace=7\ensuremath{\mathrm{\,Te\kern -0.1em V}}\xspace$~\cite{radPap}. A measurement of the direct \ensuremath{C\!P}\xspace asymmetry of the decay \decay{\Bd}{\Kstarz \g} is also presented.
\section{The \mbox{LHCb}\xspace detector and dataset}
The \mbox{LHCb}\xspace detector~\cite{Alves:2008zz} is a single-arm forward spectrometer covering the \mbox{pseudorapidity} range $2<\eta <5$, designed for the study of particles containing \ensuremath{\Pb}\xspace or \ensuremath{\Pc}\xspace quarks. The detector includes a high precision tracking system consisting of a silicon-strip vertex detector surrounding the $pp$ interaction region, a large-area silicon-strip detector located upstream of a dipole magnet with a bending power of about $4{\rm\,Tm}$, and three stations of silicon-strip detectors and straw drift tubes placed downstream. The combined tracking system has a momentum resolution $\Delta p/p$ that varies from 0.4\% at 5\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace to 0.6\% at 100\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace, and an impact parameter (IP) resolution of 20\ensuremath{\,\upmu\rm m}\xspace for tracks with high transverse \mbox{momentum (\mbox{$p_{\rm T}$}\xspace)}. Charged hadrons are identified using two ring-imaging Cherenkov detectors (RICH). Photon, electron and hadron candidates are identified by a calorimeter system consisting of scintillating-pad and preshower detectors, an electromagnetic calorimeter and a hadronic calorimeter. Muons are identified by a system composed of alternating layers of iron and multiwire proportional chambers. The trigger consists of a hardware stage, based on information from the calorimeter and muon systems, followed by a software stage which applies a full event reconstruction.
Decay candidates are required to have triggered on the signal photon and the daughters of the vector meson. At the hardware stage, the decay candidates must have been triggered by an electromagnetic candidate with transverse energy \mbox{(\mbox{$E_{\rm T}$}\xspace) $>2.5\ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace$.} The software stage is divided into two steps. The first one performs a partial event reconstruction and reduces the rate such that the second can perform full event reconstruction to further reduce the data rate. At the first software stage, events are selected when a charged track is reconstructed with IP \ensuremath{\chi^2}\xspace$>16$. The IP \ensuremath{\chi^2}\xspace is defined as the difference between the \ensuremath{\chi^2}\xspace of the \ensuremath{\proton\proton}\xspace interaction vertex (PV) fit reconstructed with and without the considered track. Furthermore, a charged track is required to have either \mbox{$p_{\rm T}$}\xspace$>1.7\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$ for a photon with \mbox{$E_{\rm T}$}\xspace$>2.5\ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace$ or \mbox{\mbox{$p_{\rm T}$}\xspace$>1.2\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$} when the photon has \mbox{$E_{\rm T}$}\xspace$>4.2\ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace$. At the second software stage, a track passing the previous criteria must form a \ensuremath{\kaon^{*0}}\xspace or \ensuremath{\phi}\xspace candidate when combined with an additional track, and the invariant mass of the combination of the \mbox{\ensuremath{\kaon^{*0}}\xspace (\ensuremath{\phi}\xspace)} candidate and the photon candidate that triggered the hardware stage is required to be within 1\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace of the world average \ensuremath{\B^0}\xspace (\ensuremath{\B^0_\squark}\xspace) mass.
The data used for this analysis correspond to 1.0\ensuremath{\mbox{\,fb}^{-1}}\xspace of \ensuremath{\proton\proton}\xspace collisions collected in 2011 at the \mbox{LHC}\xspace with a centre-of-mass energy of $\ensuremath{\protect\sqrt{s}}\xspace=7\ensuremath{\mathrm{\,Te\kern -0.1em V}}\xspace$.
Large samples of \decay{\Bd}{\Kstarz \g} and \decay{\Bs}{\phi \g} Monte Carlo simulated events are used to optimise the signal selection and to parametrise the invariant-mass distribution of the \ensuremath{\PB}\xspace meson. Possible contamination from specific background channels has also been studied using dedicated simulated samples.
For the simulation, \ensuremath{\proton\proton}\xspace collisions are generated using \mbox{\textsc{Pythia}}\xspace~6.4~\cite{Sjostrand:pythia:2006} with a specific \mbox{LHCb}\xspace configuration~\cite{LHCb-PROC-2010-056}. Decays of hadronic particles are described by \mbox{\textsc{EvtGen}}\xspace~\cite{Lange:evtgen:2001} in which final state radiation is generated using \mbox{\textsc{Photos}}\xspace~\cite{Golonka:2005}. The interaction of the generated particles with the detector and its response are implemented using the \mbox{\textsc{Geant4}}\xspace toolkit~\cite{Allison:geant4:2006,*Agostinelli:geant4:2003} as described in Ref.~\cite{Clemencic:2011}.
\section{Offline event selection}
The selection of \decay{\Bd}{\Kstarz \g} and \decay{\Bs}{\phi \g} decays is designed to maximise the cancellation of uncertainties in the ratio of their selection efficiencies.
The charged tracks used to build the vector mesons are required to have \mbox{\mbox{$p_{\rm T}$}\xspace$>500\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c}}\xspace$}, with at least one of them having \mbox{$p_{\rm T}$}\xspace$>1.2\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$. In addition, a requirement of IP $\ensuremath{\chi^2}\xspace>25$ means that they must be incompatible with coming from any PV. The charged tracks are identified as either kaons or pions using information provided by the RICH system, based on a comparison of the kaon and pion particle hypotheses. Kaons (pions) in the studied \decay{\ensuremath{\PB}\xspace}{V\gamma} decays, where $V$ stands for the vector meson, are identified with a $\sim70\,(83)\,\%$ efficiency for a $\sim3\,(2)\,\%$ pion (kaon) contamination.
Photon candidates are required to have \mbox{$E_{\rm T}$}\xspace$>2.6\ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace$. Neutral and charged clusters in the electromagnetic calorimeter are separated based on their compatibility with extrapolated tracks~\cite{Deschamps:exp-calo-reco:2003} while photon deposits are distinguished from \ensuremath{\pion^0}\xspace deposits using the shape of the showers in the electromagnetic calorimeter.
Oppositely-charged kaon-pion (kaon-kaon) combinations are accepted as \mbox{$\ensuremath{\kaon^{*0}}\xspace$ ($\ensuremath{\phi}\xspace$)} candidates if they form a good quality vertex and have an invariant mass within $\pm50\,(\pm10)\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace$ of the world average \mbox{$\ensuremath{\kaon^{*0}}\xspace$ ($\ensuremath{\phi}\xspace$)} mass~\cite{pdg2012}.
The resulting vector meson candidate is combined with the photon candidate to make a \ensuremath{\PB}\xspace candidate. The invariant-mass resolution of the selected \ensuremath{\PB}\xspace candidate is $\approx$100\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace for the decays presented in this paper.
The \ensuremath{\PB}\xspace candidates are required to have an invariant mass within 1\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace of the world average \ensuremath{\PB}\xspace mass~\cite{pdg2012} and to have \mbox{$p_{\rm T}$}\xspace$>3\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$. They must also point to a PV, with IP \ensuremath{\chi^2}\xspace$<9$, and the angle between the \ensuremath{\PB}\xspace candidate momentum direction and the \ensuremath{\PB}\xspace line of flight has to be less than 20\ensuremath{\rm \,mrad}\xspace. In addition, the vertex separation \ensuremath{\chi^2}\xspace between the \ensuremath{\PB}\xspace meson vertex and its related PV must be larger than 100. The distribution of the helicity angle $\theta_\mathrm{H}$, defined as the angle between the momentum of any of the daughters of the vector meson and the momentum of the \ensuremath{\PB}\xspace candidate in the rest frame of the vector meson, is expected to follow a $\sin^2\theta_\mathrm{H}$ function for \decay{\ensuremath{\PB}\xspace}{V\gamma}, and a $\cos^2\theta_\mathrm{H}$ function for the \decay{\ensuremath{\PB}\xspace}{V\ensuremath{\pion^0}\xspace} background. A requirement of $|\cos\theta_\mathrm{H}|<0.8$ is therefore made to reduce \decay{\ensuremath{\PB}\xspace}{V\ensuremath{\pion^0}\xspace} background, where the neutral pion is misidentified as a photon. Background coming from partially reconstructed \mbox{\ensuremath{\PB}\xspace-hadron} decays is reduced by requiring the \ensuremath{\PB}\xspace vertex to be isolated: its \ensuremath{\chi^2}\xspace must increase by more than two units when adding any other track in the event.
Vernier and Jacobsen{\color{blue}\cite{VJ2012}} considered a number of two-dimensional
lattice models in statistical mechanics for which the bulk free energies have been
calculated exactly and conjectured their surface and corner free energies. They
considered only the rotation-invariant (isotropic) cases of these models, when the surface
free energies are the same for the vertical and horizontal surfaces.
For some of these models the surface free energies have been, or can readily be,
calculated exactly, and this can be done for the more general non-rotation-invariant cases.
For the case of the square-lattice self-dual Potts model, Vernier and Jacobsen
commented that it seemed likely that the surface free energy had been calculated. It
seems that this has not yet been reported in the literature for the case in which they were
interested. That omission is repaired here for the general anisotropic case.
We also present arguments that Vernier and Jacobsen's{\color{blue}\cite{VJ2012}}
conjecture for the corner free energy should apply to the anisotropic case.
Consider the self-dual $Q$-state Potts model on the square lattice, which is
equivalent to an homogeneous six-vertex model.{\color{blue}\cite[\S 12.5]{book}}
Owczarek and Baxter{\color{blue}\cite{OB1989}} showed that for this model an extended
Bethe ansatz worked for a lattice of $N$ columns with free (rather than cylindrical)
boundary conditions.
They wrote down the resulting ``Bethe equations" for the eigenvalues of the
row-to-row transfer matrix $T$. They were
interested in the critical case, which occurs when the number of states $Q$ is not greater
than 4, and solved the equations for $N$ large to obtain the bulk and
surface free energies.
Vernier and Jacobsen{\color{blue}\cite[\S 3.2.1]{VJ2012}} instead considered the case
$Q > 4$, when the model is at a first-order transition point.
Here we solve the Bethe equations for this case. We obtain the surface
free energies $f_s$ and $f'_s$ (as well as the bulk free energy $f_b$) and verify the
correctness of Vernier and Jacobsen's conjectures for the rotation-invariant case.
We also show that the four free energies all satisfy ``inversion" and ``rotation" relations,
and that if we assume certain plausible analyticity properties, then these are sufficient
to determine the bulk and surface free energies, and to show that the corner free energy
is independent of the anisotropy of the model, depending only on $Q$. The results
of this method of course agree with those of the more rigorous Bethe ansatz
calculations.
The self-dual Potts model contains two free parameters $Q, K_1$, or equivalently the
$q, w$ defined by (\ref{weights}), (\ref{xxx}), (\ref{defx}), (\ref{defqw}).\footnote{{From}
these equations, $Q = q+2+q^{-1}$.}
Our Bethe ansatz method is not sufficient to calculate the corner free energy $f_c$,
but the inversion relation method implies that it is independent of $K_1$ or $w$,
depending only on $Q$ or $q$. We also comment in section \ref{sec4} that we have
performed direct numerical calculations on
finite lattices to obtain the first 10 coefficients in a series expansion in powers of $q$ as
functions of $s = w^2/q^{1/2}$. (Each coefficient is a finite Laurent polynomial in $s$.)
We find agreement (as expected) with Vernier and
Jacobsen's{\color{blue}\cite[\S 3.2.1]{VJ2012}} conjecture for the isotropic
case,\footnote{$q$ herein is $q_{VJ}^2$, where $q_{VJ}$ is the $q$ of Vernier
and Jacobsen, and all the free energies are negated.} which is when
$w = q^{1/4}$ and $s=1$.
For the corner free energy $f_c$, we also observe that all the 10 coefficients are
{\em independent } of $s$, which agrees with the inversion relation result that $f_c$
is a function only of $q$.
For this model, therefore, $f_c$ resembles the order parameters $M_0$ and $P_0$
of the associated six-vertex model{\color{blue}\cite[eqn. 8.10.9]{book}}, in that it
depends only on $Q$ or $q$.
We have found corresponding behaviour for the square-lattice Ising
model.{\color{blue}\cite{Ising}} For both models, this means that the corner free energy
is a function only of the order parameter. Possibly this property applies more generally.
\section{The square-lattice Potts model}
\setcounter{equation}{0}
We consider the $Q$-state Potts model on a square lattice $\cal L$ of $M$ rows and
$N$ columns, as shown in Fig. {\ref{sqlattice1}}. On each site $i$ there is a ``spin'' $\sigma_i$ that
takes the values $1, 2, \ldots ,Q$. Spins at horizontally adjacent sites $i, j$
interact with dimensionless energy $-K_1 \delta (\sigma_i, \sigma_j)$, and
those on vertically adjacent sites with energy $-K_2 \delta (\sigma_k, \sigma_m)$.
\setlength{\unitlength}{1.2pt}
\begin{figure}[hbt]
\begin{picture}(420,160) (0,0)
\put (174,7) { $K_1$ }
\put (144,-12) { $i$ }
\put (204,-12) { $j$ }
\put (253,90) { $K_2$ }
\put (275,59) { $k$ }
\put (273,119) { $m$ }
\put (85,-13) { $1$ }
\put (265,-13) { $N$ }
\put (73,-1) { $1$ }
\put (72,119) { $M$ }
{\color{blue}
\multiput(91.4,0.5)(60,0){4}{\circle{7}}
\multiput(91.4,60.5)(60,0){4}{\circle{7}}
\multiput(91.4,120.5)(60,0){4}{\circle{7}}
\multiput(95.2,00)(5,0){11}{\bf .}
\multiput(155.2,00)(5,0){11}{\bf .}
\multiput(215.2,00)(5,0){11}{\bf .}
\multiput(95.2,60)(5,0){11}{\bf .}
\multiput(155.2,60)(5,0){11}{\bf .}
\multiput(215.2,60)(5,0){11}{\bf .}
\multiput(95.2,120)(5,0){11}{\bf .}
\multiput(155.2,120)(5,0){11}{\bf .}
\multiput(215.2,120)(5,0){11}{\bf .}
\put (304, 25) {\LARGE $\cal L$ }
\thinlines
\multiput(90,4.6)(0,5){11}{\bf .}
\multiput(90,64.6)(0,5){11}{\bf .}
\multiput(150,4.6)(0,5){11}{\bf .}
\multiput(150,64.6)(0,5){11}{\bf .}
\multiput(210,4.6)(0,5){11}{\bf .}
\multiput(210,64.6)(0,5){11}{\bf .}
\multiput(270,4.6)(0,5){11}{\bf .}
\multiput(270,64.6)(0,5){11}{\bf .}}
\end{picture}
\vspace{1.0cm}
\caption{ The square lattice $\cal L$ (of 3 rows and 4 columns), indicating the horizontal
and vertical interaction coefficients $K_1, K_2$.}
\label{sqlattice1}
\end{figure}
The partition function is
\begin{equation} \label{Pottspartnfn}
Z_P \; = \; \sum_{\bf \sigma }
\exp \left[ K_1 \sum \delta (\sigma_i, \sigma_j) +
K_2 \sum \delta (\sigma_k, \sigma_m) \right] \;\; , \; \; \end{equation}
where the first inner sum is over all horizontal edges $(i,j)$ and the second over all
vertical edges $(k,m)$. The outer sum is over all $Q^{MN}$ values of all the spins.
We expect that when $M, N$ are large,
\begin{equation} \label{freeenergies}
\log Z_P \; = \; -MN f_b - M f_s - N f'_s - f_c + O({\mathrm{e}}^{-\delta M},{\mathrm{e}}^{-\delta' N})
\;\; , \; \; \end{equation}
where $f_b, f_s, f'_s, f_c $ are the dimensionless bulk, vertical surface, horizontal surface
and corner free energies,
and $\delta, \delta'$ are positive numbers.
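As an illustration of how (\ref{freeenergies}) can be used in practice (a hypothetical sketch, not part of the analysis of this paper; the free-energy values below are invented), the four free energies can be extracted from finite-lattice values of $\log Z_P$ by a least-squares fit of the bilinear form in $M$ and $N$:
\begin{verbatim}
# Hypothetical sketch: recover f_b, f_s, f'_s, f_c from finite-lattice
# values of log Z_P, using log Z_P ~ -MN f_b - M f_s - N f'_s - f_c.
import numpy as np

fb, fs, fsp, fc = -1.3, 0.2, 0.35, -0.05        # invented test values
sizes = [(M, N) for M in range(8, 13) for N in range(8, 13)]
logZ = np.array([-M*N*fb - M*fs - N*fsp - fc for M, N in sizes])

# design matrix: columns multiply f_b, f_s, f'_s, f_c respectively
A = np.array([[-M*N, -M, -N, -1.0] for M, N in sizes])
coeffs, *rest = np.linalg.lstsq(A, logZ, rcond=None)
print(coeffs)   # recovers [f_b, f_s, f'_s, f_c]
\end{verbatim}
In a real calculation the fitted values converge to the true free energies only as the exponentially small corrections in (\ref{freeenergies}) die away.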
\setlength{\unitlength}{1.2pt}
\begin{figure}[hbt]
\begin{picture}(420,160) (0,0)
\put (60,120) {\line(1,1) {30}}
\put (60,0) {\line(1,1) {150}}
\put (60,60) {\line(1,1) {90}}
\put (90,-30) {\line(1,1) {180}}
\put (150,-30) {\line(1,1) {150}}
\put (210,-30) {\line(1,1) {90}}
\put (270,-30) {\line(1,1) {30}}
\put (60,00) {\line(1,-1) {30}}
\put (60,120) {\line(1,-1) {150}}
\put (60,60) {\line(1,-1) {90}}
\put (90,150) {\line(1,-1) {180}}
\put (150,150) {\line(1,-1) {150}}
\put (210,150) {\line(1,-1) {90}}
\put (270,150) {\line(1,-1) {30}}
\put (320, 30) {\LARGE $\cal L'$ }
{\color{blue}
\multiput(90,0.5)(60,0){4}{\circle{7}}
\multiput(90,60.5)(60,0){4}{\circle{7}}
\multiput(90,120.5)(60,0){4}{\circle{7}}
\multiput(93.5,00)(5,0){11}{\bf .}
\multiput(153.5,00)(5,0){11}{\bf .}
\multiput(213.5,00)(5,0){11}{\bf .}
\multiput(93.5,60)(5,0){11}{\bf .}
\multiput(153.5,60)(5,0){11}{\bf .}
\multiput(213.5,60)(5,0){11}{\bf .}
\multiput(93.5,120)(5,0){11}{\bf .}
\multiput(153.5,120)(5,0){11}{\bf .}
\multiput(213.5,120)(5,0){11}{\bf .}
\thinlines
\multiput(88.6,4.6)(0,5){11}{\bf .}
\multiput(88.6,64.6)(0,5){11}{\bf .}
\multiput(148.6,4.6)(0,5){11}{\bf .}
\multiput(148.6,64.6)(0,5){11}{\bf .}
\multiput(208.6,4.6)(0,5){11}{\bf .}
\multiput(208.6,64.6)(0,5){11}{\bf .}
\multiput(268.6,4.6)(0,5){11}{\bf .}
\multiput(268.6,64.6)(0,5){11}{\bf .}}
\multiput(90,-30)(60,0){4}{\circle*{7}}
\multiput(60,0)(60,0){5}{\circle*{7}}
\multiput(60,60)(60,0){5}{\circle*{7}}
\multiput(60,120)(60,0){5}{\circle*{7}}
\multiput(90,30)(60,0){4}{\circle*{7}}
\multiput(90,90)(60,0){4}{\circle*{7}}
\multiput(90,150)(60,0){4}{\circle*{7}}
\put (25,-30) {$1$}
\put (25,0) {$2$}
\put (25,120) {$2M$}
\put (18,150) {$2M \! + \! 1$}
\put (87,-50) {$1$}
\put (147,-50) {$2$}
\put (265,-50) {$N$}
\end{picture}
\vspace{2.0cm}
\caption{\footnotesize The square lattice $\cal L$ of dotted lines and
circles, and its medial lattice $\cal L'$ of full circles and lines.}
\label{sqlattice2}
\end{figure}
We show in {\color{blue}\cite[\S 12.5]{book}} that this model is equivalent to a
six-vertex model on the lattice $\cal L'$ of Fig. \ref{sqlattice2}, i.e.\ the lattice of
solid lines and circles therein. On this lattice we place an arrow on each edge subject
to the rule that at each site or vertex there must be as many arrows pointing in as
there are pointing out. There are six such configurations of arrows at an internal vertex, as
shown in Fig. {\ref{sixvertices}}.
The lattice $\cal L'$ has $2M+1$ rows, even-numbered rows having $N+1$
vertices, and odd-numbered ones having $N$ vertices.
Between two successive rows there are $2N$ diagonal
edges, on which one places arrows. Each of the $M$ even-numbered rows has
$N-1$ internal vertices, with weights
\begin{equation} \label{oddrows} \omega_1, \ldots , \omega_6 \; = \; 1 , \, 1 , \, x_1 , \, x_1 , \,
1+x_1 {\mathrm{e}}^{\lambda} , \, 1+x_1 {\mathrm{e}}^{-\lambda} \;\; , \; \; \end{equation}
and each of the $M-1$ odd-numbered rows $3, 5, \ldots, 2M-1$ has
$N$ internal vertices with weights
\begin{equation} \label{evenrows} \omega_1, \ldots , \omega_6 \; = \; x_2 , \, x_2 , \, 1 , \, 1 , \,
x_2+{\mathrm{e}}^{\lambda} , \, x_2+ {\mathrm{e}}^{-\lambda} \;\; , \; \; \end{equation}
where
\begin{equation} \label{weights}
Q^{1/2} \; = \; 2 \, \cosh \lambda \;\; , \;\; x_1= ({\mathrm{e}}^{K_1}-1)/Q^{1/2} \;\; , \;\; x_2 =
({\mathrm{e}}^{K_2}-1)/Q^{1/2} \, \, . \end{equation}
The vertices on the boundaries of $\cal L'$ only have two edges joining them and must
have one arrow in and one arrow out. The weights of the possible configurations are
indicated in Fig. \ref{bdyweights}.
The partition function of this six-vertex model is
\begin{equation} Z_{6V} \; = \; \sum_C \; \; {\displaystyle \prod }_i \; w_i \;\; , \; \; \end{equation}
where the sum is over all allowed configurations $C$ of arrows on the edges of
$\cal L'$ and for each configuration the product is over all vertices $i$ of the
corresponding weights $w_i$ (including the boundary vertices).
If $\cal L'$ were wound on a torus (which is {\em not} the case considered in this
paper), we could interchange the two types of rows without
affecting the partition function. This is equivalent to replacing $x_1, x_2$ by
$x_1^{*} = 1/x_2, x_2^{*} = 1/x_1$ and multiplying $Z_P$ by $(x_1/x_2)^{MN}$, and
to replacing $K_1, K_2$ by their ``duals" $K_1^{*}, K_2^{*}$, where
\begin{equation} \label{dualKK}
\exp( {K_1^{*}} ) = \frac{{\mathrm{e}}^{K_2}+Q-1}{{\mathrm{e}}^{K_2} -1} \;\; , \;\;
\exp( {K_2^{*}} ) = \frac{{\mathrm{e}}^{K_1}+Q-1}{{\mathrm{e}}^{K_1} -1} \, \, . \end{equation}
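As a one-line check (not spelled out above), the duality (\ref{dualKK}) is indeed the map $x_1^{*} = 1/x_2$, $x_2^{*} = 1/x_1$ in the variables (\ref{weights}):
\begin{displaymath}
{\mathrm{e}}^{K_1^{*}} - 1 \; = \; \frac{Q}{{\mathrm{e}}^{K_2}-1} \;\; , \;\; \mbox{so} \;\;
x_1^{*} \; = \; \frac{{\mathrm{e}}^{K_1^{*}}-1}{Q^{1/2}} \; = \; \frac{Q^{1/2}}{{\mathrm{e}}^{K_2}-1} \; = \; \frac{1}{x_2} \;\; , \; \;
\end{displaymath}
and similarly for $x_2^{*}$. In particular the manifold $x_1 x_2 = 1$, i.e. $({\mathrm{e}}^{K_1}-1)({\mathrm{e}}^{K_2}-1) = Q$, maps to itself.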
The partition function $Z_P$ of the Potts model, as defined in (\ref{Pottspartnfn}), is
related exactly to $Z_{6V}$ by
\begin{equation} \label{Pottsand 6V}
Z_P \; = \; Q^{MN/2} \, Z_{6V} \end{equation}
\vspace{0.6cm}
\setlength{\unitlength}{1pt}
\begin{figure}[hbt]
\begin{picture}(420,90) (-50,0)
\begin{tikzpicture}[scale=0.71]
\draw[black,fill=black] (1.0,-2) circle (0.8ex);
\draw[black,fill=black] (3.9,-2) circle (0.8ex);
\draw[black,fill=black] (6.8,-2) circle (0.8ex);
\draw[black,fill=black] (9.7,-2) circle (0.8ex);
\draw[black,fill=black] (12.6,-2) circle (0.8ex);
\draw[black,fill=black] (15.5,-2) circle (0.8ex);
\node at (1.0,-4.1) { $\omega_1$};
\node at (3.9,-4.1) { $\omega_2$};
\node at (6.8,-4.1) { $\omega_3$};
\node at (9.7,-4.1) { $\omega_4$};
\node at (12.6,-4.1) { $\omega_5$};
\node at (15.5,-4.1) { $\omega_6$};
\coordinate (Q) at (1,-2) ;
\coordinate (R1) at (2,-1) ;
\coordinate (R2) at (0,-3) ;
\coordinate (R3) at (2,-3) ;
\coordinate (R4) at (0,-1) ;
\coordinate (Q2) at (3.9,-2) ;
\coordinate (S1) at (4.9,-1) ;
\coordinate (S2) at (2.9,-3) ;
\coordinate (S3) at (4.9,-3) ;
\coordinate (S4) at (2.9,-1) ;
\coordinate (Q3) at (6.8,-2) ;
\coordinate (T1) at (7.8,-1) ;
\coordinate (T2) at (5.8,-3) ;
\coordinate (T3) at (7.8,-3) ;
\coordinate (T4) at (5.8,-1) ;
\coordinate (Q4) at (9.7,-2) ;
\coordinate (U1) at (10.7,-1) ;
\coordinate (U2) at (8.7,-3) ;
\coordinate (U3) at (10.7,-3) ;
\coordinate (U4) at (8.7,-1) ;
\coordinate (Q5) at (12.6,-2) ;
\coordinate (V1) at (13.6,-1) ;
\coordinate (V2) at (11.6,-3) ;
\coordinate (V3) at (13.6,-3) ;
\coordinate (V4) at (11.6,-1) ;
\coordinate (Q6) at (15.5,-2) ;
\coordinate (W1) at (16.5,-1) ;
\coordinate (W2) at (14.5,-3) ;
\coordinate (W3) at (16.5,-3) ;
\coordinate (W4) at (14.5,-1) ;
\draw[black,directed,thick] (Q) -- (R4);
\draw[black,directed,thick] (R3) -- (Q);
\draw[black,directed,thick] (Q) -- (R1);
\draw[black,directed,thick] (R2) -- (Q);
\draw[black,directed,thick] (S4) -- (Q2);
\draw[black,directed,thick] (Q2) -- (S3);
\draw[black,directed,thick] (S1) -- (Q2);
\draw[black,directed,thick] (Q2) -- (S2);
\draw[black,directed,thick] (Q3)--(T4);
\draw[black,directed,thick] (T3) -- (Q3);
\draw[black,directed,thick] (T1) -- (Q3);
\draw[black,directed,thick] (Q3) -- (T2);
\draw[black,directed,thick] (U4) --(Q4);
\draw[black,directed,thick] (Q4)--(U3);
\draw[black,directed,thick] (Q4)--(U1);
\draw[black,directed,thick] (U2)--(Q4);
\draw[black,directed,thick] (V4) --(Q5);
\draw[black,directed,thick] (V3)--(Q5);
\draw[black,directed,thick] (Q5)--(V1);
\draw[black,directed,thick] (Q5)--(V2);
\draw[black,directed,thick] (Q6)--(W4);
\draw[black,directed,thick] (Q6)--(W3);
\draw[black,directed,thick] (W1)--(Q6);
\draw[black,directed,thick] (W2)--(Q6);
\end{tikzpicture}
\end{picture}
\vspace{0.0cm}
\caption{\footnotesize The six vertices, with weights $\omega_1, \ldots ,\omega_6$.}
\label{sixvertices}
\end{figure}
\vspace{0.3cm}
\setlength{\unitlength}{1pt}
\begin{figure}[hbt]
\begin{picture}(420,70) (-52,10)
\begin{tikzpicture}[scale=0.51]
\draw[black,fill=black] (1.0,-2) circle (0.8ex);
\draw[black,fill=black] (3.9,-2) circle (0.8ex);
\draw[black,fill=black] (7.2,-1.4) circle (0.8ex);
\draw[black,fill=black] (10.1,-1.4) circle (0.8ex);
\draw[black,fill=black] (13,-1.7) circle (0.8ex);
\draw[black,fill=black] (15.6,-1.7) circle (0.8ex);
\draw[black,fill=black] (19.7,-1.7) circle (0.8ex);
\draw[black,fill=black] (22.3,-1.7) circle (0.8ex);
\node at (1.2,-4.1) { ${\mathrm{e}}^{\lambda/2}$};
\node at (4,-4.1) { ${\mathrm{e}}^{-\lambda/2}$};
\node at (7.2,-4.1) { ${\mathrm{e}}^{\lambda/2}$};
\node at (10.1,-4.1) { ${\mathrm{e}}^{-\lambda/2}$};
\node at (13.4,-4.1) { \footnotesize$1$};
\node at (16,-4.1) { \footnotesize $1$};
\node at (19.5,-4.1) { \footnotesize$1$};
\node at (22.1,-4.1) { \footnotesize $1$};
\coordinate (Q) at (1,-2) ;
\coordinate (R1) at (2,-1) ;
\coordinate (R4) at (0,-1) ;
\coordinate (Q2) at (3.9,-2) ;
\coordinate (S1) at (4.9,-1) ;
\coordinate (S4) at (2.9,-1) ;
\coordinate (Q3) at (7.2,-1.4) ;
\coordinate (T2) at (6.2,-2.4) ;
\coordinate (T3) at (8.2,-2.4) ;
\coordinate (Q4) at (10.1,-1.4) ;
\coordinate (U2) at (9.1,-2.4) ;
\coordinate (U3) at (11.1,-2.4) ;
\coordinate (Q5) at (13,-1.7) ;
\coordinate (V1) at (14,-0.7) ;
\coordinate (V3) at (14,-2.7) ;
\coordinate (Q6) at (15.6,-1.7) ;
\coordinate (W1) at (16.6,-0.7) ;
\coordinate (W3) at (16.6,-2.7) ;
\coordinate (Q7) at (19.7,-1.7) ;
\coordinate (X2) at (18.7,-2.7) ;
\coordinate (X4) at (18.7,-0.7) ;
\coordinate (Q8) at (22.3,-1.7) ;
\coordinate (Y2) at (21.3,-2.7) ;
\coordinate (Y4) at (21.3,-0.7) ;
\draw[black,directed,thick] (R4) -- (Q);
\draw[black,directed,thick] (Q) -- (R1);
\draw[black,directed,thick] (Q2) -- (S4);
\draw[black,directed,thick] (S1) -- (Q2);
\draw[black,directed,thick] (T3) -- (Q3);
\draw[black,directed,thick] (Q3) -- (T2);
\draw[black,directed,thick] (Q4)--(U3);
\draw[black,directed,thick] (U2)--(Q4);
\draw[black,directed,thick] (V3)--(Q5);
\draw[black,directed,thick] (Q5)--(V1);
\draw[black,directed,thick] (Q6)--(W3);
\draw[black,directed,thick] (W1)--(Q6);
\draw[black,directed,thick] (V3)--(Q5);
\draw[black,directed,thick] (Q5)--(V1);
\draw[black,directed,thick] (Q6)--(W3);
\draw[black,directed,thick] (W1)--(Q6);
\draw[black,directed,thick] (X4) --(Q7);
\draw[black,directed,thick] (Q7)--(X2);
\draw[black,directed,thick] (Q8)--(Y4);
\draw[black,directed,thick] (Y2)--(Q8);
\end{tikzpicture}
\end{picture}
\vspace{0.5cm}
\caption{\footnotesize The boundary weights.}
\label{bdyweights}
\end{figure}
\vspace{0.6cm}
Let $T_1$ be the row-to-row transfer matrix for an odd row of $\cal L'$, and $T_2$ the
transfer matrix for an even row. Then
\begin{equation} \label{pfn6v}
Z_{6V} \; = \; <\!0 \, |\, T_1 \, T_2 \, T_1 \cdots T_2 \, T_1 | \, 0\!> \;\; , \; \; \end{equation}
where there are $M$ factors $T_1$ in the matrix product, and $M-1$ factors $T_2$, and
$ <\!0 \, |$ , $| \, 0\!> $ are vectors that account for the bottom and top boundaries of
$\cal L'$. Let $\Lambda^2$ be a typical eigenvalue of $T_1 T_2$, given by
the equations
\begin{equation} \label{eigval} \Lambda f = T_1 g \;\; , \;\; \Lambda g = T_2 f \;\; , \; \; \end{equation}
$f,g$ being the associated eigenvectors.
The right-hand side of (\ref{pfn6v}) can be written as a sum over terms, each
proportional to $\Lambda^{2M}$. In the limit of $M$ large, this will be given by
\begin{equation} Z_{6V} \; = \; C \, \Lambda_{\rm max} ^{2M} \left[ 1+ O( {\mathrm{e}}^{-\gamma M } ) \right]
\;\; , \; \; \end{equation}
where $\Lambda_{\rm max}$ is the maximum eigenvalue and ${\rm Re}(\gamma) >0 $.
In the limit of $M$ large it follows that
\begin{equation} \label{pfntm}
\lim_{M \rightarrow \infty} \left( \log Z_{6V} \right)/M \; = \;
\log \Lambda_{\rm max} ^2 \, \, . \end{equation}
\section{The self-dual Potts model, with $x_1 x_2 = 1$}
\setcounter{equation}{0}
For general $x_1, x_2$ the Bethe ansatz does not work for this inhomogeneous model.
However, if $x_2 = 1/x_1$, we can define
\begin{equation} \label{xxx}
x_1 = x \;\; , \;\; x_2 = 1/x \;\; , \; \; \end{equation}
and then the weights for the internal vertices on odd-numbered and even-numbered rows,
given in (\ref{evenrows}) and (\ref{oddrows}) respectively, satisfy
\begin{equation} ( \omega_1, \ldots , \omega_6 )_{\rm odd} \; = \; x^{-1}
\; ( \omega_1, \ldots , \omega_6 )_{\rm even} \end{equation}
so
\begin{equation} \label{6Vhom}
Z_{6V} \; = \; x^{-N(M-1)} \, Z_{\rm hom} \;\; , \; \; \end{equation}
where $Z_{\rm hom}$ is the partition function of a six-vertex model defined in the same way
as previously, but with all internal weights given by (\ref{oddrows}), so it is homogeneous
(but not rotation-invariant).
We note from
(\ref{freeenergies}), (\ref{Pottsand 6V}), (\ref{pfntm}), (\ref{6Vhom}) that
\begin{equation} \label{Mlarge}
-N f_b - f_s \; = \; (N/2) \log Q - N \log x + \log \Lambda_0^2 \end{equation}
to within terms of order ${\mathrm{e}}^{-\delta ' N}$, $\Lambda_0$ being the maximum eigenvalue
of the transfer matrix of the homogeneous model.
The corresponding Potts model is self-dual: from (\ref{dualKK}),
\begin{equation} \label{dual} K_1^{*} = K_1 \;\; , \;\; K_2^{*} = K_2 \, \, . \end{equation}
One must still distinguish
between $T_1$ and $T_2$ because the boundary conditions are different for the
two types of row. However, Owczarek and Baxter{\color{blue}\cite{OB1989}} were able to
solve (\ref{eigval}) by extending the Bethe ansatz to free boundary conditions
(for every wave number $k$ there is a reflected wave number $-k$).
The number $n$ of down arrows between two successive rows of $\cal L'$ is conserved in
this model. Owczarek and Baxter{\color{blue}\cite{OB1989}} solved (\ref{eigval}) for
arbitrary $n$, but the top and bottom boundary conditions ensure that $n=N$ (there are as
many down arrows as up ones), and we shall only consider this case.
Our notation here is not quite consistent with {\color{blue}\cite{OB1989}}, one significant
difference being that $N$ in that paper is $2N$ here.
To make the notation for the weights
consistent, associate an extra weight $t$ with the top of every down-pointing NW -SE arrow,
and a weight $1/t$ with the bottom of every such arrow. Then the first four weights
$\omega_1, \ldots , \omega_6$ in Fig. (\ref{sixvertices}) are unchanged, while
$\omega_5 , \, \omega_6$ become $t^{-1}\omega_5 ,\, t \, \omega_6$. The eight boundary
weights in Fig. (\ref{bdyweights}) are multiplied by $t^{-1},1,1,t, \, 1,t,t^{-1},1$, respectively.
Taking $t$ to be as in {\color{blue}\cite{OB1989}}, and $\lambda$ herein to be given by
\begin{equation} {\mathrm{e}}^{\lambda/2} \; = \; t \;\; , \; \; \end{equation}
we obtain the weights of (2.64) -- (2.67) of {\color{blue}\cite{OB1989}}, $q$ therein
being the $Q$ of this paper.
These additional edge weights cancel out of the partition function and of the eigenvalue
$\Lambda$.
The parameter $\mu$ of {\color{blue}\cite{OB1989}} is given by $\mu = i \lambda $
and we replace $ v $ therein by $v = \mu - 2 i u$ so
\begin{equation} \label{defx}
x = \frac{\sinh ( \lambda-2 u) }{\sinh 2 u } \, \, . \end{equation}
Then
equations (2.86), (2.87), (2.74) of {\color{blue}\cite{OB1989}} become (replacing
$n,N$ therein by $N, 2N $)
\begin{equation} \label{eigen}
\Lambda^2 \; = \; \prod_{j=1}^N \frac{\sinh ( \lambda-u-\alpha_j ) \sinh ( \lambda-u+\alpha_j )}
{\sinh ( u-\alpha_j ) \sinh ( u+\alpha_j )} \;\; , \; \; \end{equation}
where $\alpha_1, \ldots , \alpha_N$ are given by the $N$ ``Bethe equations"
\begin{displaymath} \left[ \frac{\sinh (u+\alpha_j) \sinh (\lambda-u+\alpha_j)}{\sinh (u-\alpha_j)
\sinh (\lambda-u-\alpha_j)} \right]^{2N} = \;\;\;\;\;\; \;\;\;\;\;\; \;\;\;\;\;\; \;\;\;\;\;\; \;\;\;\;\;\; \;\;\;\;\;\; \;\;\;\;\;\; \;\;\;\;\;\;
\;\;\;\;\;\; \;\;\;\;\;\; \;\;\;\;\;\; \;\;\;\;\;\; \nonumber \end{displaymath}
\begin{equation} \label{Betheeqns}
\prod_{m=1, m \neq j}^N \frac{\sinh (\lambda + \alpha_j-\alpha_m) \,\
\sinh (\lambda + \alpha_j + \alpha_m) }
{\sinh (\lambda + \alpha_m-\alpha_j) \,\sinh (\lambda - \alpha_j - \alpha_m) }
\! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \end{equation}
for $j=1, \ldots , N$.
(\ref{Betheeqns}) has many solutions, corresponding to the various eigenvalues. We are only
concerned with the maximum eigenvalue.
\subsection{Solution of the Bethe equations}
If the number of states $Q$ is less than four, then $\lambda$ is pure imaginary and the large-$N$
solution of (\ref{Betheeqns}) is given in {\color{blue}\cite{OB1989}}.
If $Q > 4$, then $\lambda$ is real and positive. For the ferromagnetic Potts model,
from (\ref{weights}) $x$ is real and positive so
\begin{equation} \label{restr} 0 < u < \lambda /2 \, \, . \end{equation}
Here we obtain the large-$N$ behaviour of the
maximum eigenvalue $\Lambda_{\rm max}$ for this case, using a method similar to
that given in Appendix D of {\color{blue}\cite{RJB1972}} for the eight-vertex model.
First write (\ref{eigen}), (\ref{Betheeqns}) in terms of polynomials in the
variables
\begin{equation} \label{defqw} q = {\mathrm{e}}^{-2 \lambda} \;\; , \;\; w = e^{-2 u} \;\; , \;\; z_j = e^{-2 \alpha_j } \end{equation}
as
\begin{equation} \label{eigen2}
\Lambda^2 \; = \; \left( w^{2N}/q^N \right) \, \prod_{j=1}^N \frac{(1-
q/w z_j ) (1- q z_j /w) }{(1- w/z_j ) (1- w z_j) } \;\; , \; \; \end{equation}
\begin{displaymath} z_j^{-4N} \left[ \frac{(1-w z_j)( 1-q z_j/w)}{(1-w/z_j)( 1-q /w z_j)} \right]^{2N}
\; = \; \;\;\;\;\;\; \;\;\;\;\;\; \;\;\;\;\;\; \;\;\;\;\;\; \;\;\;\;\;\; \;\;\;\;\;\; \;\;\;\;\;\; \;\;\;\;\;\;
\;\;\;\;\;\; \;\;\;\;\;\; \;\;\;\;\;\; \;\;\;\;\;\; \nonumber \end{displaymath}
\begin{equation} \label{beeqns}
z_j^{2-2N} \frac{(1-q/z_j^2)}{(1-q z_j^2)} \prod_{m=1}^N \frac{(1-q z_j z_m)(1-q z_j/z_m)}
{(1-q z_m /z_j)(1-q /z_j z_m)} \;\; , \;\; j = 1, \ldots , N \, \, . \! \! \! \! \! \! \! \! \! \end{equation}
\vspace{5mm}
Consider the limit when $q, w \rightarrow 0$.
From (\ref{restr}), the largest of $q, w, q/w$ is $w$, so if we take $w \rightarrow 0$,
then it is also true that $q, q/w \rightarrow 0$. Suppose that $z_1, \ldots, z_N $
remain of order one. Then (\ref{beeqns}) becomes
\begin{equation} \label{eqz}
z_j^{2N+2} = 1 \;\; , \;\; j = 1, \ldots , N \, \, . \end{equation}
This has $2N+2$ solutions for $z_j$.
The Bethe ansatz used in {\color{blue}\cite{OB1989}} is a sum over all permutations
and inversions of $z_1, \ldots z_N$. If any $z_j $ is equal to its inverse, or if any two
are equal to one another, or to their inverses, then the Bethe ansatz gives a zero
eigenvector, which must be rejected. Replacing any $z_j$ (or $z_m$) in (\ref{eigen2}),
(\ref{beeqns}) by its inverse does not change the equations.
We therefore reject the solutions $z_j = \pm 1 $ of (\ref{eqz}), and group the
remaining $2N$ solutions into $N$ distinct pairs $z_j, 1/z_j$. Equivalently,
we require $z_1, \ldots , z_N$ to be distinct and to lie in the upper half of the
complex plane.
Then there is a unique solution of (\ref{eqz}) for the $z_1, \ldots , z_N$ , and the
corresponding eigenvalue in this limit is
\begin{equation} \Lambda^2 = w^{2N}/q^N \, \, . \end{equation}
This is indeed then the maximum eigenvalue $\Lambda_0$, corresponding to
all the left-hand arrows in $\cal L'$ being
down, and the arrows then alternating in direction from left to right. The $N$ vertices
in odd rows are in configuration 5, those in even rows in configuration 6.
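Explicitly, the roots for this eigenvalue may be taken to be the $(2N+2)$th roots of unity in the upper half plane,
\begin{displaymath}
z_j \; = \; {\mathrm{e}}^{\, i \pi j/(N+1)} \;\; , \;\; j = 1, \ldots , N \;\; , \; \;
\end{displaymath}
the remaining solutions of (\ref{eqz}) being $z = \pm 1$ and the inverses $1/z_j$.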
Now define the functions
\begin{equation} r(z) = (1-w z)^{2N} (1-q z/w)^{2N} (1-q z^2) \;\; , \; \; \end{equation}
\begin{equation} R(z) \; = \; \prod_{m=1}^N (1-z/z_m)(1-z \, z_m) \;\; , \; \; \end{equation}
\begin{equation} S(z) \label{eqnS}
\; = \; \frac{z^{2N+2} \, r(1/z)} {R(q/z)} - \frac{r(z)}{R(q z)} \, \, . \end{equation}
Then (\ref{eigen2}), (\ref{beeqns}) can be written simply as
\begin{equation} \label{Lam}
\Lambda^2 \; = \; \frac{w^{2N} \, R(q/w)}{q^N \, R(w) } \;\; , \; \; \end{equation}
\begin{equation} S(z_j) \; = \; 0 \;\; , \;\; j = 1, \ldots , N
\, \, . \end{equation}
$S(z)$ therefore has zeros when $z = z_m$ or $z=1/z_m$. It also has zeros at $ z= 1 $
and $z=-1$. It is of course a rational function, but if we take $z, z_m$ to be of order unity
and expand in powers of $q, \, w$ and $q/w$, then to order $w^{2N}$, $S(z)$
remains a polynomial of degree $2N+2$.
To this order therefore, we can set
\begin{equation} S(z) = (z^2-1) R(z) \, \, . \end{equation}
Further, the terms proportional to $z^{2N+2}, z^{2N+1}, z^{2N}, \ldots ,
z^{N+2}$ come solely from the first term on the RHS of (\ref{eqnS}), while the
terms proportional to $1, z, z^2, \ldots z^N$ come from the second term.
Using the second feature, it follows that for $|z| <1 $,
\begin{equation} \label{res}
\frac{r(z)}{R(q z)} \; = \; (1-z^2) R(z) \, \, . \end{equation}
More accurately, if $|z| < {\mathrm{e}}^{-\delta}$, then (\ref{res}) is true to
relative order $ {\mathrm{e}}^{-N\delta}$.
Since this is true for $|z| <1$, it is more strongly true for $|z|<q$, so we can replace
$z$ by $qz$ to obtain
\begin{equation} \label{res2}
\frac{r(qz)}{R(q^2 z)} \; = \; (1-q^2 z^2) R(qz) \, \, . \end{equation}
Proceeding in this way, noting that $R(z) \rightarrow 1$ as $z \rightarrow 0$,
we can solve the equations (\ref{res}), (\ref{res2}), $\ldots$ ,
for $R(z)$ to obtain
\begin{equation} R(z) \; = \; \prod_{k=0}^{\infty} \frac{( 1 \! - \! q^{4k+2}z^2) \, r(q^{2k} z)}{\; \; (1\, - \, q^{4k}z^2)
\; \; r(q^{2k+1} z)} \;\; , \;\; |z| <1 \;\; , \; \; \end{equation}
i.e.
\begin{displaymath} R(z) = \prod_{k=0}^{\infty} \frac{(1-q^{4k+1} z^2)(1-q^{4k+2} z^2)}
{(1-q^{4k} z^2)(1-q^{4k+3} z^2)}
\left[ \frac{(1-q^{2k} w z )(1-q^{2k+1} z/w)}{(1-q^{2k+1} w z )(1-q^{2k+2} z/w)} \right]^{2N} \end{displaymath}
or
\begin{equation} \label{resR}
\log R(z) \; = \; \sum_{n=1}^{\infty} \frac{(1-q^n) z^{2n} }{n (1+q^{2n} ) } -
2 N \sum_{n=1}^{\infty} \frac{(w^n+q^n/w^n) z^n}{n (1+q^n ) } \, \, . \end{equation}
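The equivalence of the product and series forms of $R(z)$ can be verified numerically; the following sketch (not part of the original derivation, with arbitrarily chosen parameter values satisfying $|z|<1$) confirms that they agree to machine precision:
\begin{verbatim}
# Check: infinite-product form of R(z) versus exp of the series (resR).
import math

q, w, z, N = 0.05, 0.4, 0.3, 3          # arbitrary test values, |z| < 1

R_prod = 1.0
for k in range(100):
    R_prod *= ((1 - q**(4*k+1)*z*z) * (1 - q**(4*k+2)*z*z)
             / ((1 - q**(4*k)*z*z) * (1 - q**(4*k+3)*z*z)))
    R_prod *= ((1 - q**(2*k)*w*z) * (1 - q**(2*k+1)*z/w)
             / ((1 - q**(2*k+1)*w*z) * (1 - q**(2*k+2)*z/w)))**(2*N)

log_R = sum((1 - q**n)*z**(2*n)/(n*(1 + q**(2*n)))
            - 2*N*(w**n + q**n/w**n)*z**n/(n*(1 + q**n))
            for n in range(1, 100))

print(R_prod, math.exp(log_R))          # the two values coincide
\end{verbatim}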
\subsubsection{The free energies}
Substituting (\ref{resR}) into (\ref{Lam}), we get, to within additional terms that vanish
exponentially fast as $N$ becomes large,
\begin{displaymath} \log \Lambda_0^2 = N \! \left[ \log \frac{w^2}{q} + \! 2
\sum_{n=1}^{\infty} \frac{(w^{2n} \! - \! q^{2n}w^{-2n})} {n (1+q^{n})} \right] -
\sum_{n=1}^{\infty} \frac{(1 \! - \! q^n)(w^{2n} \! - \! q^{2n}w^{-2n})} {n (1+q^{2n})} \end{displaymath}
so from (\ref{Mlarge}), the bulk and surface free energies of the original Potts model of
(\ref{Pottspartnfn}) and (\ref{freeenergies}) are
\begin{equation} \label{resfb1}
f_b \; = \; -{\textstyle \frac{1}{2}} \log Q + \log x - \log \frac{w^2}{q} - 2
\sum_{n=1}^{\infty} \frac{(w^{2n} \! - \! q^{2n}w^{-2n})} {n (1+q^{n})} \;\; , \; \; \end{equation}
\begin{equation} \label{surface}
f_s \; = \;
\sum_{n=1}^{\infty} \frac{(1 \! - \! q^n)(w^{2n} \! - \! q^{2n}w^{-2n})} {n (1+q^{2n})}
\, \, . \end{equation}
{From} (\ref{weights}), (\ref{defx}) \begin{displaymath} Q = q+2+ q^{-1}\;\; , \;\;
x =\frac{w^2 (1-q/w^2)}{ q^{1/2} (1-w^2)} \;\; , \; \; \end{displaymath} so \begin{equation} \label{resfb}
f_b \; = \; \log \left( \frac{q}{1+q} \right) - \sum_{n=1}^{\infty} \frac{(1-q^n)(w^{2n}+
q^n/w^{2n})}{n(1+q^n)} \end{equation}
which is the same result as that of eqns. (12.5.5) and (12.5.6c) of
{\color{blue}\cite{book}} , $q, \psi, \beta$ therein being the $Q, f_b, \lambda-2u$ of
this paper. We can also write (\ref{resfb}), (\ref{surface}) as
\begin{equation} \label{resfb3}
f_b \; = \; - K_1-K_2 - \log (1+q) + \sum_{n=1}^{\infty} \frac{q^n \, (1-q^n)(w^{2n}+
q^n/w^{2n})}{n(1+q^n)} \;\; , \; \; \end{equation}
\begin{equation} \label{surface2}
f_s \; = \; \log\left( \frac{1 \! - \! q^2/w^2}{1 \! - \! w^2} \right) -
\sum_{n=1}^{\infty} \frac{q^{n} (1 \! + \! q^n)(w^{2n} \! - \! q^{2n} w^{-2n})} {n (1+q^{2n})} \, \, . \end{equation}
Rotating the model through $90^{\circ}$ is equivalent to inverting $x$, i.e.\ to
replacing $u$ by $\lambda/2 -u$, and $w$ by $q^{1/2}/w$. We see that this
does indeed leave the RHS of (\ref{resfb}) unchanged. Also, making this rotation
we obtain from (\ref{surface}) the result
\begin{equation} \label{resfsp}
f'_s \; = \;
\sum_{n=1}^{\infty} \frac{q^n (1 \! - \! q^n)( w^{-2n} \! - \! w^{2n})} {n (1+q^{2n})}
\end{equation}
for the horizontal surface free energy.
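As a consistency check, at the isotropic point $w = q^{1/4}$ the series (\ref{surface}) and (\ref{resfsp}) coincide, both reducing to
\begin{displaymath}
f_s \; = \; f'_s \; = \; \sum_{n=1}^{\infty} \frac{q^{n/2} \, (1-q^n)^2}{n \, (1+q^{2n})} \;\; , \; \;
\end{displaymath}
as rotation invariance requires.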
\section{The isotropic case conjectures of Vernier and Jacobsen}
\label{sec4}
\setcounter{equation}{0}
\subsection{Bulk and surface free energies}
Vernier and Jacobsen{\color{blue}\cite{VJ2012}} negated the free energies; here we
revert to the conventional signs, as given in (\ref{freeenergies}). As we noted earlier,
if $q_{VJ}$ is their $q$, then our $q = q_{VJ}^2$.
For the rotationally
invariant case, when $w= q^{1/4}$, they obtained
\begin{equation}
{\mathrm{e}}^{-f_b} \; = \; \frac{(1+q)}{q (1-q^{1/2})^2} \; \prod_{k=1}^{\infty}
\left( \frac{1- q^{2k-1/2}}{1-q^{2k+1/2} }\right)^4 \, \, . \end{equation}
Taking logarithms, this gives
\begin{equation} f_b \; = \; \log \left( \frac{q}{1+q} \right) - 2 \sum_{n=1}^{\infty} \frac{q^{n/2} \,
(1-q^{n})}{n (1+q^{n})} \, \, . \end{equation}
They observed that this does indeed agree with the known result (\ref{resfb}) above.
They also conjectured that
\begin{equation} {\mathrm{e}}^{-f_s} \; = \; (1-q^{1/2}) \prod_{k=1}^{\infty}
\left( \frac{1-q^{4k-1/2}}{1-q^{4k-5/2}} \right)^2 \;\; , \; \; \end{equation}
i.e.
\begin{equation} \label{conjfs}
f_s \; = \; \sum_{n=1}^{\infty} \frac{q^{n/2} (1-q^n)^2}{n (1+q^{2n})} \, \, . \end{equation}
Again, this agrees with our result (\ref{surface}) when $w = q^{1/4}$.
\subsection{The corner free energy} \label{42}
Vernier and Jacobsen{\color{blue}\cite{VJ2012}} also conjectured from their series
expansions that the corner free energy is given by
\begin{equation} \label{conj}
{\mathrm{e}}^{-f_c} \; = \; \prod_{k=1}^{\infty} \frac{1}{(1-q^{4k-3})(1-q^{4k-2})^4 (1-q^{4k-1})}
\;\; , \; \; \end{equation}
i.e.
\begin{equation} \label{cnrfree} f_c \; = \; - \sum_{n=1}^{\infty} \frac{q^n+4 \, q^{2n} + q^{3n}}{n (1-q^{4n}) }
\, \, . \end{equation}
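(The second form follows on expanding each logarithm of (\ref{conj}) and summing the resulting geometric series in $k$; for example $\sum_{k=1}^{\infty} q^{(4k-3)n} = q^n/(1-q^{4n})$.)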
\subsection{Our series expansions}
\label{43}
We have also used series expansions to test Vernier and Jacobsen's conjectures. We put
the six-vertex model
into interaction-round-a-face (IRF) form{\color{blue}\cite[\S 10.3]{book}} and calculated
the finite-size partition function by dividing it into four corners, as in the corner transfer
matrix method{\color{blue}\cite[Fig. 13.2]{book}}, and building up the lattice by going round
the centre spin. We took
\begin{equation} w = q^{1/4} s^{1/2} \end{equation}
and expanded $f_b, f_s, f'_s, f_c$ in powers of $q$ for given $s$. The coefficients of
the expansion are Laurent polynomials in $s$, and in the
isotropic (rotation-invariant) case $s$ is equal to one.
This was reasonably efficient, but we were only able to get to order $q^9$,
whereas Vernier and Jacobsen{\color{blue}\cite[\S3.2]{VJ2012}} went to order $q^{31/2}$.
We of course agreed with them for $s=1$.
For general $s$, we found, to the order to which we went, that $f_c$ was {\em independent}
of $s$ (i.e. all the coefficients were constants), suggesting that this is true to
all orders and $f_c$ is exactly independent of $s$ or $w$, being a function only of $q$.
This agrees with our result for $f_c$ of the next section.
\section{Inversion relations}
\label{sec5}
\setcounter{equation}{0}
{From} (\ref{weights}) and (\ref{xxx}),
\begin{equation} {\mathrm{e}}^{K_1} = 1 + Q^{1/2} x \;\; , \;\; {\mathrm{e}}^{K_2} = 1 + Q^{1/2} /x
\;\; , \; \; \end{equation}
so from (\ref{defx}),
\begin{eqnarray} \label{K1K2}
{\mathrm{e}}^{K_1} & = & \frac{\sinh (2 \lambda-2 u)}{\sinh 2 u } \; = \; \frac{w^2}{q} \;
\frac{1- q^2/w^2}{1-w^2} \;\; , \; \; \nonumber \\
{\mathrm{e}}^{K_2} & = & \frac{\sinh ( \lambda+2 u)}{\sinh (\lambda -2 u )} \; = \;
\frac{1}{w^2} \;
\frac{1- q w^2}{1-q/w^2} \, \, . \end{eqnarray}
We regard these equations as defining $K_1,K_2$ as functions of
the variable $u$. Then
\begin{equation} {\mathrm{e}}^{K_1(u) } \, {\mathrm{e}}^{K_1(\lambda-u)} = 1 \;\; , \;\;
{\mathrm{e}}^{K_2(\lambda-u) } = \frac{ \sinh(3 \lambda-2u)}{\sinh(2u-\lambda) }
\; = \; 2-Q-{\mathrm{e}}^{K_2(u)} \end{equation}
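The last equality is a step worth making explicit:
\begin{displaymath}
{\mathrm{e}}^{K_2(u)} + {\mathrm{e}}^{K_2(\lambda-u)} \; = \; \frac{\sinh(\lambda+2u) - \sinh(3\lambda - 2u)}{\sinh(\lambda-2u)} \; = \; -2\cosh 2\lambda \; = \; 2 - Q \;\; , \; \;
\end{displaymath}
using $\sinh A - \sinh B = 2 \cosh[(A+B)/2]\, \sinh[(A-B)/2]$ and $Q = 2+2\cosh 2\lambda$.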
The row-to-row transfer matrix of the Potts model, as formulated in (\ref{Pottspartnfn}), is
$\tilde{T}_1 \tilde{T}_2$, where
\begin{equation} \label{defT1T2}
(\tilde{T}_1)_{{ \sigma},{\sigma}' } \; = \; \delta(\sigma,\sigma')
\prod_{j=1}^{N-1} {\mathrm{e}}^{K_1 \delta({\sigma}_j, {\sigma}_{j+1}) }
\;\; , \;\; (\tilde{T}_2)_{{ \sigma},{\sigma}' } \; = \;
\prod_{j=1}^N {\mathrm{e}}^{K_2 \delta({\sigma}_j, {\sigma}'_{j}) }
\end{equation}
writing $\sigma = \sigma_1, \ldots, \sigma_N$ for all the $N$ spins in a row, and
similarly for the spins $\sigma'= \sigma'_1, \ldots, \sigma'_N$ in the row above.
Regarding $\tilde{T}_1, \tilde{T}_2$ as functions of the variable $u$, it follows that
\begin{equation} \label{invT}
\tilde{T}_1(u) \tilde{T}_1(\lambda -u) = {\bf{1} }\;\; , \;\;
\tilde{T}_2(u) \tilde{T}_2(\lambda -u) \; = \; \xi(u)^N {\bf{1} }
\;\; , \; \; \end{equation}
where $\bf{1}$ is the $Q^N$-dimensional identity matrix and
\begin{equation} \xi (u) \; = \; {\mathrm{e}}^{K_2(u)} {\mathrm{e}}^{K_2(\lambda-u)} +Q-1 \; = \; - \, \frac{ Q \sinh (2u)
\sinh (2 \lambda-2u)}{\sinh (\lambda-2 u )^2 } \, \, . \end{equation}
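In more detail (a step not spelled out above), the product $\tilde{T}_2(u) \, \tilde{T}_2(\lambda-u)$ factorizes over the sites $j$, and for each site
\begin{displaymath}
\sum_{\sigma''_j} {\mathrm{e}}^{K_2(u) \delta(\sigma_j,\sigma''_j)} \, {\mathrm{e}}^{K_2(\lambda-u) \delta(\sigma''_j,\sigma'_j)} \; = \;
\left\{ \begin{array}{ll}
{\mathrm{e}}^{K_2(u)+K_2(\lambda-u)} + Q - 1 \; = \; \xi(u) \;\; , & \sigma_j = \sigma'_j \;\; , \\
{\mathrm{e}}^{K_2(u)} + {\mathrm{e}}^{K_2(\lambda-u)} + Q - 2 \; = \; 0 \;\; , & \sigma_j \neq \sigma'_j \;\; ,
\end{array} \right.
\end{displaymath}
the off-diagonal entries vanishing because ${\mathrm{e}}^{K_2(u)} + {\mathrm{e}}^{K_2(\lambda-u)} = 2-Q$.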
Define the combined transfer matrix
\begin{equation} \label{defV}
V = \tilde{T}_2^{1/2} \, \tilde{T}_1 \, \tilde{T}_2^{1/2} \end{equation}
and let $| 0 \rangle$ and $\langle 0 |$ be the $Q^N$-dimensional column and row vectors
all of whose entries are one. Then from (\ref{Pottspartnfn})
\begin{equation} \label{partf}
Z_P \; = \; \langle 0 | \tilde{T}_1 \tilde{T}_2 \tilde{T}_1 \cdots \tilde{T}_2 \tilde{T}_1 | 0 \rangle \; = \;
\langle 0 | \tilde{T}_2^{-1/2} \, V^M \, \tilde{T}_2^{-1/2} | 0 \rangle \, \, . \end{equation}
Let \begin{equation} \Delta \; = \; \Delta (u) \; = \; {\mathrm{e}}^{K_2} + Q -1 \; = \; \frac{ 2 \, \cosh \lambda \, \sinh (2 \lambda-2u)}
{\sinh( \lambda-2 u ) } \;\; , \; \; \end{equation}
then {from} (\ref{defT1T2}),
\begin{equation} \tilde{T}_2 \, | 0 \rangle \; = \; \Delta ^N \, | 0 \rangle \;\; , \; \; \end{equation}
so $ | 0 \rangle $ is an eigenvector of $\tilde{T}_2$ and
\begin{equation} \tilde{T}_2^{-1/2} \, | 0 \rangle \; = \; \Delta^{-N/2} \, | 0 \rangle \, \, . \end{equation}
Hence (\ref{partf}) can be written
\begin{equation} \label{partfn }
Z_P \; = \; \Delta^{-N} \, \langle 0 | V^M | 0 \rangle \, \, . \end{equation}
The $\Lambda^2$ of (\ref{eigval}) is then an eigenvalue of $V$, so if we neglect only terms
that are relatively exponentially small when $M$ is large, we can write (\ref{partfn }) as
\begin{equation} \label{pfn} Z_P \; = \; \Delta^{-N} \, \Lambda_{\rm max} ^{2M} \, \langle \psi | 0 \rangle^2
\;\; , \; \; \end{equation}
where $\psi$ is the maximal eigenvector of $V$:
\begin{equation} V \psi \; = \; \Lambda_{\rm max} ^2 \, \psi \, \, . \end{equation}
The number of rows $M$ enters (\ref{pfn}) only explicitly ($\Lambda_{\rm max}$ and
$\psi$ are independent of $M$), so from (\ref{freeenergies}),
\begin{equation} \label{relf}
-N f_b - \! f_s = 2 \log \Lambda_{\rm max} \;\; , \;\; \! -N f'_s - \! f_c = -N \log \Delta +
2 \log \langle \psi | 0 \rangle \, \, . \end{equation}
We expect these equations to hold in the physical region, where
$ 0 < u < \lambda/2 $ and all the Boltzmann weights are positive. We would like to analytically
continue them to $u > \lambda/2$.
For the Potts model turned through $45^{\circ}$, with cylindrical boundary conditions, this
is not difficult. The eigenvector $\psi$ is independent of $u$, so for finite $N$ the eigenvalue
$\Lambda_{\rm max}$ is (after removing the known poles coming from $e^{K_1}$ and
${\mathrm{e}}^{K_2}$) a polynomial on $w$. Here we do not have these properties, but we shall show
that if we make some plausible analyticity assumptions, then we can obtain the results
(\ref{resfb1}) - (\ref{resfsp}) very simply.
{From} (\ref{invT}) and (\ref{defV}), exhibiting the dependence of $V$ on $u$,
\begin{equation} \label{VV}
V(u) V(\lambda-u) \; = \; \xi(u)^N\, \bf 1 \, \, . \end{equation}
Hence if $\psi$ is the maximal eigenvector of $V(u)$, it is also an eigenvector of
$V(\lambda-u)$. Let $\Lambda (u)$ and $\Lambda(\lambda-u)$
be the associated eigenvalues. (For $ 0 < u < \lambda/2 $
the latter will be the smallest of the eigenvalues.) Then
\begin{equation} \label{lamlam}
\Lambda (u)^2 \, \Lambda(\lambda-u)^2 \; = \; \xi(u)^N\, \, . \end{equation}
This relation defines $\Lambda(u)$ in the larger interval $0 < u < \lambda$.
We {\em assume} that the resulting function
$\Lambda (u)$ is analytic throughout this extended interval, in particular
at the inversion point $u = \lambda/2$ (apart from a trivial
pole of degree $N$ coming from the double pole of $\xi (u)$).
\addtocounter{equation}{1}
\setcounter{storeeqn}{\value{equation}}
\setcounter{equation}{0}
\renewcommand{\theequation}{\arabic{section}.\arabic{storeeqn}\alph{equation}}
We also assume that the relations (\ref{relf}) can be analytically continued into
the extended interval. Then on replacing $u$ by $\lambda-u$ in the first relation
and using (\ref{lamlam}), we obtain
\begin{equation} \label{518a}
-N f_b(\lambda-u) -f_s(\lambda-u) \; = \; N \, \log \xi(u) -2 \, \log \Lambda_{\rm max} \end{equation}
where $\Lambda_{\rm max} = \Lambda(u)$.
Doing the same in the second relation gives
\begin{equation} \label{518b} - N f'_s(\lambda-u) -f_c(\lambda-u) \; = \; -N \log \Delta(\lambda-u) + 2 \log
\langle \psi | 0 \rangle \;\; , \; \; \end{equation}
$\psi$ being unchanged.
\setcounter{equation}{\value{storeeqn}}
\renewcommand{\theequation}{\arabic{section}.\arabic{equation}}
\addtocounter{equation}{1}
\setcounter{storeeqn}{\value{equation}}
\setcounter{equation}{0}
\renewcommand{\theequation}{\arabic{section}.\arabic{storeeqn}\alph{equation}}
Adding (\ref{518a}) to the first of the relations (\ref{relf}) (exhibiting the dependence on $u$),
we eliminate $\Lambda_{\rm max}$. Then
separating the terms linear in $N$ from those independent of $N$, we obtain
\begin{equation} \label{inv1}
- f_b (u) -f_b(\lambda-u) \; = \; \log \xi (u) \;\; , \;\; - f_s(u) -f_s(\lambda-u) \; = \; 0 \, \, . \end{equation}
Subtracting (\ref{518b}) from the second relation (\ref{relf}), we eliminate
$ \langle \psi | 0 \rangle $ and obtain
\begin{equation} \label{inv2}
-f'_s(u) + f'_s(\lambda-u) \; = \; \log \frac {\Delta(\lambda-u) }{\Delta(u)} \;\; , \;\;
-f_c(u) + f_c(\lambda-u) \; = \; 0 \, \, . \end{equation}
\setcounter{equation}{\value{storeeqn}}
\renewcommand{\theequation}{\arabic{section}.\arabic{equation}}
We refer to the four relations (5.19) as the {\em inversion relations}. There are also four
{\em rotation relations} that can be obtained by noting that replacing $u$ by $\lambda/2-u$
interchanges $K_1$ with $K_2$ which is equivalent to rotating the lattice through
$90^{\circ}$, so
\begin{eqnarray} \label{rotn1}
f_b(u) = f_b(\lambda/2-u) & , & f_s(u) = f'_s(\lambda/2-u) \!\! \;\; , \; \; \nonumber \\
f'_s(u) = f_s(\lambda/2-u) & , & f_c(u) = f_c(\lambda/2-u) \, \, . \end{eqnarray}
\subsection{Alternative derivation of the free energies}
We shall now show that we can use the above inversion and rotation relations
to derive the bulk and surface free energies,
and to show that the corner free energy depends only on the parameter $\lambda$, but
{\em not} on $u$. The method depends on certain analyticity assumptions, so is
not rigorous, but it is much simpler than the Bethe ansatz method used above.
\subsubsection{Assumptions}
For finite $M, N$ the partition function is a finite sum of products of ${\mathrm{e}}^{K_1}$ and
${\mathrm{e}}^{K_2}$, so from (\ref{K1K2}) is a rational function of $w^2$. The denominator
is a product of at most $M(N-1)$ powers of $1-w^2$, and of at most
$N(M-1)$ powers of $1-q/w^2$. From (\ref{freeenergies}), we therefore expect
${\mathrm{e}}^{-f_b}$ to have simple poles at $w^2=1$ and $w^2= q$, ${\mathrm{e}}^{-f_s}$
to have a simple zero at $w^2=1$, and ${\mathrm{e}}^{-f'_s}$ to have a simple zero at
$w^2=q$.
Define $F(u), G(u)$ by
\begin{equation} {\mathrm{e}}^{-f_b} \; = \; {\mathrm{e}}^{K_1+K_2} F(u) \;\; , \;\;
{\mathrm{e}}^{-f_s(u) } \; = \; \frac{(1-w^2) \, G(u)}{1-q^2/w^2} \;\; , \; \; \end{equation}
then, consistent with the above remarks and with series expansions, we
{\em assume} that $\log F(u), \log G(u), f_c(u) $ are single-valued analytic
functions of $w^2$, not just in the physical regime $q < w^2 < 1$, but in an annulus
containing $ q \leq |w^2| \leq 1 $ in the complex $w^2$-plane.
Hence we can write
\begin{equation} \label{bexpn} \log F(u) \; = \; c_ 0^{(b)} +
\sum_{n=1}^{\infty} [c_n^{(b)} w^{2n} + d_n^{\, (b)} w^{-2n} ] \;\; , \; \; \end{equation}
\begin{equation} \label{conjG}
\log G(u) \; = \; c_ 0^{(s)} +
\sum_{n=1}^{\infty} [c_n^{(s)} w^{2n} + d_n^{\, (s)} w^{-2n} ] \;\; , \; \; \end{equation}
\begin{equation} \label{fcexpn}
f_c(u) \; = \; c_ 0^{(c)} +
\sum_{n=1}^{\infty} [c_n^{(c)} w^{2n} + d_n^{\, (c)} w^{-2n} ] \;\; , \; \; \end{equation}
where the expansions are convergent for $ q \leq |w^2| \leq 1 $.
We shall show that the relations (5.19), (\ref{rotn1}) then define the coefficients
in these expansions, with the sole exception of $c_ 0^{(c)} $.
This gives $f_b, f_s$ and $f_c$, and $f'_s$ is then given by
the third of the relations (\ref{rotn1}).
\subsubsection{Bulk free energy}
{From} (\ref{inv1}), (\ref{rotn1}) and (\ref{bexpn}),
\begin{eqnarray} \log F(u) + \log F(\lambda-u) & = & \log \xi(u) -K_1(u)-K_2(u) -K_1(\lambda - u)-
K_2(\lambda- u) \nonumber \\
& = & 2 \log (1+q) - \sum_{n=1} \frac{(1-q^n)(w^{2n}+q^{2n}/w^{2n})}{n} \, \, . \end{eqnarray}
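In obtaining the last line we used $K_1(u)+K_1(\lambda-u) = 0$, together with (from (\ref{K1K2}), replacing $w$ by $q/w$ for the $\lambda-u$ terms)
\begin{displaymath}
\log \xi(u) - K_2(u) - K_2(\lambda-u) \; = \; \log \frac{(1+q)^2 \, (1-w^2)(1-q^2/w^2)}{(1-q w^2)(1-q^3/w^2)} \;\; , \; \;
\end{displaymath}
whose logarithmic expansion gives the series shown.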
Using (\ref{bexpn}) and equating the series term by term, this gives
\begin{equation} c_0^{(b)} = \log (1+q) \;\; , \;\; c_n^{(b)} + q^{-2n} \, d_n^{(b)} = -(1-q^n)/n \;\; , \;\; n > 0
\, \, . \end{equation}
Further, the first of the rotation relations (\ref{rotn1}) gives $\log F(u) =
\log F(\lambda/2-u)$, since $K_1+K_2$ is unchanged by $u \rightarrow \lambda/2-u$,
and hence $d_n^{(b)} = q^{n} \, c_n^{(b)}$, so
\begin{equation} c_n^{(b)} = q^{-n} \, d_n^{(b)} = - \, \frac{q^n (1-q^n)}{n(1+q^n) } \;\; , \;\; n > 0 \;\; , \; \; \end{equation}
in agreement with our previous result (\ref{resfb3}).
\subsubsection{Surface free energy}
Using (\ref{rotn1}), we can write the first of the relations (\ref{inv2}) as
\begin{equation} \label{fsinv} f_s(u) - f_s(-u) \; = \; \log \frac{\Delta (\lambda/2-u)}{\Delta(\lambda/2+u)}
\; = \; \log \left[ - \frac{\sinh (\lambda+2 u)}{\sinh (\lambda-2u)} \right] \, \, . \end{equation}
Then (\ref{inv1}) and (\ref{conjG}) give $G(u) G(\lambda-u) = 1$, and hence from (\ref{conjG})
\begin{equation} c_0^{(s)} =0 \;\; , \;\; c_n^{(s)} +q^{-2n} d_n^{(s)} = 0 \;\; , \;\; n > 0 \, \, . \end{equation}
In terms of $w$ and $q$, using (\ref{K1K2}), the relation (\ref{fsinv}) becomes
\begin{equation} f_s(u) - f_s(-u) \; = \;
\log \left[ - \frac{(1 - qw^2)}{w^2 (1-q/w^2)} \right] \end{equation}
which implies
\begin{equation} - \log G(u) + \log G(-u) \; = \; \log \frac{(1-q w^2)(1-q^2 w^2)}{(1-q /w^2)(1-q^2 /w^2) }
\end{equation}
and hence, for $n > 0 $,
\begin{equation} c_n^{(s)} - d_n^{(s)} \; = \; \frac {q^n (1+q^n)}{n} \, \, . \end{equation}
It follows that
\begin{equation} c_n^{(s)} = \frac{q^n(1+q^{n})}{n(1+q^{2n})} \;\; , \;\; d_n^{(s)} =
- \, \frac{q^{3 n}(1+q^n) }{n(1+q^{2n})} \;\; , \; \; \end{equation}
so from (\ref{conjG})
\begin{equation} \log G(u) \; = \; \sum_{n=1} \frac{q^n (1+q^n) (w^{2n}-q^{2n}/w^{2n})}{n (1+q^{2n})}
\end{equation}
in agreement with our result (\ref{surface2}).
\subsubsection{Corner free energy}
Using (\ref{fcexpn}), the last of the relations (\ref{inv2}), (\ref{rotn1}) give
\begin{equation} d_n^{(c)} \; = \; q^{2n} \, c_n^{(c)} \; = \; q^{n} \, c_n^{(c)} \;\; , \;\; n > 0 \, \, . \end{equation}
Since $0 < q <1$, these equations imply
\begin{equation} c_n^{(c)} = d_n^{(c)} = 0 \;\; , \;\; n > 0 \, \, . \end{equation}
Hence we are left with
\begin{equation} f_c(u) \; = \; c_ 0^{(c)} \;\; , \; \; \end{equation}
i.e. $f_c(u)$ is a constant, independent of $u$, and this is in agreement with our
conjecture of sub-section \ref{43}. If Vernier and Jacobsen's conjecture (\ref{cnrfree})
is true for the isotropic case,
when $u = \lambda/4$, then it follows that it must be true for all $u$.
\subsection{Inversion relations for unsolved models}
The derivations of the previous sub-section rely on $\log F(u), \log G(u)$
and $f_c(u)$ being analytic at the inversion point $u = \lambda/2, w = q^{1/2}$,
where (\ref{VV}) implies that $V(u)$ is proportional to its inverse.
More strongly, they depend on them being analytic in a vertical strip in the
complex $u$-plane that contains the domain $0 \leq {\rm Re} (u) \leq \lambda/2$.
There are inversion relations for models that have not been solved, e.g. the
square lattice Ising model in a magnetic
field,{\color{blue}\cite{RJB1980}}-{\color{blue}\cite{Maillard94}} but the free
energies have complicated
singularities at the inversion point, and little progress has been made in solving them.
\subsection{Related work using the reflection relations}
Because of our assumptions regarding the analyticity properties of $\log F(u)$,
$\log G(u), f_c(u)$, the method of this section, while simple, is not rigorous. The
reflection Yang-Baxter
relations{\color{blue}\cite{Pearce87}}-{\color{blue}\cite{Pearce97}} can be used to
obtain functional relations for the transfer matrix eigenvalues, and in a private
communication{\color{blue}\cite{Pearcenotes}} Paul Pearce shows how one can
use these to obtain a more rigorous derivation of the inversion relations for the surface
free energies.
\section{Critical behaviour}
\label{sec6}
\setcounter{equation}{0}
It is shown in {\color{blue}\cite[\S 8.11]{book}} that the bulk free energy of the six-vertex
model has a singularity at $\lambda = 0$, which corresponds to $Q=4$ in the Potts model.
The singularity is of infinite order, being proportional to $\exp (-\pi^2/\lambda)$, i.e.
$\exp [-2 \pi^2/(Q-4)^{1/2}]$. What is the corresponding behaviour of the surface and corner
free energies?
To answer this we need the result (4.2) of Owczarek and Baxter{\color{blue}\cite{OB1989}}
for the surface free energy when $Q < 4$, which is (replacing $y$ by $2y$)
\begin{displaymath} f_s \; = \; 2\, s_{\infty} \; = \; \log \frac{\sin[(\mu+v)/2]}{\sin[(\mu-v)/2]} \; \; \; - \end{displaymath}
\begin{equation} \label{sinf}
\int_{-\infty}^{\infty} \frac{2 \, \sinh (2vy) \, \sinh(\pi y -2 \mu y) \, \cosh( \pi y -\mu y) \,
\cosh (\mu y) \, dy} { y \sinh (2 \pi y) \cosh(2 \mu y) } \;\; , \; \; \end{equation}
where $v, \mu $ are given in terms of our $\lambda, u$ by
\begin{equation} \label{muv}
\mu = -i \lambda \;\; , \;\; v= -i (\lambda-2u) \end{equation}
and the $Q, x_1, x_2, x$ of ((\ref{weights}) and (\ref{xxx}) above are given by
\begin{equation} Q^{1/2} = 2 \cos \mu \;\; , \;\; x_1 = x_2^{-1} = x = \frac{\sin v} {\sin (\mu-v)} \, \, . \end{equation}
In the physical regime (Boltzmann weights positive) $\mu, v$ are real and $ 0 < v < \mu$.
The factor 2 in (\ref{sinf}) comes from the fact that $N' = N/2$ in (4.1) of
{\color{blue}\cite{OB1989}}. Also, from (3.15) of {\color{blue}\cite{OB1989}},
$\sinh[(\pi-2\mu)y]$ in (4.2) should be $\sinh[(\pi-2\mu)y/2]$.
We can use the identity
\begin{displaymath} \sinh ( \pi y -2\, \mu y ) \cosh(\pi y - \mu y) = \sinh(\pi y -3 \mu y) \cosh (\pi y) +
\sinh ( \mu y) \cosh( 2 \mu y) \end{displaymath}
to write (\ref{sinf}) as
\begin{displaymath} f_s \; = \; \log \frac{\sin (\mu+v) }{\sin (\mu-v)} \, - \, {\cal P} \! \! \int_{-\infty}^{\infty}
\frac{ \sinh (2 v y) \, \cosh (\mu y) \, {\mathrm{e}}^{ \pi y-3 \mu y} }
{y \sinh (\pi y) \cosh ( 2 \mu y)} \, dy \;\; , \; \; \end{displaymath}
$\cal P$ denoting the principal value integral.
We want to analytically continue this result to $Q > 4$ so as to compare it with
(\ref{surface2}). We move $\mu$ into the lower half plane and can then close the integration
contour around the upper half of the $y$-plane. Summing the residues of the poles and using (\ref{muv})
gives
\begin{displaymath} f_s =\sum_{n=1}^{\infty}
\frac{ (1-q^n) (w^{2n}-q^{2n}/w^{2n})}{n (1+q^{2n})} \; \; - \end{displaymath}
\begin{equation} \label{anlytc}
4 \sum_{n \; \mathrm{odd} } \frac{[ i+(-1)^{(n-1)/2} ] \, \sinh [ \pi n (\lambda -2u)/2 \lambda ] \,
{\mathrm{e}}^{-\pi^2 n /2 \lambda } }{n \, (1-{\mathrm{e}}^{-\pi^2 n /2 \lambda } ) }
\;\; , \; \; \end{equation}
the second sum being over all positive odd integers $n$, i.e. $n= 1,3,5, \ldots $.
Comparing this with (\ref{surface}) above, we see that the dominant singularity in $f_s$ is
proportional to ${\mathrm{e}}^{-\pi^2/ 2 \lambda}$. This is of infinite order, i.e. all derivatives exist and
are continuous. This singularity is proportional to the square root of the dominant singularity
in $f_b$.
The conjectured expression (\ref{conj}) for the corner free energy can be written
\begin{equation} {\mathrm{e}}^{-f_c} \; = \; P(q)^{-1} \, P(q^2)^{-4} \;\; , \; \; \end{equation}
where
\begin{equation} P(q) = \prod_{k=1}^{\infty} (1-q^{2k-1}) \, \, . \end{equation}
The function
\begin{equation} {\cal Q}(q) \; = \; \prod_{n=1}^{\infty} (1-q^n) \end{equation}
occurs in Jacobi elliptic functions and satisfies the ``conjugate modulus'' relation
\begin{equation} {\cal Q}(q) \; = \; \epsilon^{-1/2} \, \exp \left[ {\frac{\pi (\epsilon- \epsilon^{-1})}{12}}
\right] \, {\cal Q}(q') \;\; , \; \; \end{equation}
where if $q = {\mathrm{e}}^{-2 \pi \epsilon}$, then $ q' = {\mathrm{e}}^{-2 \pi/ \epsilon}$. Noting that
${\cal Q}(q) = P(q) \, {\cal Q}(q^2)$ (split the product defining ${\cal Q}(q)$ into odd and even $n$),
so that $ P(q) = {\cal Q}(q)/{\cal Q}(q^2)$, it follows that
\begin{equation} P(q) \; = \; \sqrt 2 \, \exp \left[ - \frac{\pi \epsilon}{12} - \frac{\pi }{24 \epsilon}\right]
\left/ P({q'}^{1/2} ) \right. \;\; , \; \; \end{equation}
and hence that
\begin{equation} {\mathrm{e}}^{-f_c} \; = \; 2^{-5/2} \, \exp \left( \frac{ 3 \pi \epsilon}{4} +\frac{ \pi}{8 \epsilon } \right)
P({q'}^{1/2}) \, P({q'}^{1/4})^4 \end{equation}
in agreement with eqn. 81 of {\color{blue}\cite{VJ2012}} (the $q$ therein is our
${\mathrm{e}}^{-\pi \epsilon}$).
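These product manipulations are easy to check numerically. The following Python sketch, which is our own illustration and not part of the original derivation, evaluates truncated products (the truncation order \texttt{kmax} and the value of $\epsilon$ are arbitrary choices; convergence is very fast for $0<q<1$) and verifies both the conjugate-modulus relation for $P(q)$ and the resulting expression for ${\mathrm{e}}^{-f_c}$:
\begin{verbatim}
import math

def P(q, kmax=200):
    # truncated product P(q) = prod_{k>=1} (1 - q^(2k-1))
    out = 1.0
    for k in range(1, kmax + 1):
        out *= 1.0 - q**(2*k - 1)
    return out

eps = 0.8                                          # any eps > 0
q, qp = math.exp(-2*math.pi*eps), math.exp(-2*math.pi/eps)

# conjugate-modulus relation for P(q)
lhs = P(q)
rhs = math.sqrt(2)*math.exp(-math.pi*eps/12 - math.pi/(24*eps))/P(qp**0.5)

# corner free energy: exp(-f_c) = 1/(P(q)*P(q^2)^4)
lhs_c = 1.0/(P(q)*P(q**2)**4)
rhs_c = (2**-2.5*math.exp(3*math.pi*eps/4 + math.pi/(8*eps))
         * P(qp**0.5)*P(qp**0.25)**4)

print(lhs, rhs)      # agree to machine precision
print(lhs_c, rhs_c)  # agree to machine precision
\end{verbatim}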
Near the critical point $Q \rightarrow 4^{+}$ and $\epsilon, q' \rightarrow 0^{+}$. We see that
\begin{equation} f_c \sim \, - \, \frac{\pi}{8 \epsilon} \sim \, - \, \frac{\pi^2}{4 [2(Q-4)]^{1/2}} \;\; , \; \; \end{equation}
so $f_c$ becomes negatively infinite.
\section{Summary}
\setcounter{equation}{0}
In sections 2 and 3 we have adapted previous work{\color{blue}\cite{OB1989}} on the
$Q$-state self-dual Potts model on the square lattice from the case when $Q<4$ to
when $Q>4$. This gives the bulk free energy, which was
known{\color{blue}\cite[eqn. 12.5.6]{book}}, and also the vertical free energy. We
considered the general model, homogeneous but anisotropic. It contains two free
parameters, the vertical and horizontal interaction coefficients $K_1, K_2$, or
equivalently the parameters $q, w$ defined by (\ref{weights}), (\ref{defx}), (\ref{defqw}).
Vernier and Jacobsen{\color{blue}\cite{VJ2012}} had conjectured the bulk, surface
and corner free energies for the isotropic case, when $K_1=K_2$ and $w= q^{1/2}$.
We report these conjectures in section \ref{sec4}, and note that our
results for the bulk and surface free energies, specialized to this case, agree with
their conjectures. We also made series expansions for the more general anisotropic case
(taking $w = q^{1/4} s^{1/2}$, where $s$ is a parameter of order unity) and found that the
coefficients of the terms in the series were independent of $s$. They agreed with
Vernier and Jacobsen's conjectures, not just for $s=1$, but for {\em all} $s$.
It is known that the bulk free energy can be easily obtained using the ``inversion
relation" method{\color{blue}\cite{Strog}}, {\color{blue}\cite{Bax82}},
{\color{blue}\cite[\S 12.5]{book}}. In section \ref{sec5} we show how this can be
extended to the surface and corner free energies. Together with the simple
rotation relations and appropriate analyticity assumptions, these give an alternative
method (simpler than the Bethe ansatz calculation of Owczarek and
Baxter{\color{blue}\cite{OB1989}})
of deriving the surface free energy. They also imply that the corner free energy
is a function only of the number of states $Q$, in agreement with our series expansions
of section \ref{sec4}.
These inversion relation calculations are similar to those for the Ising
model.{\color{blue}\cite{Ising}}
Finally, in section \ref{sec6} we discuss the behaviour when $Q \rightarrow 4^{+}$
and $q \rightarrow 1^{-}$, which is the critical case of the associated
six-vertex model.
\section{Introduction}
Anomalous transport properties have been reported in the normal phase of the organic superconductors (TMTSF)$_2$PF$_6$ and (TMTSF)$_2$ClO$_4$, also called Bechgaard salts \cite{DenisJerome1,bechgaard1}, as a function of temperature and applied pressure. These quasi one-dimensional materials are remarkable in many respects, the least of all being the striking similarity of their temperature-pressure ($T\!-\!P$) phase diagram \cite{Jerome91} with other exotic superconductors, namely the iron-based superconductors \cite{doiron-leyraudSC,Jiang09}. At ambient pressure, (TMTSF)$_2$PF$_6$ undergoes a magnetic phase transition of spin density wave (SDW) character, where the Fermi surface is destabilized by the onset of a spin density wave of the itinerant conduction electrons \cite{Jerome82,Yamaji98,Montambaux88}.
Under application of high pressure, the compound becomes superconducting with a maximum critical temperature~$T_c \approx 1.2\ \mathrm{K}$ \cite{DenisJerome1}.
Superconductivity coexists with the SDW order until the pressure reaches a critical value~$P_c$ above which the SDW order is no longer observed \cite{brazovskiijerome},
indicating the existence of an SDW quantum critical point (QCP).
At larger pressure, superconductivity shows a smooth decrease of $T_c$ with pressure.
The similarity with pnictide superconductors is not only seen in the ($T\!-\!P$) phase diagram, showing the simultaneous occurrence of SDW order and superconductivity, but also in the properties of the normal phase \cite{doiron-leyraudSC}. Indeed, the electrical resistivity~$\rho$ shows a striking linear temperature dependence, $\rho-\rho_0 \propto T$, at~$11.8~\mathrm{kbar}$ and larger pressure in (TMTSF)$_2$PF$_6$, suggesting an unusual scattering mechanism that is possibly similar to the one present in the pnictides (and cuprates).
Extensive theoretical investigation has been carried out, based on a one-loop renormalization group (RG) formalism \cite{sedeki-bourbonnais} especially suitable for one-dimensional systems \cite{Giamarchi08a}. These studies have been reported to reproduce many of the most striking features observed experimentally \cite{sedeki-bourbonnais,efetovinc}. Of special importance is the interplay between itinerant antiferromagnetism (SDW) and superconductivity, which controls the magnitude of the coupling constants in the vicinity of the QCP. An extensive regime of criticality has been uncovered, for which linear in $T$ resistivity is obtained down to lowest temperatures (within the one-dimensional framework) on a finite range of pressure. Furthermore, the $T$-linear behavior disappears above an upper scale $T_0$ where the quadratic law is recovered.
Above $P_c$, the system is thought to be homogeneous, and although the quantum phase transition is reported to be weakly first-order \cite{brazovskiijerome}, it is believed that considerable quantum fluctuations are present to cause anomalous regimes, in both transport and thermodynamics.
In this paper, we build upon the RG insight and suggest an explicit model based on a Peierls instability in a \emph{quasi-one-dimensional system} driven to zero temperature by the presence of warping on the Fermi surface. The idea of driving a finite-temperature Peierls transition towards a QCP end point through increasing the warping has been used in past studies of chromium \cite{ricechromium,chromiumpepinnorman}, a system showing almost perfect nesting between Fermi pockets via translation of the SDW wave vector. Here, we shall employ a model of the same type, where the SDW transition is driven to zero temperature through the increase of the unnested part of the electronic dispersion \cite{millis,Moriya,hertz}. To the best of our knowledge, the theoretical investigation of a Peierls-type QCP driven by curvature effects is unprecedented, while the family of organic superconductors provides a suitable model system to check the theory.
\section{Model for a nesting QCP}
In the Bechgaard salts, the kinetic electronic spectrum is suitably modeled by the orthorhombic dispersion relation, \begin{align}
\label{eqn0}
\varepsilon(\bold{p}) = v_0(|p_x| - p_F) - t_b\cos(b p_y)-t_b'\cos(2b p_y)\ .
\end{align}
The corresponding Fermi surface is shown in Fig.~\ref{fig1}(a).
Generally $t_b'\ll t_b \ll v_0 p_F$ and the $t_b'$-unnesting term drives the system into criticality. For $t_b'=0$, the Fermi surface is perfectly nested with the nesting vector~$\bold{Q}=(2p_F,\pi/b)$ and at sufficiently low temperature the system is found to be in the SDW phase. For finite yet still small~$t_b'$, nesting is no longer perfect but still very good in the proximity of the inflection points --- the \emph{hot spots} --- of the Fermi surface. Eventually as $t_b'$ is increased further (by applying greater external pressure) beyond a critical value, nesting will be ineffective and the SDW order is destroyed. Nesting of finite parts of the Fermi surface at high temperature is now reduced to the four inflection points~$\mathbf{P}_{1-4}$ at $T=0$ as depicted in Fig.~\ref{fig1}(a). This picture survives down to zero temperature, implying a quantum phase transition at a critical value of the coupling constant~$t_b'$. Using the parameters of Ref.~\onlinecite{sedeki-bourbonnais}, we have $t_b \sim
200\ \mathrm{K}$, and the critical coupling $t_b' \approx 25.4\ \mathrm{K}$.
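The location of the hot spots is easily made quantitative. The following short Python sketch, our own illustration, finds the inflection points of the Fermi surface of Eq.~(\ref{eqn0}) for the parameter values just quoted; note that $v_0$, $p_F$ and $b$ drop out of the inflection condition:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

tb, tbp = 200.0, 25.4     # t_b and t_b' in kelvin, as quoted above

# On the p_x > 0 sheet the Fermi surface is
#   p_x(p_y) = p_F + [tb*cos(u) + tbp*cos(2u)]/v_0,  u = b*p_y,
# so inflection points solve d^2 p_x/d p_y^2 = 0, i.e.
#   f(u) = tb*cos(u) + 4*tbp*cos(2u) = 0.
f = lambda u: tb*np.cos(u) + 4.0*tbp*np.cos(2.0*u)

u0 = brentq(f, 0.0, np.pi/2)   # sign change: f(0) > 0, f(pi/2) < 0
print(u0, -u0)   # hot spots on this sheet at b*p_y = +/- u0;
                 # the mirror sheet p_x < 0 carries the other two
\end{verbatim}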
\begin{figure}[tbp]
\centerline{\includegraphics[width=\linewidth]{fig01.eps}}
\caption{(Color online) (a) Brillouin zone with the Fermi surface and the nesting vector~$\mathbf{Q}$ connecting two inflection points~$\mathbf{P}_1$ and~$\mathbf{P}_3$. Inset: Unit vectors~$\mathbf{e}_\parallel$ and~$\mathbf{e}_\perp$ parallel and perpendicular to the Fermi velocity at~$\mathbf{P}_3$ that define the coordinates~$\mathbf{k}=(k_\parallel,k_\perp)$.
(b) Polarization bubble for the paramagnon mode. (c) Fermion self-energy. The wavy line represents
the effective paramagnon from Eq.~(\ref{eqn1a}).}
\label{fig1}
\end{figure}
In order to study the physics around the hot spots, we expand the spectrum~(\ref{eqn0})
in their vicinity. Around~$\mathbf{P}_3$ and in leading order in~$t_b'$,
\begin{align}
\varepsilon(\mathbf{P}_3+\mathbf{k}) \simeq v k_\parallel - b_3 k_\perp^3 - b_4 k_\perp^4\ .
\label{eqn0a}
\end{align}
Herein, $k_\parallel$ and $k_\perp$ are the momentum components parallel and perpendicular, respectively, to the Fermi velocity at~$\mathbf{P}_3$, cf. the inset of Fig.~\ref{fig1}(a). The parameters in the reduced spectrum~(\ref{eqn0a}) are $v=v_0/\gamma$, $b_3 = b^3\gamma^3 t_b/6$, and $b_4 = - b^4\gamma^4 t_b'/2$ with $\gamma^{-1} = \sqrt{1+(bt_b/v)^2}$. Close to the ``nesting partner''~$\mathbf{P}_1$, the spectrum is similarly approximated as
\begin{align}
\varepsilon(\mathbf{P}_1+\mathbf{k}) = \varepsilon(\mathbf{P}_3 - \mathbf{k}) \simeq -v k_\parallel + b_3 k_\perp^3 - b_4 k_\perp^4
\ . \label{eqn0b}
\end{align}
Note that we have expanded the spectrum up to the term of fourth order in~$k_\perp$. Had we kept only terms up to order~$k_\perp^3$, the reduced hot-spot spectra~(\ref{eqn0a}) and~(\ref{eqn0b}) would not contain
any mechanism to violate perfect nesting. Indeed, the simple coordinate transformation $k_\parallel \mapsto k_\parallel + v^{-1}b_3k_\perp^3$ illustrates that there would effectively be no curvature effects in the resulting physics so that, inevitably, we
would find a finite-temperature Peierls instability towards SDW for any externally applied pressure. In contrast, the quartic-order term clearly breaks nesting and, since $b_4\propto t_b'$, is an immediate consequence of the presence of the pressure-driven $t_b'$-warping term in Eq.~(\ref{eqn0}). With the range of transverse momenta~$k_\perp$ approximately between~$-\pi/(2b)$ and~$\pi/(2b)$, we find that the fourth-order term becomes important as soon as~$t_b'/t_b \sim 2/(3\pi)$, which is fairly compatible with the critical value for~$t_b'$ given in Ref.~\onlinecite{sedeki-bourbonnais}. Increasing~$t_b'$ beyond its critical value reduces the Fermi surface
region for which the reduced spectra~(\ref{eqn0a}) and~(\ref{eqn0b}) are valid and thus sets a limit to the volume of the hot spots.
Looking for nontrivial effects such as the observed QCP, we need to enrich the model of noninteracting fermions by a proper model for the two-particle interaction. Previous works established that there are three relevant interaction channels in the Bechgaard salts: backward scattering (with the coupling constant~$g_1$), forward scattering ($g_2$), and Umklapp scattering ($g_3$) \cite{sedeki-bourbonnais}. The RG studies have shown that the superconducting fluctuations lead to a drastic decrease of the coupling constants~$g_1$ and~$g_2$, due to the interplay of Cooper and SDW fluctuations at the nesting points, whereas the Umklapp coupling constant~$g_3$ remains essentially unaffected. For the following QCP study, we represent the fermion-fermion interaction as mediated by a bosonic mode that becomes critical at the QCP. Following the insight of Ref.~\onlinecite{sedeki-bourbonnais}, we retain only the coupling constant $g_3$ as medium for the electron-paramagnon interaction. Our approach allows us to analyze
both thermodynamic and transport properties, but considering that $g_3$ is related to the Umklapp processes enables us to focus mainly on the resistivity behavior. We perform calculations neglecting vertex corrections to the one-loop self-energies ---
an approximation that we may expect to yield at least qualitatively the correct physical picture.
In a phenomenological low-energy picture, we may assume that after integrating out
all high-energy degrees of freedom, the effective interaction is mediated by long-wave
paramagnon modes. Here, we consider such a bosonic mode that transfers
a momentum of order~$\bold{Q}$, cf. Fig.~\ref{fig1}(a).
The coupling of the bosonic modes to the electrons generates a self-energy term~$\Pi_{\omega,\mathbf{q}}$ in the boson propagator~$\chi(\mathrm{i}\omega,\bold{Q}+\bold{q})$. In
the one-loop approximation, $\Pi_{\omega,\mathbf{q}}$ is given by the polarization bubble [see Fig.~\ref{fig1}(b)], whose relevant
nonanalytic part is
\begin{align}
\label{eqn1}
\Pi_{\omega,\bold{q}} &= \frac{g_3|q_\perp|}{4\pi^2v}\ \ln\bigg\{
\frac{(2b_4q_\perp^4)^2}{\omega^2+ \xi_{\mathbf{q}}^2}
\bigg\}
\end{align}
with $\xi_{\mathbf{q}}= vq_\parallel - b_3 q_\perp^3 + b_4 q_\perp^4$. Formula~(\ref{eqn1})
for the bosonic self-energy is valid for~$t_b'$ larger than the critical value~$\approx 25.4\ \mathrm{K}$, ensuring that the unnesting terms $b_4k_\perp^4$ in Eqs.~(\ref{eqn0a})--(\ref{eqn0b}) are active.
It is important to note that the presence of the~$k_\perp^4$-term prevents the polarization from producing mass terms containing logarithms in temperature and thus establishes
the existence of a QCP --- for subcritical values~$t_b' \lesssim 25.4\ \mathrm{K}$, the effective absence of fourth-order $k_\perp^4$-terms does lead to logarithmic $\ln T$-terms in the bosonic mass so that the phase transition towards an SDW state sets in at a finite temperature. As the remaining analytic part of the bosonic spectrum is generically an analytic function of~$\xi_{\mathbf{q}}^2$, we thus write the effective propagator for the para\-magnons as
\begin{align}
\label{eqn1a}
\chi(\mathrm{i}\omega,\bold{Q}+\bold{q}) &= \frac{1}{\mu + \alpha (\xi_{\mathbf{q}}/v)^2 + \Pi_{\omega,\bold{q}}}
\end{align}
with~$\alpha\sim 1$. The bosonic mass~$\mu$ (in this work we consider $\mu>0$) measures the distance to the QCP and, close to it, we should consider the limit~$\mu \rightarrow 0$. The logarithm present in the paramagnon propagator~(\ref{eqn1a}) is characteristic of a Peierls phase transition and all the anomalous behavior is ultimately due to this nonanalyticity.
\section{Conductivity}
Using the effective model for the hot-spot electrons coupled to critical paramagnons, we are in a position to investigate their transport properties. The relevant quantity is the retarded electron self-energy~$\Sigma^{R}(\varepsilon)$, which within the precision of the one-loop approximation is represented by the Feynman diagram in Fig.~\ref{fig1}(c). In the standard way for an itinerant electron QCP, the momentum dependence of the self-energy is negligible compared to the energy dependence
and the Matsubara self-energy is thus given by
\begin{align}
\Sigma(\mathrm{i}\varepsilon) = - g_3 T\sum_{\mathrm{i}\omega} \int \frac{\mathrm{d}\xi_\mathbf{q}}{2\pi v}
\frac{\bar{\chi}(\mathrm{i}\omega,\xi_\mathbf{q})}{\mathrm{i}(\varepsilon+\omega)-\xi_\mathbf{q}}
\end{align}
with $\bar{\chi}(\mathrm{i}\omega,\xi_\mathbf{q}) = \int(\mathrm{d}q_\perp/2\pi)\ \chi(\mathrm{i}\omega,\bold{Q}+\bold{q})\big|_{\xi_\mathbf{q}=\mathrm{const}}$. Performing the analytical continuation to real-time frequencies~$\varepsilon$, we obtain in the limit $\varepsilon\rightarrow 0$ for the imaginary part of the retarded self-energy in the electronic Green's functions the formula
\begin{align}
\mathrm{Im}\ \Sigma^{R}(\varepsilon) &\simeq \pi T\ \frac{\ln\big(p_F^{-2}\mu+ \varepsilon^2/\varepsilon_F^2\big)}{\ln\big(\varepsilon^2/\varepsilon_F^2\big)}
\ .
\label{2a03}
\end{align}
It shows that at criticality, $\mu=0$, the self-energy of hot-spot electrons is linear in temperature, $\mathrm{Im}\, \Sigma^{R}_{\mathrm{QCP}}(\varepsilon\!\rightarrow\! 0) = \pi T$, and independent of the coupling constants. As a straightforward consequence, the resistivity of the hot-spot electrons in the compound would be linear in~$T$ down to arbitrarily low temperatures. Away from the QCP, the finite bosonic mass $\mu$ suppresses the quantity~$\mathrm{Im}\, \Sigma^{R}(\varepsilon)$ for frequencies $|\varepsilon|\lesssim v\sqrt{\mu}$; for~$\varepsilon\!\rightarrow\! 0$ it tends logarithmically to zero. As a result, since the frequencies~$\varepsilon$ essential for the conductivity are of order~$T$, a linear law for the temperature dependence of the hot-spot resistivity appears only above the crossover temperature
\begin{align}
\label{2a04}
T_S \sim v \sqrt{\mu}\ .
\end{align}
At the QCP, clearly, $T_S=0$. Note that, since the limit~$T\gg T_S$ is essentially equivalent to the limit~$\mu\rightarrow 0$, the coefficient in front of~$T$ in the resistivity effectively does not depend
on the value of the bosonic mass~$\mu$, i.e. on the applied pressure that determines~$\mu$.
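To make the crossover explicit, a minimal numerical evaluation of the self-energy formula~(\ref{2a03}) can be performed as follows; this Python sketch is our own illustration, with the dimensionless variables $x=\varepsilon/\varepsilon_F$ and $m=\mu/p_F^2$ and all numerical values chosen for illustration only:
\begin{verbatim}
import numpy as np

def im_sigma_over_piT(x, m):
    # Im Sigma^R/(pi T) from the formula above,
    # with x = eps/eps_F and m = mu/p_F^2
    return np.log(m + x**2)/np.log(x**2)

x = np.logspace(-6, -2, 5)
print(im_sigma_over_piT(x, 0.0))    # identically 1: linear-in-T regime
print(im_sigma_over_piT(x, 1e-6))   # tends (logarithmically) to zero
                                    # once x**2 << m, i.e. below T_S
\end{verbatim}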
We turn now towards a dichotomic description of the transport properties \cite{hublina-rice} of the compound in terms of hot-spot and cold-spot regions on the Fermi surface, thereby providing a simple model against which to test the experimental data of (TMTSF)$_2$PF$_6$ from Ref.~\onlinecite{doiron-leyraudSC}. The conductivity~$\sigma(T)$ is the sum of contributions from the entire Fermi surface. Treating separately the contributions from the hot spots (with the Fermi surface volume fraction~$v_h$) and those due to the cold regions (volume $1- v_h$), we write~$\sigma(T)$ as the sum
\begin{align}
\sigma(T)= \frac{v_h}{\rho_0 + \rho_{\mathrm{hot}}(T) } + \frac{1-v_h}{\rho_0 + \rho_{\mathrm{cold}}(T)}\ .
\label{dichotomy}
\end{align}
In circuit language, see the inset of Fig.~\ref{fig2}, this formula corresponds to the parallel arrangement of the resistances due to the hot and cold regions of the Fermi surface, while each of the two resistances is viewed as a series combination of the residual resistance and a specific temperature-dependent one. The residual resistivity~$\rho_0$ is experimentally given by the $T \rightarrow 0 $ limit and is the result of elastic scattering processes. Guided by the preceding theoretical considerations, we specify in the following the form of the temperature-dependent resistivities~$\rho_{\mathrm{hot}/\mathrm{cold}}(T)$ and their underlying scattering processes in the hot and cold regions.
For the cold regions, we may for all temperatures assume the quadratic law $\rho_{\mathrm{cold}}(T) = B T^2$ accounting for the electron-electron scattering processes typical of the metallic behavior.
At a sufficiently high temperature $T > T_0$, the notion of cold and hot regions is irrelevant so that we may expect
the same law also in the hot regions, $\rho_{\mathrm{hot}}(T) = B T^2$. Lowering the temperature, we encounter at a temperature~$T_0$ the crossover into the quantum critical regime. Here, (Umklapp) scattering of hot conduction electrons through the quantum critical paramagnons leads according to the preceding analysis to a linear law~$\rho_{\mathrm{hot}}(T) = A T$, cf. Eq.~(\ref{2a03}). Below a second crossover temperature $T_S$, Eq.~(\ref{2a04}),
the linear resistivity is suppressed and one should again expect a Fermi-liquid like behavior,
$\rho_{\mathrm{hot}}(T)= C T^2$, though with an effective quasi-particle mass heavily renormalized by
the interaction with paramagnons close to criticality. At the QCP, $T_S=0$ so that the linear law for~$\rho_{\mathrm{hot}}(T)$ prevails down to zero temperature while at very high pressure, the differentiation between hot and cold regions is no longer valid so that we expect $C \rightarrow B$ and the critical window between~$T_S$ and $T_0$ to shrink to zero. Table~\ref{tbl: summary} summarizes the temperature laws for the three regimes.
\begin{table}[t]
\caption{\label{tbl: summary} Temperature dependencies of the resistivities of hot and cold electrons.}
\begin{tabular}{c|cc}
\noalign{\smallskip}
\hline\hline
\multirow{2}{*}{temperature region} & \multicolumn{1}{c|}{\multirow{2}{*}{$\rho_\mathrm{hot}(T)$}} & \multirow{2}{*}{$\rho_\mathrm{cold}(T)$} \\
&\multicolumn{1}{c|}{ }&\\
\hline
\multirow{2}{*}{$T>T_0$} & \multicolumn{2}{c}{\multirow{2}{*}{$BT^2$}} \\
&&\\ \cline{2-3}
\multirow{2}{*}{$\qquad T_S<T<T_0\qquad$} &
\multicolumn{1}{c|}{\multirow{2}{*}{$\qquad AT\qquad$}} & \multirow{2}{*}{$\qquad BT^2\qquad$} \\
&\multicolumn{1}{c|}{ }&\\
\multirow{2}{*}{$T<T_S$} & \multicolumn{1}{c|}{\multirow{2}{*}{$CT^2$}} & \multirow{2}{*}{$BT^2$} \\
&\multicolumn{1}{c|}{ }&\\
\hline\hline
\end{tabular}
\end{table}
\section{Comparison with experiments}
Our dichotomic conductivity model
suggests a three-step analysis of the transport data on (TMTSF)$_2$PF$_6$ \cite{doiron-leyraudSC}: In the first step, the coefficient~$B$ is fixed from the quadratic resistivity law $\rho_0 +B T^2$ at high temperatures ($\sim 30\ \mathrm{K}$). For the residual resistivity~$\rho_0$, the zero-temperature extrapolation of the experimental data, we have
used the same values as in Ref.~\onlinecite{doiron-leyraudSC}. Then, we extract
in the second step the critical regime at intermediate temperatures where a significant linear temperature contribution is observed. Fitting here the data to the formula~(\ref{dichotomy}) written for temperatures $T_S<T<T_0$, we find the coefficient~$A$ and the hot-spot volume fraction~$v_h$. Note that $B$ has already been fixed and thus is not a free fitting parameter.
At the same time, our theory predicts that~$A$ is pressure-independent. Thus once~$A$ is determined for one pressure, e.g. the one at which the linearity in~$T$ prevails down to lowest temperatures, the only remaining free fitting parameter is $v_h$.
In the final third step, we similarly use the low-temperature ($T<T_S$) form of Eq.~(\ref{dichotomy}) to determine the coefficient~$C$. Within the philosophy of the resistor model, $A$ and $C$ are constants as a function of temperature inside the temperature regime they appear in. This ensures that the regimes are properly defined according to Eq.~(\ref{dichotomy}) and Table~\ref{tbl: summary}. Theoretically, we may expect logarithmic corrections, see Eq. (\ref{2a03}), but when comparing with experiments, these are fairly approximated by constants. Finally, we determine the crossover temperatures~$T_S$ and~$T_0$ as the intersections of the fits found for each regime.
\begin{figure}[h]
\centerline{\includegraphics[width=0.8\linewidth]{fig02.eps}}
\caption{(Color online) Pressure dependence of the model parameters obtained from the fit: $10\times B$ (diamonds, in $\mu\Omega\ \mathrm{cm}/\mathrm{K}^2$), $C$ (squares, in $\mu\Omega\ \mathrm{cm}/\mathrm{K}^2$), and $v_h$ (circles) compared
to the temperature of the superconducting
transition~$T_c$ (triangles, in $\mathrm{K}$) from Ref.~\onlinecite{doiron-leyraudSC}. Inset: ``equivalent circuit'' for the dichotomic conductivity model~(\ref{dichotomy}).}
\label{fig2}
\end{figure}
Figure~\ref{fig2} shows the pressure dependence of~$B$, $C$, and $v_h$ as a result
of the analysis of $\sigma(T)$ between $0.15$ and $34\ \mathrm{K}$ at seven different pressures according to the three-step fitting procedure discussed above.
The analysis confirms that the contribution linear in~$T$ is indeed well-described by a pressure-independent coefficient~$A$ which, if treated as a free fitting parameter, scatters mildly around $A=0.38\ \mathrm{\mu \Omega\ cm/K}$.
The coefficient $B$ is related to the effective mass in the cold regions, $B\sim m_{\parallel}^{2}$.
Its slight decrease under pressure in Fig.~\ref{fig2} can be ascribed to the increase of the intermolecular in-chain overlap, possibly enhanced by correlations. The data are also in agreement with the pressure dependence of the spin susceptibility measured by NMR experiments \cite{Wzietek93}.
The coefficient $C$ describing the increase of the effective electron mass at hot spots is roughly constant but its size is ten times larger than the order of magnitude of~$B$. The fact that such an enhancement does not fade away under pressure indicates that even under $21.8\ \mathrm{kbar}$ the scattering off antiferromagnetic spin fluctuations is still very strong. Under a pressure of $11.8\ \mathrm{kbar}$, corresponding to the point closest to the QCP, no quadratic law could be observed down to the lowest temperature after superconductivity had been removed by the application of a small magnetic field along $c^{\star}$. This suggests that the quantum critical regime of linear resistivity extends down to temperatures very close to zero at this point.
The hot-spot volume~$v_h$, which close to the QCP is $v_h \approx 0.97$, is decreasing under pressure ($v_h \approx 0.30$ at $21.8\ \mathrm{kbar}$). This is in accordance with the intuitive physical picture that
the distance from the QCP enhances unnesting and thus reduces the effective size of the hot spots. Its value remarkably follows $T_c$, in agreement with earlier findings \cite{doiron-leyraudSC}.
\begin{figure}[h]
\centerline{\includegraphics[width=0.9\linewidth]{fig03.eps}}
\caption{(Color online) (pressure$-$temperature) phase diagram of (TMTSF)$_2$PF$_6$ displaying
the crossover temperatures~$T_S$ and $T_0$ from this analysis as well as the long-range order phases. The lines are guides to the eye.
The transition temperatures towards~SDW (triangles) and~SC (circles) are those from Ref.~\onlinecite{doiron-leyraudSC}.
}
\label{fig3}
\end{figure}
Both crossovers $T_0$ and $T_S$ are plotted \textit{versus} pressure in Fig.~\ref{fig3}. $T_S$ is strongly suppressed in the vicinity of the QCP, in fair agreement with Eq.~(\ref{2a04}). It is to be noted that while the hot-spot contribution to conductivity is dominant at the pressure of $11.8\ \mathrm{kbar}$ close to the QCP, the presence of the cold regions is crucial to explain the pressure decrease of the resistivity at a fixed temperature. Indeed, as discussed above, increasing the distance to the QCP induces a decrease of $v_h$, thus favoring the conduction through the cold regions at larger pressures. In the language of the equivalent circuit
(see inset of Fig.~\ref{fig2}), the less-resistive cold regions short-circuit the larger hot-spot resistance, which is linear in~$T$ for $T_S<T<T_0$.
\section{Conclusion}
In conclusion, we present a theory of a QCP associated with the Peierls-type singularity. This theory is nontrivial, as the role of the curvature is preponderant in stabilizing the logarithmic divergences and exerts a strong influence on the form of the crossovers. The organic Bechgaard salts constitute an almost perfect model system, the simplicity of their band structure allowing us to test the curvature effects.
Within a hot-spot/cold-spot dichotomic conductivity model for the itinerant electron QCP,
we confront the critical theory with the experimental data obtained in transport measurements for (TMTSF)$_2$PF$_6$,
showing a good agreement. At the hot spots, the physics of the Bechgaard salts shows strong similarity with the physics in heavy-fermion systems.
\section*{Acknowledgements}
We acknowledge a fruitful cooperation with the team at Sherbrooke where the data in Ref.~\onlinecite{doiron-leyraudSC} were obtained. We acknowledge very useful discussions with K. B. Efetov and S.S. Brazovskii. H. M. acknowledges financial support from the SFB/TR~12 of the Deutsche Forschungsgemeinschaft and is grateful for the Chaire Blaise Pascal Fellowship of K.B. Efetov, which enabled his extended visit to the IPhT at the CEA-Saclay.
\section{Introduction}
\label{sec1}
The stated goal of the relativistic heavy-ion programs at CERN and BNL
is the study of the phase diagram of strongly interacting matter at
high temperatures and densities and the search for the quark-gluon
plasma (QGP). The discussion of a phase diagram requires thermodynamic
language. A phase transition from an initial color-deconfined QGP to a
color-confined hadronic state (as it is supposed to occur in heavy-ion
collisions) can only be reasonably well defined if the system under
study is in a state of approximate local thermodynamic equilibrium.
The application of thermal and hydrodynamic models to relativistic
heavy-ion data is therefore more than a poor man's approach to
heavy-ion dynamics, it is rather a necessity for everybody who wants
to convince himself and others that we succeeded in creating the
quark-gluon plasma and observing the phase transition accompanying
its hadronization.
Of course, the models may fail; in fact, they must {\em necessarily}
fail beyond a certain level of detail when applied to heavy-ion data.
The reason is obvious: the collision systems are small, causing
corrections to the infinite volume limit usually assumed in the
thermodynamic approach, and they undergo a strong dynamical evolution on
time scales which are comparable to the microscopic thermalization time.
Thermal models therefore can never provide more than a rough picture of
the bulk of the phenomena, good for qualitative answers; on a more
detailed and quantitative level, the failure of the thermal model will
become manifest, and traces of genuine QCD {\em dynamics} (as opposed
to {\em thermodynamics}) will show up. But when trying to assess
{\em bulk} phenomena like QGP formation, we are not (in fact, we
{\em must} not be) primarily interested in these deviations
from thermodynamic behaviour and the traces of elementary QCD dynamics;
the latter can be studied much more easily and cleanly in elementary
lepton or hadron collisions. We should rather concentrate on the
rough global pattern of the data and try to understand them within a
(nota bene: sufficiently sophisticated, see below) thermo- and
hydrodynamic approach. On the other hand, if it turns out that not
even the rough qualitative features of the data can be understood in
this way and detailed hadronic dynamics is required even for a
superficial understanding of the observations, then we should concede
that our attempt to create ``hot QCD matter'' has failed.
Having argued in favor of a ``simple'' thermal approach to heavy-ion
data, the next questions to be addressed are (i) the level of
sophistication which the thermal model should have before being
applicable to the description of particle production in nuclear
collisions, (ii) which level of agreement between model and data
can at best be expected, and (iii) where to draw the line between
agreement and disagreement when comparing model and data. This is
what this contribution is about. At the end I will try to draw some
conclusions about what we have learnt so far from the thermal
analysis of heavy-ion data, and which further steps should be taken.
\section{Two types of ``thermal'' behaviour}
\label{sec2}
``Thermal'' behaviour can arise in {\em conceptually different}
ways, with different meanings of the ``temperature'' parameter $T$.
For us the two most important variants of ``thermal'' behaviour are the
following:
{\bf 1.} The {\em statistical occupation of hadronic phase-space} with
minimum information. The latter is in practice provided by external
constraints on the total available energy $E$, baryon number $B$,
strangeness $S$ and, possibly, a constraint $\lambda_s$ on the overall
fraction of strange hadrons. ``Thermal'' behaviour arises in this case
via the {\em Maximum Entropy Principle} in which the ``temperature'' $T$
and ``fugacities'' $e^{\mu_b/T}$, $e^{\mu_s/T}$ (which in the canonical
approach are replaced by so-called ``chemical factors''
\cite{becattini,BH97}) occur as Lagrange multipliers for the
constraints. Examples are nucleon emission from an evaporating
compound nucleus in low-energy nuclear physics and hadronization in
$e^+e^-$, $pp$ and $p\bar p$ collisions (hadron yields
\cite{becattini,BH97} and $m_\perp$-spectra~\cite{sollfrank}). The
number of parameters to fit the data in such a situation is equal
to the number of ``conserved quantities'' (constraints), and it
reflects directly the information content of the fitted observable(s).
This type of ``thermal'' behaviour requires {\em no} rescattering and
{\em no} interactions among the hadrons, there is no isotropic pressure
and no collective flow in the hadronic final state and, in fact, the
concept of {\em local} equilibrium can {\em not} be applied. Of course,
this type of ``thermal'' behaviour is not really what we are interested
in in heavy ion collisions, except as a baseline against which to
differentiate interesting phenomena.
{\bf 2.} Thermalization of a non-equilibrium initial state by {\em
kinetic equilibration} (rescattering). This does require (strong!)
interactions among the hadrons. Here one must differentiate between
{\em thermal} equilibration (reflected in the shape of the momentum
spectra), which defines the temperature $T$, and {\em chemical}
equilibration (reflected in the particle yields and ratios) which
defines the chemical potentials in a grand canonical description. The
first is driven by the {\em total} hadron-hadron cross section while
the second relies on usually much smaller {\em inelastic} cross
sections and thus happens more slowly. This type of equilibrium is
accompanied by pressure which drives collective flow (radial expansion
into the vacuum as well as directed and elliptic flow in non-central
collisions). In heavy ion collisions it is realized {\em at most
locally}, in the form of local thermal and/or chemical equilibrium --
due to the absence of confining walls there is never a global equilibrium.
This is the type of ``thermal'' behaviour which we are searching for in
heavy-ion collisions.
I stress that {\em flow} is an unavoidable consequence of this type of
equilibration. Thermal fits without flow to hadron spectra are not
consistent with the kinetic thermalization hypothesis. Flow contains
information; it is described by three additional fit parameters
$\vec{v}(x)$. This information is related to the pressure history in
the early stages of the collision and thereby (somewhat indirectly) to
the equation of state of the hot matter.
Most thermal fits work with global parameters $T$ and $\mu$ which, at
first sight, appears inconsistent with what I just said. Here the
role of freeze-out becomes important: freeze-out cuts off the
hydrodynamical evolution of the thermalized region via a kinetic
freeze-out criterion \cite{freeze} which involves the particle
densities, cross sections and the expansion rate. In practice freeze-out
may, but need not occur at nearly the same temperature everywhere
\cite{freeze}.
Clearly a thermal fit to hadron production data (if it works) is not
the end, but rather the beginning of our understanding. One
must still check the {\em dynamical consistency} of the fit
parameters $T_f$, $\mu_f$, $\vec{v}_f$: can one find equations of
state and initial conditions which yield such freeze-out parameters?
Which dynamical models can be excluded?
\section{The hadronic phase diagram}
\label{sec3}
In figure \ref{figure1} I show a recent version of the phase diagram
for strongly interacting matter, with various sets of data points
included \cite{S97,Hqm97,BMSqm97,Mqm97,CR98}. In the present section
I discuss the meaning of this figure and explain how these data points
were obtained. In the next section I will discuss some problems connected
with the extraction procedures.
\begin{figure}[t]
\caption[]{Compilation of freeze-out points from SIS to SPS energies.
Filled symbols: chemical freeze-out points from hadron abundances. Open
symbols: thermal freeze-out points from momentum spectra and
two-particle correlations. For each system, chemical and thermal
freeze-out were assumed to occur at the same value of $\mu_B/T$. The shaded
region indicates the parameter range of the expected transition to a QGP.
\label{figure1}}
\begin{indented}
\item[]\hspace{0cm}\epsfxsize 12cm \epsfbox{sqm98f1.ps}
\end{indented}
\end{figure}
\subsection{Chemical freeze-out}
\label{sec3a}
\subsubsection{$e^+e^-$ and $pp$ collisions.}
\label{sec3a1}
Let me begin with the $e^+e^-$ data point in figure \ref{figure1}. (There
is also a $pp$ point from Ref.~\cite{BH97} which was omitted for clarity.)
In spite of what I said about case {\bf 1.} above, a ``thermal''
analysis of hadron yields in elementary collisions \cite{becattini,BH97}
is still interesting. (Of course, in this case the canonical formalism
must be used, due to the small collision volume.) The interest arises
{\em a posteriori} from the observed universality of the fit parameters,
namely a universal ``hadronization'' or ``chemical freeze-out'' temperature
$T_{\rm chem} = T_{\rm had} \approx 170$ MeV (numerically equal to the old
Hagedorn temperature $T_{\rm H}$ and consistent with the
inverse slope parameter of the $m_T$-spectra in $pp$ collisions
\cite{sollfrank}), and a universal strangeness fraction
$\lambda_s =$ $2\langle\bar s s\rangle/(\langle\bar u u \rangle +
\langle\bar d d\rangle)|_{\rm produced}$ $\approx 0.2{-}0.25$, almost
independent of $\sqrt{s}$ \cite{becattini,BH97,BGS98}.
This is most easily understood~\cite{BH97} in terms of a universal
critical energy density $\epsilon_{\rm crit}$ for hadronization which,
via the Maximum Entropy Principle, is parametrized by a universal
``hadronization temperature'' $T_{\rm had}$ and which, according to
Hagedorn, characterizes the upper limit of hadronic phase-space.
Supporting evidence comes from the observed increase with $\sqrt{s}$
of the fitted fireball volume $V_f$ (which accommodates the increasing
multiplicities and widths of the rapidity distributions). Although
higher collision energies result in larger {\em initial} energy
densities $\epsilon_0$, the collision zone subsequently undergoes more
(mostly longitudinal and not necessarily hydrodynamical) expansion until
$\epsilon_{\rm crit}$ is reached and hadron formation can proceed. The
systematics of the data can only be understood if hadron formation at
$\epsilon{>}\epsilon_{\rm crit}$ (i.e. $T{>}T_{\rm H}$ for the
corresponding Lagrange multipliers) is impossible. With this
interpretation, the chemical analysis of $e^+e^-$, $pp$ and $p\bar p$
collisions does provide one point in the $T$-$\mu_b$ phase diagram (see
figure \ref{figure1}). -- The only ``childhood memory'' of the collision
system is reflected in the low value of $\lambda_s$, indicating suppressed
strange quark production (relative to $u$ and $d$ quarks) in the early
pre-hadronic stages of the collision.
\subsubsection{$AA$ collisions: strangeness enhancement.}
\label{sec3a2}
In this light the observation \cite{S97} of a chemical freeze-out
temperature $T_{\rm chem} \approx T_{\rm H} \approx 170$ MeV in heavy-ion
collisions with sulphur beams at the SPS (figure \ref{figure1}), taken
by itself, is not really interesting. It suggests that in heavy-ion
collisions hadronization occurs via the same statistical hadronic
phase-space occupation process as in $pp$ collisions. What {\em is}
interesting, however, is the observation \cite{BGS98,G96} that the global
strangeness fraction $\lambda_s{\approx}0.4{-}0.45$ in $AA$ collisions
is about a factor 2 larger than in $e^+e^-$ and $pp$ collisions. If $pp$
and S+$A$ collisions hadronize via the same mechanism, and in S+$A$
collisions the Maximum Entropy particle yields fixed at $T_{\rm had}$
are not modified by inelastic hadronic final state rescattering, this
increase in $\lambda_s$ must reflect a {\em difference} in the properties
of the {\em prehadronic} state! In nuclear collisions the prehadronic
stage allows for more strangeness production, most likely due to a
longer lifetime before hadronization.
It was noted before \cite{BGS98,G96} that the global strangeness
enhancement occurs already in collisions between medium size nuclei
(S+S) and remains roughly unchanged in Pb+Pb collisions. In this
conference we saw data from the WA97 collaboration \cite{WA97,Lietava,Evans}
which provide two important further details:
1. While the bulk of the strangeness enhancement from $p$+Pb to Pb+Pb
collisions is carried by the kaons and hyperons ($\Lambda$, $\Sigma$),
which are enhanced by about a factor 3 near midrapidity, the enhancement
is much stronger for the doubly and triply strange baryons $\Xi$ and
$\Omega$ and their antiparticles, with an enhancement factor of about
17 (!) for $\Omega+\bar\Omega$ at midrapidity. The enhancement clearly
scales with the strangeness content \cite{Lietava}, as naively expected
in statistical and thermal models, but in stark contradiction to
expectations based on the consideration of the respective production
thresholds in hadronic (re)interactions.
2. In semicentral Pb+Pb collisions the enhancement grows linearly with
the number of participating nucleons in the collision, so the enhanced
yield of all measured strange hadron species per participating nucleon
is {\em independent of the effective size of the colliding system} from
about 150 to 400 participants \cite{WA97}. Where comparison is possible,
this systematics even carries over to central S+S collisions \cite{Evans}
with as few as 55-60 participating nucleons. So whatever causes the
enhancement in Pb+Pb collisions (e.g. the existence of a color-deconfined
prehadronic stage) is not particular to (semi)\-cen\-tral Pb+Pb collisions,
but exists already in S+S collisions! I will return to the
$A$-independence of this effect below.
At most half of the 100\% increase of global strangeness production
in $AA$ collisions can be explained \cite{SBRS98} by the removal of
canonical constraints in the small $e^+e^-$ and $pp$ collision
volumes (which would be an interesting observation in itself
because it would already imply that in nuclear collisions hadron
production and the conservation of quantum numbers occurs no longer
on nucleonic, but indeed on nuclear length scales). The remainder of
the increase must be due to extra strangeness production in the whole
fireball volume {\em before} hadronization. It is interesting to analyze
in the same way the strong $\Omega+\bar\Omega$ enhancement in Pb+Pb
(by a factor 17 relative to $p$+Pb \cite{Lietava}): of course, the $\Omega$
(carrying 3 units of strangeness) suffers a particularly strong canonical
suppression due to exact strangeness conservation in the small hadronization
volume of a $pp$ (or $p$+Pb) collision; for $T_{\rm had}\simeq 170$ MeV,
$\gamma_{\rm s}=0.5$ and $V=17.6$ fm$^3$ (as obtained by fitting $pp$
data at $\sqrt{s}=27$ GeV \cite{BH97}) $\Omega+\bar\Omega$ are
canonically suppressed by a factor 12 relative to a grand canonical
treatment \cite{Jocklpriv}. Again the observed enhancement effect in
Pb+Pb collisions is considerably larger than expected from a simple
removal of the canonical constraints.
The observed strangeness fraction $\lambda_s=0.45$ in nuclear collisions
corresponds to a strangeness saturation coefficient \cite{R91}
$\gamma_s\approx 0.7$ \cite{BGS98}. On the other hand, a value of
$\gamma_s \approx 0.7$ in the hadronic final state may, in fact, be
the upper limit reachable in heavy-ion collisions \cite{SBRS98}
because the corresponding strangeness fraction agrees with that
in a fully equilibrated QGP at $T_{\rm had} \approx 170$ MeV. If
both strangeness and entropy are conserved or increase similarly
during hadronization, $\gamma_s\approx 0.7$ in the Maximum Entropy
particle yield after hadronization would be a universal consequence
of a fully thermally and chemically equilibrated QGP before
hadronization \cite{SBRS98}. The SPS data would then be completely
consistent with such a prehadronic state.
The existence of a prehadronic stage without color confinement, both in
S+S and Pb+Pb collisions at the SPS, is also suggested by an analysis of
baryon/antibaryon ratios of different strangeness. This was stressed
at this conference by A. Bialas \cite{B98} who redid this analysis
with the new data following the ideas of Rafelski \cite{R91}.
According to figure \ref{figure1} chemical freeze-out in sulphur-induced
collisions at the SPS appears to occur {\em right at the critical line},
i.e. immediately after hadronization. The SIS data, on the other hand,
indicate much lower chemical freeze-out temperatures. The origin of this
is probably due to longer lifetimes of the reaction zone at lower beam
energies, allowing for some chemical re-equilibration by inelastic hadronic
reactions.
A tendency for some chemical re-equilibration after the hadronization of
the proposed pre-hadronic stage may also be visible in the still
preliminary Pb+Pb data at the SPS: although thermal model analyses of
these data still give wildly scattering results \cite{BGS98,BMSqm97,LR98},
some authors \cite{LR98} find chemical freeze-out temperatures in Pb+Pb
below 140 MeV. A thermal model analysis of RQMD simulations also gives
chemical freeze-out temperatures of 172 MeV in S+S, but of only 155 MeV
in Pb+Pb collisions \cite{SHSX98}. Both analyses show a characteristic
failure to reproduce the $\Omega$ and $\bar\Omega$ yields \cite{WA97}.
This was interpreted \cite{SHSX98,HSX98} in terms of early freeze-out
of these triply strange baryons due to their small interaction cross
sections with other types of hadrons. It is interesting to observe that
in the thermal analysis of the data \cite{LR98} the model {\em underpredicts}
the measured $\Omega$ and $\bar\Omega$ yields (which prefer a higher
freeze-out temperature $T\geq 170$ MeV \cite{BMSqm97,BGS98}) while in
RQMD, which is known to produce too few $\Omega$ and $\bar \Omega$
baryons in $pp$ and $pA$ collisions, the thermal model {\em overpredicts}
the simulated yields. All this points to the $\Omega$ and $\bar\Omega$ as
relatively early hadronic messengers, and the message they seem to carry
in the Pb+Pb {\em data} is again that of the existence of a prehadronic
stage with enhanced global strangeness which hadronizes statistically at
$T_{\rm had} = T_{\rm H} \simeq 170$~MeV.
\subsection{Flow and thermal freeze-out}
\label{sec3b}
The other important observation in the hadronic sector of nuclear
collisions is that of collective flow (radial expansion flow, directed
and elliptical flow). It is usually extracted from the shape of the
single-particle momentum distributions. Radial flow, for example,
leads to a flattening of the $m_\perp$-spectra. For the analysis one
must distinguish two domains. In the relativistic domain
$p_\perp{\gg}m_0$ the inverse slope $T_{\rm app}$ of all particle
species is the same and given by the blueshift formula \cite{freeze}
$T_{\rm app}{=}T_f \sqrt{(1{+}\langle v_\perp\rangle)/(1{-}\langle
v_\perp\rangle)}$. This formula does not allow to disentangle the
average radial flow velocity $\langle v_\perp\rangle$ and freeze-out
temperature $T_f$. In the non-relativistic domain $p_\perp{\ll}m_0$
the inverse slope is given approximately by $T_{\rm app}{=}T_f{+}m_0
\langle v_\perp^2 \rangle$, and the rest mass dependence of the
``apparent temperature'' (inverse slope) allows to determine $T_f$ and
$\langle v_\perp^2 \rangle$ separately. (In $pp$ collisions no
$m_0$-dependence of $T_{\rm app}$ is seen \cite{NuXu}.) Plots of
$T_{\rm app}$ against $m_0$ were shown in several talks at this
conference, showing that the data follow this systematics very nicely,
from SIS to SPS energies.
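To illustrate how the non-relativistic systematics is used in practice, the following Python sketch extracts $T_f$ and $\langle v_\perp^2\rangle$ from the inverse slopes of two particle species; the slope values are invented for illustration, only the relation $T_{\rm app}=T_f+m_0\langle v_\perp^2\rangle$ is taken from the text:
\begin{verbatim}
import numpy as np

m     = np.array([0.14, 0.94])    # pion and proton masses (GeV)
T_app = np.array([0.145, 0.175])  # hypothetical inverse slopes (GeV)

# T_app_i = T_f + m_i*<v_perp^2>: two equations, two unknowns
M = np.stack([np.ones_like(m), m], axis=1)
T_f, v2 = np.linalg.solve(M, T_app)
print(T_f, v2)   # here ~0.140 GeV and <v_perp^2> ~ 0.04
\end{verbatim}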
A notable exception are the $\Omega$-spectra of WA97 \cite{WA97}
which are steeper than expected from this formula. Again, as in the
above discussion of their abundance, this reflects their character
as ``early hadronic messengers'' \cite{HSX98}: the $\Omega$ and
$\bar \Omega$ are the only baryons which (due to quantum number mismatch)
do not have a strong resonance with pions, the most abundant particles
in the fireball. Since resonance scattering is the most efficient
thermalization mechanism in a dense hadronic system, the $\Omega$ and
$\bar \Omega$ momentum distributions freeze out earlier than those of
all other baryons. This implies \cite{HSX98} that they cannot efficiently
pick up the collective transverse flow which builds up among the pions
in the later stages of the expansion, and their spectra reflect the much
weaker collective transverse flow in the early collision stages,
just after hadron formation.
[This also illustrates the important role of the baryon contamination
in the hot fireball: in Pb+Pb collisions at the SPS the pion and baryon
spectra decouple late and cool down to rather low temperatures of about
120 MeV (see below) because the pions are ``glued'' together by the
baryons via resonance scattering. At RHIC this glue will be less efficient
since near midrapidity there will essentially exist only baryon-antibaryon
pairs at rather low thermal equilibrium abundances. It is thus expected
that at RHIC thermal decoupling occurs at somewhat higher freeze-out
temperatures, closer to the hadronization phase transition.]
The separation of collective flow and random thermal motion from an
analysis of single particle spectra is not uncontroversial. The main
reason is that the fitted values for $T$ and $v_\perp$ tend to be
strongly correlated. To break the correlation one must study spectra
of hadrons with different masses in the low-$p_\perp$ region which,
on the other hand, is contaminated by post-freeze-out resonance decays.
As a consequence, fits done in different $p_\perp$ windows tend to give
different results.
A clearer determination of the transverse collective flow comes
from a direct measurement of the flow-induced space-momentum
correlations via the $M_\perp$-dependence of the two-pion HBT radii
\cite{H96}. As shown in \cite{NA49HBT,Wiedemann} the correlations between
temperature and transverse flow in a fit of the single particle transverse
momentum spectra and of the transverse two-particle HBT radii are essentially
orthogonal to each other (see figure \ref{figure2}), and the combined
analysis of spectra and correlations allows for a clean separation of
random thermal motion from collective flow.
\begin{figure}[t]
\caption[]{Thermal freeze-out temperature and transverse flow
velocities extracted from fits to the transverse momentum spectra
of negative hadrons ($h^-$) and deuterons (d) and to the transverse
HBT radius ($2\pi$-BE). The shaded area indicates the overlap region
near $T_{\rm f.o.} \approx 120$ MeV and $v_\perp \approx 0.55$ $c$.
(From Ref.~\protect\cite{NA49HBT}.)
\label{figure2}}
\begin{indented}
\item[]\hspace{0cm}\epsfxsize 12cm \epsfbox{sqm98f2.ps}
\end{indented}
\end{figure}
This analysis \cite{NA49HBT,Wiedemann} gave rise to the open circle for
the SPS data in figure \ref{figure1}, indicating the point of thermal
decoupling in 158 $A$ GeV/$c$ Pb+Pb collisions ($T_{\rm therm} \approx
120$ MeV, $\langle v_\perp \rangle \approx 0.5\, c$). It is consistent
with a comparison of the spectral slopes for different mass hadrons
by K\"ampfer \cite{kaempfer}. Earlier analyses of S+S data \cite{SSH93}
showed a thermal decoupling at $T_{\rm therm} \approx 140-150$ MeV,
$\langle v_\perp \rangle \approx 0.25-0.35\, c$. The open circle for the
AGS as well as the open quadrangles for the SIS in figure \ref{figure1}
were obtained similarly as in \cite{kaempfer,SSH93} by comparing
transverse momentum spectra of particles with different masses
(see \cite{BMSqm97,Mqm97} for references). All open symbols correspond
to heavy collision systems (Pb+Pb, Au+Au). The dashed line connects them
by eye in an attempt to construct a ``thermal freeze-out curve'' for
heavy-ion collisions of size 200+200.
It is interesting to analyze the $A$-dependence of the various hadronic
observables, i.e. of strangeness enhancement, chemical and thermal
freeze-out temperatures and radial transverse flow. This discussion should
include the available information on the impact parameter dependence
in Pb+Pb collisions since collisions at different impact parameters also
involve different numbers $N_{\rm part}$ of participating nucleons.
Whereas the freeze-out temperatures (certainly the thermal freeze-out
temperature, but perhaps also the chemical one, see above) seem to come
down with increasing size of the collision system, while at the same time
the strength of the radial transverse flow goes up, the strangeness
enhancement (i.e the strange particle production per participating
nucleon relative to $pp$ and $pA$ collisions) appears to be independent
of $N_{\rm part}$, at least in the range $60 \leq
N_{\rm part} \leq 400$. The buildup of radial flow is largely (although
not exclusively) a hadronic reinteraction phenomenon \cite{HSX98}; the
same is true for the freeze-out process. The $N_{\rm part}$-dependence
of both features can be explained in terms of the longer lifetime of the
reaction zone as $N_{\rm part}$ increases, giving the system more time
to re-equilibrate, cool down and develop collectivity.
In contrast, the $A$-independence of the strangeness enhancement
features suggests that they are not due to hadronic re-interactions,
but originate in a prehadronic phase with properties which are
essentially independent of the system size once $N_{\rm part} \geq 50$
or so. My interpretation of these facts is that at SPS energies the
energy density threshold for QGP has been overcome by a sizeable margin
\cite{Rio97}, and that even in small collision systems a deconfined phase
is created which interacts sufficiently long and sufficiently strongly
to approximately saturate strangeness production. One can even argue
that isotropic pressure (a signature of local thermalization) must be
present at this early stage \cite{Rio97}, and that the observed elliptic
flow in non-central Pb+Pb collisions \cite{NA49flow} actually signals
this early pressure \cite{private}. A future systematic investigation
also of the range $1 < N_{\rm part} < 100$ would be very useful to
study the onset and saturation of thermal and collective behaviour
as the size {\em and lifetime} of the collision system increases.
It will {\em not}, however, be an efficient method to study the onset of
deconfinement -- for that one must go to {\em lower} beam energies.
\section{Limitations of thermal model analyses}
\label{sec4}
After having explained how the data points in figure \ref{figure1} were
obtained, I would now like to ask the notorious David Mermin question
\cite{Mermin} ``What's wrong with this phase diagram?'' In other words,
I want to point out in more detail certain unavoidable problems with
thermal model analyses of heavy-ion data. Only by remaining conscious
of the limitations of the thermal approach and avoiding the
overinterpretation of uncontrollable details one can fully exploit
its power in providing essential qualitative insight into the physics
of heavy-ion collisions.
\subsection{Rapidity dependence of particle ratios}
\label{sec4a}
The first problem is of a purely practical nature: no heavy-ion experiment
so far has full $4\pi$ acceptance for identified particles, and particle
spectra are available in restricted windows of $p_\perp$ and $y$ only.
The observed nearly exponential form of the $p_\perp$-spectra allows for
an extrapolation of the yields to the full transverse momentum range
without introducing large uncertainties (at least if the acceptance
covers sufficiently low values of $p_\perp$). A similar extrapolation
of the rapidity spectra is not possible: even in a system which is in
perfect local thermal equilibrium, particles with different masses
tend to have strongly different rapidity distributions. In practice
these differences in the shape of the rapidity spectra are even stronger,
in the sense that even particles and antiparticles (with obviously identical
masses) have different rapidity distributions. Thus there is essentially
no way of extrapolating the rapidity distributions without really measuring
them.
That this is a serious problem for thermal model analyses is illustrated
by the following example: consider a stationary, spherical fireball in
global thermodynamic equilibrium. It emits hadrons with the following
rapidity distributions:
\begin{equation}
\label{rapid}
{dN_i\over dy} \sim e^{-m_i\cosh y/T} \left[ 1 + 2 {T\over m_i \cosh y}
+ 2 \left( {T\over m_i \cosh y} \right)^2\right]
\end{equation}
These resemble Gaussians with widths $\Gamma_i\approx 2.35\sqrt{T\over m_i}$
(the approximation being valid for $m_i \gg T$). Clearly, the particle
ratios $(dN_i/dy)/(dN_j/dy)$ then depend strongly on the position of
the rapidity interval $dy$: away from $y=0$ heavy particles will be
much more strongly suppressed relative to light particles than near
$y=0$. In this case a measurement of particle yields in a small
rapidity window is completely useless for a thermal model analysis
(irrespective of whether the window is located near $y=0$ or at $y\ne 0$)
unless it is {\em known a priori} that the radiator is a stationary
spherical fireball.
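To make the size of this effect concrete, here is a minimal numerical
sketch of eq.~(\ref{rapid}); the temperature $T=160$~MeV and the particle
masses are illustrative inputs, and all $y$-independent normalization
factors cancel in the double ratio computed at the end:
\begin{verbatim}
import numpy as np

T = 0.160                              # temperature in GeV (illustrative)

def dNdy(m, y):
    # eq. (rapid), up to a y-independent normalization per species
    x = T / (m * np.cosh(y))
    return np.exp(-m * np.cosh(y) / T) * (1 + 2*x + 2*x**2)

for name, m in [("pion", 0.140), ("kaon", 0.494), ("proton", 0.938)]:
    print(name, "FWHM ~", round(2.35 * np.sqrt(T / m), 2))

r = lambda y: dNdy(0.938, y) / dNdy(0.140, y)   # p/pi; constants cancel below
print("extra p/pi suppression at y=1.5 vs y=0: factor %.0f" % (r(0.0) / r(1.5)))
\end{verbatim}
The proton-to-pion ratio drops by more than two orders of magnitude
between the two windows, quantifying the statement above.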
The presence of (strong) longitudinal flow in relativistic heavy-ion
collisions does not help very much in this connection; only in the limit
of infinite beam energy with exact longitudinal boost-invariance due to
Bjorken scaling, resulting in flat rapidity distributions, is it possible
to base a thermal analysis on data in a finite, narrow rapidity interval.
At SPS energies and below, where Bjorken scaling is not observed, thermal
fits of data in finite rapidity windows require that the thermal model yields
are cut to the actual experimental acceptance; this induces serious
dependences on the detailed model assumptions, for example about the
strength and profile of the longitudinal and transverse flow of the source.
This model-dependence was recently studied in some detail in
Ref.~\cite{soll98}.
Flow effects largely drop out, however, if one works with particle
ratios obtained from $4\pi$ yields. (Please note that this requires
a {\em measurement} of some sort, not a blind extrapolation of data
from a small window in $y$ and $p_\perp$ to full momentum space!) The
insensitivity to hydrodynamic flow becomes exact if the freeze-out
temperature and chemical potentials are everywhere the same. If freeze-out
occurs on a sharp hypersurface $\Sigma$, the total yield of particle
species $i$ is then given by
\begin{equation}
\label{flow}
N_i = \int {d^3p\over E} \int_\Sigma p^\mu d^3\sigma_\mu(x)\, f_i(x,p)
= \int_\Sigma d^3\sigma_\mu(x) \, j_i^\mu(x)\,,
\end{equation}
where
\begin{equation}
\label{current}
j_i^\mu(x) = \int d^4p\, 2\theta(p^0)\delta(p^2-m_i^2)
\, p^\mu {g_i\over e^{[p{\cdot}u(x) - \mu_i]/T} \pm 1}
\end{equation}
is the number current density of particle species $i$. In thermal
equilibrium it is given by $j_i^\mu(x)=\rho_i(x)\, u^\mu(x)$ with
\begin{eqnarray}
\label{dens}
\rho_i(x) &=& u_\mu(x) j_i^\mu(x)
= \int d^4p\, 2\theta(p^0)\delta(p^2-m_i^2)\, p{\cdot}u(x) \,
f_i\bigl(p{\cdot}u(x);T,\mu_i\bigr)
\nonumber\\
&=& \int d^3p'\, f_i(E_{p'};T,\mu_i) = \rho_i(T,\mu_i).
\end{eqnarray}
Here $E_{p'}$ is the energy in the local rest frame at point $x$.
The total particle yield of species $i$ is therefore
\begin{equation}
\label{yield}
N_i = \rho_i(T,\mu_i) \int_\Sigma d^3\sigma_\mu(x) u^\mu(x)
= \rho_i(T,\mu_i)\, V_\Sigma(u^\mu)
\end{equation}
where only the total comoving volume $V_\Sigma$ of the freeze-out
hypersurface $\Sigma$ depends on the flow profile $u^\mu$. Thus the
flow pattern drops out from ratios of $4\pi$ yields which therefore
depend only on $T$ and the chemical potentials. These considerations
are easily generalized to ``fuzzy freeze-out'' (i.e. freeze-out from
a space-time volume rather than from a sharp hypersurface): as long as
$T$ and $\mu_i$ are the same everywhere, $4\pi$ particle ratios are
independent of the collective dynamics of the source.
For heavy-ion collisions at SPS energies and below one should therefore
perform a thermal analysis on $4\pi$-integrated yields and not on
particle ratios inside small rapidity windows. This requires a strong
experimental effort to measure the rapidity distributions of as many particle
species as possible over the full rapidity range.
\subsection{Non-constant thermodynamic parameters at freeze-out}
\label{sec4b}
The second, even more serious problem is the observation that in reality
freeze-out {\em does not happen} at constant temperature and chemical
potential. For example, it was shown in Ref.~\cite{Slotta95} that a
successful thermal description of the rapidity distributions of hadrons
created in 200 $A$ GeV S+S collisions (in particular the different
shapes of the rapidity distributions for $\Lambda$ and $\bar \Lambda$, $K^+$
and $K^-$) requires not only strong longitudinal flow, but also a baryon
chemical potential $\mu_i(\eta)$ which depends on the longitudinal
position in the fireball: the central rapidity region is baryon-poorer
than the target and projectile fragmentation regions. A second example
demonstrating this type of problem is the observed rapidity dependence of
the $p_\perp$-slopes of the $h^-$ and proton spectra \cite{Jacobs} and of
the $K_\perp$-slopes of the transverse HBT radius $R_\perp$
\cite{NA49HBT,Schoenf}. According to a simultaneous analysis (as
discussed in section \ref{sec3b} above) of spectra and correlations from
Pb+Pb collisions at the SPS by Sch\"onfelder \cite{Schoenf} the
decrease of the inverse slope parameters away from midrapidity must
be attributed to {\em both} a reduction of the transverse collective
flow {\em and} of the freeze-out temperature $T(\eta)$. If true, this
would speak against a constant freeze-out temperature.
In such a situation the thermal fit replaces the functions $T(\eta)$ and
$\mu_i(\eta)$ by suitably averaged values $\bar T$, $\bar\mu_i$ (see
Ref.~\cite{soll98} for a recent detailed study). Obviously the fit will
then not be perfect: particle yields from a system in perfect local
thermodynamic equilibrium (as, e.g., assumed in all hydrodynamic
simulations), but with {\em spatially varying} temperature and chemical
potentials, cannot be exactly recovered by a fit with {\em constant}
temperature and chemical potential. In practice, this does not appear to
be a very serious problem if one believes that the results from a recent
thermal model analysis of particle yields from a hydrodynamic simulation
\cite{soll98} are representative for realistic situations: the freeze-out
temperature reconstructed from the fit nearly coincided with the input
temperature at which the hydrodynamic evolution was stopped, and the fitted
chemical potentials agreed approximately with their average values across
the freeze-out surface. Nevertheless, small differences remain between
the real yields from the hydrodynamic simulation and the yields returned
by the thermal model fit. In a least-squares fit these {\em systematic}
deviations would lead to a value for $\chi^2$/d.o.f. which increases above
all limits as the statistical error of the simulated (``measured'')
yields is further and further reduced, even though the system was, by
construction, in perfect local thermal equilibrium.
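A toy sketch makes this explicit; the $\sim$5\% systematic model--data
offsets and the statistical errors below are arbitrary illustrative
numbers:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
true_yields = np.ones(10)
# fixed ~5% systematic offsets between the "true" yields and the fit model:
model = true_yields * (1 + 0.05 * rng.standard_normal(10))

for sigma_stat in (0.10, 0.03, 0.01):   # shrinking statistical errors
    chi2 = np.sum((true_yields - model)**2 / sigma_stat**2)
    print("sigma_stat = %.2f  ->  chi2/dof = %.1f"
          % (sigma_stat, chi2 / len(true_yields)))
\end{verbatim}
The same fixed systematic deviations produce a $\chi^2$/d.o.f.\ that
grows without bound as the statistical errors shrink.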
This illustrates that global thermal fits to heavy-ion data can never
be fully successful, due to the dynamics of the collision and its
intricate influence on the freeze-out process. For this reason one must
not expect too much from the thermal model -- a perfect fit with extremely
small $\chi^2$/d.o.f. is not necessarily a good sign, and often rather a
bad one, indicating accidental error correlations, e.g.\ due to the use
of redundant fit parameters.
This leaves us with the question where to draw the line between ``good''
and ``bad'' thermal model fits, between success and failure of a
thermodynamic description of relativistic heavy-ion collisions. The above
discussion should have made it clear that $\chi^2$/d.o.f. is not a good
criterion for answering this question. On the other hand, a fit which
reproduces particle yields which cover a range of more than three orders
of magnitude with individual deviations of less than $\pm$25\% \cite{BH97}
is obviously not bad. Quantitative model studies like those presented
in \cite{soll98} give us guidance for separating the wheat from the
chaff; when supplemented by a thermal analysis of microscopic kinetic
simulations as those presented at this meeting by Larissa Bravina
\cite{Bravina}, they are the foundations which we can use when
collecting arguments in favor or against the formation of thermalized
hot hadronic matter and quark-gluon plasma.
\section{Conclusions}
\label{sec5}
Let me summarize briefly: a thermal + flow analysis of yields, spectra
and 2-particle correlations in S+$A$ and Pb+Pb collisions at the CERN
SPS suggests
\begin{itemize}
\item the formation of a {\em prehadronic} state in which twice as
much strangeness is produced as in $pp$ and $pA$ collisions and
in which quarks are uncorrelated, i.e. they are not bound into
color singlets;
\item statistical hadronization of this state at $T_{\rm had}
\approx 170 \pm 20$ MeV with hadron abundances controlled
by the {\em Maximum Entropy Principle};
\item rapid decoupling of the particle abundances, with chemical freeze-out
temperatures $T_{\rm chem} \approx T_{\rm had}$ in sulphur-induced
and $T_{\rm chem} \leq T_{\rm had}$ in Pb+Pb collisions;
\item elastic rescattering among the hadrons (dominated by $s$-channel
resonances) after hadronization from the prehadronic state which
leads to a cooling of the momentum spectra and the generation of
(more) collective flow;
\item finally, thermal freeze-out at $T_{\rm therm} \approx 140-150$ MeV
in S+$A$ and at $T_{\rm therm} \approx 120\pm 10$ MeV in Pb+Pb
collisions, with average transverse collective flow velocities of
order $\langle v_\perp \rangle \approx 0.4-0.5\,c$.
\end{itemize}
Smaller collision systems distinguish themselves from larger ones not
primarily by the achieved maximal energy density, but by the occupied
collision volume in space and time. Compared to S+S collisions, Pb+Pb
collisions live longer until thermal freeze-out, expand more in the
transverse direction, develop more transverse collective flow and cool
down to lower (chemical and thermal) freeze-out temperatures. Of course,
this view disagrees with the (present) majority opinion (I refer to the
respective contributions to the Proceedings of ``Quark Matter '97''
\cite{QM97}) that the critical energy density for deconfinement can
be crossed at fixed beam energy of 160 $A$ GeV by changing the size
of the projectile and target or the collision centrality, and that
the ``anomalous'' $J/\psi$ suppression observed in central Pb+Pb
collisions as a function of produced transverse energy \cite{QM97,NA50}
signals this transition.
I find this interpretation irreconcilable with the systematics of
light hadron production as discussed in this talk; a more consistent
interpretation rests on the observation \cite{NA50} that, as one
increases $N_{\rm part}$, one first sees ``anomalous'' suppression of
the weakly bound $\psi'$, then (in semiperipheral Pb+Pb collisions)
the ``anomalous'' suppression of the more strongly bound $\chi_c$ states
(indirectly, via the disappearance of their 32\% feed-down contribution
to the measured $J/\psi$ yield), and only in very central Pb+Pb
collisions the disappearance of directly produced $J/\psi$'s which
are {\em very} strongly bound (this last part of the suppression pattern
still remains to be confirmed by an improved measurement at very high
$E_T$). This suggests to me that what NA50 is seeing is {\em not the
onset of deconfinement} (the latter is there even in S+$A$ collisions),
but the dissociation of more and more strongly bound heavy quark states
(respectively the removal of the corresponding components in the
$c\bar c$ wavefunction) by collisions with the dense partonic medium
in the early stages of the collision. For the more strongly bound
states most of these collisions will be subthreshold; for this reason
a longer lifetime of the dense early stage, which is achieved in larger
collision systems or more central collisions, is crucial for an efficient
destruction not only of the weakly bound $\psi'$, but also of the more
strongly bound $\chi_c$ and $J/\psi$. Charmonium suppression is thus, in
my opinion, more a {\em lifetime effect} than a {\em deconfinement signal}
(although the necessary high density of scatterers with sufficiently large
cross sections probably requires deconfinement, too).
The picture which I have painted is, I believe, intrinsically consistent.
It is sufficiently simple to be attractive but also sufficiently
sophisticated not to be unrealistic. It may not be unique, not least
because of the intrinsic systematic uncertainties associated with thermal
model analyses which I pointed out and not all of which are quantitatively
understood. What is urgently needed is more high-quality data on the
chemistry of Pb+Pb collisions, the reconciliation of some puzzling
discrepancies between different experiments as discussed at this meeting,
and an improved systematics of the impact parameter and $A$-dependences,
both in the light and heavy hadron sector.
\ack
The author acknowledges the hospitality at the INT (Seattle) in March/April
1998 where discussions with many colleagues allowed him to sharpen
the arguments presented here. This work was supported by BMBF, DFG
and GSI.
\section{Introduction}
\label{sec:intro}
The assumption that the sources of breaking of the flavour symmetry
present in the standard model (SM) Lagrangian completely determine the
structure of flavour symmetry breaking also beyond the SM is commonly
referred to as the Minimal Flavour Violation (MFV)
hypothesis~\cite{MFV,MFV2,DAmbrosio:2002ex}. In the quark sector
there is a unique way to implement MFV: the two quark SM Yukawa
couplings are identified as the only relevant breaking terms of the
$SU(3)^3$ quark-flavour symmetry~\cite{DAmbrosio:2002ex}. For the
lepton sector the same is not true: the SM cannot accommodate Lepton
Flavour Violation (LFV) because there is a single set of Yukawa
couplings (those of the charged leptons) that can always be brought
into diagonal form by rotating the three $SU(2)_L$-doublets
$\ell_\alpha$ and the three right-handed (RH) $SU(2)_L$-singlets
$e_\alpha$ ($\alpha=e,\,\mu,\,\tau$).
However, with the discovery of neutrino oscillations it has been
clearly established that lepton flavour is not conserved. It is then
interesting to extend the MFV hypothesis to the lepton sector (MLFV)
by starting from a Lagrangian able to describe the observed LFV in
neutrino oscillations. The problem is that we do not know which
physics beyond the SM is responsible for these effects, and different
generalizations of the SM yield different formulations of the MLFV
hypothesis.
\section{Minimal effective theories for the seesaw}
A theoretically very appealing way to extend the SM to a dynamical
model that can account for strongly suppressed neutrino masses is the
type-I seesaw, where it is assumed that in addition to the SM leptons
($\ell$ and $e$) at high energies there is at least another set of
dynamical fields carrying lepton flavour: three SM-singlet heavy
Majorana neutrinos $N_i$. The gauge-invariant kinetic terms for the
lepton fields $\ell_\alpha$, $e_\alpha$ and $N_i$ are:
\begin{equation}
\label{eq:Kin}
\mathcal{L}_{\rm Kin}=
i\bar\ell_\alpha \not\!\!D_{\ell}\,\ell_\alpha+
i\bar e_\alpha \not\!\!D_{e}\, e_\alpha +
i\bar N_i \not\!\partial\, N_i\,,
\end{equation}
where $D_{\ell}\,, D_{e}$ denote covariant derivatives. The largest
group of flavour transformations that leaves $\mathcal{L}_{\rm Kin}$ invariant
is $\mathcal{G} = U(3)_\ell\times U(3)_N\times U(3)_e$. We assume that $\mathcal{G}$,
or some subgroup of $\mathcal{G}$, is the relevant group of flavour
transformations, and we require that the only symmetry-breaking terms
can be identified with the parameters appearing in the seesaw
Lagrangian, that is:
\begin{eqnarray}
- \mathcal{L}_{\rm seesaw}&=&
\nonumber
\epsilon_e\, \bar \ell_\alpha Y_e^{\alpha\beta}e_\beta\, H
+ \epsilon_\nu\, \bar\ell_\alpha Y_\nu^{\alpha j}\,N_j\, \tilde H \\
&+&\frac{1}{2}\,\epsilon_\nu^2\,\mu_L\; \bar N^c_i\, Y_M^{ij}\, N_j
+{\rm h.c.}.
\label{eq:seesaw}
\end{eqnarray}
The symmetry group can be decomposed as $\mathcal{G} = U(1)_Y \times U(1)_L
\times U(1)_R \times \mathcal{G}_F~$ where $U(1)_Y$ and $U(1)_L$ correspond to
hypercharge (that remains unbroken) and to total lepton number,
respectively; $U(1)_R$ can be identified either with $U(1)_e$ or with
$U(1)_N$, corresponding respectively to global phase rotations of $e$
or $N$, and
\begin{equation}
\mathcal{G}_F = SU(3)_\ell\times SU(3)_N\times SU(3)_e~,
\end{equation}
is the flavour group, broken at some large scale $\Lambda_F\gg\,$TeV.
Formal invariance of $\mathcal{L}_{\rm seesaw}$ under $\mathcal{G}_F$ is recovered by
promoting the Lagrangian parameters to spurions transforming as:
\begin{equation}
\label{eq:generalspurions}
Y_\nu \sim (3,\bar 3,1);
\ \ \
Y_M \sim (1,\bar 6,1);
\ \ \
Y_e \sim (3,1,\bar 3)\,.
\end{equation}
As regards the two broken Abelian factors, $U(1)_L$ is broken (by two
units) by $\mu_L$, which is a spurion with the dimension of a mass, while
$U(1)_R\,$ is broken by a dimensionless spurion $\epsilon_R\,$, where
$\epsilon_R$ denotes $\epsilon_e$ or $\epsilon_\nu$.
By itself, the Lagrangian~\eqn{eq:seesaw} induces LFV effects for the
charged leptons that are well below $\mathcal{O}(10^{-50})$, and thus
unobservable. However, a theoretical prejudice states that {\it there
is new physics at the {\rm TeV} scale}, since this is needed to
cure the SM naturalness problem. It is then reasonable to assume that
at some scale $\Lambda_{NP} \ll \Lambda_F,\,\mu_L$, presumably around
or somewhat above the electroweak scale, other states carrying flavour
exist. Integrating out these heavy degrees of freedom, as well as the
heavy RH neutrinos with masses $ \epsilon^2_\nu\mu_L > $ TeV, at $E
\ll {\rm TeV}$ we obtain an effective Lagrangian of the form:
\begin{equation}
\label{eq:LowE}
\mathcal{L}_{\rm eff} = \mathcal{L}_{\rm SM} + \mathcal{L}^{\rm seesaw}_{D5}+
\frac{1}{\Lambda_{NP}^2} \sum_{i} c_i O_i^{(6)} + \ldots~.
\end{equation}
$\mathcal{L}^{\rm seesaw}_{D5}$ is the Weinberg
operator~\cite{Weinberg:1979sa} that depends on the spurions (see
\eqn{eq:D5}). $O_i^{(6)}$ denote generic dimension-six operators
written in terms of the SM fields and of the spurions, and the dots
denote higher-dimension operators. Dimension-six operators involving
only the SM fields conserve $B-L$~\cite{Weinberg:1979sa}, and since we
have not introduced (dangerous) sources of $B$ violation, the
operators $O_i^{(6)}$ must separately conserve $L$. This is the
reason why the scale $\Lambda_{NP}$ can be substantially lower than
$\Lambda_F$ and $\mu_L$. Note also that $U(1)_N$ breaking and
$\epsilon_\nu$ only affect the RH neutrino masses, without affecting
in any way the Weinberg operator (see \eqn{eq:D5}). As far as the
flavour structure of the operators $O_i^{(6)}$ is concerned, the
assumptions about $\mathcal{G}_F$ breaking imply the following:
\begin{itemize}
\item[{\bf I.}] {\em Once the transformation properties of the
spurions eq.}~(\ref{eq:generalspurions})~{\em and of the fields are
taken into account},~{\em all $O_i^{(6)}$ must be formally
invariant under $\mathcal{G}_F$.}
\end{itemize}
This condition alone is not sufficient to obtain an effective theory
that is predictive, since the flavour structure of $Y_\nu,\,Y_M$ and
$Y_e$ cannot be determined from low-energy data
alone~\cite{Cirigliano:2005ck}. A predictive MLFV formulation must
satisfy an additional working hypothesis:
\begin{itemize}
\item[{\bf II.}] {\em The flavour structure of the spurions must be
    reconstructable from low-energy observables, namely the light
    neutrino masses and the PMNS mixing matrix.}
\end{itemize}
The only way this second hypothesis can be satisfied is by restricting
the form of the spurions $Y_i$ in such a way that the relevant LFV
combinations will depend on a reduced number of parameters. This can
be obtained by assuming that the flavour symmetry corresponds to a
subgroup of $\mathcal{G}_F$, rather than to the full flavour group.
\mathversion{bold}
\subsection{ $U(1)_R$ breaking and size of the LFV effects}
\mathversion{normal}
Before analyzing the possible subgroups of $\mathcal{G}_F$ yielding predictive
frameworks, let us discuss the connection between the overall size of
the LFV effects and the breaking of $U(1)_R$. The explicit structure
of $\mathcal{L}^{\rm seesaw}_{D5}$ and the corresponding
light neutrino mass matrix are
\begin{eqnarray}
\label{eq:D5}
\mathcal{L}^{\rm seesaw}_{D5} &=& \frac{1}{\mu_L}
\left(\bar\ell\tilde H\right)
Y_\nu\frac{1}{Y_M}Y_\nu^T \left(\tilde H^T\ell^c\right)\,, \\
\label{eq:Mnu}
\Longrightarrow\qquad
m_\nu^\dagger &=&
\frac{v^2}{\mu_L}\; Y_\nu\frac{1}{Y_M}Y_\nu^T
\ = \ U\, \mathbf{m}_\nu\, U^T \,,
\end{eqnarray}
where $v$ is the Higgs vacuum expectation value, $U$ is the PMNS matrix
and ${\mathbf m}_\nu = {\rm diag }
(m_{\nu_1},\,m_{\nu_2}\,,m_{\nu_3})$. Note that since
$\mathcal{L}^{\rm seesaw}_{D5}$
does not break $U(1)_{R}$, the overall size of
${\mathbf m}_\nu$
depends only on the lepton-number violating scale $\mu_L$,
but not on $\epsilon_{e,\nu}$.
Without loss of generality we can rotate $Y_{e}$ and $Y_M$
to a diagonal basis. In terms of mass
eigenvalues the diagonal entries can be written as:
\begin{equation}
\label{eq:diageM}
(Y_e)_{\alpha\alpha}=
\frac{1}{\epsilon_e\,v}\, m_\alpha\,, \qquad
(Y_M)_{ii}=\frac{1}{\epsilon_\nu^2\,\mu_L}\,M_i \,.
\end{equation}
This shows that the overall size of $Y_e$ and $Y_M$ is
controlled by the Abelian spurions (the same is true for $Y_\nu$). A
natural choice for their size is such that the entries in the $Y_i$
matrices are of $\mathcal{O}(1)$. Considering the light-neutrino mass
matrix \eqn{eq:Mnu}, it can be seen that
this choice points to a very large $L$-breaking
scale
\begin{equation}
\label{eq:muL}
\mu_L \sim
v^2/\sqrt{\Delta m^2_{\rm atm}} \approx 6\times 10^{14}~{\rm GeV}\,.
\end{equation}
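As a one-line numerical check (assuming the convention $v \approx
174$~GeV and taking $\Delta m^2_{\rm atm} \approx 2.5\times
10^{-3}\,{\rm eV}^2$ as a representative input):
\begin{verbatim}
v = 174.0                        # Higgs vev in GeV (assumed convention)
dm2_atm = 2.5e-3                 # atmospheric splitting in eV^2 (representative)
m_atm_GeV = dm2_atm**0.5 * 1e-9  # sqrt(dm2), converted from eV to GeV
print("mu_L ~ %.1e GeV" % (v**2 / m_atm_GeV))   # ~ 6e14 GeV
\end{verbatim}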
In the case when $U(1)_R=U(1)_N$, however, we are free to assume
$\epsilon_\nu \ll 1$, as would naturally result from an approximate
$U(1)_N$ symmetry. In this case, in spite of the large value of
$\mu_L$, the RH neutrinos could have much smaller masses, possibly
within the reach of future experiments, which, from the
phenomenological point of view, represents a very interesting
possibility~\cite{Alonso:2011jd}.
\subsection{Two predictive cases}
The dimension-six LFV operators $O_i^{(6)}$ are invariant under
$U(1)_L$ and $U(1)_N$, but break $\mathcal{G}_F$ through
various spurion combinations, like for example:
\begin{equation}
\label{eq:Delta}
\hspace{-.5cm}
\Delta^{(1)}_8=Y_\nu Y_\nu^\dagger;\ \
\Delta_6=Y_\nu Y_M^\dagger Y_\nu^T;\ \
\Delta^{(2)}_8=Y_\nu Y_M^\dagger Y_M Y_\nu^\dagger\,.
\end{equation}
In the absence of further assumptions, the $\Delta$'s cannot be
determined in terms of $U$ and ${\mathbf m}_\nu $. To obtain
predictive frameworks, basically two different criteria can be adopted,
which correspond to assuming that in a given basis either $Y_M$ or
$Y_\nu$ is proportional to the identity matrix in flavour space
$I_{3\times 3}$~\cite{Cirigliano:2005ck,Alonso:2011jd}. Both these
criteria have the property of being {\it natural}, in the sense that
they can be formulated in terms of symmetry hypotheses, that is, by
choosing as flavour symmetry some suitable subgroup of $\mathcal{G}_F$.
(Alternative formulations of the MLFV hypothesis have also been
proposed in~\cite{Davidson:2006bd,Gavela:2009cd,Joshipura:2009gi}.)
\subsubsection{$SU(3)_N \to O(3)_N \times CP$.}
Assuming that the flavour group acting on the RH neutrinos is $O(3)_N$
rather than $SU(3)_N$, implies that $Y_M$ must be proportional to
$I_{3\times 3}$. However, this condition alone is not enough to deduce
the structure of $Y_\nu$ from the seesaw formula. Full predictivity
for this framework is ensured only if we further assume that $Y_\nu$
is real: $Y_\nu^\dagger = Y_\nu^T$, which follows from imposing CP
invariance~\cite{Cirigliano:2005ck}. In this case, since the Majorana
mass term has a trivial structure, all LFV effects stem from the
(real) Yukawa coupling matrices giving:
\begin{equation}
\label{eq:ON}
\Delta_6 = \Delta_8^{(1)}= \Delta_8^{(2)}=
Y_\nu Y_\nu^T= \frac{\mu_L}{v^2}\;
U\, \mathbf{m}_\nu\, U^T \,.
\end{equation}
The main implication for LFV in this scenario is that the largest
entries in the $\Delta$'s are determined by the {\it heaviest} neutrino
mass. We refer to~\cite{Cirigliano:2005ck} for further details.
\subsubsection{\it $SU(3)_\ell\times SU(3)_N \to SU(3)_{\ell+N}$.}
If we assume that $\ell$ and $N$ belong to the fundamental
representation of the same $SU(3)$ group, then in a generic basis
$Y_\nu$ must be a unitary matrix (and thus it can always be rotated to
the identity matrix by a suitable unitary transformation of the RH
neutrinos). This condition, first proposed in~\cite{Alonso:2011jd},
also allows one to invert the seesaw formula in \eqn{eq:Mnu}, giving
\begin{equation}
\Delta_6
=\frac{v^2}{\mu_L} \;
U\, \frac{1}{\mathbf{m}_\nu}\, U^T \,, \quad
\label{eq:MMLFV2}
\Delta_8^{(2)}
=\frac{v^4}{\mu_L^2} \;
U\, \frac{1}{\mathbf{m}_\nu^2}\, U^\dagger~,
\end{equation}
while $\Delta_8^{(1)} =I_{3\times 3}$ gives no LFV effects. The
choice of a unitary $Y_\nu$ can be phenomenologically interesting
because it has been shown that if the $N$'s belong to an irreducible
representation of a non-Abelian group, then $Y_\nu$ is precisely
(proportional to) a unitary matrix~\cite{Bertuzzo:2009im}. Now,
models based on non-Abelian (discrete) groups have proved to be quite
successful in reproducing the approximate tri-bimaximal~\cite{TBM}
structure of the PMNS matrix, so an approximate unitarity of $Y_\nu$
is what is obtained in several cases. This scenario also has the
remarkable implication that the largest LFV effects are controlled by
the {\it lightest} neutrino mass. Other phenomenologically interesting
features are discussed in~\cite{Alonso:2011jd}.
Let us note at this point that $\mathbf{m}_\nu^{-1}$ appearing in
\eqn{eq:MMLFV2} does not correspond to any combination of the spurions
of the high energy theory. Therefore we learn that, contrary to common
belief, an MFV high-energy Lagrangian can produce, at low energies,
operators which are not MFV. A low-energy theory following from an MFV
high-energy theory is guaranteed to also be MFV only under the
additional requirement that, when all the spurions are set to zero,
the only massless fields are the SM ones.
\subsection{MLFV Operators}
Several MLFV operators can be constructed with the spurion
combinations given in \eqn{eq:Delta}, or with analogous structures
also involving $Y_e$, like $ \Delta_{8}Y_e$, and it is useful to
provide at least a partial classification of the most
important ones: \\ [-6pt]
\noindent
{\it 1. On-shell photonic operators}.\\[2pt]
They control the radiative decays $\ell\to \ell'\gamma$,
and also contribute to $\mu-e$ conversion in nuclei, and to
four-leptons processes like $\ell\to 3 \ell'$ decays. Their structure is:
\begin{equation}
\label{eq:photonic}
O^{(F)}_{RL} = \bar\ell_\alpha
\left(\Delta_{8}Y_e\right)^{\alpha\beta}
(\sigma\cdot F)\, e_\beta\cdot H
\end{equation}
where $F$ denotes generically the field strength of the $SU(2)_L
\times U(1)_Y$ gauge fields. When these operators are the dominant
ones, one can predict quantitative relations between $\mu\to e\gamma$
and other processes, as for example:
\begin{eqnarray}
B_{\mu\to eee} &\simeq &
\frac{1}{160}\;B_{\mu \to e\gamma}
\\
\frac{\Gamma_{\mu\,Ti \to e\,Ti}}{\Gamma_{\mu\,Ti \to{\rm capt}}}
&\simeq &
\frac{1}{240}\;B_{\mu \to e\gamma}\,.
\end{eqnarray}
Clearly, in this case the decay $\mu\to e\gamma$
would play a role of the utmost importance in searching for LFV. \\ [-2pt]
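The numerical coefficients quoted above follow from dipole dominance;
for instance, the $1/160$ can be checked against the standard
log-enhanced expression for $B_{\mu\to eee}/B_{\mu\to e\gamma}$ (a
consistency check under that assumption, not a derivation):
\begin{verbatim}
import math

alpha = 1 / 137.036
m_mu_over_m_e = 206.768
# standard dipole-dominance result for B(mu -> 3e) / B(mu -> e gamma):
r = (alpha / (3 * math.pi)) * (math.log(m_mu_over_m_e**2) - 11.0 / 4.0)
print("B(mu->3e)/B(mu->e gamma) ~ 1/%.0f" % (1 / r))   # ~ 1/160
\end{verbatim}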
\begin{figure}[t!]
\centering
\includegraphics[width=7.6cm,height=4.0cm]{fig5P}
\vspace{-0.2cm}
\caption{$B_{\tau \to \mu \gamma}$ and $B_{\mu \to e \gamma}$ as a
function of $\sin\theta_{13}$ for the CP conserving cases
$\delta=0,\,\pi$ and $\Lambda_{NP} \sim 10^{-4} (2 v \mu_L)^{1/2}$.
The shading corresponds to a lightest $\nu$ mass in the range 0 -
0.02~eV. (From ref.~\cite{Cirigliano:2005ck}.)
\label{fig:fig5P}
}
\end{figure}
\noindent
{\it 2. Off-shell photonic and contact operators with quarks}.\\[2pt]
They can give important contributions in particular to $\mu-e$
conversion in atoms, and have the form:
\begin{eqnarray}
\nonumber
O^{(H)}_{LL} &=& \bar \ell_\alpha\gamma^\mu\tau^a
\Delta_{8}^{\alpha\beta}
\, \ell_\beta\cdot\left( H^\dagger \tau^aiD_\mu H\right)\,, \\
\nonumber
O^{(Q)}_{LL} &=& \bar \ell_\alpha\gamma^\mu \tau^a
\Delta_{8}^{\alpha\beta}
\, \ell_\beta\cdot \left(\bar Q_L\tau^a\gamma_\mu Q_L \right)\,,\\
O^{(q)}_{LL} &=& \bar \ell_\alpha\gamma^\mu
\Delta_{8}^{\alpha\beta}
\, \ell_\beta\cdot \left(\bar q_R\,\gamma_\mu\, q_R \right)\,,
\end{eqnarray}
where $\tau^a=(1,\vec\tau)$ with $\vec\tau$ the $SU(2)$ matrices
and $q_R=u_R,\,d_R$ denotes the RH quarks. \\ [-2pt]
\noindent
{\it 3. Four leptons contact operators}.\\[2pt]
They can be particularly relevant for $\ell \to 3\ell'$ decays.
The leading operators have the form:
\begin{eqnarray}
\nonumber
O^{(4\ell)}_{LL}
& =&\bar \ell_\alpha\gamma^\mu \tau^a
\Delta_{8}^{\alpha\beta}
\ell_\beta\cdot\left( \bar \ell_L\tau^a\gamma_\mu \ell_L\right) \\
O^{(2\ell\,2e)}_{LL} & =&\bar \ell_\alpha\gamma^\mu
\Delta_{8}^{\alpha\beta}
\ell_\beta\cdot\left( \bar e_R\,\gamma_\mu\, e_R\right)\,.
\end{eqnarray}
Clearly, in the general case when operators of type 2. and 3. are not
particularly suppressed with respect to the operators in 1., searches
for $\mu-e$ conversion in nuclei and for LFV decays like $\mu \to 3e$,
$\tau\to 3\mu$ etc.\ become as important as $\mu\to e\gamma$ in the
search for LFV. Here we only consider the radiative decays $\ell\to
\ell'\gamma$, but a detailed analysis of many other LFV processes
within the first MLFV scenario can be found
in~\cite{Cirigliano:2006su}.
\begin{figure}[t!]
\centering
\includegraphics[width=3.8cm,height=4.0cm]{Br-mlP}
\includegraphics[width=3.8cm,height=4.0cm]{Br-emlP}
\vspace{-0.2cm}
\caption{The ratios
$\frac{B_{\mu\,\rightarrow\,e\,\gamma}}{B_{\tau\,\rightarrow\,\mu\,\gamma}}$
(left) and
$\frac{B_{\mu\,\rightarrow\,e\,\gamma}}{B_{\tau\,\rightarrow\,e\,\gamma}}$
(right) as a function of the lightest neutrino mass.
Green-lighter points correspond to normal hierarchy, red-darker points
to inverted hierarchy.
(From ref.~\cite{Alonso:2011jd}.)
\label{fig:brm1}}
\vspace{-.3cm}
\end{figure}
\subsection{Phenomenology}
Let us now discuss, for the two cases at hand, the dependence of LFV
processes on low-energy parameters. We concentrate on the radiative
decay $\ell_i\to \ell_j\,\gamma$ and on the effects of on-shell
photonic operators $O^{(F)}_{RL}$. The relevant LFV structure is
$\Delta_8$, defined respectively in eqs.~(\ref{eq:ON}) and
(\ref{eq:MMLFV2}); we also assume that all $c_i$ in \eqn{eq:LowE} are of
$\mathcal{O}(1)$. We compare the relevance of different decay channels by
means of the normalized branching fractions:
\begin{equation}
\label{eq:Br}
B_{\ell_i\to \ell_j\gamma} \equiv
\frac{\Gamma_{\ell_i\to \ell_j\gamma}}{\Gamma_{\ell_i\to \ell_j\nu_i\bar{\nu}_j}}.
\end{equation}
When the flavour symmetry $O(3)_N\times CP$ is assumed, one observes
the pattern $B_{\tau \to \mu \gamma} \gg B_{\mu\to e \gamma}$ ($\sim
B_{\tau \to e \gamma} $), which is a consequence of the suppression of
LFV effects when the lightest neutrinos mass eigenvalues are involved.
This is illustrated in Fig.~\ref{fig:fig5P}, taken from
ref.~\cite{Cirigliano:2005ck} that depicts the normalized branching
fractions $B_{\tau \to \mu \gamma}$ and $B_{\mu\to e \gamma}$ assuming
a NP scale $\Lambda_{NP} \sim 10^{-4} \sqrt{2 v \mu_L}$. For a given
choice of $\delta=0$ or $\pi$ (corresponding to CP conservation), the
strength of the $\mu\to e$ suppression is very sensitive to whether
the hierarchy is normal (NH) or inverted (IH). For $\delta = 0$ the present
experimental limit on $B_{\mu \to e \gamma}$ allows large values of
$B_{\tau \to \mu \gamma}$ only for the IH, whereas for
$\delta = \pi$, a large region with a sizable $B_{\tau \to \mu
\gamma}$ is allowed only for the NH. Note that the
overall vertical scale in this figure depends on both the ratio $(v
\mu_L) /\Lambda_{NP}^2$ and on the value of the lightest neutrino mass,
and that a large hierarchy $\Lambda_{NP}/\mu_L \ll 1$ is required to
obtain observable effects.
When the assumed flavour symmetry is $SU(3)_{\ell+N}$, the main
distinctive feature with respect to the previous case is that, due to
the inverse $\mathbf{m}_\nu$ dependence in \eqn{eq:MMLFV2}, LFV
processes are {\it enhanced} when the lighter neutrino masses are
involved. This implies, in particular, a potentially strong
enhancement of $\mu\to e\gamma$ in the (NH) case. This is better
highlighted by studying ratios of branching ratios for different decay
channels, since they simply reduce to ratios of the modulus squared of
the corresponding $\Delta_8$ entries:
\begin{equation}
\frac{B_{\ell_i\,\rightarrow\,\ell_j\,\gamma}}
{B_{\ell_k\,\rightarrow\,\ell_m\,\gamma}}=
\frac{\left|\left( \Delta_8\right)_{ij}\right|^2}
{\left|\left(\Delta_8\right)_{km}\right|^2}\,.
\end{equation}
Figure~\ref{fig:brm1} (taken from ref.~\cite{Alonso:2011jd}) shows two
scatter plots generated with random values for the quantities
$\Delta_8 \sim U\,\frac{1}{\mathbf{m}_\nu^2}\,U^\dagger$, obtained by
allowing the neutrino parameters to vary within their (approximate)
2$\sigma$ c.l. experimental intervals~\cite{nudata}. In the left
panel we plot, as a function of the lightest mass eigenvalue, the ratio
$\frac{B_{\mu\,\rightarrow\,e\,\gamma}}{B_{\tau\,\rightarrow\,\mu\,\gamma}}$,
and in the right panel the ratio
$\frac{B_{\mu\,\rightarrow\,e\,\gamma}}{B_{\tau\,\rightarrow\,e\,\gamma}}$.
Results for the NH ($m_{\nu_l}=m_{\nu_1}$) correspond to the
green-lighter points, while the IH ($m_{\nu_l}=m_{\nu_3}$) to the
red-darker points. From the first panel we see that for NH and small
values of $m_{\nu_1} \lesssim 10^{-2}\,$eV we generically have
$B_{\mu\,\rightarrow\,e\,\gamma}\,>\,B_{\tau\,\rightarrow\,\mu\,\gamma}$.
The enhancement of $B_{\mu\,\rightarrow\,e\,\gamma}$ is obviously due
to ${{\bf m}^2_\nu}$ appearing in the denominator of $\Delta_8$, and
can amount to a factor of a few. In the limit of $m_{\nu_1}\ll
m_{\nu_{2,3}}$, and using the best fit values of the mixing angles, we
have:
$\frac{B_{\mu\,\rightarrow\,e\,\gamma}}{B_{\tau\,\rightarrow\,\mu\,\gamma}}
\approx 7.3\; (3.2) $ for $\delta=0$ ($\delta=\pi$).
When $m^2_{\nu_1}\gg \Delta\,m^2_{\rm sol}$ and $m_{\nu_1} \approx
m_{\nu_2}$, the contributions to $\mu\,\rightarrow\,e\,\gamma$
proportional to $\theta_{12}$ suffer a strong GIM suppression, and the
decay rate becomes proportional to $\theta_{13}^2$. This behavior is
seen clearly in Fig.~\ref{fig:brm1} (left) for values of $m_{\nu_1}
\approx 10^{-2}\,$eV. For IH, in the limit $m_{\nu_3}\ll
m_{\nu_{1,2}}$ and independently of the value of $\delta$ we obtain:
$\frac{B_{\mu\,\rightarrow\,e\,\gamma}}{B_{\tau\,\rightarrow\,\mu\,\gamma}}
\approx 2\, s_{13}^2$.
Approximately the same result is obtained also
in the limit of large masses $m_{\nu_i} \gg \sqrt{\Delta m^2_{\rm atm}}$,
which explains why for $m_{\nu_1}\to 10^{-1}\,$eV the results for IH and NH
converge.
Results for the ratio of the $\mu$ and $\tau$ radiative decays into
electrons are depicted in the right panel in Fig.~\ref{fig:brm1}. At a
glance we see that for both NH and IH the $\mu/\tau$ ratios for decays
into electrons remain centered around one for all values of
$m_{\nu_l}$. Needless to say, since the ratios of normalized branching
fractions of other LFV processes, like for example $B_{\mu\to 3e}$,
$B_{\tau\to 3\mu}$ and $B_{\tau\to 3e}$, are controlled by the same LFV
factors $\Delta_8$, they are characterized by a completely similar
pattern of enhancements/suppressions.
In view of the ongoing high sensitivity searches for LFV
processes~\cite{MEG}, besides comparing the rates for different LFV
channels, an estimate of the absolute values of the branching
fractions is also of primary interest. In the most favorable case, in
which $\Delta_8$ is a matrix with $\mathcal{O}(1)$ entries, a rough estimate
gives:
\begin{equation}
\label{eq:abs}
B_{\mu\,\rightarrow\,e\,\gamma} \approx 1536\,\pi^3\, \alpha\,
\frac{v^4}{\Lambda_{NP}^4 }\,.
\end{equation}
When compared with the experimental limit $ B^{\rm
exp}_{\mu\,\rightarrow\,e\,\gamma}< 10^{-11}$~\cite{MEGA} this
allows us to conclude that the scale of NP should be rather large:
$\Lambda_{NP} \gtrsim 400\,$TeV.
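The bound follows from inverting \eqn{eq:abs}; a quick check, assuming
the convention $v\approx 174$~GeV:
\begin{verbatim}
import math

alpha, v = 1 / 137.036, 174.0    # v in GeV (assumed convention)
B_exp = 1e-11                    # experimental limit used above
Lam = v * (1536 * math.pi**3 * alpha / B_exp) ** 0.25
print("Lambda_NP > %.0f TeV" % (Lam / 1e3))   # ~ 420 TeV
\end{verbatim}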
In summary, this second MLFV scenario~\cite{Alonso:2011jd} is
characterized by a quite different phenomenology from the first one
since it allows the branching fraction
$B_{\mu\,\rightarrow\,e\,\gamma}$ to dominate over
$B_{\tau\,\rightarrow\,\mu\,\gamma}$ and
$B_{\tau\,\rightarrow\,e\,\gamma}$. The enhancement with respect to
$B_{\tau\,\rightarrow\,\mu\,\gamma}$ that occurs in the NH case does
not exceed a factor of a few, but it is parametric in the small values
of $m_{\nu_1}$. The strong enhancement with respect to
$B_{\tau\,\rightarrow\,e\,\gamma}$ instead is due to accidental
cancellations that suppress this process, and that become particularly
efficient when $\delta$ is close to zero.
\section*{Acknowledgments}
I thank the authors of ref.~\cite{Cirigliano:2005ck} for permission
to include fig.~\ref{fig:fig5P} in this review.
\section{Introduction}
\label{intro}
The Kelvin-Helmholtz (KH) instability results from a wide array of velocity-shear profiles
in a continuous fluid, or across the interface between two
distinct fluids. The instability is ubiquitous in nature, playing
important roles in meteorology, oceanography, and engineering.
The KH instability plays a particularly prominent role in
astrophysical systems ranging in scale
from stellar interiors \citep[e.\,g.][]{Bruggen2001} and
protoplanetary disks \citep[e.\,g.][]{Johansen2006} to the evolution
of the intergalactic medium \citep[e.\,g.][]{Nulsen1982,Nulsen1986}.
Physically, the KH instability wraps up coherent sheets
of vorticity into smaller, less organized structures. The small scale motion then
stretches and cascades to yet smaller scales. The
instability therefore plays fundamental roles in fluid mixing and in the
transition to turbulence.
Because of its prevalence in nature and its physical significance, KH test problems are commonly used to evaluate the accuracy of different astrophysical hydrodynamics codes
\citep[e.\,g.][]{Springel2010,Hopkins2015,Schaal2015}: if a code can
properly simulate the KH instability, it is presumed to
capture mixing and turbulence in astrophysical simulations.
Ideally, such an important test problem should stand against an
analytic solution to ensure the veracity (not just reproducibility) of
simulation results. Some analytic work addresses the KH instability with a sheet vortex model
\citep{Moore1979}, but only for incompressible fluid equations. For the compressible Navier-Stokes equations relevant to
astrophysics, no analytic description of the nonlinear
KH instability currently exists.
Absent a nonlinear analytic prediction, a resolved reference simulation provides the only reasonable approximation of the true solution. Comparing to a well-controlled and high-resolution benchmark gives a proxy for the true error of a given test.
\citet{Robertson2010} and \citet{McNally2012} present careful studies
of the early evolution of the KH instability. These
authors also point out the numerical ill-posedness of contact-discontinuity simulations, in spite of existing analytical solutions in the linear and/or incompressible regimes. These works emphasize that
converged nonlinear simulations require well-resolved
initial conditions. One limitation of these studies, however, is that \citet{Robertson2010} and \citet{McNally2012} only provide converged reference simulations for the linear (and possibly weakly nonlinear) phase.
In addition, converged nonlinear solutions require solving dissipative equations. Many available astrophysical codes do not implement this essential feature. As a result, these works could only follow the instability for a few e-folding timescales.
Not all works take the benchmark approach, however. In place of a nonlinear reference solution, some authors use apparent small-scale structure as a proxy for the accuracy of their simulations \citep[e.g.,][]{Springel2010,Hopkins2015}. Presumably, more small-scale structure implies less numerical dissipation, and therefore greater accuracy. We find in the current paper that this intuition can, in some cases, lead to false conclusions. Some tests also abandon the smooth initial conditions of \citet{Robertson2010} and \citet{McNally2012}, even though this choice precludes convergence of even the linear phase of the instability because the linear growth rates increase with wavenumber for an initially discontinuous velocity profile.
In this paper, we extend the work of \citet{McNally2012} by providing
reference solutions for the \textit{strongly nonlinear} evolution of the
KH instability. We use a smooth initial condition and
explicit diffusion. We conduct simulations using both Athena (a
Godunov code), and Dedalus (a pseudo-spectral code that can solve the Navier-Stokes equations of compressible hydrodynamics) and find that both converge to the
same reference solutions. We see agreement among different codes and
different resolutions, with the validity of the reference solution
limited only by (unavoidable) chaotic evolution at late times. We
propose that future code tests include this KH instability problem and compare to our validated, converged reference solutions.
We organize the remainder of the paper as follows. Section~\ref{sec:method}
describes the equations, initial conditions, and codes used for our
simulations. The results comprise two sections. In section~\ref{subsec:unstratified-results}
we discuss the simpler simulations with constant initial density.
Section~\ref{sec:drat 2} discusses the more complicated simulations with an initial
density jump. Section~\ref{sec:conclusion} summarizes our results.
\section{Methods}
\label{sec:method}
\subsection{Equations and Initial Conditions}
\label{subsec:setup}
We solve the hydrodynamic equations, including explicit terms for the diffusion of momentum and temperature:
\begin{subequations}\label{eqn:equations of motion}
\begin{align}
\frac{\partial \rho}{\partial t}
 + \vec{\nabla}\cdot\left(\rho\, \vec{u}\right) &= 0, \\
\frac{\partial}{\partial t}(\rho\,\vec{u})
 + \vec{\nabla}\cdot\left(P\,\boldsymbol{\mathsf{I}}
 + \rho\,\vec{u}\otimes\vec{u} \right) &=
 -\vec{\nabla}\cdot\boldsymbol{\mathsf{\Pi}},\\
\frac{\partial E}{\partial t}
 + \vec{\nabla}\cdot\left[(E+P)\,\vec{u} \right]
 &= \vec{\nabla}\cdot(\chi\rho\vec{\nabla} T)
 - \vec{\nabla}\cdot(\vec{u}\cdot\boldsymbol{\mathsf{\Pi}}),
\end{align}
\end{subequations}
along with the nondimensionalized ideal gas equation of state, $P=\rho T$, with constant ratio of specific heats $\gamma=5/3$. $\boldsymbol{\mathsf{I}}$ is the identity tensor, $\chi$ is the thermal diffusivity (with units $\text{cm}^2/\text{s}$; $K=n k_{\text{b}} \chi$ is the thermal conductivity), and
\begin{align}
\boldsymbol{\mathsf{\Pi}} = - \nu \rho \left( \vec{\nabla} \vec{u} + (\vec{\nabla} \vec{u})^T - \frac{2}{3} \boldsymbol{\mathsf{I}}\, \vec{\nabla} \cdot \vec{u}\right)
\end{align}
is the viscous stress tensor with viscosity $\nu$ (with units $\text{cm}^2/\text{s}$). We assume both $\nu$ and $\chi$ are constant.
We add a passive scalar to our simulations which we refer to as ``dye.''
The local fraction of dye particles $c$ expresses dye concentration, and initially ranges from 0 to 1.
The local conservation of dye is then
\begin{align}
\frac{\partial}{\partial t}\left(\rho c\right)
+ \nabla\cdot\left(\rho c\,\vec{u}\right)
= \rho \frac{d c}{d t}
&= -\nabla\cdot\vec{Q}_{\text{dye}},\label{eq:dye-evolution}\\
\vec{Q}_{\text{dye}} &= - \rho \nu_{\text{dye}} \nabla c
\end{align}
where $d/dt$ represents the Lagrangian derivative, and $\nu_{\text{dye}}$ represents a diffusion coefficient for dye molecules (with units $\text{cm}^2/\text{s}$). These equations conserve the total dye mass $\int{}\rho\,c\,\mathrm{d}V$.
We define a dye entropy per unit mass $s\equiv-\,c\,\ln{}c$, along with its volume integral
\begin{align}\label{eqn:entropy}
S\equiv\int{}\rho\,s\,\mathrm{d}V.
\end{align}
These evolve such that:
\begin{align}
\rho \frac{d s}{d t} - \nabla\cdot\left[(1+\ln{}c)\,\vec{Q}_{\text{dye}}\right]
&= \rho\nu_{\text{dye}} \frac{|\nabla c|^2}{c} \label{eq:s-evolution} \\
\frac{d S}{d t}
&= \int \rho \nu_{\text{dye}} \frac{|\nabla c|^2}{c} \mathrm{d}V
\geq 0.
\end{align}
The second term on the left-hand side of equation~\ref{eq:s-evolution} represents the entropy flux due to reversible diffusion of the dye. The right-hand side represents entropy generation due to irreversible dissipation.\footnote{Equation~\ref{eq:s-evolution} can be made to look like the analogous equation for heat conduction with the definition of a new ``temperature'' $T_{\text{dye}} \equiv -\frac{1}{1+\ln{}c}$.}
The volume-integrated entropy $S$ satisfies the following important properties:
\begin{enumerate}
\item A fully unmixed fluid with $c=0$ or $c=1$ everywhere has zero entropy ($S=0$).
\item A fully mixed fluid with $c^*=\int{}\rho\,c\,\mathrm{d}V/\int{}\rho\,\mathrm{d}V$ maximizes the entropy, $S_{\text{max}}=-c^{*}\ln{}c^*\,\int{}\rho\,\mathrm{d}V$.
\item $S$ increases monotonically with time if $\nu_{\text{dye}} > 0$, and stays constant otherwise.
\end{enumerate}
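These properties are straightforward to verify numerically; the
following is a minimal sketch on a uniform-density, unit-volume grid
(the grid size is arbitrary):
\begin{verbatim}
import numpy as np

rho = np.ones(256)                          # uniform density, total mass 1
c_unmixed = np.where(np.arange(256) < 128, 0.0, 1.0)
c_mixed = np.full(256, 0.5)                 # fully mixed: c = c* = 1/2

def dye_entropy(rho, c):
    # s = -c ln c, with the convention s(0) = 0
    s = np.where(c > 0, -c * np.log(np.where(c > 0, c, 1.0)), 0.0)
    return np.mean(rho * s)                 # volume average over unit volume

print(dye_entropy(rho, c_unmixed))          # 0               (property 1)
print(dye_entropy(rho, c_mixed))            # 0.5 ln2 ~ 0.347 (property 2)
\end{verbatim}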
We restrict our attention to periodic simulations.
This avoids potential difficulties with imposing Dirichlet and/or Neumann boundary conditions. Our initial conditions are:
\begin{subequations}\label{eqn:ICs}
\begin{align}
\rho &= 1 +
\frac{\Delta\rho}{\rho_0}
\times\frac{1}{2}\left[\tanh\left(\frac{z-z_1}{a}\right) - \tanh\left(\frac{z-z_2}{a}\right)\right]\label{eq:initial-rho}\\
u_x &= u_{\text{flow}} \times \left[\tanh\left(\frac{z-z_1}{a}\right) - \tanh\left(\frac{z-z_2}{a}\right) - 1 \right] \\
u_z &= A \sin(2\pi{}x) \times \left[\exp\left(-\frac{(z-z_1)^2}{\sigma^2}\right) + \exp\left(-\frac{(z-z_2)^2}{\sigma^2}\right)\right] \\
P &= P_0 \\
c &= \frac{1}{2}\left[\tanh\left(\frac{z-z_2}{a}\right) - \tanh\left(\frac{z-z_1}{a}\right) + 2\right],
\end{align}
\end{subequations}
where $a=0.05$ and $\sigma=0.2$ are chosen so that the initial condition is resolved in all of our simulations. We take $u_{\text{flow}}=1$ and $P_0=10$ so that the flow is subsonic with a Mach number $M\sim{}0.25$. The size of the initial vertical velocity perturbation is $A=0.01$. The Athena simulations are initialized with these functions evaluated at cell-centers even though Athena data represents cell-averaged quantities (see Appendix~\ref{sec:interpolate} for more discussion of this effect).
We adopt a rectangular domain with $x$ in $[0,L_x)$, and $z$ in $[0,2L_z)$, with $L_x = 1$ and $2 L_z = 2$, and $z_1=0.5$, $z_2=1.5$, with periodic boundary conditions in both directions. The simulations have a horizontal resolution of $N$ grid points (in Athena) or modes (in Dedalus) in the $x$ direction, and $2N$ grid points/modes in the $z$ direction. Our initial condition has a reflect-and-shift symmetry: taking $z\rightarrow 2-z$ and $x\rightarrow x + 1/2$ changes the sign of $u_z$ but leaves the other quantities invariant. Thus, the simulations solve for the same flow twice. This is a requirement when using periodic boundary conditions, but also provides a test of whether or not the numerical simulations can preserve the symmetry. Almost all simulations presented here maintain the symmetry. We therefore only show the lower half of the domain. We calculate volume-averaged quantities like the dye entropy or the $L_2$ norm with respect to the entire domain.
In equation~\ref{eq:initial-rho}, the free parameter $\Delta \rho/\rho_0$ represents the density jump across the interface.
We study simulations with $\Delta \rho/\rho_0 = 0$ in section~\ref{subsec:unstratified-results} and with $\Delta \rho/ \rho_0=1$ in section~\ref{sec:drat 2}. We refer to this change in density as a ``jump'' throughout, although the transition is smooth, set by the tanh in equation~\ref{eq:initial-rho}.
The Reynolds number $\text{Re}$ quantifies diffusion,
\begin{align}\label{eqn:Re}
\nu = \chi = \nu_{\text{dye}} = \frac{L \Delta u}{\text{Re}},
\end{align}
where $\Delta u = 2u_{\text{flow}}$ is the change in velocity.
Note that we set the thermal diffusivity $\chi$ constant; consequently, the thermal conductivity $K \propto \rho$. Throughout the paper we measure time in units of $L/u_{\text{flow}}$, so $t=1$ corresponds to approximately one turnover time.
Equations~\ref{eqn:equations of motion}--\ref{eqn:Re} specify our system, with the free parameters $\Delta \rho/\rho_0$ and ${\rm Re}$. In the following section we detail our methods for solving this system of equations.
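For concreteness, here is a minimal numpy transcription of the initial
conditions~(\ref{eqn:ICs}), evaluated at cell centers as in the Athena
setup (the resolution $N$ below is arbitrary):
\begin{verbatim}
import numpy as np

a, sigma, A = 0.05, 0.2, 0.01               # parameters from the text
u_flow, P0, z1, z2 = 1.0, 10.0, 0.5, 1.5
drho = 0.0                                  # Delta rho / rho_0 (0 or 1)
Re = 1e5
nu = 1.0 * (2 * u_flow) / Re                # nu = chi = nu_dye = L du / Re

N = 256                                     # horizontal resolution
x = (np.arange(N) + 0.5) / N                # cell centers, Lx = 1
z = (np.arange(2 * N) + 0.5) / N            # cell centers, 2 Lz = 2
xx, zz = np.meshgrid(x, z, indexing="ij")

T1, T2 = np.tanh((zz - z1) / a), np.tanh((zz - z2) / a)
rho = 1 + 0.5 * drho * (T1 - T2)
ux = u_flow * (T1 - T2 - 1)
uz = A * np.sin(2 * np.pi * xx) * (np.exp(-((zz - z1) / sigma)**2)
                                   + np.exp(-((zz - z2) / sigma)**2))
P = P0 * np.ones_like(xx)
c = 0.5 * (T2 - T1 + 2)
\end{verbatim}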
\subsection{Numerical Methods}
\label{sec:setup}
We study the KH instability using two open-source codes employing very different numerical methods: Athena \& Dedalus.
Athena\footnote{Athena is available at \url{https://trac.princeton.edu/Athena/}.} is a finite-volume Godunov code \citep{gs08,stone08}. The scheme represents all field quantities with volume-averaged values in each grid element. A Riemann problem solves for fluxes between elements. We use third-order reconstruction with limiting in the characteristic variables to approximate field values at the element walls, the HLLC Riemann solver, and the CTU integrator. We compiled with the ``-O3'' flag using Intel 14.0.1.106 and Mvapich2 2.0b on the Stampede supercomputer. We repeated some runs using second-order reconstruction and/or the Roe Riemann solver and/or stricter compiler flags (e.g., ``-O2 -fp-model strict'') --- these choices did not qualitatively affect the solutions. We use a static, uniform mesh, and a CFL safety factor of $0.8$.
Athena is second-order accurate in both space and time. The leading-order grid-scale errors are diffusive. For most simulations reported here, we include explicit diffusion. A sufficiently large explicit diffusion can dominate grid-scale errors and allow the simulation to remain close to the true solution. However, higher-order grid-scale errors can introduce non-diffusive effects, such as dispersion. If higher-order errors project onto unstable modes, they can cause large differences in the solution, despite being higher order. The grid-scale errors in Athena respect the reflect-and-shift symmetry of our problem up to floating point accuracy, so even non-converged simulations can maintain the initial symmetry of the flow. In practice, we find all simulations maintain the initial symmetry, except simulations with $\Delta\rho/\rho_0=1$ without explicit diffusion. Since Athena's algorithm manifestly preserves this symmetry, we expect the error results from chaotic amplification of floating-point errors.
Dedalus\footnote{Dedalus is available at \url{http://dedalus-project.org}.} is a pseudo-spectral code \citep{burns16}. All field variables are represented as Fourier series, and the simulation solves for the evolution of the spectral-expansion coefficients in time. The code evaluates nonlinear terms on a grid with a factor 3/2 more points than Fourier coefficients; i.e., the 2/3 de-aliasing rule. \citet{lecoanet14} (appendix D.1) describes our implementation of the Navier-Stokes equations. Our implementation of the dye evolution equation is
\begin{subequations}
\begin{align}
\partial_t c - & \nu_{\text{dye}} \left(\partial_x^2 c + \partial_z c_z\right) = \nonumber \\
& \quad \quad - u\partial_x c - w c_z + \nu_{\text{dye}}\left(\partial_x\Upsilon'\partial_x c + \partial_z\Upsilon' c_z\right), \\
& c_z - \partial_z c = 0,
\end{align}
\end{subequations}
where we use the same notation as \citet{lecoanet14}. For timestepping, we use a third-order, four-stage DIRK/ERK method (RK443 of \citealt{ascher97}) with a total CFL safety factor of 0.6 (i.e., 0.15 per stage). This formulation allows implicit timestepping of sound waves. Thus, our timestep size only adjusts with the flow velocity, not the sound speed. The excellent agreement between the highest resolution Dedalus and Athena simulations shows that high-wavenumber sound waves have negligible influence on the solution.
The pseudo-spectral method produces almost no numerical diffusion. Stability concerns require explicit diffusion in nonlinear calculations. In marginally resolved simulations, discretization errors manifest as Gibbs' ringing, which is prominently visible in snapshots. The numerical method does not explicitly preserve the reflect-and-shift symmetry---numerical errors can put power into the asymmetric modes. However, we find that in resolved simulations these asymmetric modes never grow to large amplitudes. Thus, maintaining this symmetry gives a test for a simulation's fidelity.
\section{Results}
\label{sec:results}
This section describes the nonlinear evolution of the KH instability, provides reference solutions, and compares the performance of Dedalus and Athena. Section~\ref{subsec:unstratified-results} considers unstratified simulations with constant initial density; both codes handle this problem easily. Section~\ref{sec:drat 2} concerns simulations with a density jump across the shear interface. This problem shows rich behavior and poses significant numerical challenges.
\subsection{Unstratified simulations ($\Delta\rho/\rho_0 = 0$)}
\label{subsec:unstratified-results}
In this section, we discuss simulations with constant initial density ($\Delta \rho/\rho_0=0$). Figure~\ref{fig:unstratified-dye} visualizes the flow with the dye concentration field of the lower half of the domain for simulations with explicit diffusion at different resolutions and Reynolds number, $\text{Re}$. The snapshots show the state at $t=6$. Strong nonlinearity begins at $t\sim 2$, so this corresponds to at least four turnover times after the initial saturation of the instability. The simulations are labeled by the code used (A for Athena; D for Dedalus), and their horizontal resolution.
\begin{figure*}
\includegraphics[width=\textwidth]{drat_1_snapshots.pdf}
\caption{Snapshots of the dye concentration field in several simulations with $\Delta\rho/\rho_0=0$ at $t=6$. The upper (lower) row shows simulations with ${\rm Re}=10^5$ ($10^6$). All the simulations with ${\rm Re}=10^5$ are well resolved. Small differences exist between the lower-resolution Athena simulations at ${\rm Re}=10^6$ and the highest-resolution Athena simulation \& Dedalus simulations (e.g., near $(x,z)=(0.9,0.6)$, see Figure~\ref{fig:zoomin}).}\label{fig:unstratified-dye}
\end{figure*}
The flow consists of coherent filaments of unmixed fluid with dye concentration close to zero or one. The filaments twist around the central vortex until they become thin enough to diffuse away. The central vortex stays coherent in all simulations, and exhibits a more gradual dye-concentration gradient than in the filaments. This reflects the smooth velocity and dye initial condition.
\subsubsection{${\rm Re}=10^5$}
Many of the simulations with the same $\text{Re}$ but different resolution look similar by eye. To more quantitatively assess convergence, we calculate the $L_2$ norm of the differences between dye concentration fields in different simulations:
\begin{align}
L_2(c_{\rm X} - c_{\rm Y}) = \left[\int \mathrm{d}V \ (c_{\rm X}-c_{\rm Y})^2\right]^{1/2},
\end{align}
where $c_{\rm X}$ and $c_{\rm Y}$ represent the dye concentration fields in two simulations, X and Y. The Athena and Dedalus grids are different, so we use spectrally accurate techniques to interpolate Dedalus solutions to the Athena grid for direct comparison (Appendix~\ref{sec:interpolate}). We argue in Appendix~\ref{sec:convergence} that all simulations converge to our highest-resolution Dedalus simulations; thus, we assume these simulations are a good approximation to the ``true'' solution.
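A discrete version of this norm is straightforward once both fields
live on a common grid (the spectrally accurate interpolation step
itself is not shown):
\begin{verbatim}
import numpy as np

def L2_diff(cX, cY, Nx, Nz):
    # fields sampled on a common Nx x Nz grid covering the 1 x 2 domain
    dV = (1.0 / Nx) * (2.0 / Nz)
    return np.sqrt(np.sum((cX - cY)**2) * dV)
\end{verbatim}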
\begin{figure}
\includegraphics[width=\columnwidth]{diff_1e5_1_type1-eps-converted-to.pdf}
\caption{$L_2$ norm of dye-concentration errors for $\Delta\rho/\rho_0=0$ and ${\rm Re}=10^5$. We take D2048 as the ``true'' solution (see Appendix~\ref{sec:convergence}). Both Dedalus and Athena exhibit third-order convergence. D512dt is run with half the timestep size of D512. Its error is similar to that of D1024, showing that the higher accuracy of D1024 is mostly due to a smaller timestep rather than higher spatial resolution.}\label{fig:error_1e5_1}
\end{figure}
Figure~\ref{fig:error_1e5_1} shows the $L_2$ norm of the difference between the dye concentration fields of D2048 and the other simulations with ${\rm Re}=10^5$. Because we believe D2048 closely represents the true solution (Appendix~\ref{sec:convergence}), we call this the $L_2$ norm of the error. Solutions from both codes approach D2048 as resolution increases. At late times, A2048 and D1024 have roughly eight times smaller errors than A1024 and D512, respectively. That is, both codes exhibit third-order convergence. This indicates that interpolation, the only third-order part of the algorithm, produces the dominant error in Athena. The Dedalus simulations are spatially resolved, so timestepping, which is also third order, is the dominant error source in Dedalus. We also plot errors from D512dt, which is run with a horizontal resolution of 512 but with half the CFL safety factor. D512dt is almost as accurate as D1024, showing that the higher accuracy of D1024 is mostly due to taking smaller timesteps. There are certain times (most notably near $t=3.5$) when the flow develops smaller structures and extra spatial resolution is required. The errors in quantities other than dye concentration (e.g., density) behave similarly to those shown in Figure~\ref{fig:error_1e5_1}.
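The quoted convergence orders follow directly from these error ratios; a minimal sketch with illustrative numbers:
\begin{verbatim}
import numpy as np

def convergence_order(err_coarse, err_fine, refinement=2.0):
    """Observed order p from err_coarse/err_fine = refinement**p."""
    return np.log(err_coarse / err_fine) / np.log(refinement)

# An error ratio of ~8 under resolution doubling implies third order:
print(convergence_order(8.0, 1.0))  # -> 3.0
\end{verbatim}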
\begin{figure}
\includegraphics[width=\columnwidth]{entropy_1e5_1_type1-eps-converted-to.pdf}
\caption{Volume-integrated dye entropy (equation~\ref{eqn:entropy}) as a function of time for the four simulations with ${\rm Re}=10^5$ shown in Figure~\ref{fig:unstratified-dye}. All simulations are well resolved, so the dye entropies are almost equal.}\label{fig:entropy_1e5_1}
\end{figure}
We calculate the volume-integrated dye entropy for each simulation (equation~\ref{eqn:entropy}). Figure~\ref{fig:entropy_1e5_1} plots the entropy as a function of time. Because all simulations are well resolved, there are no visible differences in the entropy between the different simulations.
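For readers reproducing this diagnostic from saved snapshots, the entropy can be evaluated as below. The symmetric mixing form used in the sketch is an assumption for illustration; the definition actually used in this paper is equation~(\ref{eqn:entropy}).
\begin{verbatim}
import numpy as np

def dye_entropy(c, dx, dz, floor=1e-16):
    """Volume-integrated mixing entropy of a dye field c in [0, 1],
    assuming the entropy density -[c ln c + (1-c) ln(1-c)], which
    vanishes for unmixed fluid and peaks at c = 1/2."""
    c = np.clip(c, floor, 1.0 - floor)
    s = -(c * np.log(c) + (1.0 - c) * np.log(1.0 - c))
    return np.sum(s) * dx * dz
\end{verbatim}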
\subsubsection{${\rm Re}=10^6$}\label{sec:1e6}
The unmixed filaments are much thinner for ${\rm Re}=10^6$ than for ${\rm Re}=10^5$, challenging the codes. Unlike the ${\rm Re}=10^5$ case, some minor visible differences appear between the solutions for ${\rm Re}=10^6$. The lower-resolution simulations do not fully resolve the flow (one such feature is highlighted in Figure~\ref{fig:zoomin}).
\begin{figure}
\includegraphics[width=\columnwidth]{diff_1e6_1_type1-eps-converted-to.pdf}
\caption{$L_2$ norm of dye-concentration errors for $\Delta\rho/\rho_0=0$ and ${\rm Re}=10^6$. A1024 is not well resolved, so its errors follow a different pattern than the other Athena simulations. The errors in A4096 are smaller than the errors in A2048 by a factor of $\approx 6$. The errors in D1024 are smaller than the errors in D512 by a factor of about 100. This demonstrates the fast (exponential) convergence of spectral methods.}\label{fig:error_1e6_1}
\end{figure}
To assess convergence, we again plot the $L_2$ norm of the error in dye concentration with respect to D2048 (Figure~\ref{fig:error_1e6_1}). A1024 has the largest errors of any simulation; at late times its errors become large enough to interact nonlinearly. By contrast, the errors in the higher-resolution Athena simulations stay in the linear regime: the temporal variation of the error is independent of its magnitude. The ratio of errors between the two higher-resolution Athena simulations is about $6$---in between second- and third-order convergence. This suggests that the size of the interpolation errors roughly matches the size of other errors in the code (e.g., from the Riemann problem or timestepping).
The ratio of errors between D512 and D1024 is about 100---much larger than the ratio between successive Athena simulations. D512 (not shown in Figure~\ref{fig:unstratified-dye}) under-resolves the flow and exhibits some low-amplitude Gibbs' ringing. Increasing the resolution from 512 to 1024 essentially eliminates the spatial errors because of the exponential convergence of spectral methods. This allows very large error reductions with only modest resolution changes. The exponential convergence of spectral methods makes convergence practically binary: simulations with Gibbs' ringing are not converged; simulations without Gibbs' ringing very likely are converged.
\begin{figure}
\includegraphics[width=\columnwidth]{entropy_1e6_1_type1-eps-converted-to.pdf}
\caption{Volume-integrated dye entropy (equation~\ref{eqn:entropy}) as a function of time for the five simulations with ${\rm Re}=10^6$ shown in Figure~\ref{fig:unstratified-dye}. The entropies of all simulations are very similar except for A1024; this is another indication that A1024 is not well resolved.}\label{fig:entropy_1e6_1}
\end{figure}
We plot volume-integrated dye entropy for ${\rm Re}=10^6$ in Figure~\ref{fig:entropy_1e6_1}. Like for ${\rm Re}=10^5$, all well-resolved simulations produce similar entropy. However, the under-resolved A1024 produces slightly more entropy. This agrees with the heuristic that extra numerical diffusion leads to excess entropy generation.
\subsubsection{An effective Reynolds number?}
We now describe Athena simulations without any explicit diffusion. An important question is, does the numerical diffusion in Athena act like an explicit diffusion? Put another way, does Athena have an effective Reynolds number at a given resolution for this problem? As we describe below and in section~\ref{sec:drat 2}, the answer to this question is very problem dependent.
\begin{figure}
\includegraphics[width=\columnwidth]{entropy_conv_type1-eps-converted-to.pdf}
\caption{Volume-integrated dye entropy (see section~\ref{sec:setup}) as a function of time with $\Delta\rho/\rho_0=0$, for three resolved simulations with different ${\rm Re}$, as well as three Athena simulations with no explicit diffusion (dashed lines; labeled with N, for no explicit diffusion, and their horizontal resolution). The entropy of N1024 and the simulation with ${\rm Re}=10^6$ are very similar. Their flow fields show minor differences (see Figure~\ref{fig:zoomin}). Note that the entropy decreases with increasing resolution in the simulations without explicit diffusion. This is not the case in simulations with an initial density jump (see Figure~\ref{fig:entropy_1e5_2_nodiff}).}\label{fig:entropy_conv}
\end{figure}
To test this, we plot the converged volume-integrated dye entropy for several Reynolds numbers, along with the volume-integrated dye entropy for Athena simulations without explicit diffusion (Figure~\ref{fig:entropy_conv}). The entropy evolution of N1024 is similar to the entropy evolution for ${\rm Re}=10^6$. This might lead one to think that the effective Reynolds number of this Athena simulation is about $10^6$.
\begin{figure}
\includegraphics[width=\columnwidth]{zoom_in.pdf}
\caption{Snapshots of the dye concentration field between $0.89<x<0.95$ and $0.55<z<0.61$, at $t=6$ for $\Delta\rho/\rho_0=0$. All simulations use Athena, either with ${\rm Re}=10^6$ (left column) or no explicit diffusion (right column). The three rows have different resolutions. This zoom-in of Figure~\ref{fig:unstratified-dye} highlights the differences between simulations at different resolutions---however, for the most part, the simulations look very similar. A2048 \& A4096 represent resolved simulations with ${\rm Re}=10^6$. Although the entropies for N1024 (upper right plot) \& A4096 (lower left plot) track each other (Figure~\ref{fig:entropy_conv}), the dye concentration fields exhibit minor differences.}\label{fig:zoomin}
\end{figure}
However, a closer investigation shows that N1024 and the ${\rm Re}=10^6$ simulation have different dye concentration fields which, by chance, result in similar volume-integrated entropies (Figure~\ref{fig:zoomin}). Instead, the dye concentration field of N1024 resembles that of the (under-resolved) A1024 simulation with ${\rm Re}=10^6$. Figure~\ref{fig:entropy_1e6_1} shows A1024 has a higher entropy than the true ${\rm Re}=10^6$ solution. With the explicit diffusion removed, the flow evolution remains similar to A1024 (and different from the resolved ${\rm Re}=10^6$ solution), but the interfaces between filaments are sharper, which decreases the entropy. The effects of having the incorrect flow field (increasing entropy) and sharper interfaces between filaments (decreasing entropy) happen to cancel out, so the entropy of N1024 is similar to that of ${\rm Re}=10^6$.
Although we have highlighted the differences between N1024 and the converged solutions with ${\rm Re} = 10^6$, it is worth reiterating that the two solutions are in fact remarkably similar. This shows that N1024 roughly has an effective Reynolds number of $10^6$. In detail, however, the remaining modest differences between N1024 and the ${\rm Re}=10^6$ solution demonstrate that the numerical dissipation in Athena is not exactly equivalent to physical dissipation via viscosity and thermal conduction.
One difficulty with the notion of an effective Reynolds number is that it is extremely problem dependent, even at fixed resolution. In the next section, we introduce a small (by astrophysical standards) density jump into the initial condition. This completely changes the problem by introducing secondary instabilities which enhance mixing, producing very clear differences between resolved simulations and Athena simulations without explicit diffusion (Figure~\ref{fig:nodiff}). For the constant-density problem described here, omitting diffusion produces less entropy. Including a density jump reverses this trend: simulations with only numerical diffusion undergo {\it more} mixing than simulations with explicit diffusion. Although assigning an effective Reynolds number to Athena simulations without explicit diffusion may be reasonably accurate for the constant-initial-density problem, this does not carry over to the problem with an initial density jump.
\begin{figure*}
\includegraphics[width=\textwidth]{drat_2_snapshots_c.pdf}
\caption{Snapshots of the dye concentration field in several simulations with $\Delta\rho/\rho_0=1$ and ${\rm Re}=10^5$. Each row corresponds to a different time. The low-resolution Athena simulations suffer from a secondary instability (seen at $t=4$) in the middle of the vortex, which is not present in the Dedalus simulations nor A16384. This causes substantial differences at later times. A16384 and both Dedalus simulations stay very similar at late times, although small differences develop from chaos (see section~\ref{sec:chaos}).}\label{fig:stratified-dye}
\end{figure*}
\subsection{Simulations with a density jump ($\Delta\rho/\rho_0 = 1$)}\label{sec:drat 2}
Both the qualitative features of the flow and the convergence properties of the simulations change dramatically once we introduce an initial density jump ($\Delta \rho/\rho_0\neq 0$). Unlike the unstratified case, secondary instabilities of the filaments produce small-scale structures in the flow. These secondary instabilities, and the resulting small-scale features, depend on the resolution and the code used. As a result, simulations with a nonzero density jump require far more computational resources than the unstratified simulations presented in the previous section. We limit the simulations with explicit diffusion to ${\rm Re}=10^5$---our finite computing budget precludes converged solutions for ${\rm Re}=10^6$. The largest simulations required roughly $10^6$ core-hours.
Figure~\ref{fig:stratified-dye} shows the dye concentration for different simulations at different times. In both Dedalus simulations, and the highest-resolution Athena simulation, the outer filaments (i.e., those outside the central vortex) become unstable to a sausage-like mode (see Figure~\ref{fig:schematic} for an example). Lower-resolution Athena simulations also undergo a separate instability of the inner filaments of the vortex. We refer to these two instabilities as the outer-filament instability (OFI) and the inner-vortex instability (IVI) (see Figure~\ref{fig:schematic} for examples). These instabilities are similar to the baroclinic secondary instabilities discussed in \citet{Reinaud2000,Fontane2008}. The competition between these two instabilities plays a crucial role in the evolution of the system.
\begin{figure}
\includegraphics[width=\columnwidth]{diff_1e5_2_type1-eps-converted-to.pdf}
\caption{$L_2$ norm of dye-concentration errors for $\Delta\rho/\rho_0=1$ and ${\rm Re}=10^5$. D3072 and D4096 are the closest pair of simulations, suggesting that D4096 is a good approximation to the true solution. All Athena simulations except A16384 diverge away from D4096 exponentially with a rate of 8, suggesting the growth rate of the inner vortex instability (see Figure~\ref{fig:schematic}) is also 8. The errors in the lower-resolution Dedalus simulations and A16384 grow exponentially with a rate of about 2--3. We interpret this divergence as due to chaos (see section~\ref{sec:chaos}). D3072 has errors smaller than D2048 by a factor of $\approx 4$, consistent with the third-order convergence set by our choice of timestepping algorithm.}\label{fig:diff_1e5_2}
\end{figure}
We plot the $L_2$ norm of the error in dye concentration with respect to D4096 in Figure~\ref{fig:diff_1e5_2}. As described in Appendix~\ref{sec:convergence}, we believe D4096 approximates the true solution. The differences between D3072 and D4096 are smaller than the differences between any other pair of simulations. At later times, even the errors between D3072 and D4096 become large. In section~\ref{sec:chaos} we attribute this late-time behavior to chaos.
Figure~\ref{fig:diff_1e5_2} shows that at early times, the low-resolution Athena simulations diverge exponentially from D4096 with an inferred growth rate of about 8. The IVI produces this divergence. Furthermore, the four Athena simulations with resolutions between 1024 and 8192 are all equally spaced horizontally in Figure~\ref{fig:diff_1e5_2}. The horizontal-axis spacing is $\log{2}/2$ time units. This suggests that the same instability exists independent of resolution, but the amplitude of the perturbation that seeds the instability drops by 16 when the resolution doubles. Though numerical errors seed the growth, the constant growth rate of the IVI suggests it is a physical instability (we demonstrate this in section~\ref{sec:IVI}).
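Both numbers can be extracted from the error curves as in the following sketch (illustrative values; the fit must be restricted to the window of exponential growth):
\begin{verbatim}
import numpy as np

def fit_growth_rate(t, err):
    """Least-squares slope of log(err) versus t over a window in
    which the error grows exponentially."""
    return np.polyfit(t, np.log(err), 1)[0]

def seed_ratio(rate, dt_shift):
    """If doubling the resolution shifts the error curve right by
    dt_shift, the seed amplitude drops by exp(rate * dt_shift)."""
    return np.exp(rate * dt_shift)

# A rate of ~8 and a shift of log(2)/2 per doubling gives 16:
print(seed_ratio(8.0, np.log(2.0) / 2.0))  # -> 16.0
\end{verbatim}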
The IVI is a robust feature of low-resolution Athena simulations. Using the Roe integrator, second-order reconstruction, or shifting the initial condition by half a grid point does not affect the development of this instability (as confirmed using the $L_2$ error), but can cause visible differences in the flow evolution. This demonstrates that grid-scale errors drive the IVI. Using first-order reconstruction suppresses the IVI, but the enhanced numerical diffusion causes large errors. We have also tried adding low-amplitude (up to $10^{-4}$) white noise to the initial density or pressure. These do not cause any visible changes to the IVI. The flow forgets some of the detailed information of its initial condition (see section~\ref{sec:perturb}).
The highest-resolution Athena simulation (A16384) does not develop the IVI. This demonstrates that the initial condition is in fact stable to the IVI; the problem is well-posed. Rather, numerical errors seed the IVI at some later time, during the evolution of the flow. Although some numerical errors are still inevitably present, A16384 does not develop the IVI because the ``base state'' of spiralling filaments of unmixed fluid also succumbs to the OFI. In this case, the OFI disrupts the inner vortex before the IVI grows to large amplitudes (see Figure~\ref{fig:schematic}).
The absence of the IVI is a robust feature of our Dedalus simulations. We confirmed the stability of the base state by re-running D2048 with low-amplitude white noise added to the initial condition; we also re-initialized D2048 from the Athena initial condition. This introduces small but non-random grid-representation differences (section~\ref{sec:IVI}). In both cases, we recover the same evolution. However, we can trigger the IVI in Dedalus with a large ($\sim 10\%$ by energy) perturbation to the initial condition (section~\ref{sec:perturb}).
\begin{figure*}
\includegraphics[width=\textwidth]{schematic.pdf}
\caption{Schematic phase-space diagram for $\Delta\rho/\rho_0=0$ (left) and $\Delta\rho/\rho_0=1$ (right). For constant initial density, the system has a stable state with ever-narrowing spiral filaments. We hypothesize that there is an initial condition IC' (right panel) leading to a similar spiral state for $\Delta\rho/\rho_0=1$. But this state is now unstable to the outer filament instability (OFI) and the inner vortex instability (IVI). Our chosen initial condition's (IC) trajectory (solid black line) approaches the spiral state, but becomes unstable to the OFI. Errors introduced by the numerical hydrodynamics may cause deviations in the trajectory leading to the IVI (dashed grey lines).}\label{fig:schematic}
\end{figure*}
Figure~\ref{fig:schematic} summarizes the relation between the two secondary instabilities in this problem. For a constant initial density (left panel), the system evolves toward a stable state characterized by spiraling filaments. Small differences in initial conditions, integration algorithms, presence of dissipation, etc., cause only minor changes in the evolution. We hypothesize that a similar spiral state also exists for $\Delta\rho/\rho_0=1$, and that it could be reached from some initial condition IC'. However, our simulations demonstrate that the spiral state is now unstable. Thus, small errors lead to the large differences in evolution.
Small perturbations to the hypothetical IC' of Figure~\ref{fig:schematic} would lead to trajectories that either develop the OFI or the IVI. However, our chosen initial condition, IC, is squarely in the attracting basin of the OFI. Thus, infinitesimal perturbations to IC will still lead to the OFI. Errors introduced by numerical hydrodynamics cause the codes to not follow the correct trajectory (solid black line). Certain types of errors can cause trajectories to diverge from the correct solution, sometimes toward the IVI (dashed grey lines). Alternatively, sufficiently large initial perturbations can also knock the system into the attracting basin of the IVI (section~\ref{sec:perturb}).
We note that the phase space for this problem is very high-dimensional, and that the outer filament instability and inner vortex instability represent two (likely non-parallel) unstable directions of the spiral state's stable manifold. Thus, both instabilities can act simultaneously, which sometimes occurs in simulations.
\begin{figure}
\includegraphics[width=\columnwidth]{entropy_1e5_2_type1-eps-converted-to.pdf}
\caption{Volume-integrated dye entropy (equation~\ref{eqn:entropy}) as a function of time for simulations with $\Delta\rho/\rho_0 = 1$ and ${\rm Re}=10^5$. The top panel plots the entropy, and the bottom panel plots the entropy deviation from D4096. The entropies of all the simulations diverge from that of D4096, but the less-accurate simulations diverge faster. For each Athena simulation, the entropy initially increases faster than in D4096 when it starts to diverge. At later times, the entropy sometimes drops below that of D4096.}\label{fig:entropy_1e5_2}
\end{figure}
Figure~\ref{fig:entropy_1e5_2} shows the volume-integrated dye entropy of the simulations shown in Figure~\ref{fig:stratified-dye}. The entropy follows a similar evolution in every simulation. To visualize the small deviations, the bottom panel shows the entropy with the reference solution D4096 subtracted off. All the simulations diverge from D4096, but more accurate simulations diverge later, with D2048 and A16384 developing small differences later than any other simulation. The relation between entropy and resolution is more complicated for $\Delta \rho/\rho_0 = 1$ than for $\Delta \rho/\rho_0 = 0$ (Figures~\ref{fig:entropy_1e5_1} \& \ref{fig:entropy_1e6_1}).
\begin{figure}
\includegraphics[width=\columnwidth]{fields.pdf}
\caption{Plots of dye concentration ($c$), mass density ($\rho$), the divergence of the velocity ($\vec{\nabla}\cdot\vec{u}$), and the vorticity ($\omega=\vec{e}_z\cdot\vec{\nabla}\times\vec{u}$) in D4096 with $\Delta\rho/\rho_0 =1$, ${\rm Re}=10^5$ at $t=6$. The divergence of the velocity and the vorticity are measured in units of $u_{\rm flow}/L_x$. The dye concentration and mass density fields are almost inverses of each other. The divergence of the velocity is largest at the interfaces between filaments, whereas the vorticity shows the location of vortices.}\label{fig:fields}
\end{figure}
Apart from the dye concentration field, many of the other flow quantities follow similar patterns. Figure~\ref{fig:fields} shows several quantities from D4096 at $t=6$. The mass density is almost the inverse of the dye concentration. This indicates that compression is not an important part of the large-scale dynamics. Lacking mass diffusion, the density shows sharper gradients than the concentration field. Temperature diffusion and rapid sound waves regularize the density evolution. These effects limit large temperature gradients, and keep the flow in local pressure equilibrium.
The velocity divergence field is characterized by a large-scale quadrupole centered on the vortex, and large-amplitude, small-scale features near the boundaries of filaments. The most prominent feature of the vorticity field is the central vortex, which is a remnant of the initial shear. Small-scale vortex sheets and filaments perhaps result from the incomplete roll-up of the initial condition due to secondary instabilities.
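For readers reproducing Figure~\ref{fig:fields} from saved snapshots, the derived fields can be approximated with centered differences on the periodic grid (Dedalus itself evaluates derivatives spectrally); an array layout \texttt{[ix, iz]} is assumed.
\begin{verbatim}
import numpy as np

def div_and_vorticity(ux, uz, dx, dz):
    """Velocity divergence and the out-of-plane curl component,
    via second-order centered differences on a periodic grid."""
    dux_dx = (np.roll(ux, -1, 0) - np.roll(ux, 1, 0)) / (2 * dx)
    duz_dz = (np.roll(uz, -1, 1) - np.roll(uz, 1, 1)) / (2 * dz)
    duz_dx = (np.roll(uz, -1, 0) - np.roll(uz, 1, 0)) / (2 * dx)
    dux_dz = (np.roll(ux, -1, 1) - np.roll(ux, 1, 1)) / (2 * dz)
    return dux_dx + duz_dz, duz_dx - dux_dz
\end{verbatim}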
Throughout this paper, we compare different solutions by calculating the $L_2$ norm of the difference between dye concentration fields. We have made similar comparisons between simulations with ${\rm Re}=10^5$ and $\Delta\rho/\rho_0=1$ using the $L_1$ norm of the difference between dye concentration fields, and using the $L_2$ norm of the difference between the three other fields shown in Figure~\ref{fig:fields}. We find the results to be qualitatively similar in all cases. This is expected given the similarity between the fields.
\subsubsection{Inner-vortex instability}\label{sec:IVI}
\begin{figure}
\includegraphics[width=\columnwidth]{IVI.pdf}
\caption{Snapshots of dye concentration field for ${\rm Re}=10^5$ and $\Delta\rho/\rho_0=1$. D2048r is a Dedalus simulation restarted with the A2048 output at $t=3.2$. At this time, the inner vortex instability is still in the linear phase, so there are no visible differences between the three simulations. At $t=4$, the IVI is very nonlinear, producing large differences between D2048 and A2048. This instability also takes place in D2048r, and the dye concentration fields of A2048 and D2048r are nearly identical. This demonstrates that the IVI is physical, but is seeded by errors in the lower-resolution Athena simulations that are not present in the Dedalus simulations or the highest-resolution Athena simulations.}\label{fig:IVI}
\end{figure}
To determine the origin (physical vs.\ numerical) of the IVI, we initialize a Dedalus simulation with horizontal resolution 2048 using the output from A2048 at $t=3.2$. We call this simulation D2048r. Figure~\ref{fig:diff_1e5_2} shows that A2048 is still in the linear phase of the IVI at this time. In Figure~\ref{fig:IVI}, we plot the dye concentration field at $t=3.2$ and $t=4$ for D2048, A2048, and D2048r. At $t=3.2$, the simulations all look the same. However, the instability becomes nonlinear by $t=4$, producing large changes in the dye concentration field. D2048 shows no signs of the IVI. However, D2048r looks almost identical to A2048. The $L_2$ norm of the difference of dye concentration fields between D2048r and D4096 almost exactly follows the norm of the difference between A2048 and D4096.
This shows that the IVI is a physical instability of this system. It is not seen in the Dedalus simulations or the highest-resolution Athena simulation because the initial condition does not project sufficiently onto its unstable modes. Errors in the low-resolution Athena simulations incorrectly excite perturbations that are unstable to the IVI. The Dedalus simulations, and the highest-resolution Athena simulation, suppress noise well enough that the instability never becomes nonlinear.
In our phase-space diagram (Figure~\ref{fig:schematic}), the lower-resolution Athena simulations do not properly follow the black line, and instead meander to the right, becoming unstable to the IVI. D2048r is initialized to the right of IC', so it develops the IVI just like A2048.
As a final test, we started a Dedalus simulation from the output of an Athena simulation at $t=0$. This tests whether dynamical evolution causes the IVI, rather than small differences in the implementation of the initial conditions. Although this introduced root-mean-square differences in the horizontal velocity of $\approx 4\times 10^{-4}$ at $t=0$, the Dedalus simulation did not develop the IVI.
\subsubsection{Chaos}\label{sec:chaos}
At around $t\approx4$, D2048, D3072, and A16384 start to diverge exponentially from D4096 (Figure~\ref{fig:diff_1e5_2}). The differences increase with a growth rate of about 2--3, much lower than the growth rate of 8 of the IVI found in the lower-resolution Athena simulations. We interpret the differences between these simulations as due to chaos. The faster divergence discussed in section~\ref{sec:IVI} is inconsistent with chaos, since it is resolution-dependent and only seen in the low-resolution Athena simulations.
A system is chaotic if small differences between initial conditions grow exponentially in time. To confirm the system is chaotic, we calculate a ``local-in-time'' Lyapunov exponent (i.e., growth rate). We pick a time and simulation, and look for linearly unstable perturbations. This requires solving an eigenvalue problem. The largest unstable eigenvalue is the Lyapunov exponent. Appendix~\ref{sec:eigenvalue} details this procedure.
This calculation does not include base-state time evolution (i.e., we consider a ``local-in-time'' calculation). The most unstable eigenvector at a time $t_0$ might differ significantly from the most unstable eigenvector at a nearby time $t_0+\Delta t$. Then it would be impossible for perturbations to grow at the Lyapunov exponent over times $\sim \Delta t$. We interpret our ``local-in-time'' Lyapunov exponents as an upper bound on the growth rate of perturbations due to chaos (up to logarithmic corrections), and as a heuristic measure of the strength of chaos in this problem.
We calculated the Lyapunov exponent for D2048 with ${\rm Re}=10^5$ and $\Delta\rho/\rho_0=1$ at two times, $t=2.5$ and $t=4.5$. We find Lyapunov exponents of $\lambda_{t=2.5}\approx 2.1$, and $\lambda_{t=4.5}\approx 3.7$. Thus, the exponential growth of differences between either D2048, D3072, or A16384 and D4096 is consistent with chaos. However, the growth rate of the differences between the lower-resolution Athena simulations and D4096 is much larger than the Lyapunov exponent. These differences are inconsistent with chaos, instead being due to the IVI (section~\ref{sec:IVI}).
The simulations with $\Delta\rho/\rho_0=0$ do not appear to diverge from one another in the same way. The highest-resolution Dedalus simulations converge at late times. We also calculate the Lyapunov exponent for D1024 with ${\rm Re}=10^6$ and $\Delta\rho/\rho_0=0$ at $t=6$. We find $\lambda_{t=6}\approx 0.4$. Although this seems inconsistent with our finding that the Dedalus simulations approach each other with time, recall that this ``local-in-time'' calculation gives an upper bound on the growth rate due to chaos (up to logarithmic corrections). Because the turnover time is 1, a Lyapunov exponent less than 1 suggests that small perturbations cannot grow before the background state changes substantially. To show definitively that the $\Delta\rho/\rho_0=0$ solution is not chaotic, one should maximize the amplification of an initial perturbation over several turnover times, e.g., between $t=6$ and $t=9$ (for instance, using the adjoint method, e.g., \citealt{kerswell14}).
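A cheaper, though less rigorous, alternative to the eigenvalue calculation of Appendix~\ref{sec:eigenvalue} is a twin-simulation estimate: evolve a slightly perturbed copy of a snapshot alongside the original and measure the e-folding rate of their separation. The sketch below is illustrative only; \texttt{advance} is a placeholder for one timestep of the hydrodynamic solver.
\begin{verbatim}
import numpy as np

def local_lyapunov(state, advance, eps=1e-8, dt=1e-3, nsteps=1000):
    """Benettin-style estimate of a local Lyapunov exponent.
    advance(state, dt) stands in for the solver timestep."""
    rng = np.random.default_rng(0)
    twin = state + eps * rng.standard_normal(state.shape)
    log_growth = 0.0
    for _ in range(nsteps):
        state = advance(state, dt)
        twin = advance(twin, dt)
        sep = np.sqrt(np.sum((twin - state)**2))
        log_growth += np.log(sep / eps)
        # rescale the separation to stay in the linear regime
        twin = state + (twin - state) * (eps / sep)
    return log_growth / (nsteps * dt)
\end{verbatim}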
\subsubsection{Initial condition}\label{sec:perturb}
Although our chosen initial condition does not lead to the IVI for converged simulations, one might wonder if other initial conditions do lead to this instability. We performed several Dedalus simulations that add low-amplitude white noise to the initial condition (e.g., see section~\ref{sec:IVI}). None of these simulations develop the IVI.
We now consider a simulation in which we include perturbations to the initial condition with order unity amplitude and large wavelengths.
Equations~\ref{eqn:ICs} still hold for all quantities except the vertical velocity, which we now take to be
\begin{align}\label{eqn:perturb vz}
u_z &= A \left(\sin(2\pi{}x)+f(x)\right) \times \left[\exp\left(-\frac{(z-z_1)^2}{\sigma^2}\right) + \exp\left(-\frac{(z-z_2)^2}{\sigma^2}\right)\right],
\end{align}
where $f(x)$ comprises Fourier modes two through ten. Each mode receives a random phase and a random amplitude drawn uniformly from $[-0.05,0.05]$. Thus, $f(x)$ represents about a 10\% perturbation to the single-sine-mode initial condition.
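A minimal sketch of how such a perturbation can be generated (the random draws shown are illustrative, not those used in D2048p):
\begin{verbatim}
import numpy as np

def multimode_perturbation(x, seed=0):
    """f(x): Fourier modes two through ten, each with a random
    phase and an amplitude drawn uniformly from [-0.05, 0.05]."""
    rng = np.random.default_rng(seed)
    f = np.zeros_like(x)
    for k in range(2, 11):
        amp = rng.uniform(-0.05, 0.05)
        phase = rng.uniform(0.0, 2.0 * np.pi)
        f += amp * np.sin(2.0 * np.pi * k * x + phase)
    return f
\end{verbatim}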
\begin{figure}
\includegraphics[width=\columnwidth]{perturb.pdf}
\caption{Snapshots of dye concentration field for ${\rm Re}=10^5$ and $\Delta\rho/\rho_0=1$. D2048p is a Dedalus simulation with an initial vertical velocity that includes power over a range of Fourier modes (equation~\ref{eqn:perturb vz}), in contrast to the single-mode initial conditions focused on throughout the rest of this paper. At $t=2$ all solutions look the same, indicating that the longest-wavelength mode has the largest growth rate. At $t=4$, D2048p has developed the IVI, as well as other deviations from the Dedalus \& Athena simulations away from the vortex.}\label{fig:IC}
\end{figure}
Figure~\ref{fig:IC} shows snapshots of the dye concentration field for this simulation, denoted D2048p, along with D2048 and A2048 for comparison. At $t=2$, all three simulations look identical. This indicates that the lowest-wavenumber Fourier mode grows faster than the other modes included in our initial condition.
By $t=4$ the perturbations from the other Fourier modes produce significant changes to the dye concentration field. D2048p now displays the IVI. In addition, large differences appear away from the vortex, where the Dedalus and Athena simulations look almost identical. Because the new initial condition does not respect the shift-and-reflect symmetry of the problem, the two half domains have different features (we only show the bottom half).
\subsubsection{Simulations without explicit diffusion}\label{sec:not explicit}
\begin{figure}
\includegraphics[width=\columnwidth]{nodiff.pdf}
\caption{Snapshots of dye concentration field for $\Delta\rho/\rho_0=1$. N4096 is an Athena simulation with no explicit diffusion. For comparison, we also plot D4096 (${\rm Re}=10^5$). Secondary instabilities occur very early at many locations in N4096. By $t=6$, the simulation has broken its initial symmetry (we only plot the bottom half). The secondary instabilities produce significant mixing, leading to greater entropy generation than in simulations with explicit diffusion (Figure~\ref{fig:entropy_1e5_2_nodiff}).}\label{fig:nodiff}
\end{figure}
Lastly, Figure~\ref{fig:nodiff} compares the resolved simulations at ${\rm Re}=10^5$ with an Athena simulation with horizontal resolution 4096 without explicit diffusion (N4096). The simulation without explicit diffusion exhibits many secondary instabilities early in the evolution (between $t=2$ and $t=4$). Unlike the lower-resolution simulations at ${\rm Re}=10^5$, the secondary instability is not limited to the IVI. Instead, instabilities grow throughout the domain at locations of strong shear.
\begin{figure}
\includegraphics[width=\columnwidth]{entropy_1e5_2_nodiff_type1-eps-converted-to.pdf}
\caption{Volume-integrated dye entropy (equation~\ref{eqn:entropy}) as a function of time for simulations with $\Delta\rho/\rho_0 = 1$. D4096 is run at ${\rm Re}=10^5$, and all simulations labeled with N are run with Athena with no explicit diffusion. At early times, the highest-resolution runs without explicit diffusion have the lowest entropy. However, at around $t=5$, the lower-resolution runs without explicit diffusion have lower entropy. D4096 has the lowest entropy at late times. This indicates that simulations without explicit diffusion have {\it greater} numerical mixing compared to simulations with explicit diffusion. This becomes more prominent as the resolution increases. By contrast, in the simulations without an initial density jump, explicit diffusion leads to more mixing, and for simulations without explicit diffusion, increasing resolution decreases mixing (Figure~\ref{fig:entropy_conv}).}\label{fig:entropy_1e5_2_nodiff}
\end{figure}
These instabilities shred apart the vortex, leading to vigorous mixing. Figure~\ref{fig:entropy_1e5_2_nodiff} compares the volume-integrated dye entropy of Athena simulations with no explicit diffusion at different resolutions with D4096. Simulations without explicit diffusion produce almost no entropy until $t\approx 3.5$. At this time, the secondary instabilities start to cause diffusion at the grid scale. This generates entropy more rapidly than the explicit diffusion of D4096 (or any of the other simulations with explicit diffusion). For $t>5$, the entropy of the simulations without explicit diffusion is larger than the entropy of D4096. Paradoxically, the entropy increases as the resolution increases. Our expectation is that the entropy generation should decrease as ${\rm Re}$ increases. However, we do not have any resolved simulations with higher ${\rm Re}$ for comparison, so we cannot present evidence that this additional mixing is spurious. But this problem shows that introducing explicit diffusion in Athena can {\it decrease} the diffusion in the simulation.
\section{Conclusion}\label{sec:conclusion}
This paper describes several converged, nonlinear solutions to the Kelvin-Helmholtz (KH) problem. By using a smooth initial condition and explicit diffusion, we demonstrate that solutions remain virtually identical (for constant initial density) or very similar (for an initial density jump of one) with resolution above a certain threshold. This permits a well-defined reference solution for this problem, against which errors can be accurately estimated. We verify this using two codes, Dedalus and Athena, with very different numerical methods (pseudo-spectral and Godunov, respectively). Previous KH test problems either did not use smooth initial conditions, or did not include explicit diffusion. Absent these two choices, the KH problem cannot be quantitatively compared between codes because the solutions depend sensitively on grid-scale errors and do not converge with increasing resolution.
We first study simulations with a constant initial density (section~\ref{subsec:unstratified-results}). We find converged solutions to this relatively easy problem with Reynolds numbers (${\rm Re}$) as high as $10^6$.
The solution is characterized by the continual roll-up of the initial vortex sheet, producing alternating filaments of unmixed material (Figure~\ref{fig:unstratified-dye}). We find third-order convergence in both Dedalus \& Athena for simulations with ${\rm Re}=10^5$ (Figure~\ref{fig:error_1e5_1}), and better than second-order convergence in both codes for simulations with ${\rm Re}=10^6$ (Figure~\ref{fig:error_1e6_1}).
To quantify mixing in the simulations, we calculate the volume-integrated dye entropy as a function of time for several Reynolds numbers, as well as for Athena simulations without explicit diffusion (Figure~\ref{fig:entropy_conv}). As the Reynolds number increases, the entropy generation decreases monotonically. Similarly, as the resolution of Athena simulations without explicit diffusion increases, the entropy generation also decreases monotonically. The entropy of one Athena simulation without explicit diffusion is very close to the entropy of the ${\rm Re}=10^6$ simulation, although the solutions show minor differences (Figure~\ref{fig:zoomin}). These small differences indicate that the numerical diffusion in Athena does not act precisely as a physical diffusion from viscosity and/or thermal conductivity. For certain applications however, assigning an effective Reynolds number to ideal fluid simulations may suffice. This does not appear to be the case for KH simulations with density jumps, as we now discuss.
Including an initial density gradient aligned with the velocity gradient makes the problem much richer (section~\ref{sec:drat 2}). The rolled-up vortex-sheet filaments become unstable in at least two ways: the inner vortex instability and/or the outer filament instability (Figures~\ref{fig:stratified-dye} \& \ref{fig:schematic}). The Dedalus simulations and highest-resolution Athena simulation only exhibit the outer filament instability, whereas the lower-resolution Athena simulations also exhibit the inner vortex instability. Adding small-amplitude noise to the initial condition does not produce the inner vortex instability in Dedalus, demonstrating that our chosen initial condition is not susceptible to this instability; instead, numerical errors seed the inner vortex instability throughout the evolution of the Athena simulations. It is not surprising that Dedalus is more accurate than Athena for this smooth flow---the Godunov method is designed for simulating flows with shocks. However, it is not well appreciated that the pseudo-spectral method is able to solve the full Navier-Stokes equations at Mach numbers of order unity.
We use the $L_2$ norm to quantify the difference between dye concentration fields of different simulations, and find the inner vortex instability grows at a rate of $\approx 8$, independent of resolution (Figure~\ref{fig:diff_1e5_2}). Furthermore, a Dedalus simulation initialized with an Athena state in the linear phase of the inner vortex instability develops the instability in the same way as Athena (Figure~\ref{fig:IVI}), demonstrating the physical, rather than numerical, nature of the instability.
Adding a large ($\sim 10\%$ by energy) perturbation with multiple Fourier modes to the initial velocity in Dedalus can seed the inner vortex instability (section~\ref{sec:perturb}). Although this suggests that the inner vortex instability is possibly generic for KH instabilities in astrophysics, we believe the single-mode initial condition discussed throughout the rest of this paper is still particularly valuable for a test problem. Because small numerical errors can produce large differences in the solution, one can assess by eye the fidelity with which a code is solving the fluid equations. This KH test problem is difficult, which we believe makes it interesting. In contrast, an unresolved KH problem is not a good test of fluid codes, because noise due to numerical errors can masquerade as higher-fidelity solutions.
The Dedalus simulations and highest-resolution Athena simulation also diverge from each other exponentially at late times, but with a much smaller growth rate $\approx 2-3$. In section~\ref{sec:chaos} we calculate the maximum Lyapunov exponent of the flow, and argue that chaos drives the divergence. The Lyapunov exponent represents the maximum possible rate of divergence of solutions due to chaos (up to logarithmic corrections). At late times when the Dedalus simulations and highest-resolution Athena simulation begin to diverge, the Lyapunov exponent is $\approx 3.7$, so the divergence we see is consistent with chaos. Because the system is chaotic, our solutions are not as accurate as the solutions with constant initial density. We still find power-law convergence in the Dedalus simulations at fixed time (Figure~\ref{fig:diff_1e5_2}). However, the amount of time that a solution maintains a fixed level of accuracy increases only logarithmically with resolution.
For the initial condition with a density jump, we also compare a high-resolution Athena simulation without explicit diffusion to our converged (within the limits of chaos) simulations with ${\rm Re}=10^5$. Secondary instabilities pervade the simulation without explicit diffusion (Figure~\ref{fig:nodiff}). The secondary instabilities cause enhanced mixing, and at late times, the simulations without explicit diffusion have higher entropy than the ${\rm Re}=10^5$ simulation (Figure~\ref{fig:entropy_1e5_2_nodiff}). Introducing explicit diffusion into Athena can {\it reduce} the diffusion in the simulation. For this reason, we hypothesize (but cannot prove) that this small-scale structure is likely unphysical, and would not develop for any reasonable initial condition or Reynolds number. This highlights that a solution with more small-scale structure is not necessarily better.
Although we only describe simulations with an initial density ratio of one, we have experimented with larger initial density ratios (e.g., 4). Preliminary investigation suggests that vigorous secondary instabilities become increasingly prominent as the density ratio increases, greatly enhancing mixing. Though it is common practice to leave out explicit dissipation to model the high Reynolds numbers relevant in astrophysics, our results suggest that including explicit diffusion may provide a very effective way to reduce diffusion in astrophysical simulations with very large density ratios. We stress that such large density ratios are common in astrophysical problems such as star formation or galaxy formation. Our results demonstrate just how subtle and computationally challenging it is to correctly capture mixing in these environments (even restricting ourselves to hydrodynamics, which is likely a poor approximation).
Many questions are left unanswered in this paper. It is unclear how the Athena algorithm seeds the inner vortex instability. We did not search for the critical perturbation amplitude that causes a Dedalus simulation to exhibit the inner vortex instability. Because of limited computer time, we did not find converged Dedalus or Athena simulations with $\Delta \rho/\rho_{0} = 1$ and ${\rm Re}=10^6$. Perhaps, contrary to expectation, increasing the Reynolds number of the system does increase the entropy production, as found in the Athena simulations without explicit diffusion. Future work should also test the Galilean invariance of these simulations, test initial conditions with an interface at an angle to the grid, and extend this analysis to larger density ratios.
We hope this study provides a well-posed test problem for future codes used in astrophysics. It would be valuable to carry out this test problem with unstructured/meshless methods \citep[e.g.,][]{Springel2010,Duffell11,Hopkins2015} to understand their convergence properties on this challenging problem. Toward this goal, we include the reference solutions to these KH problems in the supplementary material accompanying this paper. Introducing smooth initial conditions and explicit diffusion allows us to calculate a converged reference solution and compare between codes. The competing secondary instabilities for initial conditions with a density jump of one provides a stringent test of the fidelity with which a code solves the Navier-Stokes equations, making it a great test problem.
\section*{Acknowledgments}
\noindent{}The authors would like to thank Ramesh Narayan, Phil Hopkins, Paul Duffell, and C\'edric Beaume for helpful discussions. D.L. is supported by the Hertz Foundation. M.M. was supported by the National Science Foundation grant AST-1312651. E.Q. is supported in part by a Simons Investigator Award from the Simons Foundation. G.M.V acknowledges support from the Australian Research Council, project number DE140101960. J.S.O. is supported by a Provost's Research Fellowship from Farmingdale State College. The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC resources that have contributed to the research results reported within this paper.
This work used the Extreme Science and Engineering Discovery Environment (XSEDE allocations TG-AST140039, TG-AST140047, and TG-AST140083), which is supported by National Science Foundation grant number ACI-1053575. Resources supporting this work were provided by the NASA High-End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center and the NASA Center for Climate Simulation (NCCS) at Goddard Space Flight Center. This project was supported by NASA under
TCAN grant number NNX14AB53G.
Recently Li and coworkers \cite{Li-PRL-07} reported new results on transport
properties of the stripe phase in La$_{\text{1.875}}$Ba$_{\text{0.125}}$CuO$%
_{\text{4}}$. They found that 2-dimensional superconducting (SC)
fluctuations appear at an onset temperature
T$_{\text{c}}^{\text{2D}}$(=42K) which greatly exceeds the
critical temperature for 3-dimensional SC order, T$_{\text{c}} $
(=4K). These results contradicted the long standing belief that
the onset of SC behavior was suppressed to very low temperatures
in the presence of the static spin and charge density wave (SDW
and CDW hereafter) or more precisely spin and charge stripe
orderings. Li \textit{et al.} \cite{Li-PRL-07} found strong
evidence for a Berezinskii-Kosterlitz-Thouless transition (BKT) at
T$_{\text{BKT}}$ (=16K). This implies that the Josephson coupling
between the CuO$_{\text{2}}$ planes strictly vanishes for
T$>$T$_{\text{c}}$. Shortly afterwards Berg \textit{et al.}
\cite{Berg-PRL-07} proposed that the strict interplanar decoupling
arises because the planar superconductivity contains a periodic
array of lines of $\pi $-phase shift which rotate through $\pi /2$
up the c-axis together with the spin and charge stripe ordering in
the low temperature tetragonal (LTT) phase. SDW order also appears
at the same onset temperature, $T_{c}^{2D}$ in zero magnetic field
and this temperature is clearly separated from the crystallographic
transition temperature $T_{co}$ separating the low temperature
orthorhombic (LTO) and LTT phases. In this material the LTT phase
shows a superlattice ordering at all temperatures below $T_{co}
$. \cite{Kim-PRB-08} Note however recent experiments by Fink \textit{et al.} on La$_{\text{%
1.8-x}}$Eu$_{\text{0.2}}$Sr$_{\text{x}}$CuO$_{4}$
\cite{Fink-arXiv-08} found different temperatures with the
superlattice onset below the crystallographic phase transition
temperature. Earlier studies by Lee \textit{et al.} on
superoxygenated La$_{\text{2}}$CuO$_{\text{4}}$
\cite{YSLee-PRB-99} found the same onset temperature for both SC
and SDW order (T=42K). They also noted that signs of a CDW
superlattice at a higher temperature (T=55K) have been reported.
These temperatures coincide with
the values found by Li \textit{et al.} in La$_{\text{1.875}}$Ba$_{\text{0.125%
}}$CuO$_{\text{4}}$ which suggests that Lee \textit{et al.} were observing
a similar stripe order with coexisting SDW and SC. In this case, however, the SC order is 3-dimensional, consistent with the absence of $\pi/2$-rotations in the crystal
structure. These experiments lead us to conclude that in the presence of a CDW superlattice, coexisting SDW and antiphase d-wave SC can be favored.
Actually a similar ordering was suggested on general grounds earlier by
Zhang \cite{Zhang-JPCS-98} and also by Himeda, Kato and Ogata \cite{Himeda-PRL-02} on the
basis of variational Monte Carlo calculations (VMC) for the strongly
correlated one band $t-t^{\prime }-J$ model. Himeda \textit{et al} \cite%
{Himeda-PRL-02} found that a modulated state with combined SDW,
CDW, and d-wave superconductivity (dSC) containing site- or bond-centered anti-phase
domain walls ($\pi$DW) (a state we denote as SDW+CDW+APdSC$^{s/b}$) had a
lower energy than a uniform d-wave SC state over a wide range of
parameters and was even lower than a modulated state without
anti-phase (denoted as SDW+CDW+dSC$^{s/b}$) in a narrower parameter range.
Recent VMC and renormalized mean field theory (RMFT) calculations
\cite{Raczkowski-PRB-07} have found that the CDW+APdSC$^{s/b}$ state ($\pi
$-DRVB state in ref.~\cite{Raczkowski-PRB-07}) costs surprisingly
little energy even in the absence of SDW modulations.
In this paper we report on calculations using the RMFT method to examine in
greater detail the energetics of these novel modulated states within the
generalized $t-t^{\prime }-t^{\prime \prime }-J$ model. This method
approximates the strong correlation condition of no double occupancy by
Gutzwiller renormalization factors and generally agrees well with full VMC
calculations which treat the strong correlation condition exactly. The
static stripe phase appears in the LTT phase
of La$_{\text{1.875}}$Ba$_{\text{0.125}}$CuO$_{\text{4}}$. This
crystallographic phase is entered at a temperature T$_{\text{co}}$ (=52K $>$%
T $_{\text{c}}^{\text{2D}}$) and displays a complex crystal structure which
has not been fully determined to the best of our knowledge. Note that
although the overall crystal structure is tetragonal the individual CuO$_{%
\text{2}}$ planes do not have square symmetry. Along one (x-) axis
the Cu-O-Cu bonds are straight but in the perpendicular direction
they are buckled \cite{Buchner-PRL-94}. Since the Cu-Cu distance
is required to be the same in both directions there is a
compressive stress along the x-axis which may well be the origin
of the CDW superlattice that appears at the crystallographic phase
transition into the LTT phase. At present the detailed
displacements inside the supercell have not been refined. In our
calculations we introduce a site dependent potential shift to
mimic this effect. In addition we examine the effect of the
hopping anisotropy between x- and y-axes which results from the
different Cu-O-Cu bonding in the x and y directions. Such
anisotropy was also considered by Capello \textit{et al.}
\cite{Capello-PRB-08} in their work on stripes made from
anti-phase shifts in the superconductivity.
\section{Renormalized Mean Field Theory for the extended $t - J$ model}
The $t-J$ model was introduced in the early days of cuprate research by
Anderson and by Zhang and Rice to describe lightly hole doped CuO$_{\text{2}%
} $ planes \cite{t-J-model}. In this single band model configurations with doubly occupied
sites are strictly forbidden due to the strong onsite Coulomb repulsion. The
Hamiltonian takes the form, suppressing the constraint
\begin{eqnarray}
H_{tj}&=&-\sum_{\left( i,j\right) ,\sigma }t_{\left( i,j\right) }\left( \hat{c}%
_{i,\sigma }^{\dag }\hat{c}_{j,\sigma }+h.c.\right) +\sum_{\langle
i,j\rangle }J_{\langle i,j\rangle }\mathbf{\hat{S}}_{i}\cdot \mathbf{\hat{S}}
_{j} \notag \\ &&+\sum_{i}V_{i}\hat{n}_{i}. \label{eq:tj}
\end{eqnarray}%
In the first term we include hopping processes between nearest
neighboring (nn) sites (denoted by $\langle i,j\rangle $), next-nearest
neighboring sites (nnn), and third-nearest neighboring sites (nnnn) on a
square lattice, with matrix elements $t$, $t^{\prime }$,
$t^{\prime \prime }$, respectively. We will measure all energies
in units of $t_{0}$ (300 meV), a standard value for the nn
hopping matrix element $t$. The superexchange spin-spin
interaction between nn sites is $J=0.3$, and the spin index $\sigma $
takes the values $\pm $.
$V_{i}$ which varies from site to site within the supercell to
mimic the effect of the crystallographic superlattice in the LTT
crystal structure. The strong coupling constraint of no double
occupancy is very difficult to treat analytically. Zhang and
coworkers introduced Gutzwiller renormalization factors to
approximate the constraint \cite{RMFT}. This approximation has
been shown to be quite accurate for mean field theories when
compared to numerical evaluations by VMC
of expectation values of the corresponding mean field wavefunctions, $%
\left\vert \Psi \right\rangle $, which are exactly projected down to the
constrained Hilbert space \cite{RMFT-VMC}. Later the case of AF ordering was considered by
Himeda and Ogata, who showed that an anisotropic spin renormalization term
is required to reproduce the VMC results \cite{Himeda-Ogata}. The resulting renormalized
Hamiltonian is
\begin{eqnarray}
H &=&-\sum_{\left( i,j\right) ,\sigma }g_{\left( i,j\right) ,\sigma
}^{t}t_{\left( i,j\right) }\left( \hat{c}_{i,\sigma }^{\dag }\hat{c}%
_{j,\sigma }+h.c.\right) \notag
\\
&&+\sum_{\langle i,j\rangle }J_{\langle i,j\rangle }\left[ g_{\langle
i,j\rangle }^{s,z}\hat{S}_{i}^{z}\hat{S}_{j}^{z}+g_{\langle i,j\rangle
}^{s,xy}\left( \hat{S}_{i}^{+}\hat{S}_{j}^{-}+\hat{S}_{i}^{-}\hat{S}%
_{j}^{+}\right) /2\right] \notag
\\ && +\sum_{i}V_{i}\hat{n}_{i} . \label{eq:GZHamiltonian}
\end{eqnarray}%
The renormalization factors $g^{t}$, $g^{s,xy}$ and $g^{s,z}$ used
to evaluate a projected mean field wavefunction depend on the
local values of the magnetic and pairing order parameters and the
local kinetic energy and hole density which are defined as follows
\begin{eqnarray}
m_{i} &=&\left\langle \Psi _{0}\right\vert \hat{S}_{i}^{z}\left\vert \Psi
_{0}\right\rangle ;\notag \\
\Delta _{\left\langle i,j\right\rangle ,\sigma }&=&\sigma
\left\langle \Psi _{0}\right\vert \hat{c}_{i,\sigma
}\hat{c}_{j,-\sigma }\left\vert \Psi _{0}\right\rangle ; \notag\\
\chi _{\left( i,j\right) ,\sigma } &=&\left\langle \Psi _{0}\right\vert \hat{%
c}_{i,\sigma }^{\dag }\hat{c}_{j,\sigma }\left\vert \Psi
_{0}\right\rangle ;\notag \\
\delta_{i}&=&1-\left\langle \Psi
_{0}\right\vert \hat{n}_{i}\left\vert \Psi _{0}\right\rangle ,
\label{eq:orderP}
\end{eqnarray}
where $\left\vert \Psi _{0}\right\rangle $ is the unprojected
wavefunction. The two pairing amplitudes $\Delta _{\left\langle
i,j\right\rangle ,\sigma =\pm }$ are treated independently to
incorporate a possible triplet component. The explicit
renormalization factors introduced first by Himeda and Ogata are
quite complex \cite{Himeda-Ogata}, and we use here a simpler form,
as follows \cite{simplify-G}:
\begin{eqnarray}
g_{\left( i,j\right) ,\sigma }^{t} &=&g_{i,\sigma }^{t}g_{j,\sigma }^{t}; \notag \\
g_{i,\sigma }^{t}&=&\sqrt{\frac{%
2\delta _{i}\left( 1-\delta _{i}\right) }{1-\delta _{i}^{2}+4m_{i}^{2}}\frac{%
1+\delta _{i}+\sigma 2m_{i}}{1+\delta _{i}-\sigma 2m_{i}}}; \notag \\
g_{\langle i,j\rangle }^{s,xy} &=&g_{i}^{s,xy}g_{j}^{s,xy};\notag \\
g_{i}^{s,xy}&=&\frac{2\left( 1-\delta _{i}\right) }{1-\delta
_{i}^{2}+4m_{i}^{2}}; \notag \\
g_{\langle i,j\rangle }^{s,z} &=&g_{\langle i,j\rangle
}^{s,xy}\frac{2\left( \overline{\Delta}_{\langle i,j\rangle
}^{2}+\overline{\chi}_{\langle i,j\rangle
}^{2}\right) -4m_{i}m_{j}X_{\langle i,j\rangle }^{2}}{2\left( \overline{\Delta}%
_{\langle i,j\rangle }^{2}+\overline{\chi}_{\langle i,j\rangle
}^{2}\right)
-4m_{i}m_{j}};\notag \\
X_{\langle i,j\rangle } &=&1+\frac{12\left( 1-\delta _{i}\right) \left(
1-\delta _{j}\right) \left( \overline{\Delta}_{\langle i,j\rangle }^{2}+\overline{\chi}%
_{\langle i,j\rangle }^{2}\right) }{\sqrt{\left( 1-\delta
_{i}^{2}+4m_{i}^{2}\right) \left( 1-\delta _{j}^{2}+4m_{j}^{2}\right) }} ,\notag \\
\label{eq:gfactors}
\end{eqnarray}
where $\overline{\Delta}_{\langle i,j\rangle }=\sum_{\sigma
}\Delta _{\left\langle i,j\right\rangle ,\sigma }/2$,
$\overline{\chi}_{\langle i,j\rangle }=\sum_{\sigma }\chi
_{\left\langle i,j\right\rangle ,\sigma }/2$. Since the g-factors
depend on the order parameters, direct diagonalization of the
mean-field Hartree-Fock Hamiltonian obtained from
Eq.~(\ref{eq:GZHamiltonian}) will not give the best energy of the
Hamiltonian
\begin{eqnarray}
&E_{t}& =\left\langle \Psi _{0}\right\vert H\left\vert \Psi
_{0}\right\rangle \notag \\
&=&-\sum_{\left( i,j\right) ,\sigma }g_{\left( i,j\right)
,\sigma }^{t}t_{\left( i,j\right) }\left[ \chi _{\left( i,j\right) ,\sigma
}+h.c.\right] \notag \\
&&-\sum_{\langle i,j\rangle ,\sigma }J_{\langle i,j\rangle } \left(
\frac{g_{\langle i,j\rangle }^{s,z}}{4}+\frac{g_{\langle i,j\rangle }^{s,xy}%
}{2}\frac{\Delta _{\langle i,j\rangle ,\overline{\sigma}}^{\ast
}}{\Delta _{\langle i,j\rangle ,\sigma }^{\ast }}\right) \Delta
_{\langle i,j\rangle
,\sigma }^{\ast }\Delta _{\langle i,j\rangle ,\sigma } \notag \\
&&-\sum_{\langle i,j\rangle ,\sigma }J_{\langle i,j\rangle }\left( \frac{%
g_{\langle i,j\rangle }^{s,z}}{4}+\frac{g_{\langle i,j\rangle }^{s,xy}}{2}%
\frac{\chi _{\langle i,j\rangle ,\overline{\sigma}}^{\ast }}{\chi
_{\langle i,j\rangle ,\sigma }^{\ast }}\right) \chi _{\langle
i,j\rangle ,\sigma
}^{\ast }\chi _{\langle i,j\rangle ,\sigma } \notag \\
&&+\sum_{i}V_{i}n_{i}+\sum_{\langle i,j\rangle }g_{\langle
i,j\rangle }^{s,z}J_{\langle i,j\rangle }m_{i}m_{j} \label{eq:energy}
\end{eqnarray}%
Instead, we minimize the energy with respect to the unprojected
wave function $\left\vert \Psi _{0}
\right\rangle $ under the constraints $\sum_{i}{n}_{i}=N_{e}$ and
$\left\langle \Psi _{0}|\Psi _{0}\right\rangle =1$, where $N_{e}$ is the
total electron number.
That is equivalent to minimizing the
function
\begin{equation}
W=\left\langle \Psi _{0}\right\vert H\left\vert \Psi _{0}\right\rangle
-\lambda \left( \left\langle \Psi _{0}|\Psi _{0}\right\rangle -1\right) -\mu
\left( \sum_{i}\hat{n}_{i}-N_{e}\right) \label{eq:freeenergy}
\end{equation}%
which results in the following variational relation
\begin{eqnarray}
0&=&\frac{\delta W}{\delta \left\langle \Psi _{0}\right\vert }\notag \\
&=&\sum_{\left(
i,j\right) ,\sigma }\frac{\partial W}{\partial \chi _{\left( i,j\right)
,\sigma }}\frac{\delta \chi _{\left( i,j\right) ,\sigma }}{\delta
\left\langle \Psi _{0}\right\vert }+h.c. \notag \\
&&+\sum_{\left\langle i,j\right\rangle ,\sigma }\frac{\partial
W}{\partial \Delta _{\left\langle i,j\right\rangle
,\sigma }}\frac{\delta \Delta _{\left\langle i,j\right\rangle ,\sigma }}{%
\delta \left\langle \Psi _{0}\right\vert }+h.c. \notag \\
&&+\sum_{i,\sigma }\frac{%
\partial W}{\partial n_{i,\sigma }}\frac{\delta n_{i,\sigma }}{\delta
\left\langle \Psi _{0}\right\vert }-\lambda \left\vert \Psi
_{0}\right\rangle . \label{eq:partialW}
\end{eqnarray}%
For an operator $\hat{O}$ with the expectation value $O=\left\langle \Psi
_{0}\right\vert \hat{O}\left\vert \Psi _{0}\right\rangle $, $\delta
\left\langle \Psi _{0}\right\vert \hat{O}\left\vert \Psi _{0}\right\rangle
/\delta \left\langle \Psi _{0}\right\vert =\hat{O}\left\vert \Psi
_{0}\right\rangle $. Thus one obtains the following mean field
Hamiltonian,
\begin{eqnarray}
H_{MF}&=&\sum_{\left( i,j\right) ,\sigma }\frac{\partial W}{\partial \chi
_{\left( i,j\right) ,\sigma }}\hat{c}_{i,\sigma }^{\dag }\hat{c}_{j,\sigma
}+h.c. \notag \\
&&+\sum_{\left\langle i,j\right\rangle ,\sigma }\frac{\partial W}{\partial
\Delta _{\left\langle i,j\right\rangle ,\sigma }}\sigma \hat{c}_{i,\sigma }%
\hat{c}_{j,\overline{\sigma}}+h.c. \notag \\
&&+\sum_{i,\sigma }\frac{\partial W}{\partial
n_{i,\sigma }}\hat{n}_{i,\sigma }, \label{eq:variationH}
\end{eqnarray}%
which satisfies the Schr\"{o}dinger equation $H_{MF}\left\vert \Psi
_{0}\right\rangle =\lambda \left\vert \Psi _{0}\right\rangle $. The
coefficients of $H_{MF}$ are given as
\begin{eqnarray}
\frac{\partial W}{\partial \chi _{\left( i,j\right) ,\sigma }}
&=&-\delta _{\left( i,j\right) ,\langle i,j\rangle }J_{\langle
i,j\rangle }\left( \frac{g_{\langle i,j\rangle
}^{s,z}}{4}+\frac{g_{\langle i,j\rangle }^{s,xy}}{2}\frac{\chi
_{\langle i,j\rangle ,\overline{\sigma}}^{\ast }}{\chi _{\langle
i,j\rangle
,\sigma }^{\ast }}\right) \chi _{\langle i,j\rangle ,\sigma }^{\ast } \notag \\
&&-g_{\left(
i,j\right) ,\sigma }^{t}t_{\left( i,j\right) }+\left[
\frac{\partial W}{\partial \chi _{\left( i,j\right) ,\sigma }}\right] _{g};
\notag \\
\frac{\partial W}{\partial \Delta _{\langle i,j\rangle ,\sigma }}
&=&-J_{\langle i,j\rangle }\left( \frac{g_{\langle i,j\rangle }^{s,z}}{4}+%
\frac{g_{\langle i,j\rangle }^{s,xy}}{2}\frac{\Delta _{\langle i,j\rangle ,%
\overline{\sigma}}^{\ast }}{\Delta _{\langle i,j\rangle ,\sigma
}^{\ast }}\right)
\Delta _{\langle i,j\rangle ,\sigma }^{\ast } \notag \\
&&+\left[ \frac{\partial W}{%
\partial \Delta _{\langle i,j\rangle ,\sigma }}\right] _{g}; \notag
\\
\frac{\partial W}{\partial n_{i,\sigma }} &=&-\left( \mu -V_{i}\right) +%
\frac{1}{2}\sigma \sum_{j}g_{\langle i,j\rangle
}^{s,z}J_{\langle i,j\rangle }m_{j} \notag +\left[ \frac{\partial W}{\partial
n_{i,\sigma }}\right] _{g} \notag\\
\label{eq:coefficient}
\end{eqnarray}%
with $\partial W/\partial \chi _{\left( i,j\right) ,\sigma }^{\ast }=\left[
\partial W/\partial \chi _{\left( i,j\right) ,\sigma }\right] ^{\ast }$ and $
\partial W/\partial \Delta _{\left( i,j\right) ,\sigma }^{\ast }=\left[
\partial W/\partial \Delta _{\left( i,j\right) ,\sigma }\right] ^{\ast }$.
Here $\delta _{\left( i,j\right) ,\langle i,j\rangle }=1$ only when $i$ and
$j$ are nn, and it equals 0 otherwise. The terms $\left[ \frac{\partial
W}{\partial O}\right] _{g}$ in the above equations denote the derivative of
$W$ with respect to the mean field $O$ taken through the Gutzwiller
g-factors (see Eq[\ref{eq:gfactors}]). The mean field Hamiltonian $H_{MF}$
in Eq[\ref{eq:variationH}] is then solved self-consistently. In the
numerical calculations, we always diagonalize $H_{MF}$ for a sample
consisting of 257 supercells along the periodic direction, with periodic
boundary conditions, unless stated explicitly otherwise.
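In practice this self-consistency cycle can be organized as in the
following minimal sketch (Python). Here \texttt{build\_hmf},
\texttt{measure} and \texttt{adjust\_mu} are hypothetical placeholders for
the model-specific construction of Eq[\ref{eq:variationH}], the evaluation
of the averages in Eq[\ref{eq:orderP}], and the adjustment of $\mu $
enforcing $\sum_{i}n_{i}=N_{e}$; only the generic iteration with linear
mixing is shown.
\begin{verbatim}
import numpy as np

def solve_rmft(build_hmf, measure, adjust_mu, fields, n_e,
               mix=0.5, tol=1e-8, max_iter=500):
    """Generic RMFT self-consistency loop (schematic).

    fields: dict of arrays {chi, Delta, m, delta} that enter the
    Gutzwiller g-factors; the three callbacks encapsulate the
    model-specific parts and are placeholders here.
    """
    mu = 0.0
    for _ in range(max_iter):
        e, psi = np.linalg.eigh(build_hmf(fields, mu))
        mu = adjust_mu(e, psi, n_e)       # constraint sum_i n_i = N_e
        new = measure(e, psi, mu)         # averages of Eq.(orderP)
        err = max(np.max(np.abs(new[k] - fields[k])) for k in fields)
        for k in fields:                  # linear mixing stabilizes
            fields[k] = mix*new[k] + (1.0 - mix)*fields[k]
        if err < tol:
            return fields                 # stable set of order parameters
    raise RuntimeError("no stable solution with this symmetry")
\end{verbatim}
Convergence to a fixed point corresponds to the stable solutions discussed
below, while failure to converge for any mixing signals the absence of a
local minimum with the imposed symmetry.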
\section{Simplified Model: Site-centered Anti-Phase Domain Walls with d-wave Superconductivity
}
We begin the discussion of the results with the simplest case, namely
site-centered anti-phase domain walls in a d-wave superconductor (APdSC$^{s}$). To
this end we restrict the Hamiltonian to the two terms without SDW order, and
solve it self-consistently without explicitly considering the doping
dependence of the g-factors,
\begin{eqnarray}
H_{s}&=&-\sum_{\left\langle i,j\right\rangle ,\sigma }\left(
g^{t}t_{0}+g^{s}J_{0}\tilde{\chi}_{i,j}^{\ast }\right)
\hat{c}_{i,\sigma }^{\dag
}\hat{c}_{j,\sigma } \notag \\
&&-\sum_{\left\langle i,j\right\rangle }g^{s}J_{0}\tilde{\Delta}%
_{i,j}^{\ast }\hat{c}_{i,\uparrow }^{\dag }\hat{c}_{j,\downarrow
}^{\dag }+h.c. \label{eq:Htoy}
\end{eqnarray}%
Note that $\tilde{\chi}_{i,j}\neq \chi _{i,j}$ and $\widetilde{\Delta}%
_{i,j}\neq \Delta _{i,j}$, but $\widetilde{\Delta}_{i,j}$ has the same symmetry as $\Delta
_{i,j}$. To keep the model simple, we set
$\widetilde{\chi}_{i,j}=\widetilde{\chi}_{p}$ independent of
$\left\langle i,j\right\rangle $, and $g^{t}=2\delta /\left(
1+\delta \right) $, $g^{s}=4/\left( 1+\delta \right) ^{2}$, where
$\delta $
is the average doping away from half-filling. We first consider an isolated $%
\pi $DW which lies in the center ($i_{x}=28$) of a finite sample
with open boundary condition along x direction and width
$L_{x}=55$. To this end we set $\left\vert
\widetilde{\Delta}_{i,j}\right\vert =\widetilde{\Delta}_{p}$
except for the bonds
along the domain wall, which are set to zero, i.e. $\widetilde{\Delta}%
_{i,j}|_{i_{x}=j_{x}=28}=0$. The $\pi $-phase shift requires that for the
two bonds $\left\langle i,j\right\rangle $ and $\left\langle i^{\prime
},j^{\prime }\right\rangle $ which are located symmetrically on the two sides of
the domain wall, $\Delta _{i,j}|_{i_{x},j_{x}\leq 28}=-\Delta _{i^{\prime
},j^{\prime }}|_{i_{x}^{\prime },j_{x}^{\prime }\geq 28}$. The change of
sign at the domain wall causes an Andreev bound state (ABS) to appear at the
chemical potential which we take as the energy zero. This shows up clearly
when we calculate the local density of states (LDOS) as illustrated in Fig[%
\ref{fig:toy}a,b]. For the case of weak coupling in
Fig[\ref{fig:toy}a] a clear peak appears in the LDOS at zero
energy for sites at or near the domain wall ($i_{x}=27,28,29$),
while sites far away show peaks at the bulk gap edges and reduced
values at zero energy. This behavior also shows up very clearly in
the spatial dependence of the quasiparticle spectral weight,
illustrated in Fig[\ref{fig:toy_ABS}a,b] for a weak and a
moderate value of the pairing amplitude,
$\widetilde{\Delta}_{p}=0.02$ and $0.08$, respectively. The spectral weight is
concentrated close to the $\pi $DW at quasiparticle energies
$E_{k}\simeq 0$, but away from the $\pi $DW for values of $E_{k}$
near the bulk gap energy
$E_{k}=2g^{s}J_{0}\widetilde{\Delta}_{p}$. The total
energy differences between the states with and without the $\pi $DW for the two values of $\widetilde{\Delta}_{p}$ are $0.0066t_{0}$ and $0.0365t_{0}$, respectively. This substantial energy cost of the domain
wall is consistent with the creation of a LDOS peak in the center of
the energy gap. Note that for the moderate value of $\widetilde{%
\Delta}_{p}$, the LDOS peak near $E_{k}\simeq 0$ shows structure consistent with
the development of a one-dimensional band of Andreev bound states which
propagate along the domain wall. This can also be seen in the quasiparticle
dispersion, which is a function only of $k_{y}$. \newline
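The LDOS calculation behind Fig[\ref{fig:toy}] can be sketched as follows
(Python); this is a minimal illustration under stated assumptions rather
than the production code. Translational invariance along $y$ reduces
$H_{s}$ to a $2L_{x}\times 2L_{x}$ Bogoliubov--de Gennes problem for each
$k_{y}$; the values $J_{0}=0.3t_{0}$ and $\mu =0$ are assumptions of the
sketch (the text fixes the average doping instead of $\mu $).
\begin{verbatim}
import numpy as np

Lx, w = 55, 27                  # open chain in x; wall at the 28th site
delta, chi_p, D_p, eta = 0.25, 0.20, 0.08, 0.004
gt, gs = 2*delta/(1 + delta), 4.0/(1 + delta)**2
t_eff = gt*1.0 + gs*0.3*chi_p   # g^t t0 + g^s J0 chi_p, with J0 = 0.3 t0
gD = gs*0.3*D_p                 # g^s J0 Delta_p
mu = 0.0                        # placeholder chemical potential

x = np.arange(Lx)
sy = np.where(x == w, 0.0, np.sign(w - x + 0.5))  # y-bond pairing signs:
# +1 left of the wall, -1 right of it, 0 on the wall itself
sx = np.where(np.arange(Lx - 1) < w, 1.0, -1.0)   # x-bond pairing signs

om = np.linspace(-1.0, 1.0, 801)
ldos = np.zeros((Lx, om.size))
kys = np.linspace(-np.pi, np.pi, 256, endpoint=False)
for ky in kys:
    h0 = np.diag(-2*t_eff*np.cos(ky)*np.ones(Lx) - mu)
    h0 += np.diag(-t_eff*np.ones(Lx - 1), 1)
    h0 += np.diag(-t_eff*np.ones(Lx - 1), -1)
    # d-wave: Delta_y = -Delta_x; y-bonds give an on-site 2cos(ky) term
    p = np.diag(-2*gD*np.cos(ky)*sy)
    p += np.diag(gD*sx, 1) + np.diag(gD*sx, -1)
    e, u = np.linalg.eigh(np.block([[h0, p], [p, -h0]]))
    for n in range(2*Lx):
        # summing particle (u) and hole (v) weights over all 2Lx
        # eigenstates counts both spin directions, as in the LDOS
        ldos += (np.abs(u[:Lx, n, None])**2 * eta/np.pi
                 / ((om - e[n])**2 + eta**2))
        ldos += (np.abs(u[Lx:, n, None])**2 * eta/np.pi
                 / ((om + e[n])**2 + eta**2))
ldos /= kys.size   # the zero-energy peak appears for sites near the wall
\end{verbatim}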
\begin{figure}[t]
\includegraphics
[width=7.0cm,height=16.0cm,angle=270]
{fig/toy.eps}
\caption{(Color online) Local density of states (LDOS) for a simplified model (Hamiltonian
$H_{s}$ defined in Eq[\protect\ref{eq:Htoy}]) for an isolated site-centered anti-phase domain wall in a d-wave SC. Periodic boundary condition along y direction and open boundary condition along x direction are
imposed. The width of the system along x direction is $L_{x}=55$ with the domain wall located at site 28. The average doping
concentration is fixed at $\protect\delta =0.25$, and
$\widetilde{\protect\chi}_{p}=0.20$. Panels (a) and (b) are for
$\widetilde{\Delta}_{p}=0.02$ and $0.08$, respectively. The 14th
site is halfway between the domain wall (site 28) and the
edge, and the
two sites (27, 29) are neighbors of the domain wall. A broadening factor $%
0.004t_{0}$ is used.}
\label{fig:toy}
\end{figure}
\begin{figure}[t]
\includegraphics
[width=18.0cm,height=13.0cm,angle=00]
{fig/toy_ABS.eps}
\caption{(Color online) The spatial (I) and wavevector ($k_{y}$) dependence of the quasiparticle
spectral weight $A_{I,k_{y}}(E)$ for the simplified model ($H_{s}$ in Eq[%
\protect\ref{eq:Htoy}]) with an isolated site-centered anti-phase domain wall in a d-wave SC. The parameters are the same as that used in Fig[\protect\ref{fig:toy}%
]. Panels (a) and (b) are for $\widetilde{\Delta}_{p}=0.02$
and $0.08$, respectively. The energies $E$ correspond to the
Andreev bound states (ABS) in the r.h.s.
panels and the bulk SC gap in the l.h.s. panels, as shown in Fig[%
\protect\ref{fig:toy}a,b]. In panel (a2) the ABS extends away from the domain wall at site 28 into the bulk of the superconducting state because $%
\left\vert \Delta \right\vert /E_{F}\ll 1$, while in panel (b2), where $%
\left\vert \Delta \right\vert $ is much larger, the ABS is confined
to a small region around the domain wall. For the states close to the SC gap, a small $\Delta $ leads to a more homogeneous state,
while a moderate $\Delta $ results in a strong suppression of these states close
to the domain wall.
\label{fig:toy_ABS}
\end{figure}
Turning our attention to a periodic array of parallel $\pi $DW, we focus on
the case of period $L_{x}=4$, relevant to the cuprates, illustrated in Fig[%
\ref{fig:PDRVB_L4}]. In this case the Andreev bound states on neighboring
domain walls overlap strongly, leading to a more complex dispersion
relation for the associated quasiparticle states. Note that the d-wave form of
the bulk superconductivity leads to gapless excitations in the nodal
directions, which in turn leads to stronger overlap for near-nodal
quasiparticles. To illustrate this more complex behavior we focus on a
particular model which can be solved analytically. To this end we set $%
\delta =0$ (i.e. half-filling), $g^{t}=0$ and set $\widetilde{\chi}_{p}=\widetilde{%
\Delta}_{p}$ and $g^{s}J\widetilde{\chi}_{p}=1$. In this case the
quasiparticle dispersion is obtained by diagonalizing the
Hamiltonian
\begin{eqnarray}
H_{k} &=&-\mathbf{X}_{k}^{\dag }\left(
\begin{array}{cc}
A_{k} & B_{k} \\
B_{k}^{\ast } & -A_{-k}^{\ast }\label{Eq:Ham_PDRVB4L}%
\end{array}%
\right) \mathbf{X}_{k}; \\
A_{k} &=&\left(
\begin{array}{cccc}
2\cos k_{y} & e^{ik_{x}} & 0 & e^{-ik_{x}} \\
e^{-ik_{x}} & 2\cos k_{y} & e^{ik_{x}} & 0 \\
0 & e^{-ik_{x}} & 2\cos k_{y} & e^{ik_{x}} \\
e^{ik_{x}} & 0 & e^{-ik_{x}} & 2\cos k_{y}%
\end{array}%
\right) ; \notag \\B_{k}&=&\left(
\begin{array}{cccc}
0 & e^{ik_{x}} & 0 & -e^{-ik_{x}} \\
e^{-ik_{x}} & -2\cos k_{y} & e^{ik_{x}} & 0 \\
0 & e^{-ik_{x}} & 0 & -e^{ik_{x}} \\
-e^{ik_{x}} & 0 & -e^{-ik_{x}} & 2\cos k_{y}%
\end{array}%
\right),\notag
\end{eqnarray}%
where $\mathbf{X}_{k}^{\dag }=\left( \hat{c}_{I,k,\uparrow }^{\dag
},\hat{c}_{I,-k,\downarrow }\right) $ with $I=1,2,3,4$ denoting
the sites inside a supercell. The quasiparticle dispersion takes a
simple form,
\begin{eqnarray}
E_{k}=\pm \sqrt{6\cos ^{2}k_{y}+4\pm 2\sqrt{\left( 2+\cos ^{2}k_{y}\right)
^{2}-4\sin ^{2}2k_{x}}}.\notag \\ \label{eq:Dis_PDRVB4L}
\end{eqnarray}%
For a wavevector $\left( k_{x},k_{y}\right) $ close to $\left( \pi /2,\pi
/2\right) $, the two quasiparticle bands close to the Fermi level have an
anisotropic nodal structure with
\begin{equation}
E_{k}=\pm 2\sqrt{K_{y}^{2}+2K_{x}^{2}}, \label{eq:node_PDRVB4L}
\end{equation}%
where $\left( K_{x},K_{y}\right) =\left( \pi /2-k_{x},\pi /2-k_{y}\right) $.
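Indeed, for small $\left( K_{x},K_{y}\right) $ one has $\cos k_{y}\simeq
K_{y}$ and $\sin 2k_{x}\simeq 2K_{x}$ in Eq[\ref{eq:Dis_PDRVB4L}], so the
inner square root becomes $\sqrt{\left( 2+K_{y}^{2}\right)
^{2}-16K_{x}^{2}}\simeq 2+K_{y}^{2}-4K_{x}^{2}$, and the branch with the
lower sign gives
\begin{eqnarray}
E_{k}^{2}\simeq 6K_{y}^{2}+4-2\left( 2+K_{y}^{2}-4K_{x}^{2}\right)
=4K_{y}^{2}+8K_{x}^{2} . \notag
\end{eqnarray}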
This nodal structure completely suppresses the density of states (DOS)
at zero energy, as shown in Fig[\ref{fig:PDRVB_L4}], and pushes the peaks in
the DOS of the Andreev bound states away from the chemical potential.
\begin{figure}[tbp]
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=9.0cm,height=7.0cm, angle=0]
{fig/PDRVB_L4_deltacfg.eps}
\end{minipage}%
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=7.0cm,height=6.5cm, angle=0]
{fig/PDRVB_L4.eps}
\end{minipage}
\caption{(Color online) (a) Schematic illustration of the
modulation of the pairing parameter $ \Delta$ for the simplified
model (Hamiltonian $H_{s}$ defined in Eq[\ref{eq:Htoy}]) for a dSC
state with periodic site-centered anti-phase domain walls with the
shortest periodicity, $L_{x}=4$. The anti-phase pattern of
$\Delta$ is illustrated by the color scheme. (b) Local
density of states (LDOS) for doping concentration $\protect\delta %
=0$ and $\widetilde{\protect\chi}_{p}=\widetilde{\Delta}_{p}=0.2$. Since the domain walls are close to each other, the Andreev bound states form bands with weak dispersion along $%
k_{x}$ but strong dispersion along $k_{y}$, parallel to the domain wall. At half filling, these bands display an
anisotropic nodal structure, as demonstrated in
Eq[\protect\ref{eq:node_PDRVB4L}] and by the low energy LDOS
behavior.} \label{fig:PDRVB_L4}
\end{figure}
\section{Coexisting Anti-Phase Superconductivity and Spin and Charge Density
Waves}
Anti-phase domain walls in a superconductor usually cost a
substantial energy. The key question raised by the recent
experimental results of Li \textit{et al.}\cite{Li-PRL-07} on the
static stripe phase is whether SDW and CDW coexisting with $\pi
$DW lead to a state with a net energy gain. The VMC calculations
of Himeda \textit{et al.}\cite{Himeda-PRL-02} found a small
energy gain for a longer superlattice with a larger separation
between $\pi $DW within a restricted parameter range. Recent
calculations for an 8-superlattice without SDW order by Raczkowski
\textit{et al.}\cite{Raczkowski-PRB-07} did not yield an energy
gain, but the energy cost to introduce $\pi $DW was quite small.
These results motivated us to examine a wider parameter range
within an RMFT approach and look for a possible net energy gain in
an 8-superlattice (with site-centered anti-phase domain walls) at a hole concentration $\delta
=1/8$ when coexisting SDW order and $\pi $DW
are included. A longer 10-superlattice (with bond-centered anti-phase domain walls) gives similar results.
In view of the orthorhombic nature of the individual CuO$_{%
\text{2}}$-planes in the LTT phase, we allowed for anisotropy in
the hopping $t_{x(y)}$ and exchange coupling $J_{x(y)}$. Below we
keep the nn hopping in
the y-direction fixed, $t_{y}=t_{0}$, and scale $%
J_{x}/J_{y}=t_{x}^{2}/t_{y}^{2}$. In addition, the presence of a
crystallographic superlattice in the LTT phase motivated us to
examine also the effect of lattice inhomogeneity by including
a site-dependent potential modulation, $V_{i}$.
\subsection{Site-centered anti-phase dSC}
The RMFT approximation yields a series of coupled nonlinear
equations. An iteration method is used to obtain optimal values of
the four order parameters: the pairing and hopping amplitudes,
sublattice magnetization and hole density. When the solution
iterates to a stable set of values we can conclude that a local
energy minimum exists, but on occasion no stable solution can be
found, which indicates that no local minimum exists with this
symmetry. In general we find stable solutions for the case of
coexisting CDW and SDW with or without $\pi $DW. Typical patterns
for an 8-superlattice with site-centered modulation of the
pairing amplitude are illustrated in Fig[\ref{fig:DWconfig}],
with and without the site-centered $\pi $DW. The
antiferromagnetic domain wall (AFDW) coincides with the maximum
hole density, while the $\pi $DW appears at the minimum hole
density. (In the case without SDW, the $\pi $DW appears at the
maximum hole density.\cite{Raczkowski-PRB-07})
\begin{figure}[tbp]
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=3.5in, angle=0]
{fig/DW_deltaconf.eps}
\end{minipage}%
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=3.5in, angle=0]
{fig/DW_PDRVB_deltaconf.eps}
\end{minipage}
\caption{(Color online) Schematic illustration of the modulations of the pairing
amplitude $\Delta $, hopping amplitude $\protect\chi $, hole concentration $%
\protect\delta $ and antiferromagnetic moment $m$, for two states:
SDW+CDW+dSC$^{s}$ [panels (a)] and SDW+CDW+APdSC$^{s}$ [panels (b)] (without and with site-centered anti-phase domain walls). The
average doping
is $1/8$ and the periodicity $L_{x}=8$. In panels (a/b 1-2) the amplitudes $%
\Delta $ and $\protect\chi $ are denoted by the width of the bonds, the
spatial modulation of the staggered antiferromagnetic moment $m_{i}$ is
denoted by the arrows, and the hole concentration modulation is represented by
the size of the dots. The anti-phasing of $\Delta $ in panel (b1) is
illustrated by the different colors on either side of the domain wall, with cyan (magenta) for positive (negative) values. D-wave pairing
symmetry is still preserved between two neighboring domain walls. The
anti-phase domain walls coincide with the sites which have the largest sublattice
antiferromagnetic moment and the smallest hole concentration. However, for the
case without SDW, the domain walls are located at the sites with the largest
hole concentration.\protect\cite{Raczkowski-PRB-07} Panels (a/b 3) show the
spatial modulations of the hole density (red solid) and of the AF moment (green dashed). The site-centered anti-phase domain walls lead to an anisotropy of $\protect\chi $, and an enhancement
of the hole density and antiferromagnetic moment modulations. }
\label{fig:DWconfig}
\end{figure}
In Table \ref{table:energy} the results for the ground state
energy and local values of the order parameters are presented. The
upper lines are for the case of nn hopping only ($t^{\prime }=0$),
with and without an anisotropic component in the nn hopping
$t_{x(y)}$. For $t_{x}=t_{y}$ the results show that the
uniform AFM+dSC state is lowest in energy. When AFDW and the associated
modulation of the hole density are included the resulting state
(denoted by SDW+CDW+dSC$^{s}$) has an energy that is slightly higher. Introducing $%
\pi $DW to create antiphase superconducting order (SDW+CDW+APdSC$^{s}$) raises the
energy further. Anisotropy in the nn hopping narrows the energy differences
but does not change the relative ordering of the states with and without $%
\pi $DW. When a weak nnn hopping is added, the SDW+CDW+dSC$^{s}$ state gains in
energy and when anisotropy is also added this state has the lowest energy.
When, in addition, anisotropic nn hopping is considered, the energy cost of
introducing $\pi $DW into the superconducting state is further reduced to small but
still positive values. A further increase in the nnn hopping term (shown in
Fig[\ref{fig:energy_comparison}a]) however does not lead to an energy gain
for $\pi $DW. The energy cost of $\pi $DW remains very small but positive.
\begin{figure}[b]
\includegraphics
[width=12.0cm,height=12.0cm,angle=0]
{fig/energy_comparison.eps}
\caption{(Color online) (a) The energy (shown in Eq[\ref{eq:energy}]) dependence of the two
states SDW+CDW+dSC$^{s}$ and SDW+CDW+APdSC$^{s}$ (without and with site-centered anti-phase domain walls) on the nnn hopping integral
$t^{\prime }$ for isotropic and
anisotropic nn hopping ratios $t_{x}/t_{y}$. The energy unit is $t_{0}=300meV$. Anisotropic $t_{x\left( y\right) }$ and $%
J_{x\left( y\right) }$, but not the nnn hopping integral $t^{\prime }$, help to push the energy of the SDW+CDW+APdSC$^{s}$
state (solid red symbols) closer to that of the SDW+CDW+dSC$^{s}$ state (open blue
symbols). Square (circle) symbols are for $t_{x}/t_{y}=1.00$ $(0.85)$%
. Panels (b, c, d): the energy, charge and magnetic moment
modulations of these two states with an additional external
potential imposed to enhance the charge and magnetic
modulations by shifting the local potential up by $+V$ $(V>0)$
for the sites with the largest antiferromagnetic moment and down by $-V$
for the sites with zero antiferromagnetic moment. A
substantial anisotropy $t_{x}/t_{y}=0.85$ is used. The diamond
(triangle) symbols are for $t^{\prime}=-0.3$ $(-0.1)$. The larger
antiferromagnetic moment and hole concentration modulations reduce
the energy difference between the two states. }
\label{fig:energy_comparison}
\end{figure}
The presence of substantial local modulations in the hole density
in these states led us to investigate the effect of introducing a
site-dependent potential shift. Such a shift can result from the
crystallographic superlattice modulation that appears at the
structural transition into the LTT state. The results in
Fig[\ref{fig:energy_comparison}b] show that this potential shift
reduces the energy cost of the site-centered anti-phase domain wall and enhances the charge
and spin modulation but still does not lead to a net energy gain
for the SDW+CDW+APdSC$^{s}$ state even in the most favorable case of anisotropic
nn hopping and substantial nnn hopping. Within the RMFT the $\pi
$DW always demands an energy cost, even though it may be only a
very small amount. For bond-centered $\pi $DW with anisotropic nn
hopping and the longer periodicity $L_{x}=10$, the energy
difference between the two states, with and without $\pi$DW, can
also be very small.
\begin{widetext}
\begin{center}
\begin{table}[tbp]
\tiny
\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|}
\hline\hline
$t_{\text{x}},t^{\prime} $ & state & $E_{t}$ & $E_{\text{kin}}$ & $E_{\text{%
J }}$ & $\delta_{\max }$ & $\delta _{\min }$ & $m_{max}$ &
$\overline\Delta_{max}$ & $\overline\Delta_{min}$ &
$\overline\chi_{max}$ & $\overline\chi_{min}$ \\ \hline\hline &
AFM+dSC & -0.4878 & -0.3287 & -0.1593 & 0.12500 & 0.12500 &
0.08524 &
0.1142 & 0.1142 & 0.1903 & 0.1903 \\
\raisebox{0ex}{$t_{x}=1.00$} & dSC & -0.4863 & -0.3428 & -0.1435 & 0.12500 &
0.12500 & 0 & 0.1152 & 0.1152 & 0.1928 & 0.1928 \\
\raisebox{0ex} { $t^{\prime}=0.0$} & SDW+CDW+dSC$^{s}$ & -0.4865 & -0.3373 &
-0.1492 & 0.1372 & 0.1134 & 0.08412 & 0.1214 & 0.09917 & 0.2215 & 0.1821 \\
& SDW+CDW+APdSC$^{s}$ & -0.4782 & -0.3292 & -0.1490 & 0.1498 & 0.09604 &
0.1418 & 0.1114 & 0 & 0.2639 & 0.1111 \\ \hline\hline
& AFM+dSC & -0.4536 & -0.3117 & -0.1419 & 0.12500 & 0.12500 & 0.07432 &
0.08399 & 0.07724 & 0.2652 & 0.1098 \\
\raisebox{0ex}{$t_{x}=0.85$} & dSC & -0.4526 & -0.3225 & -0.1301 & 0.12500 &
0.12500 & 0 & 0.08409 & 0.07593 & 0.2675 & 0.1122 \\
\raisebox{0ex}{$t^{\prime}=0.0$} & SDW+CDW+dSC$^{s}$ & -0.4560 & -0.3050 &
-0.1510 & 0.1737 & 0.07130 & 0.1720 & 0.06538 & 0.05154 & 0.2756 & 0.07822 \\
& SDW+CDW+APdSC$^{s}$ & -0.4554 & -0.3036 & -0.1518 & 0.1815 & 0.05752 &
0.1871 & 0.04479 & 0 & 0.2866 & 0.06392 \\
& SDW+CDW+dSC$^{b}$ & -0.4567 & -0.3002 & -0.1564 & 0.1911 & 0.06029 & 0.1831
& 0.06692 & 0.05151 & 0.2769 & 0.05869 \\
& SDW+CDW+APdSC$^{b}$ & -0.4563 & -0.2978 & -0.1586 & 0.1988 & 0.04986 &
0.1941 & 0.06055 & 0 & 0.2841 & 0.03663 \\ \hline\hline
& AFM+dSC & -0.4841 & -0.3219 & -0.1622 & 0.12500 & 0.12500 & 0.09356 &
0.1149 & 0.1149 & 0.1893 & 0.1893 \\
\raisebox{0ex}{$t_{x}=1.00$} & dSC & -0.4817 & -0.3372 & -0.1445 & 0.12500 &
0.12500 & 0 & 0.1179 & 0.1179 & 0.1920 & 0.1920 \\
\raisebox{0ex}{$t^{\prime}=-0.1$} & SDW+CDW+dSC$^{s}$ & -0.4829 & -0.3268 &
-0.1561 & 0.1525 & 0.1008 & 0.1268 & 0.1304 & 0.09917 & 0.2215 & 0.1654 \\
& SDW+CDW+APdSC$^{s}$ & -0.4759 & -0.3202 & -0.1557 & 0.1650 & 0.08549 &
0.1731 & 0.09773 & 0 & 0.2642 & 0.1034 \\ \hline\hline
& AFM+dSC & -0.4507 & -0.3056 & -0.1451 & 0.12500 & 0.12500 & 0.08474 &
0.07982 & 0.07170 & 0.2688 & 0.1031 \\
\raisebox{0ex}{$t_{x}=0.85$} & dSC & -0.4490 & -0.3182 & -0.1308 & 0.12500 &
0.12500 & 0 & 0.08140 & 0.07155 & 0.2721 & 0.1054 \\
\raisebox{0ex}{$t^{\prime}=-0.1$} & SDW+CDW+dSC$^{s}$ & -0.4539 & -0.3024 &
-0.1515 & 0.1750 & 0.07336 & 0.1751 & 0.06401 & 0.04880 & 0.2742 & 0.08047 \\
& SDW+CDW+APdSC$^{s}$ & -0.4533 & -0.3021 & -0.1512 & 0.1801 & 0.06318 &
0.1867 & 0.04286 & 0 & 0.1840 & 0.06953 \\
& SDW+CDW+dSC$^{b}$ & -0.4538 & -0.2991 & -0.1547 & 0.1833 & 0.07206 & 0.1740
& 0.07046 & 0.05567 & 0.2728 & 0.07248 \\
& SDW+CDW+APdSC$^{b}$ & -0.4533 & -0.2972 & -0.1560 & 0.1890 & 0.06202 &
0.1860 & 0.05946 & 0 & 0.2810 & 0.04685 \\ \hline\hline
& AFM+dSC & -0.4813 & -0.3151 & -0.1662 & 0.12500 & 0.12500 & 0.1188 & 0.1086
& 0.1086 & 0.1866 & 0.1866 \\
\raisebox{0ex}{$t_{x}=1.00$} & dSC & -0.4750 & -0.3303 & -0.1446 & 0.12500 &
0.12500 & 0 & 0.1216 & 0.1216 & 0.1899 & 0.1899 \\
\raisebox{0ex}{$t^{\prime}=-0.3$} & SDW+CDW+dSC$^{s}$ & -0.4814 & -0.3213 &
-0.1602 & 0.1709 & 0.09043 & 0.1746 & 0.1263 & 0.07922 & 0.2351 & 0.1280 \\
& SDW+CDW+APdSC$^{s}$ & -0.4760 & -0.3236 & -0.1523 & 0.1700 & 0.09028 &
0.2002 & 0.07064 & 0 & 0.2431 & 0.1266 \\ \hline\hline
& AFM+dSC & -0.4491 & -0.2986 & -0.1506 & 0.12500 & 0.12500 & 0.1127 &
0.06892 & 0.05955 & 0.2683 & 0.09673 \\
\raisebox{0ex}{$t_{x}=0.85$} & dSC & -0.4436 & -0.3122 & -0.1314 & 0.12500 &
0.12500 & 0 & 0.08064 & 0.06819 & 0.2762 & 0.09695 \\
\raisebox{0ex}{$t^{\prime}=-0.3$} & SDW+CDW+dSC$^{s}$ & -0.4523 & -0.3008 &
-0.1515 & 0.1774 & 0.08221 & 0.1883 & 0.06455 & 0.04278 & 0.2681 & 0.08799 \\
& SDW+CDW+APdSC$^{s}$ & -0.4518 & -0.3017 & -0.1501 & 0.1787 & 0.08034 &
0.1822 & 0.03503 & 0 & 0.2768 & 0.07811 \\
& SDW+CDW+dSC$^{b}$ & -0.4513 & -0.2985 & -0.1528 & 0.1789 & 0.09117 & 0.1638
& 0.06976 & 0.04589 & 0.2680 & 0.08756 \\
& SDW+CDW+APdSC$^{b}$ & -0.4506 & -0.2984 & -0.1523 & 0.1806 & 0.08737 &
0.1700 & 0.04898 & 0 & 0.2762 & 0.06648 \\ \hline\hline
\end{tabular}%
\caption{ Key results for various states obtained by self-consistently solving the Hamiltonian in Eq[\ref{eq:variationH}] with an
average hole density of 1/8. Listed are the mean field energy $%
E_{t}$, kinetic energy $E_{kin}$, spin-spin superexchange energy $E_{J}$,
the modulation of the hole concentration ($\protect\delta _{max}$ and $\protect%
\delta _{min}$), the largest antiferromagnetic moment ($m_{max}$), and the pairing
and hopping amplitudes $\overline{\Delta}_{max}$, $\overline{\Delta}_{min}$, $\overline{\protect%
\chi}_{max}$ and $\overline{\protect\chi}_{min}$ ($\overline{\Delta}=\sum_{\protect%
\sigma}\Delta _{\protect\sigma}/2$, $\overline{\protect\chi}=\sum_{\protect%
\sigma}\protect\chi _{\protect\sigma }/2$) for various states:
the homogeneous AFM+dSC and dSC states, the
SDW+CDW+dSC$^{s}$ ($L_{x}=8$) and SDW+CDW+APdSC$^{s}$ ($L_{x}=8$) states, and
the SDW+CDW+dSC$^{b}$ ($L_{x}=5$) and SDW+CDW+APdSC$^{b}$ ($L_{x}=10$)
states [APdSC (dSC) stands for a d-wave SC state with (without) anti-phase domain walls, site-centered ($^{s}$) or bond-centered ($^{b}$)]. The energy unit is $t_{0}=300meV$, $V\equiv 0$, $J_{x}/J_{y}=t_{x}^{2}/t_{y}^{2}$, $%
t_{y}=1$, $J_{y}=0.3$. Anisotropic nn hopping tends to
energetically favor the homogeneous AFM+dSC state compared to the
homogeneous dSC state. However, it also causes some inhomogeneous
states to be energetically more favored, here for instance the
SDW+CDW+dSC$^{s}$ state. Note that introducing an anti-phase domain wall in the
pairing order parameter within the renormalized mean field theory for the $t-J$ model always
costs energy, although the cost can be very small.} \label{table:energy}
\end{table}
\end{center}
\end{widetext}
\subsection{Bond-centered anti-phase dSC}
Alternative bond-centered anti-phase modulations of the pairing amplitude
were considered by several groups.\cite{Himeda-PRL-02, Raczkowski-PRB-07,
Capello-PRB-08} In the case of the 8-superlattice we did not find any
stable bond-centered solution with nonzero SDW in the doping
regime around 1/8 when requiring an antiferromagnetic domain wall ($m_{I}=0$). But for the longer periodicity $L_{x}$=10 we found
a stable solution. In Fig[\ref{fig:BDWconfig}] a typical pattern for
this long 10-superlattice with and without the bond-centered $\pi$DW is
illustrated. The energy cost of the APdSC$^{b}$ state is also positive for the
bond-centered case, but in some cases it is even smaller than for the
site-centered case (see Table \ref{table:energy}).
\begin{figure}[tbp]
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=10.0cm,height=12.0cm, angle=0]
{fig/BDW5L_deltaconf.eps}
\end{minipage}%
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=10.0cm,height=12.0cm, angle=0]
{fig/BDW_PDRVB5L_deltaconf.eps}
\end{minipage}
\caption{(Color online) Schematic illustration of the modulations of the pairing
amplitude $\Delta $, hopping amplitude $\protect\chi $, hole concentration $%
\protect\delta $ and antiferromagnetic moment $m$, for two states:
SDW+CDW+dSC$^{b}$ [panels (a)] and SDW+CDW+APdSC$^{b}$ [panels (b)] (without and
with bond-centered anti-phase domain walls). The average doping
is $1/8$ and the periodicity $L_{x}=10$. As shown in panel (b1),
the anti-phase modulation of the pairing $\Delta$ is bond-centered,
with the domain wall located at the bonds connecting two nn
sites with maximum staggered antiferromagnetic moment $\pm |m|$ along the x
direction. The energy difference between the two states with and
without the bond-centered domain wall is even smaller than in the
case of a site-centered domain wall with anisotropic nn hopping
$t_{x}/t_{y}=0.85$. The modulation magnitudes of the hole density and
antiferromagnetic moment in these two states are close to each
other.} \label{fig:BDWconfig}
\end{figure}
\section{Spectral Properties of the Modulated Phases}
Next we examine the density of states in the modulated phases,
which gives us insight into the interplay between the SDW and the
superconductivity with either dSC or APdSC order in the stripe phases. We restrict our
considerations to the case of site-centered pairing modulation
relevant for the 8-superlattice. It is instructive to calculate
several densities of states, starting with the local density of
states (LDOS)
\begin{equation}
A_{I}\left( \omega \right) =-\frac{1}{\pi }\sum_{\sigma }ImG_{I,\sigma
}\left( \omega \right), \label{LDOS}
\end{equation}%
where $G_{I,\sigma }\left( \omega \right) $ is the Fourier transform of the
time dependent onsite Green's function $G_{I,\sigma }\left( t\right)
=-i\left\langle T_{t}c_{I,\sigma }\left( t\right) c_{I,\sigma }^{\dag
}\left( 0\right) \right\rangle $. The averaging of the LDOS over all sites
gives
\begin{equation}
\overline{A}\left( \omega \right) =1/N_{c}\sum_{I}A_{I}\left(
\omega \right) , \label{ADOS}
\end{equation}%
where $N_{c}$ is the size of a supercell. Also of interest is the
quasiparticle (QP) density of states
\begin{equation}
N\left( \omega \right) =\frac{1}{V_{RBZ}}\int dk\sum_{l}\delta \left( \omega
-E_{k}^{l}\right), \label{QPDOS}
\end{equation}%
where $l$ runs over all the quasiparticle bands in the reduced
Brillouin zone (RBZ), $V_{RBZ}$ is the volume of the RBZ, and $k\in
$RBZ. The latter is the density of states which determines the
sum of the quasiparticle eigenvalues entering the ground state
energy in mean field theory. The results for these DOS in the
various modulated phases are presented below. First we consider the cases of a dSC with an array of $\pi$DW and of a SDW separately, and then the results when both orders coexist.
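The difference between $\overline{A}\left( \omega \right) $ and $N\left(
\omega \right) $ amounts to whether the eigenvalues are weighted by the
local particle and hole amplitudes or simply counted. Schematically
(Python; a Lorentzian of width $\eta $ replaces the $\delta $-function,
and the BdG eigenvalues \texttt{e[k,n]} and eigenvectors \texttt{u[k,:,n]}
on a grid of the RBZ are assumed to be given):
\begin{verbatim}
import numpy as np

def dos_pair(e, u, om, nc, eta=0.004):
    """e: (nk, 2*nc) BdG eigenvalues on an RBZ grid; u: matching
    eigenvectors, particle amplitudes in rows :nc, hole amplitudes
    in rows nc:.  Returns the cell-averaged LDOS of Eq.(ADOS) and
    the quasiparticle DOS of Eq.(QPDOS)."""
    lor = lambda x: eta/np.pi/(x**2 + eta**2)
    A = np.zeros_like(om)
    N = np.zeros_like(om)
    nk = e.shape[0]
    for k in range(nk):
        for n in range(e.shape[1]):
            N += lor(om - e[k, n])                # count every E_k^l
            wp = np.sum(np.abs(u[k, :nc, n])**2)  # particle weight
            wh = np.sum(np.abs(u[k, nc:, n])**2)  # hole weight
            A += wp*lor(om - e[k, n]) + wh*lor(om + e[k, n])
    return A/(nk*nc), N/nk
\end{verbatim}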
\textbf{(a) Anti-phase dSC}
We start with the DOS for an array of $\pi $DW with a superlattice
periodicity of 8 and an average hole density of 1/8. The LDOS is shown in
Fig[\ref{fig:PDRVB}] for the 3 inequivalent lattice sites: site 1 at the $\pi $%
DW, site 3 halfway between the $\pi $DW, and the remaining
equivalent sites 2 and 4. In the energy region near zero, the
prominent features are a finite LDOS at all sites, which is
largest at the center of a $\pi $DW (site 1), and two sharp peaks
(labeled A and B) symmetrically placed at positive and negative
energies. The finite LDOS at $E=0$ implies a finite quasiparticle
Fermi surface in this APdSC$^{s}$ state. The quasiparticle energy
dispersion is quite complex and is illustrated in
Fig[\ref{fig:PDRVB}c]. Along the high symmetry line $k_{x}=0$ in the
RBZ there are 3 nodal points. These expand into nodal lines for
finite $k_{x}$ to create two closed Fermi loops shown in
Fig[\ref{fig:PDRVB}a]. The two sharp peaks labeled A and B in the
DOS, $\overline{A}\left( \omega \right) $, can
be shown to originate from the almost flat bands displaced away from zero energy in Fig[%
\ref{fig:PDRVB}c]. The LDOS that appears in Fig[\ref{fig:PDRVB}d] shows
clearly an enhanced DOS near zero energy which implies a substantial energy
cost to introduce the $\pi $DW into a uniform dSC state.
\begin{figure}[t]
\includegraphics
[width=15.0cm,height=15.0cm,angle=270]
{fig/PDRVB.eps}
\caption{(Color online) dSC state with site-centered anti-phase domain wall but without antiferromagnetism (doping
concentration $\protect\delta =1/8$, $t^{\prime }=0.0$, $V=0$, and a
supercell $L_{x}=8$, the energy unit is $t_{0}=300meV$). The pattern for the pairing amplitude $\Delta $ is
similar to the case shown in Fig[\protect\ref{fig:DWconfig}]. (a) Fermi
surface in the reduced Brillouin zone. (b) Quasiparticle (QP) DOS (blue) and average DOS (red). The two peaks A and B at
negative energies are a consequence of the flat dispersion along the $k_{y}$
direction [shown in panel (c)] formed by the propagation of the Andreev bound states along the y
direction. (d) Local DOS (LDOS); near the Fermi level the largest portion of the
density of states is at the center of the domain wall.}
\label{fig:PDRVB}
\end{figure}
\textbf{(b) SDW}
The second case we consider is a simple SDW state in which an
array of AFDW
is introduced to create an 8-superlattice. Again the LDOS (see Fig[\ref%
{fig:DW_noSC}]) shows finite values at zero energy, with the
largest value at the center of the AFDW ($m_{i}=0$). As a
consequence this SDW state is metallic. Note that a uniform state would
also be metallic at this hole concentration of $\delta =1/8$. It
is, however, very relevant that the SDW superlattice does not
truncate the Fermi surface completely to give an insulating state,
since then coexistence with d-wave pairing would be disfavored.
Further, any coexisting state would not be superconducting. The
Fermi surface shown in Fig[\ref{fig:DW_noSC}a] consists of
standing waves along $k_{y}$, \textit{i.e.} perpendicular to the
AFDW, and two one-dimensional bands propagating along the AFDW.
\begin{figure}[t]
\includegraphics
[width=6.0cm,height=15.0cm,angle=270]
{fig/DW_noSC.eps}
\caption{(Color online) SDW state (without dSC, $\Delta =0$) with a periodicity of $L_{x}=8$ and an average
doping concentration $\protect\delta =1/8$, $t^{\prime}=-0.10$ (the energy is in units of $t_{0}=300meV$). The
antiferromagnetic sublattice moment pattern is the same as that shown in Fig[%
\protect\ref{fig:DWconfig}]. (a) Fermi surface in the reduced Brillouin zone. (b) Local density of states (LDOS). Note that
this SDW state is metallic.}
\label{fig:DW_noSC}
\end{figure}
\textbf{(c) Coexisting SDW, CDW and dSC or anti-phase dSC}
We examine the coexisting state to look for a possible synergy between the SDW
and dSC, and also to compare the two possibilities for the superconducting
order, the uniform dSC and the APdSC$^{s}$, \textit{i.e.} superconductivity without and with an array of $%
\pi $DW. The favorable choice of the relative position of the two domain
walls is to stagger the $\pi $DW and AFDW as shown in
Fig[\ref{fig:DWconfig}]. From Fig[\ref{fig:SDWCDWDRVB}a,b] one
sees that in both cases the LDOS develops a strong minimum around
zero energy, even dropping to zero in a narrow energy range. The
site dependence of the LDOS is weaker than in the
previous cases. This strong energy minimum indicates a certain
synergy between the SDW and dSC which can lower the energy through
a truncation of the finite Fermi surface that exists in both
cases separately, SDW and
APdSC$^{s}$. The difference in the LDOS between the two cases, with and without $%
\pi $DW, is small. But when one compares the total ground state
energies, a finite energy cost to introduce $\pi $DW into
the superconducting state always appears.
The strong minimum in the DOS at the Fermi level in the SDW+CDW+APdSC$^{s}$ state is consistent with the spectra obtained in angle resolved photoemission (ARPES) and
scanning tunnelling microscopy (STM) experiments on La$_{\text{1.875}}$Ba$_{\text{0.125}}$CuO$_{\text{4}}$ reported by Valla \textit{et al.}\cite{valla-science-06} Our calculations give a complex quasiparticle dispersion associated with the
8-fold superlattice which does not seem to be resolved in the ARPES spectra. A more detailed comparison with experiment is therefore not possible at this time, but the main feature of the experimental DOS is reproduced in our calculations.
\begin{figure}[t]
\includegraphics
[width=11.0cm,height=16.0cm,angle=270]
{fig/DW_DRVB.eps}
\caption{(Color online) SDW+CDW+dSC$^{s}$ and SDW+CDW+APdSC$^{s}$ states (without and with a site-centered anti-phase domain wall). The
upper figures (a1, b1) show the local density of states (LDOS) at the three inequivalent sites: the site with
maximum $|m|$, the site with zero $|m|$, and the middle site (doping $%
\protect\delta =1/8$, $t^{\prime }=-0.1$, $V=0$, and isotropic
$t_{x}=t_{y}$). The energy is in units of $t_{0}=300meV$. The lower figures (a2, b2) show the average DOS and
quasiparticle (QP) DOS. In order to facilitate the
comparison between the states with and without the domain wall, in
panel (b2) the cyan curve is the QP DOS of the SDW+CDW+dSC$^{s}$ state,
replotted from panel (a2). A small gap opens at zero energy.
However, a substantial part of the DOS located at lower energy is
pushed closer to the Fermi level. This may be the reason that
the opening of a gap in the SDW+CDW+APdSC$^{s}$ state does not lead to a
lower energy relative to the state without the domain wall. Note
that a small broadening $\protect\delta =0.004t_{y}$ is used to
smooth the curves.
The nodal behavior in the SDW+CDW+dSC$^{s}$ state is not a general phenomenon. For larger $%
t^{\prime }$, anisotropic $t_{x(y)}$, or an additional external potential this
nodal structure may disappear. Also, for other cases (e.g. $t^{\prime
}=0,t_{x}=1,V=0$) no gap opens in the SDW+CDW+APdSC$^{s}$ state.}
\label{fig:SDWCDWDRVB}
\end{figure}
\section{Discussion}
Anti-phase domain walls, or $\pi$DW, generally cost considerable energy in a
superconductor because they generate an Andreev bound state at the
Fermi energy due to the interference between reflected electrons
and holes. This effect is illustrated in Fig[\ref{fig:toy}a], which
shows a peak in the LDOS centered on an isolated $\pi$DW. In an
array of parallel $\pi$DW this DOS peak broadens into a
2-dimensional band due to both the propagation of the ABS along
the $\pi$DW, as illustrated in Fig[\ref{fig:toy}b], and the
overlap of the ABS on neighboring $\pi$DW. The resulting structure
can produce a pronounced minimum in the LDOS in certain
cases, such as the closely spaced array of $\pi$DW shown
in Fig[\ref{fig:PDRVB_L4}b]. This structure in the LDOS lowers the
energy cost to introduce $\pi$DW in the dSC, but leaves it still positive. For the period 8
supercell the modification of the DOS is less important. As
illustrated in Fig[\ref{fig:PDRVB}c] the APdSC$^{s}$ bandstructure is
quite complex and displays a finite Fermi surface (see
Fig[\ref{fig:PDRVB}a]). The resulting LDOS has a
finite value at the Fermi energy which is largest at the center of the $\pi$%
DW.
In the case of coexisting SDW and CDW one must first consider the effect
of these superlattices alone. The results are presented in Fig[\ref%
{fig:DW_noSC}], which shows a metallic state with a finite DOS and Fermi
surface. This is important since, if the SDW resulted in an insulating
ground state, the addition of Cooper pairing would be less energetically
favorable and would not change the state from insulating to superconducting.
The bandstructure consists of standing waves in the direction perpendicular
to the AFDW which are propagating in the direction parallel to the AFDW.
Coexisting SDW and dSC leads to a substantial interplay between the
two broken symmetries. Recently Agterberg and Tsunetsugu showed
that there can be a synergy between the two orders due to the
presence of cross terms involving both order parameters in a
Landau expansion \cite{Agterberg-08}. The cross term depends
crucially on the relative orientation of the wavevectors of the SDW
and APdSC. For the case of parallel $\mathbf{q}$-vectors under consideration
here (e.g. as illustrated in Fig[\ref{fig:BDWconfig}]), however, the cross
term vanishes. Nonetheless, in the present case there is still a
considerable synergy between the two broken symmetries. This shows
up in the DOS as a pronounced dip at the chemical potential, as illustrated in
Fig[\ref{fig:SDWCDWDRVB}b]. However, this effect is not confined
to the case of APdSC but is also present in the case of a
uniform-phase dSC coexisting with SDW, as illustrated in
Fig[\ref{fig:SDWCDWDRVB}a]. We have not found a simple explanation
for this synergy. The quasiparticle bands in the vicinity of the
Fermi energy have a complex dispersion for which we do not have a
simple interpretation. Remarkably, the form of the DOS near the
Fermi energy is very similar for coexisting SDW and dSC with and
without the array of $\pi$DW in the dSC. This subtle difference in
the DOS shows up as only a small difference in the ground state
energy so that the energy cost of introducing $\pi$DW is very
small.
\section{Conclusions}
The small energy difference that we find agrees with the earlier
calculations reported by Himeda \textit{et al.} \cite{Himeda-PRL-02} for coexisting SDW and
APdSC$^{s/b}$. These authors used a VMC method in which the strong coupling onsite
constraint is exactly treated whereas here it is only approximated through
the Gutzwiller factors. This suggests that our failure to find a clear
explanation for the stabilization of APdSC$^{s/b}$ does not result from the
Gutzwiller approximation but may be because the t-J model omits some
relevant physical effect. Alternatively the special cross term between SDW
and APdSC order found by Agterberg and Tsunetsugu \cite{Agterberg-08} which favors oblique
wavevectors for the two periodicities may mean that our simple pattern with
parallel arrays of AFDW and $\pi$DW is not optimal, although on the surface
it looks very plausible to simply stagger the two domain walls. After completing this paper, we learned that a related work was posted by Chou \textit{et al.} \cite{TKLee-08}.
\section{Acknowledgments}
We are grateful to John Tranquada, Alexei Tsvelik and Daniel
Agterberg for stimulating discussions. KYY, TMR and MS gratefully
acknowledge financial support from the Swiss Nationalfonds through
the MANEP network. This work was also in part supported by RGC at
HKSAR (FCZ and WQC).
Accreting white dwarfs (WDs) become transient, intermittent, and persistent
supersoft X-ray sources (SSSs) depending on the mass-accretion rate. We find
various phenomena over wide ranges of time-scales and wavelengths.
In this paper I will review when and how the supersoft X-rays emerge from
the WDs.
Section 2 briefly introduces the stability analysis of accreting WDs.
Section 3 deals with low mass-accretion rates; WDs experience nova outbursts
and become transient SSSs in the later phase.
With an intermediate accretion rate hydrogen burning is stable and
the WDs become persistent SSSs, which is the subject of Section 4.
In the case of a high mass-accretion rate, an optically thick wind inevitably occurs
from the WD surface (accretion wind).
Section 5 introduces quasi-periodic SSSs as objects related to this regime.
\section{Stability analysis of accreting WDs}
Sienkiewicz (1980) examined the thermal stability
of steady-state models for accreting WDs of various masses.
For low accretion rates the envelope is thermally unstable, which
triggers a hydrogen shell flash, but for higher accretion rates nuclear
burning is stable. Nomoto et al. (2007) reexamined this stability using
OPAL opacities and confirmed Sienkiewicz' results. The stable and unstable
regions are denoted in Figure \ref{hr_simple}.
The unstable part represents WDs with a thin envelope
in which energy generation is mainly due to compressional heating, while the
stable part represents WDs with nuclear burning at the bottom of an extended envelope.
\begin{quote}
\begin{figure}
\includegraphics[width=80mm]{f1.epsi}
\caption{Loci of accreting WDs in the HR diagram.
Each sequence corresponds to a mass of the WD which
accretes matter of solar composition.
The dotted part indicates unstable hydrogen burning, and solid and
dashed parts stable burning.
The dashed part is the region of optically thick wind mass loss in
which supersoft X-ray flux cannot be expected due to self-absorption
by the wind.
``$+$'' marks connected with a solid line denote the ``stable'' solutions
claimed by Starrfield et al. (2004) for 1.35 $\mathrm{M_\odot}$.}
\label{hr_simple}
\end{figure}
\end{quote}
These results on stability for steady-state models are consistent with
evolutionary calculations of hydrogen-shell flashes on accreting WDs
(e.g., Paczy\'nski \& \.Zytkow 1978; Sion et al. 1979; Prialnik \& Kovetz 1995;
Sparks, Starrfield \& Truran 1978; Nariai, Nomoto \& Sugimoto 1980;
Townsley \& Bildsten 2004)
and also with static envelope analyses (Iben 1982; Sala \& Hernanz 2005).
Therefore, these results on the stability of accreting WDs may be regarded
as a consensus among the researchers in this field.
It has to be noticed that Starrfield et al. (2004) presented different results
for accreting WDs, which they call ``surface hydrogen burning models''. In their
calculation, WDs stably burn hydrogen for accretion rates ranging
from $1.6\times 10^{-9}$ to $8.0\times 10^{-7}~\mathrm{M_\odot}$yr$^{-1}$ and become
type Ia supernovae.
This result contradicts our present understanding of the stability of shell
flashes and all of the previous numerical results cited above.
Nomoto et al. (2007) pointed out that this stable ``surface hydrogen burning''
is an artifact which arose from the lack of resolution in the envelope
structure of Starrfield et al.'s models.
\begin{quote}
\begin{figure}
\includegraphics[width=80mm]{f2.eps}
\caption{The response of WDs to mass accretion, illustrated
in the plane of WD mass and mass-accretion rate.
In the region above $\dot M_{\rm cr}$ strong optically thick winds blow.
Hydrogen shell burning is stable in the region
$\dot M_{\rm acc} > \dot M_{\rm std}$.
Steady hydrogen shell burning with no optically thick winds occurs
between the two horizontal lines, i.e.,
$\dot M_{\rm std} \le \dot M_{\rm acc} \le \dot M_{\rm cr}$.
There is no steady-state burning at $\dot M_{\rm acc} < \dot M_{\rm std}$,
where unstable shell flashes trigger
nova outbursts. The ignition mass for the shell flash is indicated beside each
locus of constant ignition mass.
See Hachisu and Kato (2001) for more detail.
}
\label{accmap_z02}
\end{figure}
\end{quote}
Figure \ref{accmap_z02} shows the response of the accreting WDs
in the mass-accretion rate vs. WD mass diagram.
The lower horizontal line denotes $\dot M_{\rm std}$, the boundary
between the stable and unstable regions of nuclear
burning (i.e., the boundary of solid and dotted
regions in Figure \ref{hr_simple}).
If $\dot M_{\rm acc} < \dot M_{\rm std}$, hydrogen shell burning is unstable
and nova outbursts occur. Otherwise, hydrogen burning is stable and no
nova occurs.
The upper horizontal line in Figure \ref{accmap_z02} indicates $\dot M_{\rm cr}$,
the boundary above which optically thick winds occur (i.e., the small circle
at the left edge of the dashed line in Figure \ref{hr_simple}).
strong optically thick winds always blow (Kato and Hachisu 1994, 2009).
For intermediate accretion rates, i.e.,
$\dot M_{\rm std} \le \dot M_{\rm acc} \le \dot M_{\rm cr}$,
hydrogen burning is stable and no wind mass loss occurs. The WD burns
hydrogen at a rate equal to the mass-accretion rate, and it stays at
the same position on the thick part in the HR diagram.
The surface temperature is high enough to emit supersoft X-rays
(see Figure \ref{hr_simple}).
Therefore, this region corresponds to persistent X-ray sources.
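The three regimes can be summarized in a small classification routine
(Python). The boundary values are inputs to be taken from the stability
and wind calculations cited above; this sketch does not reproduce them.
\begin{verbatim}
def wd_response(m_dot_acc, m_dot_std, m_dot_cr):
    """Response of an accreting WD (cf. Figure 2); all rates for a
    fixed WD mass, in the same units."""
    if m_dot_acc < m_dot_std:
        return "unstable shell flashes: nova outbursts, transient SSS"
    if m_dot_acc <= m_dot_cr:
        return "steady hydrogen burning, no winds: persistent SSS"
    return "steady burning with optically thick winds (accretion wind)"
\end{verbatim}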
\begin{quote}
\begin{figure}
\includegraphics[width=80mm]{f3.epsi}
\caption{Evolution of nova outbursts. After the nova explosion sets in,
the companion star is engulfed deep inside the photosphere (a);
the photospheric radius moves inward with time due to strong mass
loss. The companion emerges from the WD photosphere (d) and
an accretion disk may appear or be reestablished (e).
The optically thick wind stops (f).
Hydrogen nuclear burning stops and the nova enters a cooling phase (g).
The main emitting wavelength region shifts from optical to UV and then
to supersoft
X-rays. (taken from Hachisu and Kato 2006)
}
\label{novaexplosion}
\end{figure}
\end{quote}
\section{Low mass-accretion rate}
\subsection{Novae as transient SSSs}
When the mass-accretion rate onto a WD is smaller than the critical
value ($\dot M_{\rm acc} < \dot M_{\rm std}$),
an unstable hydrogen shell flash triggers a nova outburst. The WD envelope quickly
expands and moves from the lower region (Fig. 1, dotted region) to the
upper right region in the HR diagram.
Figure \ref{novaexplosion} shows the evolutionary change of a nova binary during an
outburst.
After the nova outburst sets in, the envelope of the WD expands greatly
and strong wind mass loss begins. Optical photons dominate
in the first stage and are
replaced by UV and then X-ray photons as the photospheric temperature
rises with time. The time scales of the optical decline and of the UV and X-ray phases
depend strongly on the WD mass and secondarily
on the chemical composition (e.g. Hachisu and Kato 2006).
In general, a nova on a massive WD evolves fast, so the duration of the X-ray
phase is also short, while on less massive WDs it lasts longer.
From the theoretical point of view, all novae become SSSs in the later phase of
the outburst, although the time scale differs greatly from nova to nova.
Supersoft X-rays are probably observed only after the optically thick
wind stops, because supersoft X-rays are
absorbed by the wind itself. Therefore, the X-ray turn-on and turn-off
times correspond to the epochs when the wind stops (f) and when hydrogen burning
stops (g), respectively, in Figure \ref{novaexplosion}.
Hard X-rays originate from internal shocks between ejecta (Friedjung 1987;
Cassatella et al. 2004; Mukai and Ishida 2001)
or between the ejecta and the companion (Hachisu and Kato 2009b); therefore, they can
be detected during the period indicated by the dashed line in Figure \ref{novaexplosion}.
\begin{quote}
\begin{figure}
\includegraphics[width=80mm]{f4.epsi}
\caption{Light-curve fitting for V1974 Cyg.
The supersoft X-ray data (open squares) as well as the UV 1455 \AA~
(large open circles), visual (small dots) and $V$-magnitudes (small open circles)
are shown.
The lines denote theoretical curves for a chemical composition of $X= 0.55$,
$X_{\rm CNO}= 0.10$, $X_{\rm Ne}= 0.03$, and $Z= 0.02$.
The $1.05 \mathrm{M_\odot}$ WD model (thick solid line) gives the best simultaneous fit to
these observational data.
Two observationally suggested epochs are indicated by arrows:
when the optically thick wind stops and when the hydrogen shell burning ends.
(taken from Hachisu and Kato 2006)}
\label{V1974Cyg}
\end{figure}
\end{quote}
\subsection{Light-curve fitting of classical novae}
\begin{quote}
\begin{figure}
\includegraphics[width=80mm]{f5.epsi}
\caption{Light-curve fitting for V2491 Cyg.
The upper set of data shows optical and near-IR observations, and
the lower set X-ray data.
The best-fit theoretical model is a $1.3~\mathrm{M_\odot}$ WD (thick blue line)
for the envelope chemical composition with
$X=0.20$, $Y=0.48$, $X_{\rm CNO} =0.20$, $X_{\rm Ne} =0.10$, and $Z=0.02$.
Supersoft X-rays are probably not detected
during the wind phase (dashed part) because of self-absorption by the wind itself.
The $F_\lambda \propto t^{-3}$ law is added for the
nebular phase. See Hachisu and Kato (2009a) for more detail.
}
\label{V2491Cyg}
\end{figure}
\end{quote}
Nova light curves can be calculated theoretically using the optically thick
wind theory of nova outbursts for a given
set of WD mass and envelope chemical composition (Kato and Hachisu 1994).
In general, novae evolve fast on massive WDs and slowly on less massive
WDs, mainly due to the difference in ignition mass (smaller
ignition mass on massive WDs). The optical and
infrared (IR) fluxes can basically be well represented by free-free emission.
A beautiful scaling law of the optical and IR fluxes was found among
a number of novae of different speed classes, i.e., the
``universal decline law of classical novae'' (Hachisu and Kato 2006).
This property is useful for
understanding nova light curves, which show a wide range of varieties. For
example, we can extract the basic shape from a given light curve
and recognize secondary features such as oscillatory behavior,
multiple peaks, a sudden optical drop associated with dust formation, and
additional brightness due to emission lines in the nebular phase.
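In practice this scaling can be exploited by stretching a template light
curve in time and shifting it in magnitude until it overlays a given
nova. The following sketch (Python) shows only this generic
transformation; the specific relation between the stretch factor and the
magnitude shift used by Hachisu and Kato (2006) is not reproduced here,
so both are left as free parameters.
\begin{verbatim}
import numpy as np

def rescale_light_curve(t_days, mag, f_s, dm):
    """Stretch a light curve in time by f_s and shift it by dm mag.
    The universal decline law states that curves of different speed
    classes overlap after such a transformation; here f_s and dm
    are fitted per nova."""
    return np.asarray(t_days) * f_s, np.asarray(mag) + dm
\end{verbatim}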
Figure \ref{V1974Cyg} shows an example of light-curve fitting.
The lines marked ``opt'' represent calculated light curves. In the
later phase the visual light curve deviates from the theoretical lines due to
the contribution of strong emission lines (see Hachisu and Kato 2006 for more detail).
The decline rate of the optical flux and the durations of the UV and X-ray
phases depend
differently on the WD mass and composition; therefore, multiwavelength observations
are important for determining these parameters. In this case, the above authors
determined
the WD mass to be about 1.05 $\mathrm{M_\odot}$ for the chemical composition shown
in the figure caption.
The second example of light-curve fitting is V2491 Cyg. This is a
very fast nova whose supersoft X-ray phase lasts only 10 days.
Figure \ref{V2491Cyg} shows that the best-fit model, which simultaneously reproduces
the visual, IR and X-ray light curves, is a
$\approx\,1.3\,\mathrm{M_\odot}$ WD with the chemical composition
given in the legend of the figure. This nova shows a secondary maximum
about 15 days after the optical peak. Except for this secondary maximum and
the very late nebular phase, the optical and IR light curves follow the
universal decline law, which is indicated by the solid lines
(see Hachisu and Kato 2009a
for the magnetic origin of the secondary maximum).
\subsection{X-ray turn on/off time and WD mass}
Hachisu and Kato (2009b) presented light-curve analyses for more than ten
novae in which supersoft X-rays were detected, and determined the WD masses:
for example, $0.85~\mathrm{M_\odot}$ for V2467 Cyg (CO nova), $0.95~\mathrm{M_\odot}$
for V458 Vul (CO nova), $1.15~\mathrm{M_\odot}$ for V4743 Sgr, and
$1.2~\mathrm{M_\odot}$ for V597 Pup.
Kato, Hachisu and Cassatella (2009) suggested that Ne novae
harbor more massive WDs than CO novae and that the boundary between CO and Ne
WDs is at $\approx\,1.0\,\mathrm{M_\odot}$, based on their mass estimates for seven IUE novae.
The mass estimates for X-ray novae (Hachisu and Kato 2009b) are consistent
with the above boundary of $\approx\,1.0\,\mathrm{M_\odot}$,
although the chemical composition is not known for some novae.
Umeda et al. (1999) found that the lowest mass of an ONeMg WD is $1.08~\mathrm{M_\odot}$
from evolutionary calculations of intermediate-mass stars in binaries.
This means that a WD is not eroded much, even though
it has suffered many cycles of nova outbursts.
This may provide interesting information for binary evolution scenarios
and chemical evolution of galaxies.
\subsection{Recurrent novae}
Recurrent novae repeat outbursts every 10-80 years. The evolution of the
outburst is very fast. As heavy-element
enrichment is not detected in their ejecta, their WD masses are supposed to
increase after each
outburst. One of the interesting light-curve properties is the presence of a
plateau phase: U Sco shows a plateau phase of 18 days (Hachisu et al. 2000)
and RS Oph of 60 days, which is an indication
of an irradiated disk (Hachisu et al. 2006). Hachisu, Kato and Luna (2007) showed
that the turn-off epoch of the supersoft X-rays corresponds to the sharp drop
immediately after the optical plateau phase (see Figure \ref{V2491Cyg.RSOph}).
They presented the idea that the long duration of the plateau
in RS Oph is a result of additional heat flux from a hot helium ash layer
developed underneath the hydrogen-burning zone.
Therefore, the plateau is further evidence of an increasing WD mass.
\begin{quote}
\begin{figure}
\includegraphics[width=65mm]{f6.epsi}
\caption{Comparison of light curves of RS Oph and V2491 Cyg. X-ray
count rates and optical magnitudes are denoted by open triangles and
filled circles, respectively. RS Oph data are taken from Hachisu et al. (2007), and
V2491 Cyg data from Hachisu and Kato (2009a).
}
\label{V2491Cyg.RSOph}
\end{figure}
\end{quote}
It is interesting to compare the visual and X-ray light curves of RS Oph
with those of the classical nova V2491 Cyg. These objects show a similar rapid
decline in the first optical phase, except for the secondary maximum of V2491 Cyg,
and both contain a very massive WD ($1.35~\mathrm{M_\odot}$ in RS Oph:
Hachisu et al. 2007; $1.3~\mathrm{M_\odot}$ in V2491 Cyg).
However, RS Oph shows a long-lasting supersoft X-ray phase, while V2491 Cyg does not.
This difference may be explained by the presence of a hot ash layer.
In classical novae, hydrogen ignites somewhat below the WD surface owing to diffusion
during the long quiescent phase (Prialnik 1986), and the ash produced by
nuclear burning is carried upwards by convection and blown off in the winds.
Then no helium layer develops underneath the burning zone. The heavy-element
enrichment observed in the ejecta may support this hypothesis.
In recurrent novae, on the other hand, the diffusion process
does not work during the short quiescent period, so hot helium ash can pile up
and act as a heat reservoir. This hypothesis needs to be examined further,
perhaps in the next recurrent nova outburst.
\section{Intermediate mass-accretion rate}
In the intermediate mass-accretion rate regime
($\dot M_{\rm std} \le \dot M_{\rm acc} \le \dot M_{\rm cr}$),
hydrogen burning is stable and optically thick winds do not occur.
The photospheric temperature of the WD is relatively high, as indicated
by the solid lines in Figure \ref{hr_simple}. These WDs are observed
as persistent SSSs.
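The three accretion regimes that organize this article can be summarized in a small classifier. This is a minimal sketch: the fitting formula used for $\dot M_{\rm cr}$ is the one of Hachisu, Kato and Nomoto (1996), quoted here as an external assumption, and the lower stability boundary $\dot M_{\rm std}$ is left as a user-supplied number.
\begin{verbatim}
def critical_rate(m_wd):
    # Wind threshold Mdot_cr in Msun/yr for a WD of mass m_wd
    # (in Msun); Hachisu, Kato & Nomoto (1996) fitting formula,
    # quoted as an assumption rather than derived here.
    return 7.5e-7 * (m_wd - 0.40)

def accretion_regime(mdot_acc, m_wd, mdot_std):
    # Classify the three regimes discussed in the text.
    if mdot_acc < mdot_std:
        return "low: hydrogen shell flashes (novae)"
    if mdot_acc <= critical_rate(m_wd):
        return "intermediate: steady burning (persistent SSS)"
    return "high: accretion wind"

print(accretion_regime(1.0e-7, 1.0, 3.0e-8))
# -> "intermediate: steady burning (persistent SSS)"
\end{verbatim}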
\subsection{Steady hydrogen burning}
van den Heuvel et al. (1992) interpreted supersoft X-ray sources as
accreting WDs with accretion rates high enough ($\approx\,10^{-7}\,\mathrm{M_\odot}~$yr$^{-1}$)
to undergo steady hydrogen nuclear burning.
Figure \ref{SSX.SMC13} indicates the positions of the SSSs in the HR diagram,
which are roughly consistent with the theoretical steady-burning phase
(thick part), considering the difficulties in determining the temperature
and luminosity observationally.
\begin{quote}
\begin{figure}
\includegraphics[width=80mm]{f7.epsi}
\caption{Same as Figure \ref{hr_simple} but with SSSs
(taken from Starrfield et al. 2004 except 1E0035).
Three squares with small open circles denote SMC13 (G: Greiner (2000),
SI: Suleimanov and Ibragimov (2003), K: Kahabka et al. (1999)).
}
\label{SSX.SMC13}
\end{figure}
\end{quote}
\subsection{SMC13: a possible very slow nova?}
It should be noticed that some supersoft X-ray sources may
not be exactly steady burning sources, but may instead be remnants of
nova outbursts of very slow evolution.
Kahabka and Ergma (1997) proposed that the observational data of
1E0035.4-7230 (SMC13) can be explained in the framework of standard
cataclysmic variable evolution of low-mass WDs ($\approx$ 0.6-0.7
$\mathrm{M_\odot}$).
Figure \ref{lightM04.Z} demonstrates that a low-mass WD (0.4 $\mathrm{M_\odot}$)
undergoes a nova outburst of extremely slow evolution.
Its X-ray turn-on and turn-off times are 300 and 600 yr, respectively, for $Z=0.02$,
and even slower for Population II compositions ($Z=0.004$ and 0.001).
In these cases the supersoft X-ray phase starts when the
optical magnitude has dropped by 6 mag, long after the optical peak. Therefore,
we would detect no optical counterpart of such an SSS, nor find any
record in the literature.
Figure \ref{SSX.SMC13} also shows the position of SMC13 estimated by three squares
with small open circles at the corners (two of the squares are very small).
These positions are scattered among authors using different methods of analysis,
but are roughly consistent with the solid part of the theoretical lines (a persistent
source). If SMC13 is instead a very slow nova, its X-ray emission lasting more than
a decade suggests a less massive WD ($<0.6~\mathrm{M_\odot}$) with a lower
temperature. Thus, it would be valuable to update the temperature and luminosity
of SMC13 using
the as yet unanalyzed high-quality data obtained with satellites after BeppoSAX
and ROSAT.
\begin{quote}
\begin{figure}
\includegraphics[width=80mm]{f8.epsi}
\caption{Theoretical light curves of the visual and supersoft X-ray (0.1-0.6 keV)
fluxes for a $0.4~\mathrm{M_\odot}$ WD of various populations ($Z=0.02,~0.004$, and 0.001).
}
\label{lightM04.Z}
\end{figure}
\end{quote}
\section{High mass-accretion rate}
\subsection{Accretion wind}
When the accretion rate is larger than $\dot M_{\rm cr}$,
the WD cannot consume all of the accreted matter, which piles
up to form an extended envelope. As the
photospheric temperature decreases to the critical value
(i.e., the rightmost point of the thick part in
Figure \ref{hr_simple}), optically thick winds are accelerated by the
Fe peak (at around $\log T ($K$)\approx 5.2$) of the
OPAL opacity (Kato and Hachisu 1994, 2009).
Hachisu and Kato (2001) proposed a binary configuration in which the WD accretes
matter from the companion through the equatorial region and loses matter as
a wind from the other regions, as illustrated in Figure \ref{accwind}.
They named such a configuration ``accretion wind''.
In such a case, the WD burns hydrogen at the rate $\dot M_{\rm nuc}$ and
blows off the rest of the accreted matter in the winds at a rate of about
$\dot M_{\rm acc} - \dot M_{\rm nuc}$,
where $\dot M_{\rm nuc}$ is the nuclear burning rate.
A WD in this ``accretion wind'' state
corresponds to the dashed part in Figure \ref{hr_simple}.
The accretion wind is an important elementary process in binary evolution
scenarios toward Type Ia supernovae, because it governs the growth rate of the WD mass
(e.g., Hachisu, Kato \& Nomoto 1999a; Hachisu et al. 1999b; Han \&
Podsiadlowski 2006),
as well as the mass-transfer rate from the companion, which is regulated
by the stripping of the companion's surface by the wind
(e.g., Hachisu, Kato \& Nomoto 2008).
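The bookkeeping described above can be written down directly; the following is a minimal sketch, assuming that hydrogen burns at roughly the critical rate once the wind is on (the quantitative wind solutions are given in the papers cited above).
\begin{verbatim}
def accretion_wind_budget(mdot_acc, mdot_cr):
    # In the accretion-wind regime (mdot_acc > mdot_cr) hydrogen
    # burns at mdot_nuc ~ mdot_cr and the optically thick wind
    # carries off the excess, mdot_acc - mdot_nuc; the WD then
    # grows at ~mdot_nuc through accumulation of helium ash.
    mdot_nuc = min(mdot_acc, mdot_cr)
    mdot_wind = max(mdot_acc - mdot_nuc, 0.0)
    return mdot_nuc, mdot_wind

nuc, wind = accretion_wind_budget(2.0e-6, 4.5e-7)
print(f"burning {nuc:.1e}, wind {wind:.1e} Msun/yr")
\end{verbatim}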
\begin{quote}
\begin{figure}
\includegraphics[width=65mm]{f9.eps}
\caption{Optically thick winds blow from mass-accreting WDs
when the mass-transfer rate from a lobe-filling companion exceeds a critical
rate, i.e., $\dot M_{\rm acc} > \dot M_{\rm cr}$.
The white dwarf accretes mass from the equatorial region and
at the same time blows winds from the polar regions.}
\label{accwind}
\end{figure}
\end{quote}
\begin{quote}
\begin{figure}
\includegraphics[width=80mm]{f10.eps}
\caption{Self-sustained model of spontaneous winds for RX J0513.9$-$6951.
(a) Long-term evolution of the V magnitude.
(b) Model light curve for $M_{\rm WD}=1.3~\mathrm{M_\odot}$.
(c) Change of the accretion rate and wind mass loss from the WD envelope.
(d) Change of the WD radius and its temperature.
(Taken from Hachisu and Kato 2003b.)
}
\label{rxj0513}
\end{figure}
\end{quote}
\subsection{Accretion wind and SSS}
There are two objects closely related to the accretion winds: RX J0513$-$69 and
V Sge. Both of them are supersoft X-ray sources.
RX J0513$-$69 is an LMC SSS that shows quasi-regular transitions between
optical high and low states, as shown in Figure \ref{rxj0513}, in which
supersoft X-rays are detected only in the optical low states (Reinsch et al. 2000;
Schaeidt, Hasinger, and Truemper 1993).
Hachisu and Kato (2003b) presented a transition mechanism between the high
and low states. In the optical high state, the accretion rate is high enough
that the photosphere expands and accelerates the winds (Figure \ref{accwind}). The WD
is located in the low-temperature region (dashed part in Figure \ref{SSX.SMC13})
and no X-rays are expected.
In the optical low state, the mass-accretion rate is low and
the photospheric temperature is high enough
(in the solid part of Figure \ref{SSX.SMC13}) to emit supersoft X-rays.
No wind is accelerated.
The above authors proposed a self-regulating transition
mechanism that drives the binary
back and forth between the optical high and low states.
When the mass-accretion rate is large, the WD is in the optical
high state. The strong winds hit the companion and strip off
a part of the companion's surface. Thus the mass-transfer rate
onto the WD decreases and finally stops, which causes the winds to stop, and
the system goes into the optical low state. After a certain time, the
companion recovers to fill its Roche lobe again and the mass transfer resumes,
which again causes wind mass loss.
The resulting theoretical light curves depend on the WD mass and other
parameters. The model that best reproduces
the observed light curve indicates a WD
mass of 1.2 - 1.3 $\mathrm{M_\odot}$ (see Figure \ref{rxj0513}).
The second object is V Sge, which also shows a similar semi-regular transition
in its light curve, although the timescales are different.
Its light curve is also reproduced by the transition model with
a WD mass of 1.2 - 1.3 $\mathrm{M_\odot}$ (Hachisu and Kato 2003a).
In these two systems the WD mass is increasing with time, because steady
nuclear burning produces helium ash which accumulates on the WD. Therefore,
they are candidate Type Ia supernova progenitors.
\section{Introduction}
\label{sec:intro}
The study of perturbative probes of the quark-gluon plasma, and QCD jets in particular, is currently in its golden age with the development of jet reconstruction techniques for heavy-ion collisions at LHC and RHIC, see e.g. \cite{Muller:2012zq,Spousta:2013aaa,Armesto:2015ioy} and these proceedings. These measurements provide in many ways a more rigorous connection between experimental measurements and theory or Monte-Carlo studies because of the implicit resummation of collinear divergences. On the other hand, successful jet reconstruction in the extreme environment of heavy-ion collisions is challenging and comparisons between models and data should be done with care \cite{Cacciari:2010te,Cacciari:2011tm}.
Until recently most studies, both experimental and phenomenological, dealt with jet and di-jet rates as well as inclusive properties of jets, fragmentation functions and jet shapes, and measurements of large-angle energy flow around jets. However, novel measurements of jet substructures in nuclear collisions \cite{CMS:2016jys} have recently invigorated the discussion and opened new possibilities for measuring and understanding medium modifications of jets.
The avenues of recent progress can predominantly be categorised according to two chief aspects of in-medium jet physics: firstly, the propagation of a single colour charge in the medium and, secondly, the generalisation to multiple charges, accounting for possible interference effects. We will review the former aspect in Sec.~\ref{sec:radiative} and the latter in Sec.~\ref{sec:coherence}. We will also discuss the application of the medium modifications on the level of jet substructure measurements in Sec.~\ref{sec:substructure}. This discussion is in no way meant to be exhaustive but will immediately illustrate the importance of whether sub-jets are treated as independent or coherent. Jet substructure therefore provides a new handle on the dynamics that can help pinpoint the microscopic processes underlying the measured modifications.
The choice of focus here is of course a biased selection, and not all recent progress in the field can be covered. A very interesting topic which deserves further study is the back-reaction of the medium dynamics to the propagation of the jet, see e.g. \cite{Zapp:2012ak,Wang:2013cia,He:2015pra,Casalderrey-Solana:2016jvj}. While these aspects are certainly important for quantitative comparisons to experimental data, we currently do not have much to say about their qualitative features.
We summarise briefly in Sec.~\ref{sec:conclusions}.
\section{Radiative parton energy loss}
\label{sec:radiative}
A single hard parton traversing a coloured medium undergoes successive elastic interactions which modify its kinematics, mainly leading to the transverse momentum broadening $\langle k_\perp^2 \rangle = \hat q L$, characterised by the parameter $\hat q$ in a medium of length $L$. The most efficient energy degradation mechanism is therefore realised through an enhanced rate of splitting. Assuming multiple soft scattering, the spectrum of induced quanta with energy $\omega$ radiated off a hard gluon is strongly cut off at a characteristic energy $\omega_c \equiv \hat q L^2/2$ and reads \cite{Baier:1996kr,Baier:1996sk,Zakharov:1996fv,Zakharov:1997uu}
\begin{eqnarray}
\label{eq:BDMPSspectrum}
\omega\frac{\text{d} N_\text{\tiny BDMPS}}{\text{d} \omega} = \bar \alpha\left\{ \begin{array}{lc} \sqrt{\frac{\omega_c}{2 \omega}} & \omega < \omega_c \\ \frac{1}{12} \left(\frac{\omega_c}{\omega} \right)^2 & \omega > \omega_c \end{array} \right. \,,
\end{eqnarray}
where $\bar \alpha \equiv 2\alpha_s N_c/\pi$. For further details and refinements, see e.g. \cite{Mehtar-Tani:2013pia,Blaizot:2015lma}.
The behaviour in the soft sector is characteristic of the Landau-Pomeranchuk-Migdal (LPM) interference between scattering centres and arises because the formation time of the gluon scales as $t_\text{f} = \sqrt{\omega/\hat q}$. One can also find a compact analytical expression for uncorrelated scatterings, the so-called ``first order in opacity'' spectrum \cite{Gyulassy:2000er,Wiedemann:2000za}; in this case, the LPM effect suppresses the hard sector.
The parameter $\omega_c$ determines the energy of gluons that have been broadened along the whole medium length and are emitted at the minimal angle $(\hat q L^3)^{-{\nicefrac{1}{2}}}$. It also controls the mean energy loss $\langle \Delta E \rangle \sim \hat q L^2 $. These emissions are rare, however, occurring with probability $\mathcal{O}(\alpha_s)$. The energy scale $\omega_s = \bar \alpha^2\omega_c$ determines the regime where we have to take into account multiple branchings, i.e. $\int_{\omega_s} \text{d} \omega \,\text{d} N_\text{\tiny BDMPS}/\text{d} \omega > 1$. Since their formation times are shorter than the medium length, a cascading process takes place which transports these gluons to large angles, $\theta > \bar \alpha^{-2}(\hat q L^3)^{-{\nicefrac{1}{2}}}$. When we reconstruct the energy of the leading parton in a cone, this effect is responsible for sizeable energy leakage \cite{Blaizot:2013hx,Blaizot:2014ula,Blaizot:2014rla,Kurkela:2014tla}.
To get a clearer picture, let us put some numbers on these equations. For $L=4$ fm, $\hat q = 1$ GeV$^2$/fm and $\bar \alpha = 0.3$, we find $\omega_c = 80$ GeV and $\omega_s =7$ GeV. For this energy range, the corresponding range of emission angles, estimated from momentum broadening as $\theta \sim \sqrt{\hat q L}/\omega$, is $0.025 < \theta_\text{\tiny BDMPS} < 0.28 $. For a jet reconstructed in a cone of $R=0.3$, this typical choice of medium parameters indicates that rare and hard BDMPS emissions populate the in-cone jet distribution while multiple branchings transport energy out of the cone. This soft cascade has been studied in quite some detail and its connection with the physics of thermalisation has been highlighted \cite{Iancu:2015uja}. We will come back to this insight in Sec.~\ref{sec:substructure}.
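These estimates are straightforward to reproduce; the sketch below does so numerically. The only subtlety is the unit conversion $\hbar c \simeq 0.197$ GeV\,fm, and we drop the $\mathcal{O}(1)$ factor $1/2$ in $\omega_c$ so as to match the rounded numbers quoted above.
\begin{verbatim}
import numpy as np

hbarc = 0.1973                     # GeV fm
qhat, L, abar = 1.0, 4.0, 0.3      # GeV^2/fm, fm, 2*alpha_s*Nc/pi

omega_c = qhat * L**2 / hbarc      # ~81 GeV (factor 1/2 dropped)
omega_s = abar**2 * omega_c        # ~7.3 GeV: multiple branchings
kt = np.sqrt(qhat * L)             # accumulated broadening, ~2 GeV

theta = lambda omega: kt / omega   # emission-angle estimate
print(omega_c, omega_s, theta(omega_c), theta(omega_s))
# -> 81.1, 7.3, 0.025, 0.27: hard BDMPS emissions stay inside an
#    R = 0.3 cone, while the soft cascade is moved out of cone
\end{verbatim}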
As a reminder, we note that the soft emissions can be resummed into a probability distribution, called the quenching weight (QW), of losing a finite amount of energy \cite{Baier:2001yt,Salgado:2003gb,Baier:2006fr}. Taking the form of the spectrum in the first line of Eq.~(\ref{eq:BDMPSspectrum}), this distribution becomes
\begin{eqnarray}
D_\text{\tiny QW} (\epsilon) = \sqrt{\frac{\omega_s}{\epsilon^3 }} \exp\left[-\frac{\pi \omega_s}{\epsilon} \right] \,,
\end{eqnarray}
where a more realistic form can be tabulated \cite{Salgado:2003gb}. It can relate the jet spectrum in the presence of a medium to that in vacuum, $\text{d} N_{\text{jet}(0)}/\text{d} p_T^2 $, as
\begin{eqnarray}
\label{eq:QuenchingFactor}
\frac{\text{d} N_{\text{jet}}}{\text{d} p_T^2} = \int_0^\infty \text{d} \epsilon \, D_\text{\tiny QW}(\epsilon) \frac{\text{d} N_{\text{jet}(0)}(p_T+\epsilon)}{\text{d} p_T^2 }\,.
\end{eqnarray}
This allows one to calculate the quenching factor $Q_\text{\tiny QW}(p_T)$ as the ratio of the medium to the vacuum spectrum.
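For orientation, the quenching factor can be evaluated by direct numerical integration of Eq.~(\ref{eq:QuenchingFactor}); the sketch below assumes $\omega_s = 7$ GeV as above and a toy spectrum falling like $p_T^{-n}$ with an invented index $n=5$.
\begin{verbatim}
import numpy as np

omega_s, n, pt = 7.0, 5.0, 100.0    # GeV, toy spectral index, GeV

def D_qw(eps):
    # Quenching weight quoted in the text (soft BDMPS limit); note
    # the fat eps^{-3/2} tail, which is cut off here by the rapidly
    # falling spectrum inside the integral below.
    return np.sqrt(omega_s / eps**3) * np.exp(-np.pi * omega_s / eps)

eps = np.linspace(1e-4, 10.0 * pt, 400000)
deps = eps[1] - eps[0]
Q = np.sum(D_qw(eps) * (1.0 + eps / pt)**(-n)) * deps
print(Q)   # well below 1: sizeable suppression of the spectrum
\end{verbatim}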
While the physics of transverse momentum broadening and radiative energy loss has been known for a while, recently progress has been made toward understanding their respective radiative corrections \cite{Liou:2013qya,Blaizot:2014bha,Iancu:2014kga}. Usually, one assumes that the interactions with the medium are quasi-instantaneous (with respect to the relevant timescales). However, allowing for short-lived, and thus soft, fluctuations one finds corrections which can most naturally be recast as corrections to the medium parameter $\hat q$. For instance, the first double-logarithmic correction reads
\begin{eqnarray}
\Delta\hat q \simeq \frac{\alpha_s N_c}{2 \pi} \hat q \ln^2\frac{L}{l_0} \,,
\end{eqnarray}
where the shortest timescale $l_0$ is some cut-off scale. The inclusion of these fluctuations to all orders leads to a renormalisation equation that accounts for a tower of fluctuations, ordered in formation time, and takes one from the value of $\hat q (l_0)$, i.e. describing the microscopic properties of the medium at scale $l_0$, to $\hat q(L)$, which includes the contribution from additional fluctuations in the medium. For a large medium, $\hat q (L) \propto L^\gamma$ where the anomalous dimension $\gamma = 2\sqrt{\bar \alpha}$ \cite{Blaizot:2014bha}. This novel relation affects how both the average transverse momentum broadening and energy loss scale with the size of the medium.
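For orientation, the size of this correction is easily evaluated; a quick sketch, with the cut-off $l_0$ placed arbitrarily at the 0.1 fm scale:
\begin{verbatim}
import numpy as np

def delta_qhat(qhat, alpha_s, L, l0, Nc=3):
    # Leading double-logarithmic correction to qhat from
    # short-lived fluctuations, as in the formula above.
    return alpha_s * Nc / (2.0 * np.pi) * qhat * np.log(L / l0)**2

print(delta_qhat(1.0, 0.3, 4.0, 0.1))   # ~1.9 GeV^2/fm: an O(1)
# shift, so the all-order resummation ('running' qhat) matters
print(2.0 * np.sqrt(0.3))               # anomalous dimension ~1.1
\end{verbatim}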
\section{Interference in multi-gluon processes}
\label{sec:coherence}
The ``running'' of $\hat q$ is an example of a resummation of fluctuations in the medium that overlap. In this particular situation, the fluctuations are strongly ordered and can easily be resummed. However, one could worry that in other situations multiple fluctuations that interfere with each other would arise and thus ruin the probabilistic picture of independent emissions that underlies much of the discussion in the previous section. Besides, as known from jet physics in vacuum, these corrections give crucial input to Monte-Carlo shower generators of the fragmentation process and would serve the same purpose for dedicated generators of jets in heavy-ion collisions.
The two-gluon rate in a dense medium was calculated in a series of noteworthy works \cite{Arnold:2015qya,Arnold:2016kek,Arnold:2016mth,Arnold:2016jnq}. They provided an independent confirmation of the double-logarithmic contributions discussed above. For most configurations the corrections to the probabilistic picture were small, except whenever the gluon energies were strongly separated, i.e. one gluon being much softer than the other. Strikingly, in this case the corrections found were negative, implying a reduced rate. This can be interpreted as an interference effect owing to the fact that, from the viewpoint of the shortest-lived fluctuation, the parent parton and the other, relatively long-lived fluctuation cannot be resolved \cite{Arnold:2016kek}. Physically this means that the shortest fluctuation can only be emitted off the total colour charge and not by each of the legs independently.
This striking result connects the physics of multiple medium-induced emissions to the physics of jet fragmentation and modification in the medium. However, there are several subtle differences between the two cases. Firstly, splittings induced by the medium are not collinear divergent, in contrast to vacuum radiation. Secondly, their formation time is similar to their decoherence time, i.e. the time when a typical medium fluctuation can resolve the emission from its parent. These two timescales can differ substantially for vacuum radiation, and we will come back to two such cases below.
\begin{figure*}[t]
\centering
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{Tywoniuk_K_Fig1a.pdf}
\caption{}
\label{fig:CohEnLossa}
\end{subfigure}%
~
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=0.81\textwidth]{Tywoniuk_K_Fig1b.pdf}
\caption{}
\label{fig:CohEnLossb}
\end{subfigure}
\caption{Sketch of the two kinematic limits of the double emission rate calculated in \cite{Casalderrey-Solana:2015bww}. In both panels the hard gluon is blue while the soft gluon is red and the blob represents all possible placements of the in-medium exchange. (a) The collinear limit, left panel: the angle of emission of the hard gluon is very small and its formation time is long compared to the soft gluon formation time. (b) The soft limit, right panel: in this limit the formation time of the hard gluon is very short compared to the soft gluon one and the angles of emission of both gluons are comparable. Figures taken from \cite{Casalderrey-Solana:2015bww}.}
\label{fig:CohEnLoss}
\end{figure*}
In order to shed more light on these issues, one should consider the full two-gluon spectrum, differential in both energies and angles. While the full splitting function was first calculated in \cite{Fickinger:2013xwa} at first order in opacity, two limits of the spectrum, relevant for jet fragmentation in medium, were meticulously analysed in \cite{Casalderrey-Solana:2015bww}. For simplification, one of the gluons was treated as ``hard'', i.e. its transverse momentum is much bigger than the medium kick, while the other was not. Let us spend some time explaining these limits separately. They are illustrated in Fig.~\ref{fig:CohEnLoss}.
In the first limit, see Fig.~\ref{fig:CohEnLossa}, the formation time of the hard gluon is much longer than the formation time of the soft one. This is denoted the ``collinear limit'' since the hard gluon is emitted very close in angle to the parent parton. In fact, due to angular ordering the soft gluon is formally only radiated off the parent parton in the vacuum.\footnote{Soft emissions can only be emitted within a cone determined by the emitter. In the collinear limit, this cone shrinks to zero.} Nevertheless, in a large medium the two colour sources will ultimately be resolved and permitted to radiate. After this particular time one therefore finds an additional contribution to the spectrum, namely that of an emission spectrum off an on-shell colour current (Gunion-Bertsch spectrum). The timescale where the positive contribution to the rate sets in is simply the formation time of the hard gluon. Hence, the decoherence time is equal to the formation time or, in other words, the hard gluon gets resolved immediately after emission.
In the second limit, see Fig.~\ref{fig:CohEnLossb}, the formation time ordering is reversed. This happens whenever the energy of the soft gluon is small. In this case, the physical picture is quite intuitive: the parent parton and the hard gluon form a dipole that interacts and radiates in the medium. In fact, one recovers exactly the spectrum off a colour charged ``antenna'' that was initially calculated at first order in opacity \cite{MehtarTani:2010ma,MehtarTani:2011gf} and generalised to multiple scattering in \cite{MehtarTani:2011tz,MehtarTani:2012cy,CasalderreySolana:2011rz}. In the latter, general situation the interference effects are controlled by the so-called decoherence parameter
\begin{eqnarray}
\label{eq:DecoherenceParameter}
\Delta_\text{decoh} = 1-e^{- (L/t_\text{d})^{3}} \,,
\end{eqnarray}
where we identify the decoherence time $t_\text{d} =[12/(\hat q \theta_\text{\tiny H}^2)]^{{\nicefrac{1}{3}}}$, where $\theta_\text{\tiny H}$ is the emission angle of the hard gluon. For long decoherence times, $t_\text{d} > L$, the dipole is not resolved by the medium and radiates medium-induced radiation coherently as the total colour charge. Additionally, it can radiate (fragment) vacuum-like according to the rules of angular ordering. In the opposite case, $t_\text{d} \ll L$, the dipole de-coheres, i.e. both constituents become independent of one another. Note that in both cases the decoherence time is much larger than the formation time, $t_\text{f} \ll t_\text{d}$.
Further work is needed to understand intermediate regimes. Nevertheless, to summarise this section, the effects of colour coherence have been firmly established by several calculations. This points to a simple organising principle put forward in \cite{CasalderreySolana:2012ef}. Rewriting the decoherence parameter (\ref{eq:DecoherenceParameter}) to highlight a characteristic decoherence angle $\theta_\text{d} = \sqrt{12/(\hat q L^3)}$, one argues that the medium can only modify jet substructures at large angles $\theta > \theta_\text{d}$. The resolved substructures, in particular the jet core, fragment internally as in the vacuum and lose energy independently of one another. A significant fraction of typical jets in heavy-ion collisions could remain completely unresolved by the medium; however, they are still affected by energy loss due to the total (quark/gluon) colour charge of the jet. Corrections to this picture can also account for the gradual eradication of angular ordering of the jet constituents and lead to an enhancement of soft gluons radiated within the jet cone \cite{Mehtar-Tani:2014yea}. Nevertheless, a complete understanding of how jets form and interact in the medium is still missing.
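For the illustrative medium parameters used in Sec.~\ref{sec:radiative}, the decoherence angle is easily evaluated (unit conversion again via $\hbar c$):
\begin{verbatim}
import numpy as np

hbarc = 0.1973                  # GeV fm
qhat, L = 1.0, 4.0              # GeV^2/fm, fm

theta_d = np.sqrt(12.0 * hbarc**2 / (qhat * L**3))
print(theta_d)  # ~0.085: substructures wider than this are
# resolved and lose energy independently; narrower jets are
# quenched coherently, as a single colour charge
\end{verbatim}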
\section{Jet substructure in medium}
\label{sec:substructure}
In order to gain further insight into the mechanisms at play, and also encouraged by recent experimental measurements, it is natural to consider jet substructure observables. A particularly clear procedure, called ``SoftDrop''\footnote{Whenever $\beta = 0$, SoftDrop is equivalent to the modified MassDrop procedure \cite{Dasgupta:2013ihk}.} \cite{Larkoski:2014wba,Larkoski:2015lea}, selects a pair of subjets, starting from a maximal angular separation at the jet cone size $R$, that satisfies the criterion
\begin{eqnarray}
\label{eq:SoftDrop}
z > z_\text{cut} \theta^\beta \,,
\end{eqnarray}
where $z\equiv \min({p_{T1},p_{T2}})/(p_{T1} + p_{T2})$, $p_{T1(2)}$ is the subjet energy and $\theta$ their angular separation. Candidates that do not satisfy the condition (\ref{eq:SoftDrop}) are discarded or ``groomed''. This procedure therefore corresponds to clustering all jet constituents into an angular-ordered tree and looking for the first ``hard'' branching, according to (\ref{eq:SoftDrop}). It is also worth keeping in mind that the procedure can be made to terminate at some minimal resolution angle $R_0$. Typical values chosen for the experimental analyses are $z_\text{cut} =0.1$, $\beta = 0$ and $R=0.4$, $R_0=0.1$.
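To make the procedure concrete, here is a schematic declustering loop on an angular-ordered tree; this is an illustrative sketch only (not the FastJet implementation), with a toy jet encoded as nested tuples whose leaves carry the particle $p_T$.
\begin{verbatim}
def pt(j):
    # a leaf is a number (its pT); a node is (subjet1, subjet2, theta)
    return j if isinstance(j, (int, float)) else pt(j[0]) + pt(j[1])

def soft_drop(node, zcut=0.1, beta=0.0, R=0.4, R0=0.1):
    while isinstance(node, tuple):
        j1, j2, theta = node
        if theta < R0:                         # minimal resolution angle
            break
        z = min(pt(j1), pt(j2)) / (pt(j1) + pt(j2))
        if z > zcut * (theta / R)**beta:       # the SoftDrop condition
            return z, theta                    # first hard branching
        node = j1 if pt(j1) >= pt(j2) else j2  # groom the softer prong
    return None                                # single-pronged jet

# A 100 GeV toy jet: the 3 GeV prong at theta = 0.35 is groomed,
# then the z ~ 0.28 splitting at theta = 0.2 is tagged.
jet = ((70.0, 27.0, 0.2), 3.0, 0.35)
print(soft_drop(jet))   # -> (0.278..., 0.2)
\end{verbatim}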
In vacuum, the ``hard'' branching is inherently sensitive to the fundamental splitting function, which for gluon-gluon splitting reads $\mathcal{P}^\text{vac}(z,\theta) = \bar \alpha P(z)/\theta$, where $P(z)$ is the relevant Altarelli-Parisi splitting function (stripped of its colour factor). However, given its collinear divergence $\sim\bar \alpha \ln\big(R/R_0 \big)$ ($\beta =0$), we have to resum multiple emissions into the relevant Sudakov form factor. Physically, this means taking into account all the groomed emissions for $R\gg R_0$. We can then, for instance, define the probability to split into two sub-jets with momentum fraction $z_g$ as
\begin{eqnarray}
\label{eq:SplittingProbVacuum}
\textsl{p}(z_g) = \int_{0}^R \text{d} \theta\, \Delta(\theta) \mathcal{P}^\text{vac}(z_g,\theta) \Theta_\text{cut}(z_g,\theta) \,,
\end{eqnarray}
where $R_0\to0$ and the step-function in (\ref{eq:SplittingProbVacuum}) embodies the condition in Eq.~(\ref{eq:SoftDrop}), for details see \cite{Larkoski:2014wba,Larkoski:2015lea,Mehtar-Tani:2016aco}. The relevant Sudakov reads
\begin{eqnarray}
\Delta(\theta) = \exp \left[-\!\!\int_\theta^R \!\!\text{d} \theta' \!\!\int_0^1 \!\!\text{d} z \, \mathcal{P}^\text{vac}(z,\theta') \Theta_\text{cut}(z,\theta') \right] \,,
\end{eqnarray}
and is equivalent to the 1-jet rate, i.e. it is the probability of no splittings between the maximal angle $R$ and $\theta$. Given a resolution angle $R_0$, the probability of finding a pair that satisfies the SoftDrop condition, aka the two-pronged probability, is therefore $\mathbb{P}_{2\text{prong}} = 1-\Delta(R_0)$ \cite{Mehtar-Tani:2016aco}.
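At leading logarithmic accuracy, with fixed coupling and the soft approximation $P(z) \simeq 1/z$ above $z_\text{cut}$, the integrals can be done by hand, giving $\Delta(\theta) = \exp[-\bar\alpha \ln(R/\theta)\ln(1/z_\text{cut})]$ for $\beta=0$; the two-pronged probability is then a one-liner (illustrative parameter values only):
\begin{verbatim}
import numpy as np

abar, zcut, R, R0 = 0.3, 0.1, 0.4, 0.1

def sudakov(theta):
    # leading-log Sudakov for beta = 0, P(z) ~ 1/z above z_cut
    return np.exp(-abar * np.log(R / theta) * np.log(1.0 / zcut))

print(1.0 - sudakov(R0))   # ~0.62: most vacuum jets are tagged
                           # as two-pronged at this resolution
\end{verbatim}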
Strikingly, after the resummation the splitting {\it probability} becomes independent of $\bar \alpha$, and thus of the value of $\alpha_s$ and of the colour or {\it flavour} of the splitting, and exhibits the universal $1/z$-behaviour at small-$z$ for the $\beta = 0$ case \cite{Larkoski:2015lea}.
When considering the medium modifications of this observable, we are guided by the insight found in the previous sections, which implies an approximate separation of two types of radiation: multiple, soft emissions at large angles and rare, hard emissions in the jet cone \cite{Mehtar-Tani:2016aco}, see also \cite{Chien:2016led}. Hence, having to deal with two sub-jets, we have to decide, according to some criterion, whether they lose energy coherently or independently. Secondly, for jets with $p_\text{T} = 100-200$ GeV our back-of-the-envelope estimate shows that hard BDMPS radiation could be identified by SoftDrop as actual jet substructure. The effect should be small, $\mathcal{O}(\alpha_s)$, and care should be taken when aiming for a quantitative description of the data. Nevertheless, let us come back to this exciting point later and for now focus on the first aspect, sub-jet coherence.
Due to energy loss effects the probability of the splitting is intimately related to the suppression of the spectrum itself. In order to simplify the discussion, let us consider two clearly defined scenarios and review their consequences, for more details see \cite{Mehtar-Tani:2016aco}. In the first scenario the whole jet, and therefore all its sub-jets, is unresolved by the medium. In the second scenario all sub-jets are resolved, thus independent. In order to study these scenarios we will make use of a probabilistic setup where energy loss (whether elastic or radiative) can affect any resolved sub-jet.
In the former, ``coherent'' case none of the intra-jet splittings are modified but the spectrum is overall suppressed because of energy loss, as given by Eq.~(\ref{eq:QuenchingFactor}). This implies that, in the absence of any other source of radiation, Eq.~(\ref{eq:SplittingProbVacuum}) holds. The proper way of adding a new radiative mechanism, namely in-cone BDMPS emissions, is on the level of probabilities. Hence, we have to reduce the vacuum probability in order to obtain a properly normalised total probability of radiation. After taking appropriate care of the angular restrictions (for instance, the introduction of a minimal resolution angle should further suppress the contribution of vacuum radiation) we should expect an enhancement of the splitting probability at small-$z$ because of the medium-induced bremsstrahlung that scales as $z^{-{\nicefrac{3}{2}}}$. This enhancement dies off rapidly with energy, $\sim p_T^{-{\nicefrac{1}{2}}}$, see Eq.~(\ref{eq:BDMPSspectrum}). In effect, the two-pronged probability $\mathbb{P}_{2\text{prong}}$ should be enhanced compared to the vacuum.
Taken at face value, this scenario illustrates that the SoftDrop procedure presents a unique possibility to measure medium-induced quanta directly, rather than simply being sensitive to their general consequences, such as energy loss.
The second scenario sketched above is more complicated. Let us first analyse the effects of energy loss for vacuum radiation in a limited angular range, $R \gtrsim R_0$. The splitting probability is now explicitly convoluted with the final-state jet spectrum, and reads ($\beta =0$)
\begin{align}
\label{eq:SplittingFunctionIncoherent}
&\frac{\text{d} N}{\text{d} p^2_T}\textsl{p}(z_g) = \bar \alpha \ln\frac{R}{R_0} \int_0^\infty\text{d} \epsilon \int_0^\epsilon \text{d} \epsilon' D_\text{\tiny QW}(\epsilon-\epsilon') D_\text{\tiny QW}(\epsilon') \nonumber\\
&\times \frac{p_T}{p_T+\epsilon} P\left(\frac{z_g p_T + \epsilon'}{p_T+ \epsilon} \right)\frac{\text{d} N_{(0)}(p_T+\epsilon)}{\text{d} p^2_T} \Theta(z_g-z_\text{cut}) ,
\end{align}
for $z_g< 1/2$. This time the splitting function itself is directly affected by the fact that energy loss of the outgoing legs is independent. This can be seen by expanding the Altarelli-Parisi splitting function for $\epsilon,\epsilon' \ll p_T$ in the small-$z_g$ region where it reads
\begin{eqnarray}
P\left(\frac{z_g p_T + \epsilon'}{p_T+ \epsilon} \right) \simeq \frac{1}{z_g}\left(1-\frac{\epsilon'}{z_g p_T} \right) \,.
\end{eqnarray}
The characteristic energy-splitting variable can be seen to shift as $z_g \to z_g + z_\text{loss}$, where $z_\text{loss} \sim \omega_s/p_T$ from dimensional arguments, resulting in a flattening of the $z_g$-distribution. For the parameter estimates above, $z_\text{loss} \sim 7~\text{GeV}/150~\text{GeV} \approx 0.05$, which is comparable to $z_\text{cut}=0.1$, so the flattening is sizeable. Furthermore, Eq.~(\ref{eq:SplittingFunctionIncoherent}) contains two quenching weights in contrast to only one in the ``coherent'' scenario. This signals for the first time the strong effects of energy loss when applied to incoherent substructures within the original jet.
In order to understand how to disentangle the splitting probability in Eq.~(\ref{eq:SplittingFunctionIncoherent}), imagine a situation where most of the quenching of the jet as a whole is taken by the most energetic leg (carrying momentum fraction $1-z_g$, for $z_g <1/2$). The jet spectrum on the left-hand side of Eq.~(\ref{eq:SplittingFunctionIncoherent}) is again given by (\ref{eq:QuenchingFactor}). The remaining quenching affects only the soft leg and can now be resummed into a modified Sudakov form factor that accounts for energy loss. This resummed quenching effect strongly suppresses the probability of two-pronged objects compared to the vacuum.
It becomes clear that adding the BDMPS spectrum on the level of probabilities complicates the situation further, and it is not our goal here to present a definite answer. We could argue that the strong effects of incoherent energy loss distort the vacuum spectrum substantially, making it hard to reconcile with the trends observed in experimental data. A more realistic calculation should provide an interpolation between the two extreme scenarios discussed so far. Besides, the effects of a soft background correlated with the jet, e.g. generated by back-reaction, could influence the interpretation of the result.
Nevertheless, the potentially unique prospect of a (semi-)direct measurement of the medium-induced bremsstrahlung and its interplay with jet coherence in heavy-ion collisions motivates further investigations into this and related jet substructure observables.
\section{Conclusions \& outlook}
\label{sec:conclusions}
Jet physics in medium is currently witnessing notable advances on the theory side and enjoys a wealth of excellent experimental data that continues to push for further improvements. It is therefore pertinent to understand the process of jet fragmentation in a medium in great detail. Only then can we claim to extract reliable information about the properties of the medium.
In many cases, we can however completely neglect in-cone jet modifications with a suitable adjustment of medium parameters. Jet substructure measurements are a door-opener in this context since they demand a treatment of well-defined sub-jets. The guiding insights come from the analysis of both the fragmentation of soft medium-induced gluons and the study of interference effects of hard radiation. This new class of observables also allows us to test and benchmark these insights against full-fledged Monte Carlo generators for jets in heavy-ion collisions, e.g. \cite{Casalderrey-Solana:2016jvj,Zapp:2012ak}. This promises a very fruitful synergy in the future.
\section*{Acknowledgements}
I thank Y. Mehtar-Tani and J. Casalderrey-Solana for fruitful discussions. KT has been supported by a Marie Sk\l{}odowska-Curie Individual Fellowship of the European Commission's Horizon 2020 Programme under contract number 655279 ``ResolvedJetsHIC''.
\nocite{*}
\bibliographystyle{elsarticle-num}
\section{Introduction}
This paper is concerned with the problem of finding free quotients of
finitely generated groups in which non-conjugate elements have
non-conjugate images. Given a finitely generated group $G$ which is
not a limit group, there is a finite collection of limit group
quotients $G\twoheadrightarrow L_1,\dotsc,G\twoheadrightarrow L_n$ of $G$ such that every
homomorphism $G\to\freegrp$, where $\freegrp$ is a free group, factors
through one of the factor groups $G\twoheadrightarrow L_i$, hence it suffices to
consider the problem only for limit groups.
We are reduced then to the problem of finding free quotients of limit
groups in which non-conjugate elements have non-conjugate images. A
group $G$ is \emph{freely conjugacy separable}, or $\free$-conjugacy separable, if for any pair
$u,v \in G$ of non-conjugate elements there is a homomorphism to some
free group $G\to\mathbb{F}$ such that the images of $u$ and $v$ in $\mathbb{F}$
are non-conjugate.
We will give two different types of examples of limit groups which are
not $\free$-conjugacy separable\ for entirely different reasons. In Section
\ref{sec:magnus-pairs} we produce a limit group $L$ with elements
$u,v$ such that the cyclic groups $\bk{u},\bk{v}$ are non-conjugate,
but whose normal closures $\ncl u$ and $\ncl v$ coincide. We call such
a pair of elements a \define{Magnus pair} (see Definition
\ref{defn:Magnus-pair}.) Such elements must have conjugate images in
any free quotient by a theorem of Magnus \cite{Magnus-1931}. In
Section \ref{sec:other-case} we construct a limit group which is a
double of a free group over a cyclic group generated by a $C$-test
word (see Definition \ref{def:c-test}). These limit groups, called
\define{$C$-doubles}, are low rank and we are able to construct
their Makanin-Razborov diagrams encoding all homomorphisms into any
free group and directly observe the failure of free conjugacy
separability. This limit group was independently discovered and
studied by Simon Heil \cite{heil2016jsj}, who published a preprint
while this paper was in preparation. He uses this limit group to show
that limit groups are not freely subgroup separable.
\begin{defn}\label{defn:discriminating}
A sequence of homomorphisms $\{\phi_i\colon G \to H\}$ is
\define{discriminating} if for every finite subset $P \subset G
\setminus \{1\}$ there is some $N$ such that for all $j \geq N, 1
\not\in \phi_j(P)$.
\end{defn}
\begin{defn}
A finitely generated group $L$ is a \define{limit group} if there is
a \define{discriminating} sequence of homomorphisms $\{\phi_i\colon L \to
\mathbb{F}\}$, where $\mathbb{F}$ is a free group.
\end{defn}
\begin{mainthm}\label{thm:not-conj-res-free}
The class of limit groups is not freely conjugacy separable.
\end{mainthm}
This should be seen in contrast to the fact that limit groups are
conjugacy separable \cite{C-Z-separable}. Furthermore Lioutikova in
\cite{Lioutikova-CRF} proves that iterated centralizer extensions (see
Definition \ref{defn:tower}) of a free group $\mathbb{F}$ are $\free$-conjugacy separable. It
is a result of Kharlampovich and Miasnikov \cite{KM-IrredII} that
all limit groups embed into iterated centralizer extensions. Moreover,
by \cite[Theorem 5.3]{gaglione2009almost}, almost locally free groups
\cite[Definition 4.2]{gaglione2009almost} cannot have Magnus
pairs. This class includes the class of limit groups which are
$\forall\exists$-equivalent to free groups. The class of iterated
centralizer extensions and the class of limit groups
$\forall\exists$-equivalent to free groups are contained in the class
of towers, also known as NTQ groups. We generalize these previous
results to the class of towers with the following strong $\free$-conjugacy separability\
result:
\begin{mainthm}\label{thm:generic-sequence}
Let $\mathbb{F}$ be a non-abelian free group and let $G$ be a tower
over $\mathbb{F}$ (see Definition \ref{defn:tower}). There is a
discriminating sequence of retractions $\{\phi_i\colon G \twoheadrightarrow
\mathbb{F}\}$, such that for any finite subset $S \subset G$ of
pairwise non-conjugate elements, there is some positive integer $N$
such that for all $j \geq N$ the elements of $\phi_j(S)$ are
pairwise non-conjugate in $\mathbb{F}$. Similarly for any indivisible
$\gamma \in L$ with cyclic centralizer there is some positive
integer $M$ such that for all $k \geq M$, $r_k(\gamma)$ is
indivisible.
\end{mainthm}
This theorem also settles \cite[Question 7.1]{gaglione2009almost},
which asks if arbitrarily large collections of pairwise nonconjugate
elements can have pairwise nonconjugate images via a homomorphism to a
free group. The proof of Theorem \ref{thm:generic-sequence} is in
Section \ref{sec:towers-fcs} and follows from results of Sela
\cite{Sela-Dioph-II} and Kharlampovich and Myasnikov \cite{KM-ift},
which form the first step in their (respective) systematic studies of
the $\forall\exists$-theory of free groups.
Finally, in Section \ref{sec:refinements} we analyze the failure of
free conjugacy separability\ of our limit group with a Magnus pair and show that this is
very different from the $C$-double constructed in Section
\ref{sec:other-case}. We then show that free conjugacy separability\ does not isolate
the class of towers within the class of limit groups.
Throughout this paper, unless mentioned otherwise, $\mathbb{F}$ will
denote a non-abelian free group, $\mathbb{F}_n$ will denote a
non-abelian free group of rank $n$, and $\mathbb{F}(X)$ will denote the
free group on the basis $X$.
\section{A limit group with a Magnus pair}\label{sec:magnus-pairs}
Consider the fundamental group of the graph of spaces $\ensuremath{\mathbb{U}}$
given in Figure \ref{fig:1}.
\begin{figure}[htb]
\centering
\begin{tikzpicture}[scale=0.5]
\abovecollar{4}{2}{0.5}{0.25}
\draw (3.5,1.75) node {${\blacktriangleleft}$};
\abovecollar{8}{2}{0.5}{0.25}
\draw (7.5,1.75) node {${\blacktriangleleft}$};
\abovecollar{10}{2}{0.5}{0.25}
\draw (9.5,1.75) node {${\blacktriangleleft}$};
\abovecollar{12}{2}{0.5}{0.25}
\draw (11.5,1.75) node {${\blacktriangleright}$};
\lazyllipse{2}{-2}{0.5}{0.25}
\draw (1.5,-2.25) node {${\blacktriangleright}$};
\lazyllipse{4}{-2}{0.5}{0.25}
\draw (3.5,-2.25) node {${\blacktriangleleft}$};
\lazyllipse{6}{-2}{0.5}{0.25}
\draw (5.5,-2.25) node {${\blacktriangleleft}$};
\lazyllipse{10}{-2}{0.5}{0.25}
\draw (9.5,-2.25) node {${\blacktriangleleft}$};
\draw (3,2) arc (180:0:4.5 and 3.5);
\draw (4,2) arc (180:0:1.5 and 1);
\draw (8,2) arc (180:0:0.5 and 0.5);
\draw (10,2) arc (180:0:0.5 and 0.5);
\draw (10,-2) arc (360:180:4.5 and 3.5);
\draw (9,-2) arc (360:180:1.5 and 1);
\draw (5,-2) arc (360:180:0.5 and 0.5);
\draw (3,-2) arc (360:180:0.5 and 0.5);
\lazyllipse{4}{0}{0.5}{0.25}
\lazyllipse{10}{0}{0.5}{0.25}
\draw (3.5,1.5) -- (3.5,0.5)
(3,-0.5) -- (2,-1.5)
(3.5,-0.5) -- (3.5,-1.5)
(4,-0.5) -- (5,-1.5);
\draw (9.5,-1.5) -- (9.5,-0.5)
(9,0.5) -- (8,1.5)
(9.5,0.5) -- (9.5,1.5)
(10,0.5) -- (11,1.5);
\draw (3.5,-0.25) node{$\large{\blacktriangleleft}$}
(2.75,0) node{$u$};
\draw (9.5,-0.25) node{$\large{\blacktriangleleft}$}
(8.75,0) node{$v$};
\draw(9,4) node {${\Sigma_u}$};
\draw(4,-4) node {${\Sigma_v}$};
\end{tikzpicture}
\caption{The graph of spaces $\ensuremath{\mathbb{U}}$. The attaching maps are of
degree 1 and the black arrows show the orientations.}
\label{fig:1}
\end{figure}
We pick elements $u,v \in \pi_1(\ensuremath{\mathbb{U}})$ corresponding to the
similarly labelled loops given in Figure \ref{fig:1} and we also
consider the groups $\pi_1(\Sigma_u),\pi_1(\Sigma_v)$ to be embedded in
$\pi_1(\ensuremath{\mathbb{U}})$.
\begin{defn}
\label{defn:Magnus-pair}
Let $G$ be a group, and let $\sim_{\pm}$ be the equivalence relation
$g\sim_{\pm}h$ if and only if $g$ is conjugate to $h$ or $h^{-1}$, and denote
by $\left[g\right]_{\pm}$ the $\sim_{\pm}$ equivalence class of
$g$. A \emph{Magnus pair} is a pair of $\sim_{\pm}$ classes
$\left[g\right]_{\pm}\neq\left[h\right]_{\pm}$ such that
$\ncl{g}=\ncl{h}$.
\end{defn}
Note that if $h\in\left[g\right]_{\pm}$ then $\ncl{g}=\ncl{h}$, so
the relation ``have the same normal closure'' is coarser than
$\sim_{\pm}$; a group has a Magnus pair precisely when this relation
is strictly coarser than $\sim_{\pm}$. To save notation we will say that $g$ and
$h$ form a Magnus pair if their corresponding equivalence classes do.
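Conjugacy in a free group is decided by cyclic reduction: two elements are conjugate if and only if their cyclically reduced words are cyclic rotations of one another. The following sketch (purely illustrative, with inverses encoded as uppercase letters) makes the relation $\sim_{\pm}$ concrete.
\begin{verbatim}
def freely_reduce(w):
    # cancel adjacent inverse pairs; x^{-1} is written 'X'
    out = []
    for c in w:
        if out and out[-1] == c.swapcase():
            out.pop()
        else:
            out.append(c)
    return ''.join(out)

def cyclically_reduce(w):
    w = freely_reduce(w)
    while len(w) > 1 and w[0] == w[-1].swapcase():
        w = w[1:-1]
    return w

def conjugate(u, v):
    # u ~ v iff their cyclic reductions are rotations of each other
    u, v = cyclically_reduce(u), cyclically_reduce(v)
    return len(u) == len(v) and (u == v or v in u + u)

def inverse(w):
    return ''.join(c.swapcase() for c in reversed(w))

def conj_pm(u, v):
    # the relation ~_pm: u conjugate to v or to v^{-1}
    return conjugate(u, v) or conjugate(u, inverse(v))

print(conj_pm('xyX', 'Xyx'))                     # True: both ~ y
print(conj_pm('xy', 'yx'), conj_pm('xy', 'xY'))  # True False
\end{verbatim}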
\begin{lem}\label{lem:uv-Magnus-pair}
The elements $u$ and $v$ in $\pi_1(\ensuremath{\mathbb{U}})$ are a Magnus pair.
\end{lem}
\begin{proof}
The graph of spaces given in Figure \ref{fig:1} gives rise to a
cyclic graph of groups splitting $D$ of $\pi_1(\ensuremath{\mathbb{U}})$. The
underlying graph $X$ has 4 vertices and 8 edges where the vertex
groups are $\bk{u},\bk{v},\pi_1(\Sigma_u)$, and
$\pi_1(\Sigma_v)$. Now note that $\pi_1(\Sigma_u)$ can be given the
presentation
\[ \pi_1(\Sigma_u) = \bk{a,b,c,d \mid abcd=1} = \bk{a,b,c}\] and
that the incident edge groups have images
$\bk{a},\bk{b},\bk{c},\bk{abc} = \bk{d}$. Without loss of generality
$v^{\pm 1}$ is conjugate to each of $a$, $b$, and $c$ in $\pi_1(\ensuremath{\mathbb{U}})$, and
$u^{\pm 1}$ is conjugate to $d = abc$ in $\pi_1(\ensuremath{\mathbb{U}})$ which means
that $u\in \ncl{v}$ and, symmetrically considering $\Sigma_v$,
$v \in \ncl{u}$.
On the other hand, the subgroups $\bk{a},\bk{b},\bk{c},\bk{abc}$ are
pairwise non-conjugate in $\bk{a,b,c}$ and we now easily see that
$u$ and $v$ are non-conjugate by considering the action on the
Bass-Serre tree. $u$ and $v$ therefore form a Magnus pair.
\end{proof}
\subsection{Strict homomorphisms to limit groups}
\begin{defn}\label{defn:QH}
Let $G$ be a finitely generated group and let $D$ be a
2-acylindrical cyclic splitting of $G$. We say that a vertex group
$Q$ of $D$ is \define{quadratically hanging (QH)} if it satisfies
the following:
\begin{itemize}
\item $Q = \pi_1(\Sigma)$ where $\Sigma$ is a compact surface such
that $\chi(\Sigma) \leq -1$, with equality only if $\Sigma$ is
orientable or $\partial(\Sigma)\neq \emptyset$.
\item The images of the edge groups incident to $Q$ correspond to
the $\pi_1$-images of $\partial(\Sigma)$ in $\pi_1(\Sigma)$.
\end{itemize}
\end{defn}
\begin{defn}\label{defn:strict}
Let $G$ be a torsion-free group. A homomorphism $\rho\colon G \to H$ is
\define{strict} if there is some 2-acylindrical abelian
splitting $D$ of $G$ such that the following hold:
\begin{itemize}
\item $\rho$ is injective on the subgroup $A_D$ generated by the
incident edge groups of each abelian vertex group $A$ of $D$.
\item $\rho$ is injective on each edge group of $D$.
\item $\rho$ is injective on the ``envelope'' $\hat{R}$ of each
non-QH, non-abelian vertex group $R$ of $D$, where $\hat{R}$ is
constructed by first replacing each abelian vertex group $A$ of
$D$ by $A_D$ and then taking $\hat{R}$ to be the subgroup
generated by $R$ and the centralizers of the edge groups incident
to $R$.
\item the $\rho$-images of QH subgroups are non-abelian.
\end{itemize}
\end{defn}
This next Proposition is a restatement of Proposition 4.21 of
\cite{CG-limits} in our terminology. It is also given as Exercise 8 in
\cite{BF-notes,Wilton-solutions}.
\begin{prop}\label{prop:strict-limit}
If $L$ is a limit group and $G$ is a finitely generated group such
that there is a strict homomorphism $\rho\colon G \to L$, then $G$ is also a
limit group.
\end{prop}
\subsection{$\pi_1(\ensuremath{\mathbb{U}})$ is a limit group but it is not freely conjugacy separable.}\label{sec:not-fcs}
Consider the sequence of continuous maps given in Figure \ref{fig:2}.
\begin{figure}[htb]
\centering
\begin{tikzpicture}[scale=0.5]
\draw (-2.5,0) circle (2);
\draw (-2.5,0) circle (0.5);
\draw (0,0) circle (2);
\draw (0,0) circle (0.5);
\draw (2.5,0) circle (2);
\draw (2.5,0) circle (0.5);
\belowcollar{-0.5}{0}{0.75}{0.25}
\draw (1.25,0.25) node {$\blacktriangleleft$}
(1.25,0.5) node {$v$};
\belowcollar{2}{0}{0.75}{0.25}
\draw (-1.25,0.25) node {$\blacktriangleleft$}
(-1.25,0.5) node {$u$};
\draw (-0.12,2) -- (-0.12,2.5);
\draw (0.12,2) -- (0.12,2.5);
\blankcircle{(0,2)}{0.12}
\draw (-2.62,-2) -- (-2.62,-2.5);
\draw (-2.38,-2) -- (-2.38,-2.5);
\blankcircle{(2.5,2)}{0.12}
\draw (-0.12,-2) -- (-0.12,-2.5);
\draw (0.12,-2) -- (0.12,-2.5);
\blankcircle{(0,-2)}{0.12}
\draw (2.38,2) -- (2.38,2.5);
\draw (2.62,2) -- (2.62,2.5);
\blankcircle{(-2.5,-2)}{0.12}
\draw (-0.12,2.5) arc (180:0:1.37 and 0.74);
\draw (0.12,2.5) arc (180:0:1.13 and 0.5);
\draw (-0.12,-2.5) arc (360:180:1.13 and 0.5);
\draw (0.12,-2.5) arc (360:180:1.37 and 0.74);
\belowcollar{0.12}{2.25}{0.12}{0.07}
\belowcollar{-2.38}{-2.25}{0.12}{0.07}
\belowcollar{0.12}{-2.25}{0.12}{0.07}
\belowcollar{2.62}{2.25}{0.12}{0.07}
\draw (1.25,3.5) node {$H_1$}
(-1.25,-3.5) node {$H_2$}
(3.75,0) node {$\Sigma_u$}
(-3.75,0) node {$\Sigma_v$};
\begin{scope}[shift={(10,-5)}]
\draw (-2.5,0) circle (2);
\draw (-2.5,0) circle (0.5);
\draw (0,0) circle (2);
\draw (0,0) circle (0.5);
\draw (2.5,0) circle (2);
\draw (2.5,0) circle (0.5);
\belowcollar{-0.5}{0}{0.75}{0.25}
\draw (1.25,0.25) node {$\blacktriangleleft$}
(1.25,0.5) node {$v$};
\belowcollar{2}{0}{0.75}{0.25}
\draw (-1.25,0.25) node {$\blacktriangleleft$}
(-1.25,0.5) node {$u$};
\draw[very thick] (0,2) arc (180:0:1.25 and 0.74);
\draw[very thick] (0,-2) arc (360:180:1.25 and 0.74);
\draw (1.25,3.5) node {$h_1$}
(-1.25,-3.5) node {$h_2$} ;
\end{scope}
\begin{scope}[shift={(0,-10)}]
\draw (0,0) circle (2);
\draw (0,0) circle (0.5);
\belowcollar{-0.5}{0}{0.75}{0.25}
\draw (-1.25,0.25) node {$\blacktriangleleft$}
(-1.25,0.5) node {$u$};
\draw[very thick] (0,2) .. controls (-0.5,2.5) and (-0.5,3)
.. (0,3) .. controls (0.5,3) and (0.5,2.5) ..(0,2);
\draw[very thick] (0,-2) .. controls (-0.5,-2.5) and (-0.5,-3)
.. (0,-3) .. controls (0.5,-3) and (0.5,-2.5) ..(0,-2);
\draw (0,3.5) node {$h_1$} (0,-3.5) node {$h_2$};
\end{scope}
\begin{scope}[shift={(7,-12)}]
\draw[very thick] (0,0) .. controls (-0.5,0.5) and (-0.5,2) .. (0,2)
.. controls (0.5,2) and (0.5,0.5) ..(0,0); \draw[very
thick,rotate=120] (0,0) .. controls (-0.5,0.5) and (-0.5,2) .. (0,2)
.. controls (0.5,2) and (0.5,0.5) ..(0,0); \draw[very
thick,rotate=-120] (0,0) .. controls (-0.5,0.5) and (-0.5,2)
.. (0,2) .. controls (0.5,2) and (0.5,0.5) ..(0,0);
\draw (90:2.5) node{$h_1$}
(-30:2.5) node{$h_2$}
(-150:2.5) node{$u$} ;
\fill[color=black] (0,0) circle (0.25);
\end{scope}
\draw[very thick,->] (5,-2) -- (6,-3);
\draw[very thick,->] (6,-7) -- (3,-9);
\draw[very thick,->] (3,-11) -- (5,-12);
\end{tikzpicture}
\caption{A continuous map from $\ensuremath{\mathbb{U}}$ to the wedge of three
circles. The space on the top left is homeomorphic to $\ensuremath{\mathbb{U}}$. This
can be seen by cutting along the curves labelled $u,v$.}\label{fig:2}
\end{figure}
The space on the top left, obtained by taking three disjoint tori,
identifying them along the longitudinal curves as shown, and then
surgering on handles $H_1,H_2$, is homeomorphic to the space
$\ensuremath{\mathbb{U}}$. A continuous map from $\ensuremath{\mathbb{U}}$ to the wedge of three circles
is then constructed by filling in and collapsing the handles to arcs
$h_1,h_2$, identifying the tori, and then mapping the resulting torus
to a circle so that the image of the longitudinal curve $u$ (or $v$,
as they are now freely homotopic inside a torus) maps with degree 1
onto a circle in the wedge of three circles.
\begin{lem}\label{lem:map-is-nice}
The homomorphism $\pi_1(\ensuremath{\mathbb{U}}) \rightarrow \mathbb{F}_3$ given by the
continuous map in Figure \ref{fig:2} is onto, the vertex groups
$\pi_1(\Sigma_v),\pi_1(\Sigma_u)$ have non-abelian image and the
edge groups $\bk{u},\bk{v}$ are mapped injectively.
\end{lem}
\begin{proof}
The surjectivity of the map $\pi_1(\ensuremath{\mathbb{U}}) \rightarrow \mathbb{F}_3$
as well as the injectivity of the restrictions to $\bk{u},\bk{v}$
are obvious. Note moreover that the image of $\pi_1(\Sigma_u)$
contains (some conjugate of) $\bk{u, h_1 u h_1^{-1}}$ and is
therefore non-abelian, the same is obviously true for the image of
$\pi_1(\Sigma_v)$.
\end{proof}
The final ingredient is a classical result of Magnus.
\begin{thm}[\cite{Magnus-1931}]\label{thm:Magnus}
The free group $\freegrp$ has no Magnus pairs.
\end{thm}
\begin{prop}\label{prop:counter-eg}
$\pi_1(\ensuremath{\mathbb{U}})$ is a limit group. For every homomorphism
$\rho\colon\pi_1(\ensuremath{\mathbb{U}}) \rightarrow \mathbb{F}$ the images $\rho(u)$,
$\rho(v)$ of the elements $u$, $v$ given in Lemma
\ref{lem:uv-Magnus-pair} are conjugate in $\mathbb{F}$ even though the
pair $u,v$ are not conjugate in $\pi_1(\ensuremath{\mathbb{U}})$.
\end{prop}
\begin{proof}
Lemma \ref{lem:map-is-nice} and Proposition \ref{prop:strict-limit}
imply that $\pi_1(\ensuremath{\mathbb{U}})$ is a limit group. Lemma
\ref{lem:uv-Magnus-pair} and Theorem \ref{thm:Magnus} imply that,
for every homomorphism $\pi_1(\ensuremath{\mathbb{U}}) \to \mathbb{F}$ to a free group
$\mathbb{F}$, the image of $u$ must be conjugate to the image of
$v^{\pm 1}$ even though $u \not\sim_\pm
v$.
\end{proof}
\section{A different failure of free conjugacy separability}
\label{sec:other-case}
We now construct another limit group $\idouble$ that is not freely conjugacy separable, but for a completely different reason.
\begin{defn}[$C$-test words {\cite{ivanov1998certain}}]\label{def:c-test}
A non-trivial word $w(x_1,\ldots,x_n)$ is a \define{$C$-test word} in $n$
letters for $\mathbb{F}_m$ if for any two $n$-tuples
$(A_1,\ldots,A_n), (B_1,\ldots,B_n)$ of elements of $\mathbb{F}_m$
the equality $w(A_1,\ldots,A_n) = w(B_1,\ldots,B_n) \neq 1$
implies the existence of an element $S \in \mathbb{F}_m$ such that
$B_i = SA_i S^{-1}$ for all $i=1,2,\ldots,n.$
\end{defn}
\begin{thm}[{\cite[Main Theorem]{ivanov1998certain}}]\label{thm:c-test}
For arbitrary $n \geq 2$ there exists a non-trivial indivisible word
$w_n(x_1,\ldots,x_n)$ which is a $C$-test word in $n$ letters for
any free group $\mathbb{F}_m$ of rank $m \geq 2$.
\end{thm}
\begin{defn}[Doubles and retractions]
\label{def:double}
Let $\mathbb{F}(x,y)$ denote the free group on two generators, let
$w=w(x,y)$ denote some word in $\{x,y\}^{\pm 1}$. The amalgamated
free product
\[ D(x,y;w) = \bk{\mathbb{F}(x,y),\mathbb{F}(r,s)\mid w(x,y)= w(r,s)}\]
is the \define{double of $\mathbb{F}({x,y})$ along $w$}. The
homomorphism $\rho\colon D(x,y;w) \twoheadrightarrow \mathbb{F}(x,y)$ given by $r \mapsto
x, s\mapsto y$ is the \define{standard retraction.}
\end{defn}
\begin{defn}\label{defn:mirror}
Let $u \in \mathbb{F}(x,y)\leq D(x,y;w)$, but with
$u \not\sim_\pm w^n$ for any $n$, be given by a specific word
$u(x,y)$. Its \define{mirror image} is the distinct element
$u(r,s) \in \mathbb{F}(r,s) \leq D(x,y;w)$. $u(x,y)$ and $u(r,s)$ form
a \define{mirror pair.}
\end{defn}
It is obvious that mirror pairs are not $\sim_{\pm}$-equivalent. Let
$w$ be a $C$-test word and let $\idouble= D(x,y;w)$. It is well known
that any such double is a limit group. We will call $\idouble$ a
\define{$C$-double}.
\begin{lem}\label{lem:corank2}
The $C$-double $\idouble$ cannot map onto a free group of rank more than
$2$.
\end{lem}
\begin{proof}
$w$ is not primitive in $\mathbb{F}(x,y)$; therefore, by
\cite{shenitzer1955decomposition}, $\idouble = D(x,y;w)$ is not free. Theorem
\ref{thm:c-test} specifically states that $w$ is not a proper
power. It now follows from \cite[Theorem 1.5]{louder2013scott} that
$\idouble$ cannot map onto $\mathbb{F}_3$.
\end{proof}
The proof of the next theorem amounts to analyzing a Makanin-Razborov
diagram. We refer the reader to \cite{heil2016jsj} for an explicit
description of this diagram.
\begin{thm}
For any map $\phi\colon \idouble \to \mathbb{F}$ from a $C$-double
to some free group, if $u(x,y) \in \mathbb{F}(x,y)$ lies in the
commutator subgroup $[\mathbb{F}(x,y),\mathbb{F}(x,y)]$, but is not
conjugate to $w^n$ for any $n$, then the images
$\phi\left(u(x,y)\right)$ and $\phi\left(u(r,s)\right)$ of mirror
pairs are conjugate. In particular the limit group $\idouble$ is not
freely conjugacy separable. Furthermore mirror pairs $u(x,y),u(r,s)$ do not form Magnus
pairs.
\end{thm}
\begin{proof}
To prove the theorem we must analyze all maps from $\idouble$ to a free
group. By Lemma \ref{lem:corank2}, any such map factors through a
surjection onto $\mathbb{F}_2$, or factors through $\mathbb{Z}$.
~ \\ \textbf{Case 1:} \emph{$\phi(w)= 1$.} In this case the factor
$\mathbb{F}(x,y)$ does not map injectively; since free groups are
Hopfian, a non-injective homomorphism from $\mathbb{F}(x,y)$ to a free
group must have cyclic, hence abelian, image. The same applies to
$\mathbb{F}(r,s)$, so $\phi$ factors through the free product
\[
\pi_{ab}\colon D(x,y;w) \to \mathbb{F}(x,y)^{\mathrm{ab}}*\mathbb{F}(r,s)^{\mathrm{ab}}.
\]
In this case all elements of the commutator subgroups of
$\mathbb{F}(x,y)$ and $\mathbb{F}(r,s)$ are mapped to the identity and
therefore have conjugate images.
~\\ \textbf{Case 2:} \emph{$\phi(w)\neq 1.$} In this case the
factors $\mathbb{F}(x,y),\mathbb{F}(r,s)\leq D(x,y;w)$ map
injectively. By Theorem \ref{thm:c-test}, since $w$ is a $C$-test word
and $\phi(w(x,y)) = \phi(w(r,s))$, there is some $S \in \mathbb{F}_2$
such that $S\phi(x)S^{-1} =\phi(r)$ and $S\phi(y)S^{-1}
=\phi(s)$. Suppose now that $w(x,y)$ mapped to a proper power; then
by \cite[Main Theorem]{Baumslag-1965}, $w(x,y) \in \mathbb{F}(x,y)$ would be
part of a basis, which is impossible. It follows that the
centralizer of $\phi\left(w\right)$ is $\bk{\phi(w)}$ so that
$S = \phi(w)^n$. Therefore $\phi(r) = \phi(w)^n\phi(x)\phi(w)^{-n}$ and
$\phi(s) = \phi(w)^n\phi(y)\phi(w)^{-n}$, and the result follows in this case as
well.
We now show that a mirror pair $u(x,y)$ and $u(r,s)$ is not a Magnus
pair. Consider the quotient $D(x,y;w)/\ncl{u(x,y)}$. By using a
presentation with generators and relations, the group canonically
splits as the amalgamated free product
\[ \left(\mathbb{F}(x,y)/\ncl{u(x,y)}\right)*_{\bk{\overline w}}
\left(\mathbb{F}(r,s) / \ncl{w^n}\right)
\] where $\bk{w^n} = \bk{w} \cap \ncl{u}$ and $\overline{w}$ is the
image of $w$ in $\bk{w}/\bk{w^n}$. Now if
$\ncl{u(x,y)} = \ncl{u(r,s)}$ then we must have
$D(x,y;w)/\ncl{u(r,s)} = D(x,y;w)/\ncl{u(x,y)}$. This implies
$\mathbb{F}(r,s)/\ncl{u(r,s)} = \mathbb{F}(r,s)/\ncl{w^n}$, which
implies by Theorem \ref{thm:Magnus} that $u(r,s) \sim_\pm w^n$,
which is a contradiction.
\end{proof}
It seems likely that failure of free conjugacy separability should
typically follow from $C$-test-word-like behaviour, rather than from
the existence of Magnus pairs.
\section{Towers are freely conjugacy separable.}\label{sec:towers-fcs}
\begin{defn}\label{defn:quadratic-extension}
Let $G$ be a group. A \define{regular quadratic extension} of $G$ is
an extension $G\leq H$ such that \begin{itemize}
\item $H$ splits as a fundamental group of a graph of groups with
two vertex groups: $H_{v_1} = G$ and $H_{v_2} = \pi_1(\Sigma)$
where $H_{v_2}$ is a QH vertex group (see Definition
\ref{defn:QH}).
\item There is a retraction $H \twoheadrightarrow G$ such that the image of
$\pi_1(\Sigma)$ in $G$ is non-abelian.
\end{itemize}
We say that $\Sigma$ is the \define{surface associated to the
quadratic extension}, and note that if $\partial \Sigma = \emptyset$
then $H = G*\pi_1(\Sigma)$.
\end{defn}
\begin{defn}\label{defn:abelian-extension}
Let $G$ be a group. An \define{abelian extension by the free
abelian group $A$} is an extension $G \leq G*_\bk{u} (\bk{u}\oplus
A) =H$ where $u \in G$ is such that either its centralizer $Z_G(u) =
\bk{u}$, or $u=1$. In the case where $u=1$ the extension is $G \leq
G*A$ and it is called a \define{singular abelian extension}.
\end{defn}
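For example, if $G = \mathbb{F}(a,b)$ and $u = a$, then $Z_G(u) =
\bk{a}$ and the abelian extension of $G$ by $A = \bk{t}\cong\mathbb{Z}$
is \[ H = \mathbb{F}(a,b)*_{\bk{a}}\left( \bk{a}\oplus \bk{t}\right) =
\bk{a,b,t \mid [a,t]=1}, \] the classical extension of a centralizer.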
\begin{defn}\label{defn:tower}
Let $\mathbb{F}$ be a (possibly trivial) free group. A \define{tower
of height $n$ over $\mathbb{F}$} is a group $G$ obtained from a
sequence of extensions \[ \mathbb{F} = G_0 \leq G_1 \leq \ldots \leq
G_n = G \] where $G_i \leq G_{i+1}$ is either a regular quadratic
extension or an abelian extension. The $G_i$'s are the
\define{levels} of the tower $G$ and the sequence of levels is a
\define{tower decomposition}. A tower consisting entirely of abelian
extensions is an \define{iterated centralizer extension.}
\end{defn}
\begin{defn}\label{defn:level-decomposition}
Let $\mathbb{F} = G_0 \leq \ldots \leq G_n = G$ be a tower
decomposition of $G$. We call the graphs of groups decomposition of
$G_i$ with one vertex group $G_{i-1}$ and the other vertex group a
surface group or a free abelian group as given in Definitions
\ref{defn:quadratic-extension} and \ref{defn:abelian-extension} the
\define{$i^{\textrm{th}}$ level decomposition.}
\end{defn}
Towers appear as NTQ groups in the work of Kharlampovich and
Miasnikov, and as $\omega$-residually free towers, as well as
completions of strict resolutions in the work of Sela. It is a
well known fact that towers are limit groups \cite{KM-IrredI}. This
also follows easily from Proposition \ref{prop:strict-limit} and the
definitions.
\begin{prop}\label{prop:towers-discriminate}
Let $G$ be a tower of height $n$ over $\mathbb{F}$. Then $G$ is
discriminated by retractions $G\rightarrow G_{n-1}$. $G$ is also
discriminated by retractions onto $\mathbb{F}$.
\end{prop}
Following Definition 1.15 of \cite{Sela-Dioph-II} we have:
\begin{defn}\label{defn:closure}
Let $G$ be a tower. A \define{closure} of $G$ is another tower
$\cl{G}$ with an embedding $\theta\colon G \hookrightarrow \cl{G}$ such that
there is a commutative diagram\[
\begin{tikzpicture}[scale=1.5]
\node (T0) at (0,0) {$G_0$};
\node (Ti0) at (0.5,0) {$\leq$};
\node (T1) at (1,0) {$G_1$};
\node (Ti1) at (1.5,0) {$\leq$};
\node (T2) at (2,0) {$\ldots$};
\node (Ti2) at (2.5,0) {$\leq$};
\node (T3) at (3,0) {$G_n$};
\node (Ti3) at (3.5,0) {$=$};
\node (T4) at (4,0) {$G$};
\node (B0) at (0,-1) {$G_0$};
\node (Bi0) at (0.5,-1) {$\leq$};
\node (B1) at (1,-1) {$\cl{G}_1$};
\node (Bi1) at (1.5,-1) {$\leq$};
\node (B2) at (2,-1) {$\ldots$};
\node (Bi2) at (2.5,-1) {$\leq$};
\node (B3) at (3,-1) {$\cl{G}_n$};
\node (Bi3) at (3.5,-1) {$=$};
\node (B4) at (4,-1) {$\cl{G}$};
\draw[->] (T0) -- node[rotate=-90,above]{$=$} (B0);
\draw[right hook->] (T1) -- (B1);
\draw[right hook->] (T3) -- (B3);
\end{tikzpicture}\]
where the injections $G_i \hookrightarrow \cl{G}_i$ are restrictions of $\theta$
and the horizontal lines are tower decompositions. Moreover the
following must hold:
\begin{enumerate}
\item If $G_i \leq G_{i+1}$ is a regular quadratic extension with
associated surface $\Sigma$ such that $\partial \Sigma$ is
``attached'' to $\bk{u_1},\ldots,\bk{u_n} \leq G_i$ then $\cl{G}_i
\leq \cl{G}_{i+1}$ is a regular quadratic extension with
associated surface $\Sigma$ such that $\partial \Sigma$ is
``attached'' to $\bk{\theta(u_1)},\ldots,\bk{\theta(u_n)} \leq
\cl{G}_i$, in such a way that $\theta\colon G_i\hookrightarrow\cl{G}_i$
extends to a monomorphism $\theta\colon G_{i+1} \hookrightarrow \cl{G}_{i+1}$
which maps the vertex group $\pi_1(\Sigma)$ surjectively onto the
vertex group $\pi_1(\Sigma) \leq \cl{G}_{i+1}$.
\item If $G_i \leq G_{i+1}$ is an abelian extension then $\cl{G}_i
\leq \cl{G}_{i+1}$ is also an abelian extension. Specifically
(allowing $u_i=1$) if $G_{i+1} = G_i *_\bk{u_i}(\bk{u_i}\oplus
A_i)$, then $\cl{G}_{i+1} = \cl{G}_i
*_\bk{\theta(u_i)}(\bk{\theta(u_i)}\oplus A_i')$. Moreover we require the
embedding $\theta\colon G_{i+1}\rightarrow \cl{G}_{i+1}$ to map
$\bk{u_i}\oplus A_i$ to a finite index subgroup of
$\bk{\theta(u_i)}\oplus A_i'$.
\end{enumerate}
\end{defn}
We will now state one of the main results of \cite{KM-ift} and
\cite{Sela-Dioph-II}, but first some explanations of terminology are in
order. Towers are groups that arise as completed limit groups
corresponding to a strict resolution and the definition of closure
corresponds to the one given in \cite{Sela-Dioph-II}. We also note
that our requirement on the Euler characteristic of the surface pieces
given in Definitions \ref{defn:QH} and \ref{defn:quadratic-extension}
ensures that our towers are coordinate groups of \emph{normalized} NTQ
systems as described in the discussion preceding \cite[Lemma
76]{KM-ift}; we also point out that a \emph{correcting embedding} as
described right before \cite[Theorem 12]{KM-ift} is in fact a
closure in the terminology we are using.
We now give an obvious corollary (in fact a weakening) of
\cite[Theorem 1.22]{Sela-Dioph-II}, or \cite[Theorem 12]{KM-ift}; they
are the same result. Let $X,Y$ denote fixed tuples of variables.
\begin{lem}[$\forall\exists$-lifting Lemma]\label{lem:lift}
Let $\mathbb{F}$ be a fixed non-abelian free group and let
\[G=\bk{\mathbb{F},X \mid R(\mathbb{F},X)}\] be a standard finite
presentation of a tower over $\mathbb{F}$. Let $W_i(X,Y,\mathbb{F})=1$
and $V_i(X,Y,\mathbb{F})\neq 1$ be (possibly empty) finite systems of
equations and inequations (resp.). If the following holds:
\[
\mathbb{F} \models \forall X \exists Y \Big( R(\mathbb{F},X) = 1
\rightarrow \bigvee_{i=1}^m\big(W_i(X,Y,\mathbb{F})=1 \wedge
V_i(X,Y,\mathbb{F})\neq 1 \big) \Big)
\]
then there is an embedding $\theta\colon G \hookrightarrow \cl{G}$ into some
closure such that
\[
\cl{G} \models \exists Y
\bigvee_{i=1}^m\Big(W_i(\theta(X),Y,\mathbb{F})=1 \wedge
V_i(\theta(X),Y,\mathbb{F})\neq 1\Big)
\]
where $X$ and $\mathbb{F}$ are interpreted as the corresponding
subsets of $G = \bk{\mathbb{F},X \mid R(\mathbb{F},X)}$.
\end{lem}
In the terminology of \cite{Sela-Dioph-II} we have
$G = \bk{\mathbb{F},X}$ and $\cl{G} = \bk{\mathbb{F},X,Z}$ for some
collection of elements $Z$. Let $Y = (y_1,\ldots,y_k)$ be a tuple of
elements in $\cl{G}$ that witnesses the existential sentence above. A
collection of words $y_i(\mathbb{F},X,Z) =_{G^*} y_i$ is called a
\define{formal solution in $\cl{G}$}. According to \cite[Definition
24]{KM-ift} the tuple $Y \subset \cl{G}$ is an \define{$R$-lift}.
\begin{prop}\label{prop:towers-freely-conj-sep}
Let $G$ be a tower over a non-abelian free group $\mathbb{F}$ and
let $S \subset G$ be a finite family of pairwise non-conjugate
elements of $G$. There exists a discriminating family of retractions
$\psi_i\colon G\twoheadrightarrow \mathbb{F}$ such that for each $\psi_i$ the elements of
$\psi_i(S)$ are pairwise non-conjugate.
\end{prop}
\begin{proof}
Suppose towards a contradiction that this was not the case. Then
there exists a finite subset $P \subset G\setminus \{1\}$
such that for every retraction $r\colon G \twoheadrightarrow \mathbb{F}$, either $1 \in r(P)$ or
the elements of $r(S)$ are not pairwise non-conjugate. If we write
elements of $P$ and $S$ as fixed words $\{p_i(\mathbb{F},X)\}$ and
$\{s_j(\mathbb{F},X)\}$ (resp.) then we can express this as a
sentence. Indeed, consider first the
formula: \[\Phi_{P,S}(\mathbb{F},X,t) = \left(\left[ \bigvee_{p_i \in P}
p_i(\mathbb{F},X)=1 \right] \vee \left[\bigvee_{(s_i,s_j) \in \Delta(S)}
t^{-1} s_i(\mathbb{F},X)t = s_j(\mathbb{F},X) \right]\right)\] where
$\Delta(S) = \{(x,y) \in S\times S \mid x \neq y\}$. In English
this says that either some element of $P$ vanishes or two distinct
elements of $S$ are conjugated by some element $t$. We therefore
have:
\begin{equation}\label{eqn:formula}\mathbb{F} \models \forall X \left[\left(R(\mathbb{F},X)=1 \right) \rightarrow \exists
t\Phi_{P,S}(\mathbb{F},X,t)\right].
\end{equation}
It now follows by Lemma \ref{lem:lift} that there is some closure
$\theta\colon G \hookrightarrow \cl{G}$ such that \[\cl{G} \models \exists t
\Phi_{P,S}(\mathbb{F},\theta(X),t).\] Since $1 \not\in P$ and $\theta$ is
a monomorphism, none of the $p_i(\mathbb{F},X)$ are trivial,
so \[\cl{G} \models \exists t \left[ \bigvee_{(s_i,s_j) \in
\Delta(S)} \left(t^{-1} s_i(\mathbb{F},X)t = s_j(\mathbb{F},X)\right) \right].\] In
particular there are elements $u,v \in G$ which are not conjugate in
$G$ but are conjugate in $\cl{G}$. We will derive a
contradiction by showing that this is impossible.
We proceed by induction on the height of the tower. If the tower has
height 0 then $G = \mathbb{F}$ and the result obviously holds. Suppose
now that the claim holds for all towers of height $m < n$. Let $G$
have height $n$, let $u,v$ be non-conjugate elements of $G$, let $G
\leq \cl{G}$ be any closure, and suppose that there is some $t \in
\cl{G} \setminus G$ such that $t u t^{-1} = v$.
Let $D$ be the $n^\nth$ level decomposition of $\cl{G}$ and let $T$
be the corresponding Bass-Serre tree. Let $T(G)$ be the minimal
$G$-invariant subtree and let $D_G$ be the splitting induced by the
action of $G$ on $T(G)$. By Definition \ref{defn:closure} $D_G$ is
exactly the $n^\nth$ level decomposition of $G$ and two edges of
$T(G)$ are in the same $G$-orbit if and only if they are in the same
$\cl{G}$-orbit. We now consider separate cases:
\\~\\
{\bf Case 1:} \emph{Without loss of generality $u$ is hyperbolic in the $n^\nth$ level
decomposition of $G$.} If $v$ is elliptic in the $n^\nth$ level
decomposition of $G$ then it is elliptic in the $n^\nth$-level
decomposition of $\cl{G}$ and therefore cannot be conjugate to $u$
which acts hyperbolically on $T$.
It follows that both $u,v$ must be hyperbolic elements with respect to the
$n^\nth$ level decomposition of $G$. Let $l_u,l_v$ denote the axes
of $u,v$ (resp.) in $T(G) \subset T$. Since $t u t^{-1} = v$, we
must have $t\cdot l_u = l_v$. Let $e$ be some edge in $l_u$ then by
the previous paragraph $t\cdot e \subset l_v$ must be in the same
$G$-orbit as $e$, which means that there is some $g \in G$ such that
$gt \cdot e = e$, but again by Definition \ref{defn:closure} the
inclusion $G\leq \cl{G}$ induces a surjection of the edge groups of
the $n^\nth$ level decomposition of $G$ to the edge groups of the
$n^\nth$ level decomposition of $\cl{G}$, it follows that $gt \in G$
which implies that $t \in G$ contradicting the fact that $u,v$ were
not conjugate in $G$.
\\~\\
{\bf Case 2:} \emph{The elements $u,v$ are elliptic in the $n^\nth$
level decomposition of $G$.} Suppose first that $u,v$ were
conjugate into $G_{n-1}$, then the result follows from the fact that
there is a retraction $G \twoheadrightarrow G_{n-1}$ and by the induction
hypothesis. Similarly by examining the induced splitting of $G \leq
\cl{G}$, we see that $u$ cannot be conjugate into $G_{n-1}$ and $v$
into the other vertex group of the $n^\nth$-level decomposition. We
finally distinguish two sub-cases.
\\~\\
{\bf Case 2.1:} \emph{$G_{n-1} \leq G$ is an abelian extension by
the free abelian group $A$ and $u,v$ are conjugate in $G$ into
some free abelian group $\bk{w}\oplus A$.} Any homomorphic image
of $\bk{w}\oplus A$ in $\mathbb{F}$ must lie in a cyclic group. Since
$u \neq v$ in $\cl{G}$ and $\cl{G}$ is discriminated by retractions
onto $\mathbb{F}$, there must be some retraction $r\colon \cl{G}\rightarrow
\mathbb{F}$ such that $r(u)\neq r(v)$, which means that $u,v$ are sent
to distinct powers of a generator of the cyclic subgroup
$r(\bk{w}\oplus A)$. It follows that their images are not conjugate
in $\mathbb{F}$, so $u,v$ cannot be conjugate in $\cl{G}$.
\\~\\
{\bf Case 2.2:} \emph{$G_{n-1} \leq G$ is a quadratic extension and
$u$ and $v$ are conjugate in $G$ into the vertex group
$\pi_1(\Sigma)$.} Arguing as in Case 1 we find that if there is
some $t \in \cl{G}$ such that $t u t^{-1} = v$ then there is some $g
\in G$ such that $gt$ fixes a vertex of $T(G) \subset T$ whose
stabilizer is conjugate to $\pi_1(\Sigma)$. Again by the
surjectivity criterion in item 1. of Definition \ref{defn:closure},
$gt \in G$ contradicting the fact that $u,v$ were not conjugate in
$G$. All the possibilities have been exhausted so the result
follows.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:generic-sequence}]
Let $S_1 \subset S_2 \subset S_3 \subset \ldots$ be an exhaustion of
representatives of distinct conjugacy classes of $G$ by finite
sets. For each $S_j$ let $\{\psi^j_i\}$ be the discriminating
sequence given by Proposition \ref{prop:towers-freely-conj-sep}. We take
$\{\phi_i\}$ to be the diagonal sequence $\{\psi^i_i\}$. This
sequence is necessarily discriminating and the result follows.
\end{proof}
It is worthwhile to point out that \emph{test sequences} given in the
proof of \cite[Theorem 1.18]{Sela-Dioph-II} or the \emph{generic
sequence} given in \cite[Definition 44]{KM-ift}, because of their
properties, must satisfy the conclusions of Theorem
\ref{thm:generic-sequence}. As an immediate consequence of Sela's
completion construction (\cite[Definition 1.12]{Sela-Dioph-II}) or
canonical embeddings into NTQ groups (\cite[\S 7]{KM-elementary})
Theorem \ref{thm:generic-sequence} implies the following:
\begin{cor}\label{cor:strict-discriminate}
Let $L$ be a limit group and suppose that for some finite set $S
\subset L$ there is a homomorphism $f\colon L\to\mathbb{F}$ such
that: \begin{itemize}
\item The elements of $f(S)$ are pairwise non-conjugate.
\item There is a factorization \[f = f_m \circ f_{m-1} \circ \cdots
\circ f_1\] such that each $f_i$ is a \emph{strict} homomorphism
between limit groups (see Definition \ref{defn:strict}).
\end{itemize}
Then there is a discriminating sequence $\psi_i\colon L\to \mathbb{F}$ such
that for all $i$ the elements $\psi_i(S)$ are pairwise non-conjugate.
\end{cor}
\section{Refinements}\label{sec:refinements}
\subsection{$\pi_1(\ensuremath{\mathbb{U}})$ is almost freely conjugacy separable.}
\label{sec:almost-fcs}
The limit group $\idouble$ constructed in Section \ref{sec:other-case} had an
abundance of pairs of non-conjugate elements which had to have
conjugate images in every free quotient. The situation is completely
different for our Magnus pair group.
\begin{prop}\label{prop:only-u-v}
$\bk{u},\bk{v} \leq \pi_1(\ensuremath{\mathbb{U}})$ are the only maximal cyclic
subgroups of $\pi_1(\ensuremath{\mathbb{U}})$ whose conjugacy classes cannot be
separated via a homomorphism to a free group $\pi_1(\ensuremath{\mathbb{U}}) \to
\mathbb{F}$.
\end{prop}
\begin{proof}
We begin by embedding $\pi_1(\ensuremath{\mathbb{U}})$ into a hyperbolic tower. Let
$\rho\colon \pi_1(\ensuremath{\mathbb{U}}) \twoheadrightarrow \mathbb{F}_3$ be the strict homomorphism
given in Figure \ref{fig:2}. Consider the group
\[ L =
\bk{\pi_1(\ensuremath{\mathbb{U}}),\mathbb{F}_3 ,s \mid u = \rho(u), s v s^{-1}= \rho(v)}.
\]
This presentation naturally gives a splitting $D$ of $L$ given in
Figure \ref{fig:3}.
\begin{figure}[htb]
\centering
\begin{tikzpicture}[scale=0.5]
\abovecollar{4}{2}{0.5}{0.25}
\draw (3.5,1.75) node {${\blacktriangleleft}$};
\abovecollar{8}{2}{0.5}{0.25}
\draw (7.5,1.75) node {${\blacktriangleleft}$};
\abovecollar{10}{2}{0.5}{0.25}
\draw (9.5,1.75) node {${\blacktriangleleft}$};
\abovecollar{12}{2}{0.5}{0.25}
\draw (11.5,1.75) node {${\blacktriangleright}$};
\lazyllipse{2}{-2}{0.5}{0.25}
\draw (1.5,-2.25) node {${\blacktriangleright}$};
\lazyllipse{4}{-2}{0.5}{0.25}
\draw (3.5,-2.25) node {${\blacktriangleleft}$};
\lazyllipse{6}{-2}{0.5}{0.25}
\draw (5.5,-2.25) node {${\blacktriangleleft}$};
\lazyllipse{10}{-2}{0.5}{0.25}
\draw (9.5,-2.25) node {${\blacktriangleleft}$};
\draw (3,2) arc (180:0:4.5 and 3.5);
\draw (4,2) arc (180:0:1.5 and 1);
\draw (8,2) arc (180:0:0.5 and 0.5);
\draw (10,2) arc (180:0:0.5 and 0.5);
\draw (10,-2) arc (360:180:4.5 and 3.5);
\draw (9,-2) arc (360:180:1.5 and 1);
\draw (5,-2) arc (360:180:0.5 and 0.5);
\draw (3,-2) arc (360:180:0.5 and 0.5);
\lazyllipse{4}{0}{0.5}{0.25}
\lazyllipse{10}{0}{0.5}{0.25}
\draw (3.5,1.5) -- (3.5,0.5)
(3,-0.5) -- (2,-1.5)
(3.5,-0.5) -- (3.5,-1.5)
(4,-0.5) -- (5,-1.5);
\draw (9.5,-1.5) -- (9.5,-0.5)
(9,0.5) -- (8,1.5)
(9.5,0.5) -- (9.5,1.5)
(10,0.5) -- (11,1.5);
\draw (3.5,-0.25) node{$\large{\blacktriangleleft}$}
(2.75,0) node{$u$};
\draw (9.5,-0.25) node{$\large{\blacktriangleleft}$}
(10.25,0) node{$v$};
\draw(9,4) node {${\Sigma_u}$};
\draw(4,-4) node {${\Sigma_v}$};
\draw (6,0) -- (4.5,0);
\draw (7,0) -- (8.5,0);
\node (a) at (6.5,0) [rectangle,draw,fill=white] {$\mathbb{F}_3$};
\end{tikzpicture}
\caption{The splitting $D$ of $L$.}
\label{fig:3}
\end{figure}
We have a retraction $\rho^*\colon L \twoheadrightarrow \mathbb{F}_3$ given by\[
\rho^*\colon \left\{\begin{array}{l}
g \mapsto \rho(g); g \in \pi_1(\ensuremath{\mathbb{U}})\\
f \mapsto f; f \in \mathbb{F}_3\\
s \mapsto 1\\
\end{array}\right.
\] It therefore follows that $L$ is a hyperbolic tower over $\mathbb{F}_3$.
Claim: \emph{if $\alpha, \beta \in \pi_1(\ensuremath{\mathbb{U}}) \leq L$ are
non-conjugate in $\pi_1(\ensuremath{\mathbb{U}})$ and $\alpha,\beta$ are not both
conjugate to $\bk{u}$ or $\bk{v}$ in $\pi_1(\ensuremath{\mathbb{U}})$ then they are
not conjugate in $L$.} If both $\alpha$ and $\beta$ are elliptic,
then this follows easily from the fact that the vertex groups are
malnormal in $L$. Also $\alpha$ cannot be elliptic while $\beta$ is
hyperbolic. Suppose now that $\alpha,\beta$ are hyperbolic. Let $T$ be
the Bass-Serre tree corresponding to $D$ and let $T' =
T(\pi_1(\ensuremath{\mathbb{U}}))$ be the minimal $\pi_1(\ensuremath{\mathbb{U}})$ invariant
subtree. Suppose that there is some $s \in L$ such that $s \alpha
s^{-1} = \beta$, then as in the proof of Proposition
\ref{prop:towers-freely-conj-sep}
we find that for some $g \in \pi_1(\ensuremath{\mathbb{U}})$ either $gs$ permutes two
edges in $T'$ that are in distinct $\pi_1(\ensuremath{\mathbb{U}})$-orbits or it fixes
some edge in $T'$. The former case is impossible and it is easy to see
that the latter case implies that $gs \in \pi_1(\ensuremath{\mathbb{U}})$. Therefore we
have a contradiction to the assumption that $\alpha,\beta$ are not
conjugate in $\pi_1(\ensuremath{\mathbb{U}})$. The claim is now proved.
It therefore follows that if $\alpha,\beta \in \pi_1(\ensuremath{\mathbb{U}}) \leq L$
are as above, then by Theorem \ref{thm:generic-sequence} there exists
some retraction $r\colon L \twoheadrightarrow \mathbb{F}_3$ such that $r(\alpha),r(\beta)$
are non-conjugate.
\end{proof}
This construction gives an alternative proof of the fact that
$\pi_1(\ensuremath{\mathbb{U}})$ is a limit group. The group $L$ constructed is a
triangular quasiquadratic group and the retraction $\rho^*$ makes it
non-degenerate, and therefore an NTQ group. $L$, and hence
$\pi_1(\ensuremath{\mathbb{U}})\leq L$, are limit groups by \cite{KM-IrredI}.
\subsection{$C$-doubles do not contain Magnus pairs.}\label{sec:no-mag-pairs}
Theorem \ref{thm:generic-sequence} enables us to examine a $C$-double
$\idouble$ more closely.
\begin{prop}\label{prop:ivanov-double-no-magnus}
The $C$-double $\idouble$ constructed in Section
\ref{sec:other-case} does not contain a Magnus pair.
\end{prop}
\begin{proof}
We need to show that if two elements $u,v$ of $\idouble$ have the
same normal closure in $\idouble$ then they must be
conjugate. Suppose that $u,v$ are both elliptic with respect to the
splitting (as a double) of $\idouble$ but not conjugate. By Theorem
\ref{thm:c-test} if they are conjugate to a mirror pair $(u^g,v^h)$
for some $g,h \in \idouble$ then they do not form a Magnus pair,
i.e. they have separate normal closures. Otherwise there are
homomorphisms $\idouble \to \mathbb{F}$ in which $u,v$ have
non-conjugate images, therefore by Theorem \ref{thm:Magnus} the
normal closures of their images are distinct; so
$\ncl{u} \neq \ncl{v}$ as well.
Suppose now that $u$ or $v$ is hyperbolic in $\idouble$. Recall the
generating set $x,y,r,s$ for $\idouble$ given in Definition
\ref{def:double}. Let $\mathbb{F} = \mathbb{F}(x,y)$ and consider the
embedding into a centralizer extension, represented as an HNN
extension
\begin{align*}
\idouble & \hookrightarrow \bk{\mathbb{F},t | t w(x,y) = w(x,y)t} = \mathbb{F}*^t_\bk{w}\\
x & \mapsto x,~~~ y \mapsto y\\
r & \mapsto t^{-1}xt,~~~ s \mapsto t^{-1}yt
\end{align*}
The stable letter $t$ makes mirror pairs conjugate in this bigger
group. A hyperbolic element of $\idouble$ can be written as a product
of syllables\[ u = a_1(x,y)a_2(r,s)\cdots a_l(r,s)
\] with $a_1$ or $a_l$ possibly trivial. The image of $u$ in
$\mathbb{F}*^t_\bk{w}$ is \[
u = a_1(x,y)\left(t^{-1}a_2(x,y)t\right)\cdots
\left(t^{-1}a_l(x,y)t\right).\] Consider the set of words of the
form \[
w_1(x,y)\left(t^{-1}w_2(x,y)t\right)\cdots w_{N-1}(x,y)\left(t^{-1}w_{N}(x,y)t\right),
\] with $w_1$ or $w_N$ possibly trivial. This set is clearly closed
under multiplication, inverses and passing to
$\mathbb{F}*_{\bk w}^t$-normal form. It follows that we can identify
the image of $\idouble$ with this set of words, which we call
\define{$t^{-1}*t$-syllabic words}. Each factor $w_i(x,y)$ or
$t^{-1}w_j(x,y)t$ is called a \define{$t^{-1}*t$-syllable}.
It is an easy consequence of Britton's Lemma that if $u$ is a
hyperbolic $t^{-1}*t$-syllabic word, i.e.\ one whose cyclically
reduced syllable length is more than 1, and $g^{-1}ug$ is again
$t^{-1}*t$-syllabic for some $g$ in $\mathbb{F}*_\bk{w}^t$, then $g$
must itself be $t^{-1}*t$-syllabic. Indeed this can be seen by
cyclically permuting the $\mathbb{F}*_\bk{w}^t$-syllables of a
cyclically reduced word $u$. We refer the reader to \cite[\S
IV.2]{Lyndon-Schupp-1977} for further details about normal forms and
conjugation in HNN extensions.
Suppose now that $u,v$ are non-conjugate in $\idouble$, but have the
same normal closure in $\idouble$. Since at least one of them is
hyperbolic in $\idouble$, it is clear from the embedding that its
image must also be hyperbolic with respect to the HNN splitting
$\mathbb{F}*_\bk{w}^t$. Now, since
$\ncl{u}_\idouble = \ncl{v}_\idouble$, in the bigger group
$\mathbb{F}*_\bk{w}^t$ we
have:\[ \ncl{u}_{\mathbb{F}*_\bk{w}^t} =
\ncl{\ncl{u}_\idouble}_{\mathbb{F}*_\bk{w}^t} =
\ncl{\ncl{v}_\idouble}_{\mathbb{F}*_\bk{w}^t} =
\ncl{v}_{\mathbb{F}*_\bk{w}^t} \]
By Theorem \ref{thm:generic-sequence} or \cite{Lioutikova-CRF}
centralizer extensions are freely conjugacy separable, therefore
they cannot contain Magnus pairs. It follows that $u,v$ must be
conjugate in the bigger $\mathbb{F}*_\bk{w}^t$. Let
$g^{-1}ug=_{\mathbb{F}*_{\bk w}^t} v$. Now both $u$ and $v$ must be
hyperbolic so it follows that $g$ must also be a $t^{-1}*t$-syllabic
word; thus $g$ is in the image of $\idouble$ in
$\mathbb{F}*_{\bk w}^t$. Furthermore since the map
$\idouble \hookrightarrow \mathbb{F}*_{\bk w}^t$ is an
embedding\[ g^{-1}ug=_{\mathbb{F}*_{\bk w}^t} v \Rightarrow
g^{-1}ug=_\idouble v,
\] contradicting the fact that $u,v$ are non-conjugate in $\idouble$.
\end{proof}
\subsection{A non-tower limit group that is freely conjugacy separable}
\label{sec:tower-non-eg}
In this section we construct a limit group that is freely conjugacy separable but which
does not admit a tower structure. Let $H \leq [\mathbb{F},\mathbb{F}]$ be
some f.g. malnormal subgroup of $\mathbb{F}$, e.g.
$H = \bk{aba^{-1}b^{-1},b^{-2}a^{-1}b^2a} \leq \mathbb{F}(a,b)$, and
pick $h \in H \setminus [H,H]$ such that $H$ is \define{rigid}
relative to $h$, i.e. $H$ has no non-trivial cyclic or free splittings
relative to $\bk{h}$. Because $h \in [\mathbb{F},\mathbb{F}]$ there is
a quadratic extension \[\mathbb{F} < \mathbb{F}*_\bk{h} \pi_1(\Sigma) \]
where $\Sigma$ has one boundary component and has genus
$g = \textrm{genus}(h)$, in particular there is a retraction onto
$\mathbb{F}$. Consider now the subgroup $L = H*_\bk{h}\pi_1(\Sigma)$.
\begin{prop}
$L$ as above is freely conjugacy separable.
\end{prop}
\begin{proof}
Because $H \leq \mathbb{F}$ was chosen to be malnormal, an easy
Bass-Serre theory argument (e.g. apply \cite[Theorem
IV.2.8]{Lyndon-Schupp-1977}) tells us that $\alpha,\beta \in L$ are
conjugate if and only if they are conjugate in
$\mathbb{F}*_\bk{h} \pi_1(\Sigma)$. On the other hand by Theorem
\ref{thm:generic-sequence}, $\mathbb{F}*_\bk{h} \pi_1(\Sigma)$, and
hence $L$, are freely conjugacy separable.
\end{proof}
\begin{defn}
A splitting $\mathbb X$ is \emph{elliptic} in a splitting
$\mathbb Y$ if every edge group in $\mathbb X$ is conjugate into a
vertex group of $\mathbb Y$. Otherwise we say $\mathbb X$ is
hyperbolic in $\mathbb Y$.
\end{defn}
\begin{thm}[{\cite[Theorem 7.1]{R-S-JSJ}}]\label{thm:JSJ} Let $G$ be an
f.p. group with a single end. There exists a reduced, unfolded
$\mathbb{Z}$-splitting of $G$ called a JSJ decomposition of $G$ with the
following properties:
\begin{enumerate}
\item\label{it:jsj-cmq} Every canonical maximal QH (recall definition
\ref{defn:QH}) subgroup (CMQ) of $G$ is conjugate to a vertex
group in the JSJ decomposition. Every QH subgroup of $G$ can be
conjugated into one of the CMQ subgroups of $G$. Every non-CMQ
vertex group in the JSJ decomposition is elliptic in every
$\mathbb{Z}$-splitting of $G$.
\item\label{it:jsj-hyphyp}An elementary $\mathbb{Z}$-splitting $G = A*_CB$ or $G=A*_C$ which
is hyperbolic in another elementary $\mathbb{Z}$-splitting is obtained
from the JSJ decomposition of $G$ by cutting a 2-orbifold
corresponding to a CMQ subgroup of $G$ along a weakly essential
simple closed curve (s.c.c.).
\item\label{it:jsj-ell} Let $\Theta$ be an elementary $\mathbb{Z}$-splitting $G = A*_CB$ or $G=A*_C$ which
is elliptic with respect to any other elementary $\mathbb{Z}$ splitting of
$G$. There exists a $G$-equivariant simplicial map between a
subdivision of $T_{\mathrm{JSJ}}$, the Bass-Serre tree
corresponding to the JSJ decomposition, and $T_\Theta$, the
Bass-Serre tree corresponding to $\Theta$.
\item\label{it:jsj-to-general} Let $\Lambda$ be a general $\mathbb{Z}$-splitting of $G$. There exists
a $\mathbb{Z}$-splitting $\Lambda_1$ obtained from the JSJ decomposition
by splitting the CMQ subgroups along weakly essential s.c.c. on
their corresponding 2-orbifolds, so that there exists a
$G$-equivariant simplicial map between a subdivision of the
Bass-Serre tree $T_{\Lambda_1}$ and $T_{\Lambda}.$
\item\label{it:jsj-canonical} If $\mathrm{JSJ}_1$ is another JSJ
decomposition of $G$, then there exists a $G$-equivariant
simplicial map $h_1$ from a subdivision of $T_{\mathrm{JSJ}}$ to
$T_{\mathrm{JSJ}_1}$, and a $G$-equivariant simplicial map $h_2$
from a subdivision of $T_{\mathrm{JSJ}_1}$ to $T_{\mathrm{JSJ}}$,
so that $h_1\circ h_2$ and $h_2 \circ h_1$ are $G$-homotopic to
the corresponding identity maps.
\end{enumerate}
\end{thm}
We note that item \ref{it:jsj-canonical}. of the above theorem
describes the canonicity of a JSJ decomposition.
\begin{lem}
The splitting $L = H*_\bk{h}\pi_1(\Sigma)$ is a cyclic JSJ
splitting.
\end{lem}
\begin{proof}
This is an elementary $\mathbb{Z}$-splitting of $L$; let us see how it can be
obtained from the JSJ decomposition given in Theorem
\ref{thm:JSJ}. The first case is if $h$ is elliptic in every other
splitting. Then by \ref{it:jsj-ell}. of Theorem \ref{thm:JSJ} there exists an $L$-
equivariant map $\rho$ from $T_{\mathrm{JSJ}}$ to the Bass-Serre
tree $T$ corresponding to $H*_\bk{h}\pi_1(\Sigma)$ in which $H$
stabilizes a vertex $v$. It follows that $H$ acts on
$\phi^{-1}(\{v\}) = T_H \subset T_{\mathrm{JSJ}}$. Since $H$ is
rigid relative to $h$ and $h$ acts elliptically on
$T_{\mathrm{JSJ}}$, $T_H$ cannot be infinite, since that would imply
that $H$ admits an essential cyclic splitting relative to $h$. $T_H$
must in fact be a point. Otherwise $T_H$ is a finite tree tree and
there must be a ``boundary'' vertex $u\in T_H$ such that
$H \not \geq L_u$. Since $\phi(u) = v$, $L$-equivariance implies
that $L_u$ fixes $v$ so that $L_u \leq H$, which is a
contradiction. It follows that in this case $H$ is actually a vertex
group of the JSJ decomposition and $\pi_1(\Sigma)$ must be a CMQ
vertex group.
The second case is that $h$ is hyperbolic in some other
$\mathbb{Z}$-splitting $\mathbb D$ of $L$. Since $H$ is rigid relative to
$h$, $H$ must be hyperbolic with respect to $\mathbb D$. Now by
\ref{it:jsj-hyphyp}. of Theorem \ref{thm:JSJ} the splitting
$L = H*_\bk{h}\pi_1(\Sigma)$ can be obtained from the JSJ splitting
of $L$ by cutting along a simple closed curve on some CMQ vertex
group, and this curve is conjugate to $h$. But this means that $H$
admits a cyclic splitting as a graph of groups with a QH vertex
group $\pi_1(\Sigma')$ such that the $\pi_1$-image of some connected
component of $\partial \Sigma'$ is conjugate to $\bk{h}$, in particular
$H$ must have a cyclic splitting relative to $h$, which contradicts
the fact that $H$ is rigid relative to $h$.
\end{proof}
\begin{prop}\label{prop:not-a-tower}
The limit group $L = H*_\bk{h}\pi_1(\Sigma)$ does not admit a tower
structure.
\end{prop}
\begin{proof}
Suppose towards a contradiction that $L$ was a tower, and consider the
last level:
\[L_{n-1} < L_n = L.\] Since $L$ has no non-cyclic abelian subgroups,
$L_{n-1} < L$ must be a regular quadratic extension. This means that $L$
admits a cyclic splitting $\mathbb D$ with a vertex group $L_{n-1}$
and a QH vertex group $Q$. Since $L = H*_\bk{h}\pi_1(\Sigma)$ is a
JSJ decomposition and $\pi_1(\Sigma)$ is a CMQ vertex group, by
\ref{it:jsj-cmq}. and \ref{it:jsj-to-general}. of Theorem
\ref{thm:JSJ} the QH vertex group $Q$ must be represented as
$\pi_1(\Sigma_1)$, where $\Sigma_1$ is a connected subsurface
$\Sigma_1 \subset \Sigma$. It follows from
\ref{it:jsj-to-general}. of Theorem \ref{thm:JSJ} that the other
vertex group must be $L_{n-1} = H*_{\bk{h}} \pi_1(\Sigma')$ where
$\Sigma' = \Sigma \setminus \Sigma_1$.
Since $L_{n-1} < L$ is a quadratic extension there is a retraction
$L \twoheadrightarrow L_{n-1}$. Note however that because $\Sigma'$ has at least
two boundary components \[H*_{\bk{h}} \pi_1(\Sigma') =
H*\mathbb{F}_m\] where $m = -\chi(\Sigma')$. Now since we have a
retraction $L \twoheadrightarrow L_{n-1}$ there are $x_i,y_i \in L_{n-1}$ such
that \[h = \prod_{i=1}^g[x_i,y_i].\] But this would imply that $h \in
[L_{n-1},L_{n-1}]$ which is clearly seen to be false by abelianizing
$H*\mathbb{F}_m$ and remembering that $h \not\in [H,H]$.
\end{proof}
\bibliographystyle{alpha}
\section{Introduction} Recently, the L3 collaboration reported
\cite{1} the observation of high-mass $\gamma\gamma$ pairs in the
reactions $e^+ e^- \rightarrow l \overline{l} + \gamma\gamma$ in the
$Z^0$ resonance region; in particular, four such events were reported
with the invariant $\gamma\gamma$ mass near 60 GeV from a data sample
corresponding to 950,000 produced $Z^0$'s in the LEP $e^+e^-$
colliding beam device. More recently, related data from the ALEPH,
DELPHI, and OPAL Collaborations \cite{2} have become available, and
hence, an immediate issue which needs to be addressed is that of the
theoretical expectations from the basic QED processes themselves for
such high-mass photon pairs. It is this issue which we shall address
in what follows.
More specifically, we want to use the YFS Monte Carlo approach
\cite{3} to higher-order $SU_{2L}\times U_1$ processes introduced by
two of us (S.J.and B.F.L.W.) and the recent exact results \cite{4} by
the three of us on the processes $e^+ e^- \rightarrow l \overline{l} +
\gamma\gamma$,\ $l=e,\mu,\tau$, in the $Z^0$ resonance region to
assess the probability that the observations in Refs.\ \cite{1,2} are
consistent with higher-order QED processes. This means that the
over-all normalization of our calculations in the L3 acceptance must
be known even in the presence of the strong initial state radiative
effects associated with the $Z^0$ resonance line shape. Accordingly,
we will employ our YFS Monte Carlo event generator \cite{3}, which
treats $e^+ e^- \rightarrow f \overline{f} + n(\gamma)$, in the $Z^0$
resonance region with the $n(\gamma)$ multiple photon radiation for
both the initial and final fermion. Here, $f$ is a fundamental
$SU_{2L}\times U_1$ fermion. We should stress that, strictly
speaking, $f \ne e$ is implicit in YFS3. However, in the L3
acceptance for the high mass $\gamma\gamma$ pairs, kinematical cuts
eliminate any large effect from the exchanges in $e^+ e^- \rightarrow
e^+ e^- + n(\gamma)$ which are not the $s$-channel $Z^0$ exchange, so
that we can use YFS3 for our analysis at the currently required level
of precision. Our recent results in Ref.\ \cite{4} of course do not
have any such qualification in their applicability to the L3-type
events for $e^+ e^- \gamma\gamma$ final states: the full matrix
element is available from Ref.\ \cite{4}, and indeed, it will be used
to check the validity of our YFS3 $s$-channel exchange approach to the
high $\gamma\gamma$ mass L3-type $e^+ e^- \gamma\gamma$ final states,
for example.
What we will do in this paper then is to set the QED higher order
expectations for the L3-type high $\gamma\gamma$ invariant mass
events. We hope that sufficient data will be taken so that the
statistical errors on the experimental results analyzed in this paper
will cease to be the over-riding dominant error in the comparison
between theory and experiment. We encourage the LEP/SLC
experimentalists to strive to accumulate the attendant factor $\sim
10$ in statistics required to reach this latter goal.
Our work is organized as follows. In the next section, we present
some relevant theoretical and experimental background information. In
Section 3, we compare our theoretical predictions with the LEP data.
In Section 4, we present some summarizing discussions.
\section{Preliminaries} The basic framework in which we shall work
will be that of the renormalization group improved YFS theory that is
realized via Monte Carlo methods via the event generators YFS2,
BHLUMI, and YFS3 in Refs.\ \cite{3}. Since we shall focus on the YFS3
predictions for L3-type events, we begin by describing the relevant
aspects of the Monte Carlo realization of our YFS methods as they
relate to YFS3.
Specifically, for a process such as $e^+ e^- \rightarrow f
\overline{f} + n(\gamma)$, we have, from Refs.\ \cite{5,6}, the
fundamental differential cross section \begin{eqnarray}
d\sigma_{{}_{\hbox{\tiny YFS}}} &=&
\exp\left\{2\alpha\mathop{\hbox{Re}} B + 2\alpha \widetilde B\right\}
\sum_{n=0}^\infty \int\prod_{j=1}^n {d^3 k_j\over k^0_j} \int {d^4
y\over (2\pi)^4}\ \nonumber\\* &\times& e^{iy(p_e + p_{\bar e} - p_f -
p_{\bar f} - \sum_j k_j) + D(y)} \;\overline\beta_n(k_1,\ldots,k_n)
{d^3 p_f d^3 p_{\bar f}\over p^0_f p^0_{\bar f}} \end{eqnarray} where
\begin{eqnarray} D(y) &=& \int{d^3 k\over k^0} \widetilde S(k)\;
\left(e^{-iyk} - \theta(k_{\hbox{\tiny max}} - k^0)\right), \\
2\alpha\widetilde B &=& \int_{k^0\le k_{{}_{\hbox{\tiny max}}}} {d^3
k\over k^0} \widetilde S(k), \\* B &=& {-i\over 8\pi^3} \int{d^4
k\over k^2 - m_\gamma^2 + i\epsilon} \nonumber\\*
&\times&\left[-\left({-2p_e - k \over k^2 + 2k\cdot p_e + i\epsilon} +
{-2p_{\bar e} + k \over k^2 - 2k\cdot p_{\bar e} + i\epsilon}
\right)^2 + \cdots\right], \end{eqnarray} with \begin{equation}
\widetilde S(k) = {\alpha\over 4\pi}\left[ -\left({p_{\bar e}\over k
\cdot p_{\bar e}} - {p_e \over k \cdot p_e}\right)^2 + \cdots\right].
\end{equation} Here, the kinematics is that illustrated in Fig.\ 1,
$m_\gamma$ is the photon infrared regulator mass, and $\bar\beta_n$
are the YFS hard photon residuals defined in Refs.\ \cite{5,6}, for
example.
\testfig{L3fig1.ps}%
{
\begin{figure}
\epsfysize=2in
\center
\leavevmode
\epsffile{L3fig1.ps}
\caption{\figone}
\end{figure}
}
In YFS3 \cite{3}, two of us (S.J. and B.F.L.W.) have realized (1)
via Monte Carlo methods for the case $f\ne e$ for both $n(\gamma)$
radiation from the initial state and $n(\gamma)$ radiation from the
final state, with the hard photon residuals $\overline\beta_{0,1,2}$
implemented in the respective MC to $\mathop{\cal O}(\alpha^2)$ at the
leading log level. Thus, the $\mathop{\cal O}(\alpha)$ contributions
to $\overline\beta_{0,1,2}$ are exact and the $\mathop{\cal
O}(\alpha^2)$ contributions are correct to the leading log level. It
follows that, in a special region of the event phase space for events
of the L3-type, it is necessary to check that the leading-log
$\mathop{\cal O}(\alpha^2)$ approximation for the respective
hard-photon effects is indeed accurate to the desired level of
accuracy. It is this issue that we discuss in our following analysis.
\section{Comparison of YFS3 and Exact $\mathop{\cal O}(\alpha^2)$
Results for L3-Type Events}
In this section, we compare the exact $\mathop{\cal O}(\alpha^2)$
results in Ref.\ \cite{4} for the process $e^+ e^- \rightarrow l
\overline l + 2\gamma$, restricted to the L3-type phase space cuts as
they are given in Ref.\ \cite{1}, with those predicted by YFS3. Here,
we emphasize immediately that, due to the wide angles of the photons
with respect to the charged particles, the effect of radiative
corrections on the $Z^0$ line shape has been taken into account
properly in YFS3, and this amounts to an over-all normalization
correction to the cross section for L3-type events. Upon removing
this $Z^0$ line shape effect, we are left with a comparison of the
YFS3 two hard photon distributions in the L3-type events phase space
with the analogous distributions as given by the exact $\mathop{\cal
O}(\alpha^2)$ result. It is this comparison which we now present.
\testfig{L3fig2.ps}%
{
\begin{figure}
\center
\leavevmode
\epsfysize=3in
\epsffile{L3fig2.ps}
\caption{\figtwo}
\end{figure}
}
Specifically, we focus on the ratio of the YFS3 two hard photon
leading log matrix element squared and the exact $\mathop{\cal
O}(\alpha^2)$ two hard photon matrix element squared in the L3-type
events phase space by plotting this ratio as a function of
$M_{\gamma\gamma}$, the respective di-photon invariant mass, in such
L3-type events. This we do in Fig.\ 2 for the cases $e^+ e^-
\rightarrow e^+ e^- + 2\gamma$ and $e^+ e^- \rightarrow \mu^+\mu^- +
2\gamma$. We see that, indeed, our YFS3 matrix element squared is
within $78\%$ of the exact $\mathop{\cal O}(\alpha^2)$ result for the
$e^+ e^- \rightarrow e^+ e^- + 2\gamma$ case in the 60 GeV regime, and
is within $94\%$ of the exact $\mathop{\cal O}(\alpha^2)$ result in
the $e^+ e^- \rightarrow \mu^+ \mu^- + 2\gamma$ case. Alternatively,
we plot in Figs. 3 and 4 (for the $e^+e^-\gamma\gamma$ and
$\mu^+\mu^-\gamma\gamma$ cases respectively) the di-photon mass
distribution in the L3-event phase space for the exact $\mathop{\cal
O}(\alpha^2)$ and YFS3 two hard photon matrix elements squared, both
(a) as $d\sigma/dM_{\gamma\gamma}$, and (b) as a histogram of the
number of expected events versus $M_{\gamma\gamma}$ for 27 pb${}^{-1}$
of integrated luminosity. Also shown in Figs.\ 3(b) and 4(b) are the
MC results from the L3 paper, Ref.\ \cite{1}, which are just the YFS3
results, of course. We conclude that, in all cases in Figs.\ 3 and 4,
there is good agreement between our exact $\mathop{\cal O}(\alpha^2)$
expectations and those results generated by YFS3.
\testfig{L3fig3a.ps}%
{
\testfig{L3fig3b.ps}%
{
\begin{figure}
\center
\leavevmode
\epsfysize=3in
\epsffile{L3fig3a.ps}
\par
\center
\leavevmode
\epsfysize=3in
\epsffile{L3fig3b.ps}
\caption{\figthree}
\end{figure}
}
}
\testfig{L3fig4a.ps}%
{
\testfig{L3fig4b.ps}%
{
\begin{figure}
\center
\leavevmode
\epsfysize=3in
\epsffile{L3fig4a.ps}
\par
\center
\leavevmode
\epsfysize=3in
\epsffile{L3fig4b.ps}
\caption{\figfour}
\end{figure}
}
}
Recently, all LEP collaborations have searched for L3-type events.
The results of their search are discussed in Ref.\ \cite{2}. Here, we
note that, in the regime of $M_{\gamma\gamma}$ between 50 GeV and 80
GeV, the LEP collaborations find 15 events in the L3-type $2\gamma$
phase space, and YFS3 predicts 9. The probability that this is a
statistical fluctuation is $1.9\%$. What we can say here is that the
statistical effects in the YFS3 comparison with data are indeed the
dominant source of uncertainty in that comparison. Thus, we urge the
experimentalists to strive for more data so that the nature of these
observations can be clarified.
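For orientation, if the observed count is modeled as a Poisson variable
with mean $\mu = 9$, the probability of seeing exactly 15 events is
$e^{-9}\,9^{15}/15! \simeq 1.9\%$, while the tail probability
$P(N\geq 15)$ is roughly $4\%$; the quoted probability depends, of
course, on precisely how the fluctuation is defined in the full
analysis.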
\section{Conclusions} We have analyzed the L3-type
$l\overline{l}\gamma\gamma$ event high di-photon mass spectrum in
YFS3, the second order leading log YFS-exponentiated final + initial
state $n(\gamma)$ radiation event generator, in comparison to the
exact $\mathop{\cal O}(\alpha^2)$ prediction as determined by the
results in Ref.\ \cite{4}. We find good agreement between these two
independent calculations of the respective spectra. This agreement
means that the use of YFS3 to estimate the probability that the
observed L3-type events at LEP are a QED fluctuation does not suffer
from an unknown physical precision error associated with its use of
$\mathop{\cal O}(\alpha^2)$ leading-log matrix elements for hard
2-photon emission into the respective L3-type
$l\overline{l}\gamma\gamma$ phase space.
We want to note that a comparison of YFS3 with the $\mathop{\cal O}
(\alpha^2)$ exact results in Ref.\ \cite{7} has also been carried out
\cite{1,8} and it agrees with our findings. The way is open to
incorporate the exact $\mathop{\cal O}(\alpha^2)$ matrix element for
the L3-type $l\overline{l}\gamma\gamma$ event phase space into YFS3 if
the statistics on these events would be increased to require such
accuracy in the YFS3 predictions. We encourage the LEP
experimentalists to strive for such an increase in L3-type
$l\overline{l}\gamma\gamma$ event statistics.
\section*{Acknowledgments} Two of the authors (S.J. and B.F.L.W.)
thank Prof.\ J. Ellis for the kind hospitality and support of the
CERN TH Division, where a part of this work was completed. The
authors have also benefitted from discussions with J. Qian, K.
Riles, E. R.-W\accent'30 as, Z. W\accent'30 as and B. Wyslouch.
\section*{\hfil Appendix \label{appendix} \hfil}
\addcontentsline{toc}{section}{\currentname}
\let\Osubsection\subsection
\renewcommand{\section}[1]{\stepcounter{section}
\Osubsection*{A.\arabic{section}.~~{#1}}
\addcontentsline{toc}{subsection}{\currentname}}%
\let\Osubsubsection\subsubsection
\renewcommand{\subsection}[1]{\stepcounter{subsection}
\Osubsubsection*{A.\arabic{section}.\arabic{subsection}.~~{#1}}
\addcontentsline{toc}{subsubsection}{\currentname}}%
This appendix provides additional information on the \cite{holston.etal:2017}
model, their estimation procedure as well as snippets of R-Code. Matrix
details regarding the three stages of their procedure are taken from the
file \texttt{HLW\_Code\_Guide.pdf} which is contained in the \texttt{%
HLW\_Code.zip} file available from John Williams' website at the Federal
Reserve Bank of New York: %
\url{https://www.newyorkfed.org/medialibrary/media/research/economists/williams/data/HLWCode.zip}%
.
The state-space model notation is:%
\begin{equation}
\begin{array}{l}
\mathbf{y}_{t}=\mathbf{Ax}_{t}+\mathbf{H}\boldsymbol{\xi }_{t}+\boldsymbol{%
\nu }_{t} \\
\boldsymbol{\xi }_{t}=\mathbf{F}\boldsymbol{\xi }_{t-1}+\underbrace{\mathbf{S%
}\boldsymbol{\varepsilon }_{t}}_{\boldsymbol{\epsilon }_{t}}%
\end{array}%
\text{, \ \ where }%
\begin{bmatrix}
\boldsymbol{\nu }_{t} \\
\boldsymbol{\varepsilon }_{t}%
\end{bmatrix}%
\sim \mathsf{MNorm}\left(
\begin{bmatrix}
\boldsymbol{0} \\
\boldsymbol{0}%
\end{bmatrix}%
,%
\begin{bmatrix}
\mathbf{R} & \boldsymbol{0} \\
\boldsymbol{0} & \mathbf{W}%
\end{bmatrix}%
\right) ,
\end{equation}%
where $\mathbf{S}\boldsymbol{\varepsilon }_{t}=\boldsymbol{\epsilon }_{t}$,
so that $\mathrm{Var}(\mathbf{S}\boldsymbol{\varepsilon }_{t})=\mathrm{Var}(%
\boldsymbol{\epsilon }_{t})=\mathbf{SWS}^{\prime }=\mathbf{Q}$, with $%
\boldsymbol{\epsilon }_{t}$ and $\mathbf{Q}$ being the notation used in the
online appendix of \cite{holston.etal:2017} for the state vector's
disturbance term and its variance-covariance matrix.
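For concreteness, the following self-contained R snippet sketches one
predict/update recursion of the Kalman filter for a system written in
exactly this notation (the function and variable names are ours and do
not appear in the HLW code; the transition matrix is called \texttt{Fm}
to avoid masking R's built-in \texttt{F}):
\begin{verbatim}
# One Kalman filter recursion for
#   y_t  = A x_t + H xi_t + nu_t,   nu_t ~ N(0, R)
#   xi_t = Fm xi_{t-1} + S eps_t,   Var(S eps_t) = S W S' = Q
kalman.step <- function(xi, P, y, x, A, H, Fm, Q, R) {
  xi.pred <- Fm %*% xi                    # state prediction
  P.pred  <- Fm %*% P %*% t(Fm) + Q       # prediction covariance
  v   <- y - A %*% x - H %*% xi.pred      # prediction error
  Fv  <- H %*% P.pred %*% t(H) + R        # prediction error variance
  K   <- P.pred %*% t(H) %*% solve(Fv)    # Kalman gain
  list(xi = xi.pred + K %*% v,            # updated state mean
       P  = P.pred - K %*% H %*% P.pred,  # updated state covariance
       ll = -0.5 * (log(det(2 * pi * Fv)) +
                    t(v) %*% solve(Fv) %*% v))
}
\end{verbatim}
In each stage of the procedure, the per-period terms \texttt{ll} are
summed and the resulting log-likelihood is maximized numerically over
the stage-specific parameter vector.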
\section{Stage 1 Model \label{sec:AS1}}
The first Stage model is defined by the following system matrices:\bsq\label%
{AS1_M}
\begin{align}
\mathbf{y}_{t}& =[y_{t},~\pi _{t}]^{\prime } \label{AS1:y} \\
\mathbf{x}_{t}& =[y_{t-1},~y_{t-2},~\pi _{t-1},~\pi _{t-2,4}]^{\prime }
\label{AS1:x} \\
\boldsymbol{\xi }_{t}& =[y_{t}^{\ast },~y_{t-1}^{\ast },~y_{t-2}^{\ast
}]^{\prime }, \label{AS1:xi}
\end{align}%
\esq\vsp[-5]
\begin{equation*}
\mathbf{A}=%
\begin{bmatrix}
a_{y,1} & a_{y,2} & 0 & 0 \\
b_{y} & 0 & b_{\pi } & (1-b_{\pi })%
\end{bmatrix}%
,~\mathbf{H}=%
\begin{bmatrix}
1 & -a_{y,1} & -a_{y,2} \\
0 & -b_{y} & 0%
\end{bmatrix}%
,~\mathbf{F}=%
\begin{bmatrix}
1 & 0 & 0 \\
1 & 0 & 0 \\
0 & 1 & 0%
\end{bmatrix}%
,~\mathbf{S}=%
\begin{bmatrix}
1 \\
0 \\
0%
\end{bmatrix}%
.
\end{equation*}%
From this, the measurement relations are:%
\begin{align}
\mathbf{y}_{t}& =\mathbf{Ax}_{t}+\mathbf{H}\boldsymbol{\xi }_{t}+\boldsymbol{%
\nu }_{t} \notag \\
\begin{bmatrix}
y_{t} \\
\pi _{t}%
\end{bmatrix}%
& =%
\begin{bmatrix}
a_{y,1} & a_{y,2} & 0 & 0 \\
b_{y} & 0 & b_{\pi } & (1-b_{\pi })%
\end{bmatrix}%
\begin{bmatrix}
y_{t-1} \\
y_{t-2} \\
\pi _{t-1} \\
\pi _{t-2,4}%
\end{bmatrix}%
+%
\begin{bmatrix}
1 & -a_{y,1} & -a_{y,2} \\
0 & -b_{y} & 0%
\end{bmatrix}%
\begin{bmatrix}
y_{t}^{\ast } \\
y_{t-1}^{\ast } \\
y_{t-2}^{\ast }%
\end{bmatrix}%
+%
\begin{bmatrix}
\varepsilon _{t}^{\tilde{y}} \\
\varepsilon _{t}^{\pi }%
\end{bmatrix}
\label{AS1:m}
\end{align}%
with the corresponding state equations being:%
\begin{align}
\boldsymbol{\xi }_{t}& =\mathbf{F}\boldsymbol{\xi }_{t-1}+\mathbf{S}%
\boldsymbol{\varepsilon }_{t} \notag \\
\begin{bmatrix}
y_{t}^{\ast } \\
y_{t-1}^{\ast } \\
y_{t-2}^{\ast }%
\end{bmatrix}%
& =%
\begin{bmatrix}
1 & 0 & 0 \\
1 & 0 & 0 \\
0 & 1 & 0%
\end{bmatrix}%
\begin{bmatrix}
y_{t-1}^{\ast } \\
y_{t-2}^{\ast } \\
y_{t-3}^{\ast }%
\end{bmatrix}%
+%
\begin{bmatrix}
1 \\
0 \\
0%
\end{bmatrix}%
\begin{bmatrix}
\varepsilon _{t}^{y^{\ast }}%
\end{bmatrix}%
. \label{AS1:s}
\end{align}
Expanding \ref{AS1:m} and \ref{AS1:s} yields:\bsq\label{AS1_0}%
\begin{align*}
y_{t}& =y_{t}^{\ast }+a_{y,1}(y_{t-1}-y_{t-1}^{\ast
})+a_{y,2}(y_{t-2}-y_{t-2}^{\ast })+\varepsilon _{t}^{\tilde{y}} \\
\pi _{t}& =b_{y}(y_{t-1}-y_{t-1}^{\ast })+b_{\pi }\pi _{t-1}+\left( 1-b_{\pi
}\right) \pi _{t-2,4}+\varepsilon _{t}^{\pi }
\end{align*}%
and%
\begin{align*}
y_{t}^{\ast }& =y_{t-1}^{\ast }+\varepsilon _{t}^{y^{\ast }} \\
y_{t-1}^{\ast }& =y_{t-1}^{\ast } \\
y_{t-2}^{\ast }& =y_{t-2}^{\ast },
\end{align*}%
\esq respectively, for the measurement and state equations. Defining output $%
y_{t}$ as trend plus cycle, and ignoring the identities, then yields the
following relations for the Stage 1 model:\bsq\label{AS1}%
\begin{align}
y_{t}& =y_{t}^{\ast }+\tilde{y}_{t} \label{AS1:a} \\
\pi _{t}& =b_{\pi }\pi _{t-1}+\left( 1-b_{\pi }\right) \pi _{t-2,4}+b_{y}%
\tilde{y}_{t-1}+\varepsilon _{t}^{\pi } \label{AS1:b} \\
\tilde{y}_{t}& =a_{y,1}\tilde{y}_{t-1}+a_{y,2}\tilde{y}_{t-2}+\varepsilon
_{t}^{\tilde{y}} \label{AS1:c} \\
y_{t}^{\ast }& =y_{t-1}^{\ast }+\varepsilon _{t}^{y^{\ast }}. \label{AS1:d}
\end{align}%
\esq If we disregard the inflation equation \ref{AS1:b} for now, the
decomposition of output into trend and cycle can be recognized as the
standard Unobserved Component (UC) model of \cite{harvey:1985}, \cite%
{clark:1987}, \cite{kuttner:1994}, \cite{morley.etal:2003} and others. \cite%
{holston.etal:2017} write on page S64: "\dots \textit{we follow Kuttner
(1994) and apply the Kalman filter to estimate the natural rate of output,
omitting the real rate gap term from Eq. (4)} [our Equation \ref{AS1:c}]
\textit{and assuming that the trend growth rate, }$g$\textit{, is constant.}"
One key difference is, nevertheless, that no drift term is included in the
trend specification in \ref{AS1:d}, so that $y_{t}^{\ast }$ follows a random
walk \emph{without} drift. Evidently, this cannot match the upward sloping
pattern in the GDP series. The way that \cite{holston.etal:2017} deal with
this mismatch is by `\textit{detrending'} output $y_{t}$ in the estimation.
This is implemented by re-placing $\{y_{t-j}\}_{j=0}^{2}$ in $\mathbf{y}_{t}$
and $\mathbf{x}_{t}$ in \ref{AS1_M} by $(y_{t}-gt)$, where $g$ is a
parameter (and not a trend growth state variable) to be estimated, and $t$
is a linear time trend defined as $t=[1,\ldots ,T]^{\prime }$. This is
hidden away from the reader and is not described in the documentation in
either text or equation form. Only from the listing of the vector of
parameters to be estimated by MLE, referred to as $\boldsymbol{\theta }_{1}$
in the middle of page 10 in the documentation, does it become evident that
an additional parameter --- confusingly labelled as $g$ --- is included in
the estimation. That is, the vector of Stage 1 parameters to be estimated is
defined as:
\begin{equation}
\boldsymbol{\theta }_{1}=[a_{y,1},~a_{y,2},~b_{\pi },~b_{y},~g,~\sigma _{%
\tilde{y}},~\sigma _{\pi },~\sigma _{y^{\ast }}]^{\prime }. \label{AS1:t1}
\end{equation}
Note that the parameter $g$ in $\boldsymbol{\theta }_{1}$ is not found in
any of the system matrices that describe the Stage 1 model on page 10 of the
documentation. This gives the impression that it is a typographical error in
the documentation, rather than a parameter that is added to the model in the
estimation. However, from their R-Code file \texttt{%
unpack.parameters.stage1.R}, which is reproduced in \coderef{R:unpack}{3},
one can see that part of the unpacking routine, which is later called by the
log-likelihood estimation function, `\textit{detrends'} the data (see the
highlighted lines 29 to 31 in \coderef{R:unpack}{3}, where \texttt{$\ast $
parameter[5]} refers to parameter $g$ in $\boldsymbol{\theta }_{1}$). Due to
the linear time trend removal in the estimation stage, it has to be added
back to the Kalman Filter and Smoother extracted trends $y_{t}^{\ast }$,
which is done in \texttt{kalman.states.wrapper.R} (see the highlighted
lines 29 to 30 in \coderef{R:wrapper}{4}, where the if statement: \texttt{if
(stage == 1) \{} on line 28 of this file ensures that this is only done for
the Stage 1 model). The actual equation for the trend term $y_{t}^{\ast }$
is thus:%
\begin{align}
y_{t}^{\ast }& =g+y_{t-1}^{\ast }+\varepsilon _{t}^{y^{\ast }}
\label{s1:y*1} \\
& =y_{0}^{\ast }+gt+\sum_{s=1}^{t}\varepsilon _{s}^{y^{\ast }},
\label{s1:y*2}
\end{align}%
where $g$ is an intercept term that captures \textit{constant} trend growth,
and $y_{0}^{\ast }$ is the initial condition of the state vector set to
806.45 from the HP filter output as discussed in \fnref{fn:1}. Why \cite%
{holston.etal:2017} prefer to use this way of dealing with the drift term
rather than simply adding an intercept term to the state equation in \ref%
{AS1:s} is not clear, and not discussed anywhere.
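Schematically, the detrend/re-trend steps just described amount to the
following (a stylized sketch with our own variable names, not the
verbatim HLW code):
\begin{verbatim}
# theta[5] plays the role of the parameter g in theta_1
time.trend  <- 1:n.obs
y.detrended <- y - theta[5] * time.trend  # cf. unpack.parameters.stage1.R
# ... Kalman filter/smoother run on the detrended data ...
ystar.hat <- ystar.filtered +
             theta[5] * time.trend        # cf. kalman.states.wrapper.R
\end{verbatim}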
In the estimation of the Stage 1 model, the state vector $\boldsymbol{\xi }%
_{t}$ is initialized using the same procedure as outlined in \ref{eq:P00S1a}
and \fnref{fn:1}, with the numerical values of $\boldsymbol{\xi }_{00}$ and $%
\mathbf{P}_{00}$ set at:%
\begin{align}
\boldsymbol{\xi }_{00}& =[806.4455,~805.2851,~804.1248] \label{AS1:xi00} \\
\mathbf{P}_{00}& =%
\begin{bmatrix}
0.4711 & 0.2 & 0.0 \\
0.2 & 0.2 & 0.0 \\
0.0 & 0.0 & 0.2%
\end{bmatrix}%
. \label{AS1:P00}
\end{align}
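To fix ideas, the Stage 1 trend-cycle dynamics in \ref{AS1} together
with the drift specification \ref{s1:y*1} can be simulated in a few
lines of R (the parameter values below are purely illustrative and are
not estimates; the inflation block is ignored):
\begin{verbatim}
set.seed(1)
n.obs <- 200
g <- 0.8; a.y1 <- 1.5; a.y2 <- -0.6          # illustrative values only
eps.ystar  <- rnorm(n.obs, sd = 0.6)
eps.ytilde <- rnorm(n.obs, sd = 0.5)
ystar  <- cumsum(g + eps.ystar)              # y*_t = y*_{t-1} + g + eps
ytilde <- filter(eps.ytilde, c(a.y1, a.y2),  # stationary AR(2) cycle
                 method = "recursive")
y <- ystar + ytilde                          # output = trend + cycle
\end{verbatim}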
\section{Stage 2 Model\label{sec:AS2}}
The second Stage model of \cite{holston.etal:2017} is defined by the
following model matrices:%
\begin{align}
\mathbf{y}_{t}& =[y_{t},~\pi _{t}]^{\prime } \label{AS2:y} \\
\mathbf{x}_{t}& =[y_{t-1},~y_{t-2},~r_{t-1},~r_{t-2},~\pi _{t-1},~\pi
_{t-2,4},~1]^{\prime } \label{AS2:x} \\
\boldsymbol{\xi }_{t}& =[y_{t}^{\ast },~y_{t-1}^{\ast },~y_{t-2}^{\ast
},~g_{t-1}]^{\prime } \label{AS2:xi}
\end{align}%
\vsp[-11]
\begin{align*}
\mathbf{A}& =%
\begin{bmatrix}
a_{y,1} & a_{y,2} & \frac{a_{r}}{2} & \frac{a_{r}}{2} & 0 & 0 & a_{0} \\
b_{y} & 0 & 0 & 0 & b_{\pi } & (1-b_{\pi }) & 0%
\end{bmatrix}%
,~\mathbf{H}=%
\begin{bmatrix}
1 & -a_{y,1} & -a_{y,2} & a_{g} \\
0 & -b_{y} & 0 & 0%
\end{bmatrix}%
, \\
& \\
\mathbf{F}& =%
\begin{bmatrix}
1 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1%
\end{bmatrix}%
,~\mathbf{S}=%
\begin{bmatrix}
1 & 0 \\
0 & 0 \\
0 & 0 \\
0 & 1%
\end{bmatrix}%
.
\end{align*}%
The measurement and state relations are given by:%
\begin{align}
\mathbf{y}_{t}& =\mathbf{Ax}_{t}+\mathbf{H}\boldsymbol{\xi }_{t}+\boldsymbol{%
\nu }_{t} \notag \\
\begin{bmatrix}
y_{t} \\
\pi _{t}%
\end{bmatrix}%
& =%
\begin{bmatrix}
a_{y,1} & a_{y,2} & \frac{a_{r}}{2} & \frac{a_{r}}{2} & 0 & 0 & a_{0} \\
b_{y} & 0 & 0 & 0 & b_{\pi } & (1-b_{\pi }) & 0%
\end{bmatrix}%
\begin{bmatrix}
y_{t-1} \\
y_{t-2} \\
r_{t-1} \\
r_{t-2} \\
\pi _{t-1} \\
\pi _{t-2,4} \\
1%
\end{bmatrix}%
+%
\begin{bmatrix}
1 & -a_{y,1} & -a_{y,2} & a_{g} \\
0 & -b_{y} & 0 & 0%
\end{bmatrix}%
\begin{bmatrix}
y_{t}^{\ast } \\
y_{t-1}^{\ast } \\
y_{t-2}^{\ast } \\
g_{t-1}%
\end{bmatrix}%
+%
\begin{bmatrix}
\varepsilon _{t}^{\tilde{y}} \\
\varepsilon _{t}^{\pi }%
\end{bmatrix}
\label{AS2:m}
\end{align}%
and%
\begin{align}
\boldsymbol{\xi }_{t}& =\mathbf{F}\boldsymbol{\xi }_{t-1}+\mathbf{S}%
\boldsymbol{\varepsilon }_{t} \notag \\
\begin{bmatrix}
y_{t}^{\ast } \\
y_{t-1}^{\ast } \\
y_{t-2}^{\ast } \\
g_{t-1}%
\end{bmatrix}%
& =%
\begin{bmatrix}
1 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1%
\end{bmatrix}%
\begin{bmatrix}
y_{t-1}^{\ast } \\
y_{t-2}^{\ast } \\
y_{t-3}^{\ast } \\
g_{t-2}%
\end{bmatrix}%
+%
\begin{bmatrix}
1 & 0 \\
0 & 0 \\
0 & 0 \\
0 & 1%
\end{bmatrix}%
\begin{bmatrix}
\varepsilon _{t}^{y^{\ast }} \\
\varepsilon _{t-1}^{g}%
\end{bmatrix}%
. \label{AS2:s}
\end{align}%
Note that $\sigma _{g}^{2}$ in $\mathrm{Var}(\boldsymbol{\varepsilon }_{t})=\mathbf{W}=\mathrm{diag}([\sigma _{y^{\ast }}^{2},~\sigma _{g}^{2}])$ is replaced by $(\hat{\lambda}_{g}\sigma _{y^{\ast }})^{2}$,
where $\hat{\lambda}_{g}$ is the estimate from Stage 1, so that we
obtain: \vsp[-8]
\begin{align}
\mathrm{Var}(\mathbf{S}\boldsymbol{\varepsilon }_{t})& =\mathbf{SWS}^{\prime
} \notag \\
& =%
\begin{bmatrix}
1 & 0 \\
0 & 0 \\
0 & 0 \\
0 & 1%
\end{bmatrix}%
\begin{bmatrix}
\sigma _{y^{\ast }}^{2} & 0 \\
0 & (\hat{\lambda}_{g}\sigma _{y^{\ast }})^{2}%
\end{bmatrix}%
\begin{bmatrix}
1 & 0 \\
0 & 0 \\
0 & 0 \\
0 & 1%
\end{bmatrix}%
^{\prime } \notag \\[0.08in]
\mathbf{Q}& =%
\begin{bmatrix}
\sigma _{y^{\ast }}^{2} & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & (\hat{\lambda}_{g}\sigma _{y^{\ast }})^{2}%
\end{bmatrix}%
, \label{S2Q}
\end{align}%
which is then used in the Kalman Filter routine and in the ML estimation of
the Stage 2 model parameters.
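
The diagonal form of $\mathbf{Q}$ in \ref{S2Q} is easily verified numerically. A minimal Python sketch (with placeholder values for $\sigma _{y^{\ast }}$ and $\hat{\lambda}_{g}$):
\begin{verbatim}
import numpy as np

sigma_ystar, lam_g = 0.5, 0.05          # placeholder values
S = np.array([[1.0, 0.0],
              [0.0, 0.0],
              [0.0, 0.0],
              [0.0, 1.0]])              # Stage 2 selection matrix
W = np.diag([sigma_ystar**2, (lam_g * sigma_ystar)**2])

Q = S @ W @ S.T                         # Var(S * eps_t) = S W S'
# Q = diag([sigma_y*^2, 0, 0, (lam_g * sigma_y*)^2]) as in the text
print(np.round(Q, 6))
\end{verbatim}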
Expanding the relations in \ref{AS2:m} and \ref{AS2:s} leads to the
measurement:\bsq\label{AS2m}%
\begin{align}
y_{t}& =y_{t}^{\ast }+a_{y,1}(y_{t-1}-y_{t-1}^{\ast
})+a_{y,2}(y_{t-2}-y_{t-2}^{\ast })+\tfrac{a_{r}}{2}%
(r_{t-1}+r_{t-2})+a_{0}+a_{g}g_{t-1}+\varepsilon _{t}^{\tilde{y}} \\
\pi _{t}& =b_{y}(y_{t-1}-y_{t-1}^{\ast })+b_{\pi }\pi _{t-1}+\left( 1-b_{\pi
}\right) \pi _{t-2,4}+\varepsilon _{t}^{\pi }
\end{align}%
\esq and corresponding state relations\bsq%
\begin{align}
y_{t}^{\ast }& =y_{t-1}^{\ast }+g_{t-2}+\varepsilon _{t}^{y^{\ast }} \\
y_{t-1}^{\ast }& =y_{t-1}^{\ast } \\
y_{t-2}^{\ast }& =y_{t-2}^{\ast } \\
g_{t-1}& =g_{t-2}+\varepsilon _{t-1}^{g}.
\end{align}%
\esq Defining output $y_{t}$ as before as trend plus cycle, dropping
identities, simplifying and rewriting gives the following Stage 2 system
relations:\bsq\label{AS2}%
\begin{align}
y_{t}& =y_{t}^{\ast }+\tilde{y}_{t} \\
\pi _{t}& =b_{\pi }\pi _{t-1}+\left( 1-b_{\pi }\right) \pi _{t-2,4}+b_{y}%
\tilde{y}_{t-1}+\varepsilon _{t}^{\pi } \\
a_{y}(L)\tilde{y}_{t}& =a_{0}+\tfrac{a_{r}}{2}%
(r_{t-1}+r_{t-2})+a_{g}g_{t-1}+\varepsilon _{t}^{\tilde{y}} \\
y_{t}^{\ast }& =y_{t-1}^{\ast }+g_{t-2}+\varepsilon _{t}^{y^{\ast }}
\label{ystar_mis} \\
g_{t-1}& =g_{t-2}+\varepsilon _{t-1}^{g},
\end{align}%
\esq where the corresponding vector of parameters to be estimated by MLE is:%
\begin{equation}
\boldsymbol{\theta }_{2}=[a_{y,1},~a_{y,2},~a_{r},~a_{0},~a_{g},~b_{\pi
},~b_{y},~\sigma _{\tilde{y}},~\sigma _{\pi },~\sigma _{y^{\ast }}]^{\prime
}. \label{eq:theta2}
\end{equation}%
The state vector $\boldsymbol{\xi }_{t}$ in the estimation of the Stage 2
model is initialized using the procedure outlined in \ref{eq:P00S1a} and %
\fnref{fn:1}, with the numerical value of $\boldsymbol{\xi }_{00}$ and $%
\mathbf{P}_{00}$ set at:%
\begin{align}
\boldsymbol{\xi }_{00}& =[806.4455,~805.2851,~804.1248,~1.1604]
\label{AS2:xi00} \\
\mathbf{P}_{00}& =%
\begin{bmatrix}
0.7185 & 0.2 & 0.0 & 0.2 \\
0.2 & 0.2 & 0.0 & 0.0 \\
0.2 & 0.2 & 0.0 & 0.0 \\
0.2 & 0.0 & 0.2 & 0.2009%
\end{bmatrix}%
. \label{AS2:P00}
\end{align}
Notice from the trend specification in \ref{ystar_mis} that $g_{t-2}$
instead of $g_{t-1}$ is included in the equation. This is not a
typographical error, but rather a `\emph{feature}' of the Stage 2 model
specification of \cite{holston.etal:2017}, and is not obvious until the
Stage 2 model relations are written out as above in equations \ref{AS2:m} to %
\ref{AS2}.\ I use the selection matrix $\mathbf{S}$ to derive what the
variance-covariance matrix of $\mathbf{S}\boldsymbol{\varepsilon }_{t}$,
that is, $\mathrm{Var}(\mathbf{S}\boldsymbol{\varepsilon }_{t})=\mathrm{Var}(%
\boldsymbol{\epsilon }_{t})=\mathbf{SWS}^{\prime }=\mathbf{Q}$, should look
like. \cite{holston.etal:2017} only report the $\mathbf{Q}$ matrix in their
online appendix included in the R-Code zip file (see page 10, lower half of
the page in Section 7.4).
In the Stage 3 model, \cite{holston.etal:2017} use a `\emph{trick}' to
arrive at the correct trend specification for $y_{t}^{\ast }$ by including
both the $\varepsilon _{t-1}^{g}$ and the $\varepsilon _{t}^{y^{\ast }}$
error terms in the equation for $y_{t}^{\ast }$ (see \ref{eq:g_trick}
below). This can also be seen from the $\mathbf{Q}$ matrix on page 11 in
Section 7.5 of their online appendix, or from \ref{Q3} below, which now
includes off-diagonal terms in the Stage 3 model.
\subsection{Getting the correct Stage 2 Model from the Stage 3 Model \label%
{sec:AS21}}
We can apply this same `\emph{trick}' for the Stage 2 model, by taking the
Stage 3 model state-space form and deleting the row, respectively, column
entries of the $\mathbf{F}$, $\mathbf{H}$, and $\mathbf{S}$ matrices to make
them conformable with the required Stage 2 model. The state and measurement
equations of the correct Stage 2 model then look as follows:%
\begin{align}
\mathbf{y}_{t}& =\mathbf{Ax}_{t}+\mathbf{H}\boldsymbol{\xi }_{t}+\boldsymbol{%
\nu }_{t} \notag \\
\begin{bmatrix}
y_{t} \\
\pi _{t}%
\end{bmatrix}%
& =%
\begin{bmatrix}
a_{y,1} & a_{y,2} & \frac{a_{r}}{2} & \frac{a_{r}}{2} & 0 & 0 \\
b_{y} & 0 & 0 & 0 & b_{\pi } & (1-b_{\pi })%
\end{bmatrix}%
\begin{bmatrix}
y_{t-1} \\
y_{t-2} \\
r_{t-1} \\
r_{t-2} \\
\pi _{t-1} \\
\pi _{t-2,4}%
\end{bmatrix}%
+%
\begin{bmatrix}
1 & -a_{y,1} & -a_{y,2} & -\frac{a_{r}}{2} & -\frac{a_{r}}{2} \\
0 & -b_{y} & 0 & 0 & 0%
\end{bmatrix}%
\begin{bmatrix}
y_{t}^{\ast } \\
y_{t-1}^{\ast } \\
y_{t-2}^{\ast } \\
g_{t-1} \\
g_{t-2}%
\end{bmatrix}%
+%
\begin{bmatrix}
\varepsilon _{t}^{\tilde{y}} \\
\varepsilon _{t}^{\pi }%
\end{bmatrix}%
\end{align}%
\begin{align*}
\boldsymbol{\xi }_{t}& =\mathbf{F}\boldsymbol{\xi }_{t-1}+\mathbf{S}%
\boldsymbol{\varepsilon }_{t} \\
\begin{bmatrix}
y_{t}^{\ast } \\
y_{t-1}^{\ast } \\
y_{t-2}^{\ast } \\
g_{t-1} \\
g_{t-2}%
\end{bmatrix}%
& =%
\begin{bmatrix}
1 & 0 & 0 & 1 & 0 \\
1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 & 0%
\end{bmatrix}%
\begin{bmatrix}
y_{t-1}^{\ast } \\
y_{t-2}^{\ast } \\
y_{t-3}^{\ast } \\
g_{t-2} \\
g_{t-3}%
\end{bmatrix}%
+%
\begin{bmatrix}
1 & 1 \\
0 & 0 \\
0 & 0 \\
0 & 1 \\
0 & 0%
\end{bmatrix}%
\begin{bmatrix}
\varepsilon _{t}^{y^{\ast }} \\
\varepsilon _{t-1}^{g}%
\end{bmatrix}%
, \\
\intxt{which, upon expanding and dropping identities, yields:}y_{t}&
=y_{t}^{\ast }+\tilde{y}_{t} \\
\pi _{t}& =b_{\pi }\pi _{t-1}+\left( 1-b_{\pi }\right) \pi _{t-2,4}+b_{y}%
\tilde{y}_{t-1}+\varepsilon _{t}^{\pi } \\
a_{y}(L)\tilde{y}_{t}& =\tfrac{a_{r}}{2}(r_{t-1}-g_{t-1})+\tfrac{a_{r}}{2}%
(r_{t-2}-g_{t-2})+\varepsilon _{t}^{\tilde{y}} \\
y_{t}^{\ast }& =y_{t-1}^{\ast }+\overbrace{g_{t-2}+\varepsilon _{t-1}^{g}}%
^{g_{t-1}}+\varepsilon _{t}^{y^{\ast }} \\
g_{t-1}& =g_{t-2}+\varepsilon _{t-1}^{g}.
\end{align*}%
These last relations correspond to \ref{S2full0}, with $\varepsilon _{t}^{%
\tilde{y}}$ being the counterpart to $\mathring{\varepsilon}_{t}^{\tilde{y}%
}=-a_{r}(L)z_{t}+\varepsilon _{t}^{\tilde{y}}$ if we take the full Stage 3
model as the true model.
Using the Stage 3 state-space form and simply adjusting it as shown above
yields the correct Stage 2 equations for trend $y_{t}^{\ast }$ and the
output gap $\tilde{y}_{t}$. With this form of the state-space model, it is
also clear that the variance-covariance matrix $\mathbf{Q}=\mathrm{Var}(%
\mathbf{S}\boldsymbol{\varepsilon }_{t})$ will be:%
\begin{align}
\mathbf{Q}& =\mathbf{SWS}^{\prime } \notag \\
& =%
\begin{bmatrix}
1 & 1 \\
0 & 0 \\
0 & 0 \\
0 & 1 \\
0 & 0%
\end{bmatrix}%
\begin{bmatrix}
\sigma _{y^{\ast }}^{2} & 0 \\
0 & (\hat{\lambda}_{g}\sigma _{y^{\ast }})^{2}%
\end{bmatrix}%
\begin{bmatrix}
1 & 1 \\
0 & 0 \\
0 & 0 \\
0 & 1 \\
0 & 0%
\end{bmatrix}%
^{\prime } \notag \\
& =%
\begin{bmatrix}
\sigma _{y^{\ast }}^{2}+(\hat{\lambda}_{g}\sigma _{y^{\ast }})^{2} & 0 & 0 &
(\hat{\lambda}_{g}\sigma _{y^{\ast }})^{2} & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
(\hat{\lambda}_{g}\sigma _{y^{\ast }})^{2} & 0 & 0 & (\hat{\lambda}%
_{g}\sigma _{y^{\ast }})^{2} & 0 \\
0 & 0 & 0 & 0 & 0%
\end{bmatrix}%
, \label{S2Qcorrect}
\end{align}%
where $(\hat{\lambda}_{g}\sigma _{y^{\ast }})^{2}$ again replaces $\sigma
_{g}^{2}$, as before. Since the $\mathbf{Q}$ matrix in \cite{holston.etal:2017} takes the form of \ref{S2Q} and not \ref{S2Qcorrect}, we can see that this `\emph{trick}' of rewriting the trend
growth equation as in the Stage 3 model specification was not applied to the
Stage 2 model. Given that the correct Stage 2 model is easily obtained from
the full Stage 3 model specification, it is not clear why the Stage 2 model
is defined incorrectly as in \ref{eq:stag2}.
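
The difference between the two specifications is immediate once the respective selection matrices are used in the computation of $\mathbf{Q}$. Continuing the earlier Python sketch with the same placeholder values:
\begin{verbatim}
import numpy as np

sigma_ystar, lam_g = 0.5, 0.05          # placeholder values
W = np.diag([sigma_ystar**2, (lam_g * sigma_ystar)**2])

# Selection matrix of the correct Stage 2 model (from the Stage 3 form)
S = np.array([[1.0, 1.0],
              [0.0, 0.0],
              [0.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])

Q = S @ W @ S.T
# Q[0,0] = sigma_y*^2 + (lam_g*sigma_y*)^2, and Q[0,3] = Q[3,0] =
# (lam_g*sigma_y*)^2: the off-diagonal terms missing from HLW's Stage 2 Q
print(np.round(Q, 6))
\end{verbatim}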
\section{Stage 3 Model\label{sec:AS3}}
The third and final stage model is defined as follows:%
\begin{align}
\mathbf{y}_{t}& =[y_{t},~\pi _{t}]^{\prime } \label{AS3:y} \\
\mathbf{x}_{t}& =[y_{t-1},~y_{t-2},~r_{t-1},~r_{t-2},~\pi _{t-1},~\pi
_{t-2,4}]^{\prime } \label{AS3:x} \\
\boldsymbol{\xi }_{t}& =[y_{t}^{\ast },~y_{t-1}^{\ast },~y_{t-2}^{\ast
},~g_{t-1},~g_{t-2},~z_{t-1},~z_{t-2}]^{\prime } \label{AS3:xi}
\end{align}%
\vsp[-11]
\begin{eqnarray*}
\mathbf{A} &=&%
\begin{bmatrix}
a_{y,1} & a_{y,2} & \frac{a_{r}}{2} & \frac{a_{r}}{2} & 0 & 0 \\
b_{y} & 0 & 0 & 0 & b_{\pi } & (1-b_{\pi })%
\end{bmatrix}%
,~\mathbf{H}=%
\begin{bmatrix}
1 & -a_{y,1} & -a_{y,2} & -\frac{a_{r}}{2} & -\frac{a_{r}}{2} & -\frac{a_{r}%
}{2} & -\frac{a_{r}}{2} \\
0 & -b_{y} & 0 & 0 & 0 & 0 & 0%
\end{bmatrix}%
, \\[5mm]
\mathbf{F} &=&%
\begin{bmatrix}
1 & 0 & 0 & 1 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0%
\end{bmatrix}%
,~\mathbf{S}=%
\begin{bmatrix}
1 & 1 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 0 \\
0 & 0 & 1 \\
0 & 0 & 0%
\end{bmatrix}%
.
\end{eqnarray*}%
The measurement and state relations are:\vsp[-11]\BAW[9]
\begin{align}
\mathbf{y}_{t}& =\mathbf{Ax}_{t}+\mathbf{H}\boldsymbol{\xi }_{t}+\boldsymbol{%
\nu }_{t} \notag \\
\begin{bmatrix}
y_{t} \\
\pi _{t}%
\end{bmatrix}%
& =%
\begin{bmatrix}
a_{y,1} & a_{y,2} & \frac{a_{r}}{2} & \frac{a_{r}}{2} & 0 & 0 \\
b_{y} & 0 & 0 & 0 & b_{\pi } & (1-b_{\pi })%
\end{bmatrix}%
\begin{bmatrix}
y_{t-1} \\
y_{t-2} \\
r_{t-1} \\
r_{t-2} \\
\pi _{t-1} \\
\pi _{t-2,4}%
\end{bmatrix}%
+%
\begin{bmatrix}
1 & -a_{y,1} & -a_{y,2} & -\frac{a_{r}}{2} & -\frac{a_{r}}{2} & -\frac{a_{r}%
}{2} & -\frac{a_{r}}{2} \\
0 & -b_{y} & 0 & 0 & 0 & 0 & 0%
\end{bmatrix}%
\begin{bmatrix}
y_{t}^{\ast } \\
y_{t-1}^{\ast } \\
y_{t-2}^{\ast } \\
g_{t-1} \\
g_{t-2} \\
z_{t-1} \\
z_{t-2}%
\end{bmatrix}%
+%
\begin{bmatrix}
\varepsilon _{t}^{\tilde{y}} \\
\varepsilon _{t}^{\pi }%
\end{bmatrix}
\label{AS3:m}
\end{align}%
\vsp[-15]\EAW and%
\begin{align}
\boldsymbol{\xi }_{t}& =\mathbf{F}\boldsymbol{\xi }_{t-1}+\mathbf{S}%
\boldsymbol{\varepsilon }_{t} \notag \\
\begin{bmatrix}
y_{t}^{\ast } \\
y_{t-1}^{\ast } \\
y_{t-2}^{\ast } \\
g_{t-1} \\
g_{t-2} \\
z_{t-1} \\
z_{t-2}%
\end{bmatrix}%
& =%
\begin{bmatrix}
1 & 0 & 0 & 1 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0%
\end{bmatrix}%
\begin{bmatrix}
y_{t-1}^{\ast } \\
y_{t-2}^{\ast } \\
y_{t-3}^{\ast } \\
g_{t-2} \\
g_{t-3} \\
z_{t-2} \\
z_{t-3}%
\end{bmatrix}%
+%
\begin{bmatrix}
1 & 1 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 0 \\
0 & 0 & 1 \\
0 & 0 & 0%
\end{bmatrix}%
\begin{bmatrix}
\varepsilon _{t}^{y^{\ast }} \\
\varepsilon _{t-1}^{g} \\
\varepsilon _{t-1}^{z}%
\end{bmatrix}%
. \label{AS3:s}
\end{align}%
In the Stage 3 model, \cite{holston.etal:2017} replace $\sigma _{g}^{2}$ and
$\sigma _{z}^{2}$ in $\mathrm{Var}(\boldsymbol{\varepsilon }_{t})=\mathbf{W}=\mathrm{diag}([\sigma _{y^{\ast }}^{2},~\sigma _{g}^{2},~\sigma
_{z}^{2}])$ with $(\hat{\lambda}_{g}\sigma _{y^{\ast }})^{2}$ and $(\hat{\lambda}_{z}\sigma _{\tilde{y}}/a_{r})^{2}$, respectively, obtained from the
two previous estimation stages, so that:\vsp[-4]%
\begin{align}
\mathrm{Var}(\mathbf{S}\boldsymbol{\varepsilon }_{t})& =\mathbf{SWS}^{\prime
} \notag \\
& =%
\begin{bmatrix}
1 & 1 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 0 \\
0 & 0 & 1 \\
0 & 0 & 0%
\end{bmatrix}%
\begin{bmatrix}
\sigma _{y^{\ast }}^{2} & 0 & 0 \\
0 & (\hat{\lambda}_{g}\sigma _{y^{\ast }})^{2} & 0 \\
0 & 0 & (\hat{\lambda}_{z}\sigma _{\tilde{y}}/a_{r})^{2}%
\end{bmatrix}%
\begin{bmatrix}
1 & 1 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 0 \\
0 & 0 & 1 \\
0 & 0 & 0%
\end{bmatrix}%
^{\prime } \notag \\[0.08in]
\mathbf{Q}& =%
\begin{bmatrix}
\sigma _{y^{\ast }}^{2}+(\hat{\lambda}_{g}\sigma _{y^{\ast }})^{2} & 0 & 0 &
(\hat{\lambda}_{g}\sigma _{y^{\ast }})^{2} & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
(\hat{\lambda}_{g}\sigma _{y^{\ast }})^{2} & 0 & 0 & (\hat{\lambda}%
_{g}\sigma _{y^{\ast }})^{2} & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & (\hat{\lambda}_{z}\sigma _{\tilde{y}}/a_{r})^{2} & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0%
\end{bmatrix}%
, \label{Q3}
\end{align}%
which enters the Kalman Filter routine and ML estimation of the final Stage
3 parameters.
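
To fix ideas on how $\mathbf{Q}$ enters the estimation, the following bare-bones Python implementation of the Kalman Filter log-likelihood for the generic form $\mathbf{y}_{t}=\mathbf{Ax}_{t}+\mathbf{H}\boldsymbol{\xi }_{t}+\boldsymbol{\nu }_{t}$, $\boldsymbol{\xi }_{t}=\mathbf{F}\boldsymbol{\xi }_{t-1}+\mathbf{S}\boldsymbol{\varepsilon }_{t}$ may be helpful. It is an illustrative skeleton only (no exact or diffuse initialization options, no numerical safeguards), not a substitute for the Matlab replication files.
\begin{verbatim}
import numpy as np

def kalman_loglik(y, x, A, H, F, Q, R, xi00, P00):
    """Gaussian log-likelihood of y_t = A x_t + H xi_t + nu_t,
    xi_t = F xi_{t-1} + S eps_t, where Q = Var(S eps_t), R = Var(nu_t)."""
    T, ll = y.shape[0], 0.0
    xi, P = xi00.copy(), P00.copy()
    for t in range(T):
        # Prediction step
        xi_p = F @ xi
        P_p = F @ P @ F.T + Q
        # Prediction error and its variance
        e = y[t] - A @ x[t] - H @ xi_p
        V = H @ P_p @ H.T + R
        Vinv = np.linalg.inv(V)
        ll += -0.5 * (e.size * np.log(2.0 * np.pi)
                      + np.log(np.linalg.det(V)) + e @ Vinv @ e)
        # Updating step
        K = P_p @ H.T @ Vinv
        xi = xi_p + K @ e
        P = P_p - K @ H @ P_p
    return ll
\end{verbatim}
Maximising this function over the free parameters, with $\mathbf{Q}$ constructed as in \ref{Q3}, yields the Stage 3 ML estimates.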
Expanding the relations in \ref{AS3:m} and \ref{AS3:s} leads to the
following measurement:%
\begin{align*}
y_{t}& =y_{t}^{\ast }+a_{y,1}(y_{t-1}-y_{t-1}^{\ast
})+a_{y,2}(y_{t-2}-y_{t-2}^{\ast })+\tfrac{a_{r}}{2}%
(r_{t-1}-g_{t-1}-z_{t-1})+\tfrac{a_{r}}{2}(r_{t-2}-g_{t-2}-z_{t-2})+%
\varepsilon _{t}^{\tilde{y}} \\
\pi _{t}& =b_{y}(y_{t-1}-y_{t-1}^{\ast })+b_{\pi }\pi _{t-1}+\left( 1-b_{\pi
}\right) \pi _{t-2,4}+\varepsilon _{t}^{\pi }
\end{align*}%
and corresponding state relations%
\begin{align}
y_{t}^{\ast }& =y_{t-1}^{\ast }+\overbrace{g_{t-2}+\varepsilon _{t-1}^{g}}%
^{g_{t-1}}+\varepsilon _{t}^{y^{\ast }} \label{eq:g_trick} \\
y_{t-1}^{\ast }& =y_{t-1}^{\ast } \notag \\
y_{t-2}^{\ast }& =y_{t-2}^{\ast } \notag \\
g_{t-1}& =g_{t-2}+\varepsilon _{t-1}^{g} \notag \\
g_{t-2}& =g_{t-2} \notag \\
z_{t-1}& =z_{t-2}+\varepsilon _{t-1}^{z} \notag \\
z_{t-2}& =z_{t-2}. \notag
\end{align}%
\bigskip Defining output $y_{t}$ once again as trend plus cycle, dropping
identities and simplifying gives the following system of Stage 3 relations:%
\bsq\label{AS3}%
\begin{align}
y_{t}& =y_{t}^{\ast }+\tilde{y}_{t} \\
\pi _{t}& =b_{\pi }\pi _{t-1}+\left( 1-b_{\pi }\right) \pi _{t-2,4}+b_{y}%
\tilde{y}_{t-1}+\varepsilon _{t}^{\pi } \\
a_{y}(L)\tilde{y}_{t}& =\tfrac{a_{r}}{2}(r_{t-1}-g_{t-1}-z_{t-1})+\tfrac{%
a_{r}}{2}(r_{t-2}-g_{t-2}-z_{t-2})+\varepsilon _{t}^{\tilde{y}} \\
y_{t}^{\ast }& =y_{t-1}^{\ast }+g_{t-1}+\varepsilon _{t}^{y^{\ast }} \\
g_{t-1}& =g_{t-2}+\varepsilon _{t-1}^{g} \\
z_{t-1}& =z_{t-2}+\varepsilon _{t-1}^{z},
\end{align}%
\esq with the corresponding vector of Stage 3 model parameters to be
estimated by MLE being:%
\begin{equation}
\boldsymbol{\theta }_{3}=[a_{y,1},~a_{y,2},~a_{r},~b_{\pi },~b_{y},~\sigma _{%
\tilde{y}},~\sigma _{\pi },~\sigma _{y^{\ast }}]^{\prime }.
\end{equation}%
For the Stage 3 model, the state vector $\boldsymbol{\xi }_{t}$ is
initialized once more as outlined in \ref{eq:P00S1a} and \fnref{fn:1}, with
the numerical values of $\boldsymbol{\xi }_{00}$ and $\mathbf{P}_{00}$ being:%
\begin{align}
\boldsymbol{\xi }_{00}& =[806.4455,~805.2851,~804.1248,~1.1604,~1.1603,~0,~0]
\label{AS3:xi00} \\
\mathbf{P}_{00}& =%
\begin{bmatrix}
0.7272 & 0.2 & 0 & 0.2009 & 0.2 & 0 & 0 \\
0.2 & 0.2 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0.2 & 0 & 0 & 0 & 0 \\
0.2009 & 0 & 0 & 0.2009 & 0.2 & 0 & 0 \\
0.2 & 0 & 0 & 0.2 & 0.2 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0.2227 & 0.2 \\
0 & 0 & 0 & 0 & 0 & 0.2 & 0.2%
\end{bmatrix}%
. \label{AS3:P00}
\end{align}
\smallskip
\section{Additional simulation results\label{sec:AS4}}
As an additional experiment, I simulate entirely unrelated univariate time
series processes as inputs into the $\mathcal{Y}_{t}$ and $\boldsymbol{\mathcal{X}}_{t}$ vector series needed for the structural break regressions
in \ref{eqS2regs}. As before, the simulated inputs that are required are the
cycle variable $\tilde{y}_{t}$, trend growth $g_{t}$, and the real rate
$r_{t}$. Because the observed real rate is constructed via the relation
$r_{t}=i_{t}-\pi _{t}^{e}$ ($\pi _{t}^{e}$ is expected inflation as defined
in \ref{pi}) and will therefore be a function of $r_{t}^{\ast }$, and hence
of $g_{t}$ and $z_{t}$, I avoid using it directly and instead fit a low
order ARMA process to $r_{t}$. I then use the coefficients from this
estimated ARMA model to generate a simulated sequence of $T$ observations of
the real interest rate. I follow the same strategy to generate a simulated
series for $\tilde{y}_{t}$. Note that I do not simply use the AR(2) model
structure for the cycle series $\tilde{y}_{t}$ that is implied by the left
hand side of \ref{S2:ytilde} together with the $a_{y,1}$ and $a_{y,2}$
estimates from the Stage 2 model. The reason for this is that the empirical
$\hat{\tilde{y}}_{t|T}$ series that \cite{holston.etal:2017} use in their
procedure is the Kalman Smoother based estimate of $\tilde{y}_{t}$, which
exhibits a more complicated autocorrelation pattern than an AR(2) process.
In order to match the autocorrelation pattern of the $\hat{\tilde{y}}_{t|T}$
series as closely as possible, I fit the best (low order) ARMA process to
$\hat{\tilde{y}}_{t|T}$ and use those coefficients to generate the simulated
cycle series.
For the $g_{t}$ series, I use four different simulation scenarios, as
sketched in the code below. First, I replace the trend growth estimate in
$\boldsymbol{\mathcal{X}}_{t}$ by the Kalman Smoother estimate of $g_{t}$
denoted by $\hat{g}_{t-1|T}$ above. This is the same series that
\cite{holston.etal:2017} use in their procedure. Second, I simulate
$g_{t-1}$ from a pure random walk (RW) process with the standard deviation
of the error term set equal to $\hat{\sigma}_{g}=0.0305205$, the implied
estimate reported in column 1 of \ref{tab:Stage2}. Third, I simulate simple
(Gaussian) white noise (WN) for $g_{t-1}$. Fourth, I fit a low order ARMA
process to the first difference of $\hat{g}_{t-1|T}$. The "\emph{empirical}"
$\hat{g}_{t-1|T}$ series is very persistent and its dynamics are not
sufficiently captured by a pure RW. I therefore use the coefficients from an
ARMA model fitted to $\Delta \hat{g}_{t-1|T}$ to simulate the first
difference process $\Delta g_{t-1}$, and then construct $g_{t-1}$ in
$\boldsymbol{\mathcal{X}}_{t}$ as the cumulative sum of the simulated
$\Delta g_{t-1}$. All simulation scenarios are based on 1000 repetitions of
sample size $T$ and the \textrm{EW} structural break test.
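
A condensed Python sketch of this simulation design is given below, using statsmodels to fit and simulate the ARMA processes. The ARMA orders shown are placeholders (in practice I select the best fitting low order specification for each series), and the commented lines indicate where the empirical inputs $\hat{\tilde{y}}_{t|T}$, $r_{t}$ and $\hat{g}_{t-1|T}$ would enter; the experiment itself is carried out in Matlab.
\begin{verbatim}
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
T, sigma_g = 200, 0.0305205             # sigma_g as reported in the text

def fit_and_simulate(series, order, T):
    """Fit an ARMA(p,0,q) model to 'series', simulate T observations."""
    res = ARIMA(series, order=order).fit()
    return res.simulate(T)

# Cycle and real rate (placeholder orders):
# ytil_s = fit_and_simulate(y_tilde_hat, (2, 0, 1), T)
# r_s    = fit_and_simulate(r,           (1, 0, 1), T)

# Four scenarios for trend growth g_{t-1}:
# (1) use the smoothed series g_hat itself;
g_rw = np.cumsum(rng.normal(0.0, sigma_g, T))    # (2) random walk
g_wn = rng.normal(0.0, sigma_g, T)               # (3) white noise
# (4) simulate Delta g from an ARMA fitted to diff(g_hat), then cumulate:
# dg_s   = fit_and_simulate(np.diff(g_hat), (1, 0, 1), T)
# g_arma = np.cumsum(dg_s)
\end{verbatim}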
In \autoref{tab:MUE2_Sim_extra}, summary statistics of the $\lambda _{z}$
estimates obtained from implementing \cites{holston.etal:2017} MUE procedure
in Stage 2 are shown. The summary statistics are means and medians, as well
as empirical probabilities of observing a $\lambda _{z}$ estimate computed
from the simulated data that is larger than the Stage 2 estimate of
$0.030217$ obtained by \cite{holston.etal:2017}. In
\autoref{fig:MUE2_Sim_extra}, I show histogram plots corresponding to the
summary statistics of the $\lambda _{z}$ estimates computed from the
simulated data. These are shown as supplementary information to complement
the summary statistics in \autoref{tab:MUE2_Sim_extra} and to avoid concerns
related to unusual simulation patterns.
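
For reference, given the vector of simulated estimates $\{\hat{\lambda}_{z}^{s}\}_{s=1}^{S}$, the entries of \autoref{tab:MUE2_Sim_extra} are simple sample statistics of the following form (Python sketch; the input file name is hypothetical):
\begin{verbatim}
import numpy as np

lam_z = np.loadtxt("lambda_z_simulated.txt")  # hypothetical input file
summary = {
    "Minimum": lam_z.min(),
    "Maximum": lam_z.max(),
    "Stdev":   lam_z.std(ddof=1),
    "Mean":    lam_z.mean(),
    "Median":  np.median(lam_z),
    # empirical frequency of exceeding HLW's Stage 2 estimate
    "Pr(>0.030217)": np.mean(lam_z > 0.030217),
}
print(summary)
\end{verbatim}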
\BT[h!]%
\caption{Summary statistics of $\lambda_z$ estimates of the Stage 2
MUE procedure applied to data simulated from unrelated univariate ARMA
processes} \centering\vspace*{-2mm}\renewcommand{\arraystretch}{1.1}%
\renewcommand\tabcolsep{7pt} \fontsize{11pt}{13pt}\selectfont%
\newcolumntype{N}{S[table-format =
3.8,round-precision = 6]}
\newcolumntype{K}{S[table-format =
5.9,round-precision = 6]}
\newcolumntype{Q}{S[table-format =
1.0,round-precision = 6]}
\begin{tabular*}{1\columnwidth}{p{40mm}NNNN}
\topline Summary Statistic & {$g_{t-1}=\hat{g}_{t-1|T}$} & {$g_{t-1}\sim
\mathrm{RW}$} & {~~~$g_{t-1}\sim \mathrm{WN}$~~~} & {\hsp[3]$\Delta
g_{t-1}\sim \mathrm{ARMA}$\hsp[3]} \\ \midrule
Minimum & 0 & 0 & 0 & 0 \\
Maximum & 0.09701863 & 0.09591360 & 0.09678872 & 0.09334030 \\
Standard deviation & 0.01523990 & 0.01585832 & 0.01680254 & 0.01633487 \\
Mean & 0.03179750 & 0.02970785 & 0.02611747 & 0.03044940 \\
Median & 0.03016459 & 0.02864655 & 0.02425446 & 0.02943455 \\
$\mathrm{Pr}(\hat{\lambda}^s_z> 0.030217)$ & 0.49800000 & 0.45600000 & 0.38400000 & 0.48200000 \\
\bottomrule
\end{tabular*}\label{tab:MUE2_Sim_extra}
\tabnotes[-3mm][.995\columnwidth][-1.25mm]{This table reports summary
statistics of the Stage 2 estimates of $\lambda_{z}$ that one obtains when
applying \cites{holston.etal:2017} MUE procedure to simulated data without
the $z_t$ process. The summary statistics that are reported are the minimum,
maximum, standard deviation, mean, median, as well as the empirical
frequency of observing a value larger than the estimate of $0.030217$
obtained by \cite{holston.etal:2017}, denoted by $\Pr
(\hat{\lambda}_{z}^{s}>0.030217)$. The columns show the estimates for the
four different data generating processes for trend growth $g_t$. The first
column reports results when the Kalman Smoothed estimate $\hat{g}_{t-1|T}$
is used for $g_{t-1}$. The second and third columns show estimates when
$g_{t-1}$ is generated as pure random walk (RW) or (Gaussian) white noise
(WN) process. The last column reports results when $g_{t-1}$ is computed as
the cumulative sum of $\Delta g_{t-1}$, which is simulated from the
coefficients obtained from a low order ARMA process fitted to
$\Delta\hat{g}_{t-1|T}$. The cycle and real rate series are also constructed
by first finding the best fitting low order ARMA processes to the individual
series and then simulating from fitted coefficients.} \ET
\begin{figure}[h]
\centering
\includegraphics[width=.825\textwidth]{MUE2c.pdf} \vspace{-0mm}
\caption{Histograms of the estimated $\{ \hat{\protect\lambda}%
_{z}^{s}\} _{s=1}^{S}$ sequence corresponding to the summary
statistics shown in \autoref{tab:MUE2_Sim_extra}}
\label{fig:MUE2_Sim_extra}
\end{figure}
Looking over the results in \autoref{tab:MUE2_Sim_extra} and the histograms
in \autoref{fig:MUE2_Sim_extra}, it is clear that there are many instances
where the estimates of $\lambda _{z}$ from the simulated data are not only
non-zero, but rather sizeable, being larger than the estimate of $\lambda
_{z}=0.030217$ that \cite{holston.etal:2017} compute from the empirical
data. Note that no $z_{t}$ process is simulated at all, yet with
\cites{holston.etal:2017} Stage 2 MUE procedure one can recover an estimate
that is at least as large as the empirical one around 40 to 50 percent of
the time, depending on how $g_{t}$ is simulated. This simulation exercise
thus highlights how spurious \cites{holston.etal:2017} MUE procedure to
estimate $\lambda _{z}$ is. Since the downward trend in the $z_{t}$ process
drives the movement in the natural rate, and the severity of this downward
trend is governed by the magnitude of $\sigma _{z}$, which enters through
$\lambda _{z}$, \cites{holston.etal:2017} estimates of the natural rate are
likely to be downward biased.
\section{Additional figures and tables\label{sec:AS_FT}}
This section presents additional figures and tables to complement the
results reported in the main text. Some of these results
are based on an expanded sample period using data that ends in 2019:Q2.
\cleardoublepage\newpage
\begin{figure}[p!]
\centering
\includegraphics[width=1\textwidth,trim={0 0 0 0},clip]
{Stage2_MUE_input_comparison_2017Q1.pdf}\vspace{-0mm}
\caption{Kalman smoothed estimates of (annualized) trend growth $g_t$
and output gap (cycle) $\tilde{y}_t$ from \cites{holston.etal:2017}
\emph{`misspecified'} Stage 2 model (HLW blue solid line) and the \emph{`correctly specified'} Stage 2 model
(MLE$(\sigma _{g}).\mathcal{M}_{0}$ red dashed line). These are used as inputs into the structural break
dummy variable regression in \ref{eqS2regs}.}
\label{fig:MUE_comp_input}
\end{figure}
\cleardoublepage\newpage
\begin{figure}[p!]
\centering
\includegraphics[width=1\textwidth,trim={0 0 0 0},clip]
{Stage2_MUE_comparison_2017Q1}\vspace{-0mm}
\caption{Sequences of $\{F(\protect\tau )\}_{\protect\tau =%
\protect\tau _{0}}^{\protect\tau _{1}}$ statistics from the structural break
dummy variable regressions in \ref{eqS2regs} for the different scenarios that are considered.}
\label{fig:MUE_comp}
\end{figure}
\cleardoublepage\newpage
\begin{figure}[p!]
\centering
\includegraphics[width=1\textwidth,trim={0 0 0
0},clip]{Fstat_MUE2_2019Q2.pdf}\vspace{-0mm}
\caption{Sequence of $\{F(\protect\tau )\}_{\protect\tau =%
\protect\tau _{0}}^{\protect\tau _{1}}$ statistics on the dummy variable
coefficients $\{\hat{\protect\zeta}_{1}(\protect\tau )\}_{\protect\tau =%
\protect\tau _{0}}^{\protect\tau _{1}}$ used in the construction of the
structural break test statistics.}
\label{fig:seqaF_2019Q2}
\end{figure}
\cleardoublepage\newpage
\BST[p!]%
\BAW[7]%
\caption{Stage 2 MUE results of $\lambda_z$ with corresponding 90\%
confidence intervals, structural break test statistics and $p-$values using data up to 2019:Q2} %
\centering\vspace*{-2mm}\renewcommand{%
\arraystretch}{1.15}\renewcommand\tabcolsep{7pt} %
\fontsize{10pt}{12pt}\selectfont %
\newcolumntype{N}{S[table-format = 1.6,round-precision = 6]} %
\newcolumntype{K}{S[table-format = 1.5,round-precision = 6]} %
\newcolumntype{Q}{S[table-format = 1.4,round-precision = 6]} %
\begin{tabular*}{1.07\columnwidth}{p{7mm}NNNNNNNp{0mm}NNNNNN}
\topline
\multirow{2}{*}{\hsp[2]$\lambda_{z}$}
& \multicolumn{7}{c}{Time varying $\boldsymbol{{\phi}}$}
&& \multicolumn{6}{c}{Constant $\boldsymbol{{\phi}}$} \\ \cmidrule(rr){2-8} \cmidrule(rr){10-15}
& {HLW.R-File} & {\hsp[-2] Replicated \hsp[-2]} & {[90\% CI]} & {\hsp[-2]
MLE($\sigma_g$) \hsp[-2]} & {[90\% CI]} & {\hsp[-2]
MLE($\sigma_g$).$\mathcal{M}_0$ \hsp[-2]} & {[90\% CI]} & {} & {\hsp[-2]
Replicated \hsp[-2]} & {[90\% CI]} & {\hsp[-2] MLE($\sigma_g$) \hsp[-2]} &
{[90\% CI]} & {\hsp[-2] MLE($\sigma_g$).$\mathcal{M}_0$ \hsp[-2]} & {[90\%
CI]}
\\
\midrule
$L$ & {---} & 0 & {[0, 0.02]} & 0 & {[0, 0.00]} & 0.01159724 & {[0, 0.07]} && 0 & {[0, 0.02]} & 0 & {[0, 0.00]} & 0.01159724 & {[0, 0.05]} \\
MW & 0.03169923666015 & 0.03169924 & {[0, 0.14]} & 0.03925465 & {[0, 0.17]} & 0.01462208 & {[0, 0.08]} && 0 & {[0, 0.03]} & 0 & {[0, 0.02]} & 0.01104361 & {[0, 0.06]} \\
EW & 0.03520151475188 & 0.03520152 & {[0, 0.13]} & 0.04041613 & {[0, 0.14]} & 0.01477308 & {[0, 0.07]} && 0 & {[0, 0.03]} & 0 & {[0, 0.02]} & 0.01227187 & {[0, 0.06]} \\
QLR & 0.04402871075924 & 0.04402873 & {[0, 0.15]} & 0.04800876 & {[0, 0.16]} & 0.02265371 & {[0, 0.09]} && 0 & {[0, 0.05]} & 0 & {[0, 0.04]} & 0.02006166 & {[0, 0.07]} \\
\midrule
\multirow{1}{*}{} & \multicolumn{14}{c}{Structural break test statistics ($p-$values in parentheses)\hsp[0]} \\
\midrule
$L$ & {---} & 0.04985073 & {(0.8750)} & 0.03736874 & {(0.9450)} & 0.15984012 & {(0.3600)} && 0.04985073 & {(0.8750)} & 0.03736874 & {(0.9450)} & 0.15984012 & {(0.3600)} \\
MW & 2.67857637252549 & 2.67857737 & {(0.0600)} & 3.79606247 & {(0.0200)} & 1.10732323 & {(0.3050)} && 0.33027268 & {(0.8100)} & 0.25484507 & {(0.8850)} & 0.92809881 & {(0.3750)} \\
EW & 2.48662072082676 & 2.48662194 & {(0.0300)} & 3.14005922 & {(0.0100)} & 0.73638858 & {(0.3000)} && 0.20088486 & {(0.7900)} & 0.15008242 & {(0.8750)} & 0.64238445 & {(0.3450)} \\
QLR & 12.22151692530160 & 12.22152141 & {(0.0100)} & 13.64872048 & {(0.0050)} & 5.98786206 & {(0.1600)} && 2.76790912 & {(0.5900)} & 2.17481293 & {(0.7250)} & 5.43201383 & {(0.2000)} \\
\bottomrule
\end{tabular*}\label{Atab:Stage2_lambda_z_2019}
\tabnotes[-2.5mm][1.06\columnwidth][-0.5mm]{This table reports the Stage 2
estimates of $\lambda _{z}$ for the different
$\boldsymbol{\theta }_{2}$ estimates corresponding to the \emph{%
"misspecified"} and \emph{"correctly specified"} Stage 2 models reported in %
\autoref{tab:Stage2} using data updated to 2019:Q2. The table is split into
two column blocks, showing the
results for the \emph{"time varying} $\boldsymbol{\phi }$" and \emph{%
"constant} $\boldsymbol{\phi }$" scenarios in the left and right blocks,
respectively. In the bottom half of the table, the four different structural
break test statistics for the considered models are shown. The results under
the heading `HLW.R-File' show the $\lambda _{z}$ estimates obtained from
running \cites{holston.etal:2017} R-Code for the Stage 2 model as reference
values. The second column `Replicated' shows my replicated results. Under
the heading `MLE($\sigma _{g}$)', results for the \emph{"misspecified}"
Stage 2 model are shown with $\sigma _{g}$ estimated directly by MLE rather
than from the first stage estimate of $\lambda _{g}$. Under the heading `MLE(%
$\sigma _{g}$).$\mathcal{M}_{0}$', results for the \emph{"correctly
specified"} Stage 2 model are reported where $\sigma _{g}$ is again
estimated by MLE. The values in square brackets in the top half of the table
report 90\% confidence intervals for $\lambda _{z}$ computed from %
\cites{stock.watson:1998} tabulated values provided in their GAUSS\ files.
These were divided by sample size $T$ to make them comparable to $\lambda
_{z}$. In the bottom panel, $p-$values of the various structural break tests
are reported in round brackets. These were also extracted from %
\cites{stock.watson:1998} GAUSS\ files.}%
\EAW \EST%
\cleardoublepage\newpage
\BT[p!] \caption{Stage 3 parameter estimates using data up to
2019:Q2}\centering\vspace*{-2mm}\renewcommand{\arraystretch}{1.1}%
\renewcommand\tabcolsep{7pt}\fontsize{11pt}{13pt}\selectfont%
\newcolumntype{N}{S[table-format = 4.8,round-precision = 8]} %
\newcolumntype{U}{S[table-format = 4.8,round-precision = 8]} %
\newcolumntype{L}{S[table-format = 4.8,round-precision = 8]} \BAW[7]
\begin{tabular*}{1.085\columnwidth}{p{27mm}NNNNN}
\topline
\hsp[5]$\boldsymbol{\theta }_{3}$
& {\hsp[2]HLW.R-File\hsp[-4]}
& {\hsp[2]Replicated\hsp[-4]}
& {\hsp[2]MLE($\sigma_g|\lambda_z^{\mathrm{HLW}})$\hsp[-3]}
& {\hsp[3]MLE($\sigma_g|\lambda_z^{\mathcal{M}_0})$\hsp[-4]}
& {\hsp[2]MLE($\sigma_g,\sigma_z)$\hsp[-4]}\\
\midrule
$\hsp[3]a_{y,1} $ & 1.5387645830 & 1.5387645934 & 1.5108322346 & 1.5165911490 & 1.5166962002 \\
$\hsp[3]a_{y,2} $ & -0.5970026371 & -0.5970026497 & -0.5705368374 & -0.5763753973 & -0.5764593051 \\
$\hsp[3]a_{r} $ & -0.0685404334 & -0.0685404321 & -0.0756111341 & -0.0702967130 & -0.0700067459 \\
$\hsp[3]b_{\pi } $ & 0.6733154496 & 0.6733154483 & 0.6763890042 & 0.6746345037 & 0.6748341057 \\
$\hsp[3]b_{y} $ & 0.0775545028 & 0.0775545062 & 0.0745405463 & 0.0788842702 & 0.0788524299 \\
$\hsp[3]\sigma _{\tilde{y}}$ & 0.3359069291 & 0.3359069170 & 0.3359838098 & 0.3474767025 & 0.3482610569 \\
$\hsp[3]\sigma _{\pi } $ & 0.7881255370 & 0.7881255365 & 0.7892181385 & 0.7885495236 & 0.7886203640 \\
$\hsp[3]\sigma _{y^{\ast }}$ & 0.5757731876 & 0.5757731957 & 0.5678951962 & 0.5635923645 & 0.5632780334 \\
$\hsp[3]\sigma _{g}$ {(implied)} & (0.03082331) & (0.03082331) & 0.0451784946 & 0.0438616880 & 0.0437898193 \\
$\hsp[3]\sigma _{z}$ {(implied)} & (0.17251762) & (0.17251762) & 0.1564206011 & 0.0606598161 & 0.0525049366 \\
$\hsp[3]\lambda_g $ {(implied)} & 0.0535337714 & 0.0535337714 & (0.07955428) & (0.07782519) & (0.07774103) \\
$\hsp[3]\lambda_z $ {(implied)} & 0.0352015148 & 0.0352015148 & (0.03520151) & (0.01227186) & (0.01055443) \\ \cmidrule(ll){1-6}
{Log-likelihood} & -533.3698452415 & -533.3698455030 & -533.1654750148 & -532.8287486005 & -532.8263754097 \\
\bottomrule
\end{tabular*}\label{Atab:S3_2019}%
\tabnotes[-2.6mm][1.08\columnwidth][-4.0mm]{This table reports replication
results for the Stage 3 model parameter vector $\boldsymbol{\theta }_{3}$ of
\cite{holston.etal:2017}. The first column (HLW.R-File) reports estimates
obtained by running \cites{holston.etal:2017} R-Code for the Stage 3 model.
The second column (Replicated) shows the replicated results using the same set-up as in \cites{holston.etal:2017}. The third column
(MLE($\sigma_g|\lambda_z^{\mathrm{HLW}})$) reports estimates when $\sigma
_{g}$ is directly estimated by MLE together with the other parameters of the
Stage 3 model, while $\lambda _{z}$ is held fixed at
$\lambda _{z}^{\mathrm{HLW}}=0.035202$ obtained from \cites{holston.etal:2017} \emph{"misspecified"} Stage 2 procedure. In the
fourth column (MLE($\sigma_g|\lambda_z^{\mathcal{M}_0})$), $\sigma _{g}$ is
again estimated directly by MLE together with the other parameters of the
Stage 3 model, but with $\lambda _{z}$ now fixed at $\lambda
_{z}^{\mathcal{M}_{0}}=0.012272$ obtained from the \emph{"correctly
specified"} Stage 2 model in \ref{S2full0}. The last column
(MLE($\sigma_g,\sigma_z)$) shows estimates when all parameters are computed
by MLE. Values in round brackets give the implied $\{\sigma _{g}, \sigma
_{z}\}$ or $\{\lambda _{g},\lambda _{z}\}$ values when either is fixed or
estimated. The last row (Log-likelihood) reports the value of the
log-likelihood function at these parameter estimates. The Matlab file
\texttt{Stage3\_replication.m} replicates these results.} \EAW \ET
\cleardoublepage\newpage
\begin{figure}[p!]
\BAW[10]\centering\vspace{-15mm}
\includegraphics[width=1.05\textwidth,trim={0 0 0
0},clip]{Stage3_estimates_filtered_HLW_prior_2019Q2.pdf} \vspace{-3mm} \EAW
\caption{Filtered estimates of the natural rate $r^{\ast}_t$, annualized
trend growth $g_t$, \emph{`other factor'} $z_t$, and the output gap (cycle)
variable $\tilde{y}_t$ up to 2019:Q2.}
\label{Afig:2019KF}
\end{figure}
\cleardoublepage\newpage
\begin{figure}[p!]
\BAW[10]\centering\vspace{-15mm}
\includegraphics[width=1.05\textwidth,trim={0 0 0
0},clip]{Stage3_estimates_smoothed_HLW_prior_2019Q2.pdf} \vspace{-3mm} \EAW
\caption{Smoothed estimates of the natural rate $r^{\ast}_t$, annualized
trend growth $g_t$, \emph{`other factor'} $z_t$, and the output gap (cycle)
variable $\tilde{y}_t$ up to 2019:Q2.}
\label{Afig:2019KS}
\end{figure}
\cleardoublepage\newpage
\vspace*{-13mm}
\begin{figure}[H]
\centering
\includegraphics[width=.96\textwidth,trim={0 0 0 0},clip]{recursive_GDP.pdf} \vspace{0mm}
\caption{GDP growth and recursively estimated mean of GDP growth from 2009:Q3 to 2019:Q2.}
\label{Afig:rec_mean}%
\end{figure}
\vspace*{1mm}
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth,trim={0 0 0 0},clip]{SPF_10_year_expected_real_GDP_growth}\vspace{-0mm}
\caption{Mean and median 10 year (real) GDP growth forecasts from the Survey of
Professional Forecasters (SPF) obtained from
\url{https://www.philadelphiafed.org/research-and-data/real-time-center/survey-of-professional-forecasters/data-files/rgdp10}.
The blue shaded region marks the $25^{th}$ to $75^{th}$ percentile
region of the cross-section of forecasters.}
\label{Afig:SPF_GDP_growth} %
\end{figure}
\vspace*{2mm}
\begin{figure}[H]
\centering
\includegraphics[width=.99\textwidth,trim={0 0 0 0},clip]{expected_gdp_growth.pdf} \vspace{0mm}
\caption{GMSU-Vanguard survey based expected 3 year and 10 year (real) GDP growth from February 2017
to April 2020, taken from Figure II on page 5 in \cite{giglio.etal:2020} (see the appendix in
\cite{giglio.etal:2020} for more details on the design of the client/investor survey).}
\label{Afig:giglio_GDP_growth} %
\end{figure}
\clearpage
\section*{{Figures and Tables}}\addcontentsline{toc}{section}{Figures and Tables}
\vspace*{05mm}
\begin{figure}[h!]
\centering
\includegraphics[width=1\textwidth,rotate=00,trim={0 0 0 0},clip]{gdp_growth_inflation_interest_rates_since_1947.pdf} %
\vspace{-06mm}
\caption{Inflation, interest rates, and GDP growth (annualized) from 1947:Q1 to 2019:Q2.}
\label{fig:HLW_factors}
\end{figure}
\cleardoublepage\newpage
\BT[p!]\caption{Summary statistics of GDP growth over various sub-periods and expansion periods only} %
\centering\vspace*{-2mm}\renewcommand{\arraystretch}{1.1}\renewcommand %
\tabcolsep{7pt}\fontsize{11pt}{13pt}\selectfont %
\newcolumntype{N}{S[table-format = 2.5,round-precision = 4]} %
\newcolumntype{K}{S[table-format = 5.2,round-precision = 0]} %
\begin{tabular*}{.97\columnwidth}{p{45mm}NNNKNN} %
\topline %
{\hsp[10] Time period} & %
{Mean} & %
{Median} & %
{Stdev} & %
{$T$} & %
{Stderr} & %
{HAC-Stderr} \\ \midrule %
{\hsp[3] 1947:Q2 $-$ 1981:Q3} & 3.57460623 & 3.45005454 & 4.62614839 & 138 & 0.39380390 & 0.48408048 \\
{\hsp[3] 1983:Q1 $-$ 2001:Q1} & 3.64188790 & 3.68619125 & 2.29561133 & 73 & 0.26868098 & 0.40155454 \\
{\hsp[3] 1947:Q2 $-$ 2001:Q1} & 3.46739214 & 3.48094368 & 4.04057864 & 216 & 0.27492655 & 0.35302270 \\ \cmidrule(ll){1-7}
{\hsp[3] 1947:Q2 $-$ 1948:Q4} & 2.79515476 & 2.28504979 & 3.40717233 & 7 & 1.28779009 & 1.25301772 \\
{\hsp[3] 1950:Q1 $-$ 1953:Q2} & 7.34709885 & 7.11936691 & 4.92757075 & 14 & 1.31694868 & 1.70601756 \\
{\hsp[3] 1954:Q3 $-$ 1957:Q3} & 3.93896808 & 3.90045256 & 3.65372953 & 13 & 1.01336224 & 1.29851867 \\
{\hsp[3] 1958:Q3 $-$ 1960:Q2} & 5.38582377 & 8.25098474 & 4.77957417 & 8 & 1.68983465 & 1.76820793 \\
{\hsp[3] 1961:Q2 $-$ 1969:Q4} & 4.78091472 & 4.34276189 & 2.98499125 & 35 & 0.50455561 & 0.61041410 \\
{\hsp[3] 1971:Q1 $-$ 1973:Q4} & 4.96221193 & 4.05503386 & 3.82343827 & 12 & 1.10373156 & 0.90409085 \\
{\hsp[3] 1975:Q2 $-$ 1980:Q1} & 4.17954338 & 2.94228977 & 3.66368599 & 20 & 0.81922509 & 0.74844011 \\
{\hsp[3] 1980:Q4 $-$ 1981:Q3} & 4.23481003 & 6.07631882 & 4.98886112 & 4 & 2.49443056 & 1.96043173 \\
{\hsp[3] 1983:Q1 $-$ 1990:Q3} & 4.17109994 & 3.80992734 & 2.21941490 & 31 & 0.39861868 & 0.63002180 \\
{\hsp[3] 1991:Q2 $-$ 2001:Q1} & 3.55219201 & 3.62908024 & 1.88730051 & 40 & 0.29840841 & 0.30730490 \\ \cmidrule(ll){1-7}
{\hsp[3] 2002:Q1 $-$ 2007:Q4} & 2.85460101 & 2.47095951 & 1.49946085 & 24 & 0.30607617 & 0.33238451 \\
{\hsp[3] 2009:Q3 $-$ 2019:Q2} & 2.28637245 & 2.25280082 & 1.47396679 & 40 & 0.23305461 & 0.19459307 \\
\bottomrule
\end{tabular*}
\tabnotes[-2.5mm][.965\columnwidth][-2.0mm]{This table reports estimates of
trend growth computed as the \textit{`average'} of annualized GDP\ growth
over various sub-periods and over expansion periods only. Columns 2 to 5
report means, medians, standard deviations (Stdev) and sample sizes ($T$)
for the different sub-periods that are listed in column 1. The last two
columns provide simple (Stderr) and HAC\ robust (HAC-Stderr) standard errors
of the sample mean. The first three rows show time periods that include
recession as well as expansion periods, over which GDP growth was on average
larger and/or more volatile than in the last two rows (which exclude the
global financial crisis recession period). The ten rows in the middle provide
} \label{tab:sumstatGDP}%
\ET
\cleardoublepage\newpage
\BT[p!]%
\caption{Replicated results of Tables 4 and 5 in
\cite{stock.watson:1998}}\centering\vspace*{-2mm} \renewcommand{%
\arraystretch}{1.1}\renewcommand\tabcolsep{7pt}\fontsize{11pt}{13pt}%
\selectfont \newcolumntype{K}{S[table-format = 4.5,round-precision = 4]} %
\newcolumntype{L}{S[table-format = 2.7,round-precision = 4]} %
\newcolumntype{U}{S[table-format = 1.2,round-precision = 2]} %
\newcolumntype{N}{S[table-format = 3.6,round-precision = 4]}
\begin{tabular*}{\columnwidth}{p{30mm}NLKp{10mm}Up{1mm}Kp{8mm}U}
\topline
Test& \multicolumn{1}{c}{Statistic} & \multicolumn{1}{c}{\hsp[-4]$p-$value} & \multicolumn{1}{c}{\hsp[4]$\lambda$} & \multicolumn{2}{c}{90\% CI} & &\multicolumn{1}{c}{\hsp[4]$\sigma_{\Delta \beta}$} & \multicolumn{2}{c}{90\% CI} \\
\midrule
$L$ & 0.209398 & 0.25 & 4.055867 & [0, & \llap{19.36]} & & 0.130303 & [0, & \llap{0.62]} \\
MW & 1.158779 & 0.285 & 3.433543 & [0, & \llap{18.76]} & & 0.11031 & [0, & \llap{0.60]} \\
EW & 0.682116 & 0.325 & 3.071203 & [0, & \llap{17.01]} & & 0.098669 & [0, & \llap{0.54]} \\
QLR & 3.310513 & 0.48 & 0.778615 & [0, & \llap{13.26]} & & 0.025015 & [0, & \llap{0.42]} \\[1mm]
\end{tabular*}
\renewcommand{\arraystretch}{1.1}\renewcommand\tabcolsep{7pt} %
\fontsize{11pt}{13pt}\selectfont
\newcolumntype{K}{S[table-format = 3.4,round-precision = 4]} %
\newcolumntype{L}{S[table-format = 2.0,round-precision = 2]} %
\newcolumntype{U}{S[table-format = 1.2,round-precision = 2]} %
\newcolumntype{N}{S[table-format = 4.8,round-precision = 8]}
\begin{tabular*}{\columnwidth}{p{30mm}NNNNN}
\topline
Parameter & \multicolumn{1}{r}{MPLE\hsp[4]} & \multicolumn{1}{r}{MMLE\hsp[3]} & \multicolumn{1}{r}{MUE(0.13)} & \multicolumn{1}{r}{MUE(0.62)} & \multicolumn{1}{r}{SW.GAUSS} \\
\midrule
$\sigma_{\Delta\beta}$ & 0 & 0.044400981501 & {0.13~~~~~~} & {0.62~~~~~~} & {0.13~~~~~~} \\
$\sigma_{\varepsilon}$ & 3.851994804846 & 3.858594227694 & 3.846619226998 & 3.782106578788 & 3.846619168993 \\
AR(1) & 0.337083211459 & 0.340252340453 & 0.335014533140 & 0.315444720280 & 0.335014539465 \\
AR(2) & 0.128903279894 & 0.130746074778 & 0.127423133110 & 0.120156417873 & 0.127423091443 \\
AR(3) & -0.009173836441 & -0.007251079335 & -0.010170600327 & -0.014889876797 & -0.010170523754 \\
AR(4) & -0.085644420982 & -0.082478616377 & -0.086802971766 & -0.091560659988 & -0.086802976773 \\
$\beta_{00}$ & 1.795899355603 & {~~~~~~---} & 2.440999263153 & 2.671500070276 & 2.440999404153 \\
\cmidrule(lr){1-6}
{Log-likelihood} & -539.772747031207 & -547.480464499946 & -540.692677059246 & -544.907181136766 & -540.692677059247 \\
\bottomrule
\end{tabular*}
\tabnotes[-3mm][.99\columnwidth][-1.0mm]{This table reports replication
results that correspond to Tables 4 and 5 in \cite{stock.watson:1998} on
page 354. The top part of the table shows the 4 different structural break
test statistics together with their $p-$values in the first two columns,
followed by the corresponding MUE estimates of $\lambda $ with 90\% CIs in
square brackets. The last two columns show the implied $\sigma _{\Delta
\beta }$ estimate computed from $T^{-1}{\lambda}\times {\sigma}_{\varepsilon
}/{a}(1)$ and 90\% CIs in square brackets. The first two columns of the
bottom part of the table report results from Maximum Likelihood based
estimation, where MPLE estimates the initial value of the state vector
$\beta_{00}$, while MMLE uses a diffuse prior for the initial value of the
state vector with mean zero and the variance set to $10^6$. Columns under
the heading MUE(0.13) and MUE(0.62) show Median Unbiased Estimates when
$\sigma_{\Delta \beta}$ is held fixed at 0.13, respectively, 0.62, which
correspond to the estimate of $\sigma_{\Delta \beta}$ when $\lambda$ is
computed using \cites{nyblom:1989} $L$ test (and its upper $90\%$ CI). The
last column under the heading SW.GAUSS lists the corresponding MUE(0.13)
estimates obtained from running \cites{stock.watson:1998} GAUSS code. The
row Log-likelihood displays the value of the log-likelihood at the reported
parameter estimates.
The Matlab file \texttt{SW1998\_MUE\_replication.m} replicates these results.}%
\label{tab:sw98_T4}\ET
\cleardoublepage\newpage
\begin{figure}[p!]
\centering
\includegraphics[width=1\textwidth,rotate=00,trim={0 129mm 0
81mm},clip]{SW98trend_growth_all_2017} %
\caption{Smoothed trend growth estimates of US real GDP per capita.}
\label{fig:sw98_F4}
\end{figure}
\cleardoublepage\newpage
\BT[p!]%
\caption{Broader replication results of Tables 4 and 5 in
\cite{stock.watson:1998} using per capita real GDP data from the Federal
Reserve
Economic Data database (FRED2)}\centering\vspace*{-2mm} \renewcommand{%
\arraystretch}{1.1}\renewcommand\tabcolsep{7pt}\fontsize{11pt}{13pt}%
\selectfont \newcolumntype{K}{S[table-format = 4.5,round-precision = 4]} %
\newcolumntype{L}{S[table-format = 2.7,round-precision = 4]} %
\newcolumntype{U}{S[table-format = 1.2,round-precision = 2]} %
\newcolumntype{N}{S[table-format = 3.6,round-precision = 4]}
\begin{tabular*}{\columnwidth}{p{30mm}NLKp{10mm}Up{1mm}Kp{8mm}U}
\topline
Test& \multicolumn{1}{c}{Statistic} & \multicolumn{1}{c}{\hsp[-4]$p-$value} & \multicolumn{1}{c}{\hsp[4]$\lambda$} & \multicolumn{2}{c}{90\% CI} & &\multicolumn{1}{c}{\hsp[4]$\sigma_{\Delta \beta}$} & \multicolumn{2}{c}{90\% CI} \\
\midrule
$L$ & 0.046701 & 0.895 & 0.000000 & [0, & \llap{4.099]} & & 0.000000 & [0, & ~~~\llap{0.1092]} \\
MW & 0.251436 & 0.890 & 0.000000 & [0, & \llap{4.296]} & & 0.000000 & [0, & ~~~\llap{0.1145]} \\
EW & 0.132064 & 0.900 & 0.000000 & [0, & \llap{3.910]} & & 0.000000 & [0, & ~~~\llap{0.1042]} \\
QLR & 0.883403 & 0.980 & 0.000000 & [0, & \llap{0.000]} & & 0.000000 & [0, & ~~~~\llap{0.0000]} \\[1mm]
\end{tabular*}
\renewcommand{\arraystretch}{1.1}\renewcommand\tabcolsep{7pt} %
\fontsize{11pt}{13pt}\selectfont
\newcolumntype{K}{S[table-format = 3.4,round-precision = 4]} %
\newcolumntype{L}{S[table-format = 2.0,round-precision = 2]} %
\newcolumntype{U}{S[table-format = 1.2,round-precision = 2]} %
\newcolumntype{N}{S[table-format = 4.9,round-precision = 8]} %
\begin{tabular*}{\columnwidth}{p{44mm}NNNN}
\topline
Parameter
& \multicolumn{1}{r}{MPLE\hsp[6]}
& \multicolumn{1}{r}{MMLE\hsp[5]}
& \multicolumn{1}{r}{MUE($\sigma_{\Delta \beta}^L$)\hsp[2]}
& \multicolumn{1}{r}{MUE($\mathrm{CI}-\sigma_{\Delta \beta}^L$)\hsp[-2]} \\
\midrule
$\sigma_{\Delta\beta}$ & 0 & 0 & 0 & 0.10926099 \\
$\sigma_{\varepsilon}$ & 3.86603366 & 3.87619022 & 3.86603367 & 3.87574722 \\
AR(1) & 0.31646541 & 0.32120674 & 0.31646541 & 0.32138794 \\
AR(2) & 0.14652905 & 0.14903845 & 0.14652905 & 0.14924197 \\
AR(3) & -0.11122061 & -0.10873408 & -0.11122061 & -0.10846721 \\
AR(4) & -0.09512645 & -0.09050024 & -0.09512645 & -0.08983094 \\
$\beta_{00}$ & 2.12011198 & {~~~~---} & 2.12011200 & 2.07784473 \\ \cmidrule(ll){1-5}
{Log-likelihood} & -540.49919714 & -548.38308851 & -540.49919714 & -541.89394940 \\
\bottomrule
\end{tabular*}
\tabnotes[-3mm][.99\columnwidth][-1.0mm]{This table reports replication
results that correspond to Tables 4 and 5 in \cite{stock.watson:1998} on
page 354, but now using real GDP per capita data (2012 chained dollars)
obtained from the Federal Reserve Economic Data database (FRED2) with series
ID: A939RX0Q048SBEA. The top part of the table shows the 4 different
structural break test statistics together with their $p-$values in the first
two columns, followed by the corresponding MUE estimates of $\lambda $ with
90\% CIs in square brackets. The last two columns show the implied $\sigma
_{\Delta \beta }$ estimate computed from $T^{-1}{\lambda}\times
{\sigma}_{\varepsilon }/{a}(1)$ and 90\% CIs in square brackets. The first
two columns of the bottom part of the table report results from Maximum
Likelihood based estimation, where MPLE estimates the initial value of the
state vector $\beta_{00}$, while MMLE uses a diffuse prior for the initial
value of the state vector with mean zero and the variance set to $10^6$.
Columns under the heading MUE($\sigma_{\Delta \beta}^L$) and
MUE($\mathrm{CI}-\sigma_{\Delta \beta}^L$) show Median Unbiased Estimates
when $\sigma_{\Delta \beta}$ is held fixed again at \cites{nyblom:1989} $L$
test statistic based structural break estimate, respectively, when the upper
$90\%$ CI value is used. The row Log-likelihood displays the value of the
log-likelihood at the reported parameter estimates. The sample period is the
same as in \cite{stock.watson:1998}, that is, from 1947:Q2 to 1995:Q4.
The Matlab file \texttt{estimate\_percapita\_trend\_growth\_v1.m} replicates these results.}%
\label{tab:sw98_T4_2}\ET
\cleardoublepage\newpage
\BT[p!]\caption{Stage 1 parameter estimates}\centering\vspace*{-2mm}%
\renewcommand{\arraystretch}{1.1}\renewcommand\tabcolsep{7pt}%
\fontsize{11pt}{13pt}\selectfont%
\newcolumntype{N}{S[table-format =
3.6,round-precision = 6]}
\begin{tabular*}{1\columnwidth}{p{30mm}NNNNp{0mm}NN}
\topline
\multirow{2}{*}{\hsp[4]$\boldsymbol{\theta }_{1}$} & \multicolumn{4}{c}{HLW Prior} & & \multicolumn{2}{c}{Diffuse Prior} \\ \cmidrule(rr){2-5} \cmidrule(rr){7-8}
& {\hsp[0.5]HLW.R-File\hsp[-1.5]} & {\hsp[0.5]$b_{y}\geq 0.025$\hsp[-1.5]} & {\hsp[.5]Alt.Init.Vals\hsp[-1.5]} & {\hsp[3]$b_{y}$ Free} & &{\hsp[0.5]$b_{y}\geq 0.025$\hsp[-1.5]}&{\hsp[3]$b_{y}$ Free} \\
\midrule
$\hsp[3]a_{y,1}$ & 1.51706947457 & 1.51706921391 & 1.55766702528 & 1.45969739764 & & 1.64644427 & 1.56782968 \\
$\hsp[3]a_{y,2}$ & -0.52880389083 & -0.52880365096 & -0.62244312676 & -0.46382830448 & & -0.67273238 & -0.57782969 \\
$\hsp[3]b_{\pi }$ & 0.71249409600 & 0.71249401280 & 0.66995668421 & 0.72908945897 & & 0.71787120 & 0.73306306 \\
$\hsp[3]b_{y}$ & 0.02500000000 & 0.02500000000 & 0.09718496555 & 0.00574089531 & & 0.02500000 & -0.00094678 \\
$\hsp[3]g$ & 0.77639673232 & 0.77639643600 & 0.74377517665 & 0.77947223337 & & 0.70948301 & 0.60465544 \\
$\hsp[3]\sigma _{\tilde{y}}$ & 0.53494302228 & 0.53494313648 & 0.40538024505 & 0.61719013907 & & 0.40636026 & 0.47407731 \\
$\hsp[3]\sigma _{\pi}$ & 0.80773582307 & 0.80773566453 & 0.79068291968 & 0.81180107613 & & 0.80917055 & 0.81302356 \\
$\hsp[3]\sigma _{y^{\ast }}$ & 0.51191069710 & 0.51191040251 & 0.61768886637 & 0.41897679640 & & 0.57474708 & 0.53075566 \\
\cmidrule(lr){1-8}
{Log-likelihood} & -531.87471383407 & -531.87471383414 & -531.45144619596 & -531.05106629477 & & -536.98033619 & -535.95961197 \\
\bottomrule
\end{tabular*}%
\tabnotes[-3mm][.99\columnwidth][-1.0mm]{This table reports replication
results for the Stage 1 model parameter vector $\boldsymbol{\theta }_{1}$ of
\cite{holston.etal:2017}. The table is split in two blocks. The left block
(under the heading HLW Prior) reports estimation results of the Stage 1
model using the initialisation of \cite{holston.etal:2017} for the state
vector $\boldsymbol{\xi }_{t}$, where $\boldsymbol{\xi
}_{00}=[806.45,805.29,804.12]$ and $\mathbf{P}_{00}$ as defined in
\ref{eq:P00S1c}. The right block (under the heading Diffuse Prior) uses a
diffuse prior for $\boldsymbol{\xi }_{t}$ with
$\mathbf{P}_{00}=10^{6}\times\mathbf{I}_{3}$, where $\mathbf{I}_{3}$ is a 3
dimensional identity matrix. In the left block, 4 sets of results are
reported. The first column (HLW.R-File) reports estimates obtained by
running \cites{holston.etal:2017} R-Code for the Stage 1 model. The second
column ($b_{y}\geq 0.025$) shows estimation results using
\cites{holston.etal:2017} initial values for parameter vector
$\boldsymbol{\theta }_{1}$ in the optimisation routine, together with the
lower bound restriction $b_{y}\geq 0.025$. \fnref{FN:initVals} describes how
these initial values were found. The third column (Alt.Init.Vals) shows
estimates when alternative initial values for $\boldsymbol{\theta }_{1}$ are
used, with the $b_{y}\geq 0.025$ restriction still in place. The fourth
column ($b_{y}$ Free)\ reports estimates when the restriction on $b_{y}$ is
removed. The right column block displays estimates of $\boldsymbol{\theta
}_{1}$ with and without the restriction on $b_{y}$ being imposed, but with a
diffuse prior on the state vector. The last row (Log-likelihood) reports the
value of the log-likelihood function at these parameter estimates.
The Matlab file \texttt{Stage1\_replication.m} computes these results.}\label%
{tab:Stage1}\ET
\clearpage\newpage
\BT[p!]%
\caption{Stage 1 MUE results of $\lambda_g$ for various
$\skew{0}\boldsymbol{\hat{\theta}}_{1}$ and structural break tests} %
\centering\vspace*{-2mm} \renewcommand{\arraystretch}{1.1}%
\renewcommand\tabcolsep{7pt} \fontsize{11pt}{13pt}\selectfont
\newcolumntype{N}{S[table-format =
2.8,round-precision = 7]}
\begin{tabular*}{1\columnwidth}{p{22mm}NNNNp{0mm}NS}
\topline
\multirow{2}{*}{\hsp[2]${\lambda}_{g}$}
& \multicolumn{4}{c}{HLW Prior}
&& \multicolumn{2}{c}{Diffuse Prior} \\ \cmidrule(lr){2-5} \cmidrule(lr){7-8}
& {HLW.R-File} & {$b_{y}\geq 0.025$} & {Alt.Init.Vals} & {$b_y$ Free} & & {$b_{y}\geq 0.025$} & {$b_y$ free} \\
\midrule
{$L$} & {---} & 0.073287980348 & 0.094199118618 & 0.032862425776 & & 0.047520315055 & 0 \\
{MW} & 0.06518061332957 & 0.065180695002 & 0.089453277795 & 0.031465385660 & & 0.041827386386 & 0 \\
{EW} & 0.05386903777145 & 0.053869107878 & 0.080675754152 & 0.025383515899 & & 0.042378997103 & 0 \\
{QLR} & 0.04938177758158 & 0.049381833434 & 0.079201458499 & 0.019428919676 & & 0.041187690190 & 0 \\
\bottomrule
\end{tabular*}\label{tab:Stage1_lambda_g}
\tabnotes[-3mm][.99\columnwidth][-1.0mm]{This table reports Stage 1
estimates of the ratio $\lambda _{g}=\sigma_{g}/\sigma _{y^{\ast }}$ which
is equal to \cites{stock.watson:1998} MUE $\lambda/T$ for the various estimates
of $\boldsymbol{{\theta}}_{1}$ reported in \autoref{tab:Stage1} and the four
different structural break tests. The table is split into left and right
column blocks as in \autoref{tab:Stage1}. Under the heading HLW.R-File,
estimates of $\lambda _{g}$ obtained from running \cites{holston.etal:2017}
R-Code are reported for reference. These are computed for the MW, EW and QLR
structural break tests only. The remaining columns report the replicated
$\lambda _{g}$ from the various $\boldsymbol{{\theta}}_{1}$ estimates from
\autoref{tab:Stage1}. } \ET
\cleardoublepage\newpage
\BT[p!]%
\caption{Stage 1 MUE results of $\lambda_g$ after AR(1) filtering $\Delta
\hat{y}^\ast_{t|T}$ as in \cite{stock.watson:1998}}
\centering\vspace*{-2mm} \renewcommand{%
\arraystretch}{1.1}\renewcommand\tabcolsep{7pt}\fontsize{11pt}{13pt}\selectfont
\newcolumntype{K}{S[table-format = 4.5,round-precision = 4]} %
\newcolumntype{L}{S[table-format = 2.7,round-precision = 4]} %
\newcolumntype{U}{S[table-format = 1.2,round-precision = 2]} %
\newcolumntype{N}{S[table-format = 3.6,round-precision = 4]}
\begin{tabular*}{.85\columnwidth}{p{25mm}NLKp{10mm}Up{0mm}K}
\topline
Test& \multicolumn{1}{c}{Statistic} & \multicolumn{1}{c}{\hsp[-4]$p-$value} & \multicolumn{1}{c}{\hsp[2]$\lambda$} & \multicolumn{2}{c}{90\% CI} & &\multicolumn{1}{c}{\hsp[4]$\lambda_g=\frac{\lambda}{T}$}\\
\midrule
$L$ & 2.28146146 & 0.00500000 & 20.38331071 & [4.36, & {\hsp[-4]80.00]} & & 0.09059249 \\
MW & 15.35436143 & 0.00500000 & 20.58395335 & [4.47, & {\hsp[-4]80.00]} & & 0.09148424 \\
EW & 8.45806593 & 0.00500000 & 15.99034241 & [3.53, & {\hsp[-4]52.81]} & & 0.07106819 \\
QLR & 20.75957973 & 0.00500000 & 14.81635270 & [3.14, & {\hsp[-4]48.48]} & & 0.06585046 \\[1mm]
\bottomrule
\end{tabular*}
\tabnotes[-3mm][.84\columnwidth][-2.0mm]{This table reports
\cites{stock.watson:1998} MUE estimation results after the constructed
$\Delta \hat{y}^\ast_{t|T}$ variable was AR(1) filtered to remove the serial
correlation. The first two columns report the 4 different structural break
test statistics together with the corresponding $p-$values, followed by the
implied MUE estimates of $\lambda $ with 90\% CIs in square brackets. The
last column lists \cites{holston.etal:2017} $\lambda_g=\frac{\lambda}{T}$
to facilitate the comparison to the results listed under column one in
\autoref{tab:Stage1_lambda_g}.}%
\label{tab:stage1_MUE_AR1}\ET
\cleardoublepage\newpage
\BT[p!]%
\caption{MUE estimates of the transformed Stage 1 model using an
AR(4) model for $u_t$} \centering\vspace*{-2mm}\renewcommand{%
\arraystretch}{1.1} \renewcommand\tabcolsep{7pt}\fontsize{11pt}{13pt}%
\selectfont\newcolumntype{K}{S[table-format = 3.6,round-precision = 6]} %
\newcolumntype{L}{S[table-format = 2.7,round-precision = 4]} %
\newcolumntype{U}{S[table-format = 1.2,round-precision = 2]} %
\newcolumntype{N}{S[table-format = 3.6,round-precision = 4]}
\begin{tabular*}{1\columnwidth}{p{25mm}NLKp{10mm}Up{1mm}Kp{8mm}U}
\topline
Test& \multicolumn{1}{c}{Statistic} &
\multicolumn{1}{c}{\hsp[-4]$p-$value} &
\multicolumn{1}{c}{\hsp[4]$\lambda$} &
\multicolumn{2}{c}{90\% CI} & &
\multicolumn{1}{c}{\hsp[4]$\sigma_{g}$} &
\multicolumn{2}{c}{90\% CI} \\
\midrule
$L$ & 0.316151035998 & 0.120000000000 & 5.914618545073 & [0, & \llap{23.95]} & & 0.154213440892 & [0, & \llap{0.62]} \\
MW & 1.787457322584 & 0.145000000000 & 5.650430931635 & [0, & \llap{23.88]} & & 0.147325206157 & [0, & \llap{0.62]} \\
EW & 1.066331073356 & 0.180000000000 & 4.883718810124 & [0, & \llap{20.97]} & & 0.127334514698 & [0, & \llap{0.54]} \\
QLR & 4.602919591679 & 0.285000000000 & 3.511961413462 & [0, & \llap{17.65]} & & 0.091568314968 & [0, & \llap{0.46]} \\[1mm]
\end{tabular*}\renewcommand{\arraystretch}{1.1}\renewcommand\tabcolsep{7pt} %
\fontsize{11pt}{13pt}\selectfont%
\newcolumntype{N}{S[table-format =
4.10,round-precision = 8]}
\begin{tabular*}{1\columnwidth}{p{38mm}NNNN}
\topline
Parameter & \multicolumn{1}{c}{MPLE} & \multicolumn{1}{c}{MMLE} & \multicolumn{1}{c}{$\mathrm{MUE}(\lambda_{\mathrm{EW}})$} & \multicolumn{1}{c}{$\mathrm{MUE}(\lambda_{\mathrm{EW}}^{\mathrm{Up}})$} \\
\midrule
$\sigma_{g}$ & 0 & 0.10621860661 & 0.12733451470 & 0.54678210579 \\
$\sigma_{\varepsilon}$ & 2.99782489961 & 2.98030098731 & 2.97346405372 & 2.90800214575 \\
AR(1) & 0.28603147365 & 0.27433173122 & 0.26988229275 & 0.24291125758 \\
AR(2) & 0.16828174224 & 0.16079307466 & 0.15789805142 & 0.14866123650 \\
AR(3) & -0.02046076069 & -0.02734561640 & -0.02996690605 & -0.03106235496 \\
AR(4) & 0.06570210187 & 0.05750551407 & 0.05423838150 & 0.06119692006 \\
$g_{00}$ & 3.02198580916 & {---} & 4.09740641791 & 5.17204700003 \\
\cmidrule(ll){1-5}
Log-likelihood & -566.39181042995 & -573.64230971420 & -566.57435245187 & -570.81021839246 \\
\bottomrule
\end{tabular*}
\tabnotes[-3mm][.99\columnwidth][-1.0mm]{This table reports MUE estimation
results of the transformed (expressed in local level model form) Stage 1
model, using an AR(4) process for $u_{t}$. The top part of the table shows
the 4 different structural break test statistics together with their
$p-$values in the first two columns, followed by the corresponding MUE
estimates of $\lambda $ with 90\% CIs in square brackets. The last two
columns show the implied $\sigma _{g}$ estimate computed from
$T^{-1}{\lambda}\times {\sigma}_{\varepsilon }/{a}(1)$ and 90\%
CIs in square brackets. The first two columns of the bottom part of the
table report results from Maximum Likelihood based estimation, where MPLE
estimates the initial value of the state vector $g_{00}$, while MMLE uses a
diffuse prior for the initial value of the state vector with mean zero and
the variance set to $10^{6}$. Columns under the heading
MUE(${\lambda}_{\mathrm{EW}}$) and
MUE(${\lambda}_{\mathrm{EW}}^{\mathrm{Up}}$) show Median Unbiased
Estimates when $\sigma _{g}$ is held fixed at its MUE point estimate and
upper $90\%$ CI, respectively, from the EW structural break test. The row
Log-likelihood displays the value of the log-likelihood at the reported
parameter estimates. The Matlab file
\texttt{Stage1\_local\_level\_model\_SW98\_MUE\_Clark\_UC.m} replicates these results.}%
\label{tab:MUE_S1}\ET
\cleardoublepage\newpage
\BT[p!]\caption{Parameter estimates of \cites{clark:1987} UC model}%
\centering\vspace*{-2mm}\renewcommand{\arraystretch}{1.1}\renewcommand%
\tabcolsep{7pt}\fontsize{11pt}{13pt}\selectfont%
\newcolumntype{N}{S[table-format =
4.8,round-precision = 8]}
\newcolumntype{K}{S[table-format =
4.10,round-precision = 8]}
\begin{tabular*}{1\columnwidth}{p{38mm}NNp{7mm}NK}
\topline
{Parameter} & {\hsp[4]Clark's UC0} & {\hsp[5]Std.error} & & {\hsp[5]Clark's UC} & {\hsp[2]Std.error} \\
\midrule
$a_{y,1}$ & 1.6688617339 & 0.1094874136 & & 1.2954481785 & 0.2353595457 \\
$a_{y,2}$ & -0.7242805140 & 0.1124274886 & & -0.5674869068 & 0.2168834975 \\
$\sigma _{y^{\ast}}$ & 0.5898417486 & 0.0584209144 & & 1.1575382576 & 0.2250901380 \\
$\sigma _{g}$ & 0.0463214922 & 0.0227693495 & & 0.0321901826 & 0.0222178828 \\
$\sigma _{\tilde{y}}$ & 0.3462603749 & 0.0972702766 & & 0.8095072197 & 0.3646114322 \\
$\mathrm{Corr}(\mathring{\varepsilon}_{t}^{\tilde{y}},\varepsilon _{t}^{y^{\ast }})$ & 0 & {~~~~---} & & -0.9426313454 & 0.0971454061 \\
\cmidrule(lr){1-6}
{Log-likelihood} & -270.0007183929 & {~~~~---} & & -269.8750406078 & {~~~~---} \\
\bottomrule
\end{tabular*}
\tabnotes[-3mm][.99\columnwidth][-1.0mm]{This table reports parameter
estimates of \cites{clark:1987} UC model. Two sets of results are reported.
In the left part of \autoref{tab:clarkddUC}, parameter estimates and
standard errors (Std.errors) from Clark's UC0\ model which assumes
$\mathrm{Corr}(\mathring{\varepsilon}_{t}^{\tilde{y}},\varepsilon
_{t}^{y^{\ast }})=0$ are reported. In the right part, parameter estimates
and standard errors for Clark's correlated UC\ model are shown, where
$\mathrm{Corr}(\mathring{\varepsilon}_{t}^{\tilde{y}},\varepsilon
_{t}^{y^{\ast }})$ is explicitly estimated. Standard errors are computed
from the inverse of the Hessian matrix of the log-likelihood. I use a
diffuse prior for the $I(1)$ part of the state vector, with the variance set
to $10^{6}$. The stationary part of the state vector is initialized at its
unconditional mean and variance. I do not estimate the initial value of the
state vector. This is analogous to MMLE in \cite{stock.watson:1998}. The
Matlab file \texttt{Stage1\_local\_level\_model\_SW98\_MUE\_Clark\_UC.m}
replicates
these results.} \label{tab:clarkddUC}\ET
\cleardoublepage\newpage
\begin{figure}[p!]
\centering
\includegraphics[width=1.01\textwidth,trim={0 00mm 0
50mm},clip]{Stage1_LLM_growth_trend_2017}
\caption{Smoothed trend growth estimates from the modified Stage 1 model.}
\label{fig:MUE_S1}
\end{figure}
\cleardoublepage\newpage
\BT[p!]
\caption{Stage 2 parameter estimates}\centering\vspace*{-2mm}%
\renewcommand{\arraystretch}{1.1}\renewcommand\tabcolsep{7pt}%
\fontsize{11pt}{13pt}\selectfont%
\newcolumntype{N}{S[table-format = 5.8,round-precision = 7]}
\newcolumntype{U}{S[table-format = 5.7,round-precision = 7]}
\newcolumntype{L}{S[table-format = 4.7,round-precision = 7]}
\begin{tabular*}{.93\columnwidth}{p{38mm}NNNN}
\topline
\hsp[5]$\boldsymbol{\theta }_{2}$
& {\hsp[0]HLW.R-File\hsp[-4.0]}
& {\hsp[6]Replicated\hsp[1.5]}
& {\hsp[4]MLE($\sigma_g$) \hsp[-3]}
& {\hsp[3]MLE($\sigma_g$).$\mathcal{M}_0$\hsp[-2]}\\
\midrule
$\hsp[3]a_{y,1} $ & 1.5139908988 & 1.5139908874 & 1.4735944910 & 1.4947610514 \\
$\hsp[3]a_{y,2} $ & -0.5709338972 & -0.5709338892 & -0.5321668125 & -0.5531450715 \\
$\hsp[3]a_{r} $ & -0.0736646657 & -0.0736646651 & -0.0831539459 & -0.0755562707 \\
$\hsp[3]a_{0} $ & -0.2630693940 & -0.2630693778 & -0.2548597292 & {~~~~~---} \\
$\hsp[3]a_{g} $ & 0.6078665929 & 0.6078665690 & 0.6277123919 & {~~~~~---} \\
$\hsp[3]b_{\pi } $ & 0.6627428265 & 0.6627428246 & 0.6655286128 & 0.6692918543 \\
$\hsp[3]b_{y} $ & 0.0844720258 & 0.0844720318 & 0.0819057776 & 0.0802934388 \\
$\hsp[3]\sigma _{\tilde{y}}$ & 0.3582701455 & 0.3582701554 & 0.3636497583 & 0.3742315512 \\
$\hsp[3]\sigma _{\pi } $ & 0.7872279651 & 0.7872279652 & 0.7881905740 & 0.7895136932 \\
$\hsp[3]\sigma _{y^{\ast }}$ & 0.5665698145 & 0.5665698109 & 0.5534536560 & 0.5526272640 \\
$\hsp[3]\sigma _{g}$ {(implied)} & (0.0305205) & (0.0305205) & 0.0437060828 & 0.0448689280 \\
$\hsp[3]\lambda_g $ {(implied)} & 0.0538690378 & 0.0538690378 & (0.0789697) & (0.0811920) \\ \cmidrule(lr){1-5}
{Log-likelihood} & -513.5709576473 & -513.5709576473 & -513.2849624670 & -514.1458025902 \\
\bottomrule
\end{tabular*}%
\tabnotes[-3mm][.925\columnwidth][-1.0mm]{This table reports replication
results for the Stage 2 model parameter vector $\boldsymbol{\theta }_{2}$ of
\cite{holston.etal:2017}. The first column (HLW.R-File) reports estimates
obtained by running \cites{holston.etal:2017} R-Code for the Stage 2 model.
The second column
(Replicated) shows the replicated results using the same set-up as in %
\cites{holston.etal:2017}. The third column (MLE($\sigma _{g}$)) reports
estimates when $\sigma _{g}$ is freely estimated by MLE together with the
other parameters of the Stage 2 model, rather than imposing the ratio $%
\lambda _{g}=\sigma _{g}/\sigma _{y^{\ast }}=0.0538690378$ obtained from
Stage 1. The last column (MLE($\sigma _{g}$)$.\mathcal{M}_{0}$) provides
estimates of the \emph{"correctly specified"} Stage 2 model in
\ref{S2full0}, with $\sigma _{g} $ again estimated directly by MLE. Values
in round brackets give the implied
$\sigma _{g}$ or $\lambda _{g}$ values when either $\lambda _{g}$ is fixed or when $%
\sigma _{g}$ is estimated. The last row (Log-likelihood) reports the value
of the log-likelihood function at these parameter estimates. The Matlab file
\texttt{Stage2\_replication.m} replicates these results.}\label%
{tab:Stage2} \ET
\cleardoublepage\newpage
\begin{figure}[p!]
\centering
\includegraphics[width=1\textwidth,trim={0 0 0 0},clip]{Fstat_MUE2.pdf}\vspace{-0mm}
\caption{Sequence of $\{F(\tau )\}_{\tau =\tau _{0}}^{\tau _{1}}$ statistics on the dummy
variable coefficients $\{\hat{\zeta}_{1}(\tau )\}_{\tau =\tau _{0}}^{\tau
_{1}}$ used in the construction of the structural break test statistics.}
\label{fig:seqaF}
\end{figure}
\cleardoublepage\newpage
\BST[p!]%
\BAW[5]%
\caption{Stage 2 MUE results of $\lambda_z$ with corresponding 90\%
confidence intervals, structural break test statistics and $p-$values} %
\centering\vspace*{-2mm}\renewcommand{%
\arraystretch}{1.15}\renewcommand\tabcolsep{7pt} %
\fontsize{10pt}{12pt}\selectfont %
\newcolumntype{N}{S[table-format = 1.6,round-precision = 6]} %
\newcolumntype{K}{S[table-format = 1.5,round-precision = 6]} %
\newcolumntype{Q}{S[table-format = 1.4,round-precision = 6]} %
\begin{tabular*}{.94\columnwidth}{p{7mm}NNNNNNNp{0mm}NNNNNN}
\topline
\multirow{2}{*}{\hsp[2]$\lambda_{z}$}
& \multicolumn{7}{c}{\emph{`Time varying} $\boldsymbol{\phi }$\textit{'}}
&& \multicolumn{6}{c}{\emph{`Constant} $\boldsymbol{\phi }$\textit{'}} \\ \cmidrule(rr){2-8} \cmidrule(rr){10-15}
& {HLW.R-File} & {\hsp[-2] Replicated \hsp[-2]} & {[90\% CI]} & {\hsp[-2]
MLE($\sigma_g$) \hsp[-2]} & {[90\% CI]} & {\hsp[-2]
MLE($\sigma_g$).$\mathcal{M}_0$ \hsp[-2]} & {[90\% CI]} & {} & {\hsp[-2]
Replicated \hsp[-2]} & {[90\% CI]} & {\hsp[-2] MLE($\sigma_g$) \hsp[-2]} &
{[90\% CI]} & {\hsp[-2] MLE($\sigma_g$).$\mathcal{M}_0$ \hsp[-2]} & {[90\%
CI]}
\\
\midrule
$L$ & {---} & 0 & {[0, 0.02]} & 0 & {[0, 0.00]} & 0 & {[0, 0.05]} && 0 & {[0, 0.02]} & 0 & {[0, 0.00]} & 0 & {[0, 0.05]} \\
MW & 0.0249690419675479 & 0.02496905 & {[0, 0.11]} & 0.03299656 & {[0, 0.14]} & 0.00892013 & {[0, 0.07]} && 0 & {[0, 0.03]} & 0 & {[0, 0.02]} & 0 & {[0, 0.06]} \\
EW & 0.0302172209051429 & 0.03021723 & {[0, 0.11]} & 0.03379822 & {[0, 0.12]} & 0.00779609 & {[0, 0.06]} && 0 & {[0, 0.03]} & 0 & {[0, 0.02]} & 0.00075430 & {[0, 0.06]} \\
QLR & 0.0342646997381782 & 0.03426471 & {[0, 0.12]} & 0.03894233 & {[0, 0.14]} & 0.01719852 & {[0, 0.08]} && 0 & {[0, 0.05]} & 0 & {[0, 0.04]} & 0.01470321 & {[0, 0.07]} \\
\midrule
\multirow{1}{*}{} & \multicolumn{14}{c}{Structural break test statistics ($p-$values in parenthesis)\hsp[0]} \\
\midrule
$L$ & {---} & 0.05085088 & {(0.8700)} & 0.03808119 & {(0.9400)} & 0.10830144 & {(0.5450)} && 0.05085088 & {(0.8700)} & 0.03808119 & {(0.9400)} & 0.10830144 & {(0.5450)} \\
MW & 1.8705612948815300 & 1.87056176 & {(0.1300)} & 2.68314033 & {(0.0600)} & 0.80746921 & {(0.4300)} && 0.33010788 & {(0.8100)} & 0.25322628 & {(0.8900)} & 0.65167084 & {(0.5250)} \\
EW & 1.6930140233544000 & 1.69301457 & {(0.0800)} & 2.12052740 & {(0.0450)} & 0.50616478 & {(0.4300)} && 0.20293551 & {(0.7850)} & 0.14831394 & {(0.8750)} & 0.43448586 & {(0.4900)} \\
QLR & 8.7144611146321900 & 8.71446298 & {(0.0450)} & 10.30303534 & {(0.0250)} & 4.75129241 & {(0.2700)} && 2.85143418 & {(0.5700)} & 2.20271913 & {(0.7150)} & 4.33470147 & {(0.3150)} \\
\bottomrule
\end{tabular*}\label{tab:Stage2_lambda_z}
\tabnotes[-2.5mm][.935\columnwidth][-0.5mm]{This table reports the Stage 2
estimates of $\lambda _{z}$ for the different
$\boldsymbol{\theta }_{2}$ estimates corresponding to the \emph{%
"misspecified"} and \emph{"correctly specified"} Stage 2 models reported in %
\autoref{tab:Stage2}. The table is split into two column blocks, showing the
results for the \emph{`Time varying} $\boldsymbol{\phi }$\textit{'} and
\emph{`Constant} $\boldsymbol{\phi }$\textit{'} scenarios in the left and
right blocks, respectively. In the bottom half of the table, the four
different structural break test statistics for the considered models are
shown. The results under the heading `HLW.R-File' show the $\lambda _{z}$
estimates obtained from running \cites{holston.etal:2017} R-Code for the
Stage 2 model as reference values. The second column `Replicated' shows my
replicated results. Under the heading `MLE($\sigma _{g}$)', results for the
\emph{"misspecified}" Stage 2 model are shown with $\sigma _{g}$ estimated
directly by MLE rather
than from the first stage estimate of $\lambda _{g}$. Under the heading `MLE(%
$\sigma _{g}$).$\mathcal{M}_{0}$', results for the \emph{"correctly
specified"} Stage 2 model are reported where $\sigma _{g}$ is again
estimated by MLE. The values in square brackets in the top half of the table
report 90\% confidence intervals for $\lambda _{z}$ computed from %
\cites{stock.watson:1998} tabulated values provided in their GAUSS\ files.
These were divided by sample size $T$ to make them comparable to $\lambda
_{z}$. In the bottom panel, $p-$values of the various structural break tests
are reported in round brackets. These were also extracted from %
\cites{stock.watson:1998} GAUSS\ files.}%
\EAW \EST%
\cleardoublepage\newpage
\BT[p!]%
\caption{Summary statistics of the $\lambda_z$ estimates obtained from applying
\cites{holston.etal:2017} Stage 2 MUE procedure to simulated data} %
\centering\vspace*{-2mm}\renewcommand{%
\arraystretch}{1.1}\renewcommand\tabcolsep{7pt} %
\fontsize{11pt}{13pt}\selectfont %
\newcolumntype{N}{S[table-format = 3.8,round-precision = 6]} %
\newcolumntype{K}{S[table-format = 5.9,round-precision = 6]} %
\newcolumntype{Q}{S[table-format = 1.0,round-precision = 6]} %
\begin{tabular*}{1\columnwidth}{p{48mm}NNp{3mm}NN}
\topline
\multirow{2}{*}{\hsp[1] Summary Statistics}
& \multicolumn{2}{c}{DGPs when $\boldsymbol{\theta }_{2}$ held fixed at $\skew{0}\boldsymbol{\hat{\theta}}_{2}$~~~} &
& \multicolumn{2}{c}{DGPs when $\boldsymbol{\theta}_{2}$ is re-estimated~~~}
\\ \cmidrule(rr){2-3} \cmidrule(rr){5-6}
& {$r_t^{\ast}=4g_{t}$}
& {$r_t^{\ast}=4g_{t}+z_{t}$} &
& {$r_t^{\ast}=4g_{t}$}
& {$r_t^{\ast}=4g_{t}+z_{t}$} \\ \midrule
Minimum & 0 & 0 && 0 & 0 \\
Maximum & 0.10121984 & 0.09642681 && 0.11688594 & 0.11644479 \\
Standard deviation & 0.01624549 & 0.01658156 && 0.01851192 & 0.01964657 \\
Mean & 0.02884249 & 0.03072566 && 0.02510333 & 0.02746184 \\
Median & 0.02839441 & 0.02960898 && 0.02221494 & 0.02511532 \\
$\mathrm{Pr}(\hat\lambda^s_z> 0.030217)$& 0.4570 & 0.4900 && 0.3390 & 0.3930 \\
\bottomrule
\end{tabular*}\label{tab:Stage2_lambda_z_o}
\tabnotes[-3mm][.994\columnwidth][-1.5mm]{This table reports summary statistics
of the $\lambda _{z}$ estimates that
one obtains from implementing \cites{holston.etal:2017} Stage 2 MUE
procedure on artificial data that was simulated from two different data
generating processes (DGPs). The first DGP simulates data from the full
structural model in \ref{eq:hlw} under the parameter estimates of \cite%
{holston.etal:2017}, but where the natural rate is determined solely by
trend growth. That is, in the output gap equation in \ref{IS}, $r_{t}^{\ast
}=4g_{t}$. The second DGP simulates data from the full model of \cite%
{holston.etal:2017} where $r_{t}^{\ast }=4g_{t}+z_{t}$. The summary
statistics that are reported are the minimum, maximum, standard deviation,
mean, median, as well as the
empirical frequency of observing a value larger than the estimate of $%
0.030217$ obtained by \cite{holston.etal:2017}, denoted by $\Pr (\hat{\lambda%
}_{z}^{s}>0.030217)$. The table shows four different estimates, grouped in 2
block pairs. The left block under the heading `DGPs when $\boldsymbol{\theta
}_{2}$ is held fixed' shows the simulation results for the two DGPs when the
Stage 2 parameter vector $\boldsymbol{\theta }_{2}$ is held fixed at the
Stage 2 estimates and is not re-estimated on the simulated data. The right block under the heading
`DGPs when $\boldsymbol{\theta }_{2}$ is re-estimated' shows the simulation
results when $\boldsymbol{\theta }_{2}$ is re-estimated for each simulated
series. Simulations are performed on a sample size equivalent to the empirical data, with
$1000$ repetitions. }
\ET %
\cleardoublepage\newpage
\begin{figure}[p!]
\centering
\subfigure[Stage 2 parameters held fixed at $\skew{0}\boldsymbol{\hat{\theta}}_{2}$ from column 1 of \autoref{tab:Stage2}]{\includegraphics[width=.75\textwidth]{MUE2a1.pdf}}\vspace{4mm}
\subfigure[Stage 2 parameters re-estimated on each simulated series]{\includegraphics[width=.75\textwidth]{MUE2a1_hat.pdf}}
\caption{Histograms of the estimated $\left\{ \hat\lambda_z^s\right\}_{s=1}^{S}$
sequence corresponding to the summary statistics shown in \autoref{tab:Stage2_lambda_z_o}.
In the left and right columns, histograms for the two different DGPs are shown. The top two
histograms show the results when $\boldsymbol{\theta
}_{2}$ is held fixed in the simulations and is not re-estimated, while the bottom plots
show the results when $\boldsymbol{\theta}_{2}$ is re-estimated on each simulated series
that is generated.}
\label{fig:S2Lam_z_sim}
\end{figure}
\cleardoublepage\newpage
\BT[p!]
\caption{Stage 3 parameter estimates}\centering\vspace*{-2mm}%
\renewcommand{\arraystretch}{1.1}\renewcommand\tabcolsep{7pt}%
\fontsize{11pt}{13pt}\selectfont%
\newcolumntype{N}{S[table-format = 4.8,round-precision = 8]}
\newcolumntype{U}{S[table-format = 4.8,round-precision = 8]}
\newcolumntype{L}{S[table-format = 4.8,round-precision = 8]}
\begin{tabular*}{1\columnwidth}{p{27mm}NNNNN}
\topline
\hsp[5]$\boldsymbol{\theta }_{3}$
& {\hsp[2]HLW.R-File\hsp[-4]}
& {\hsp[2]Replicated\hsp[-4]}
& {\hsp[2]MLE($\sigma_g|\lambda_z^{\mathrm{HLW}})$\hsp[-3]}
& {\hsp[3]MLE($\sigma_g|\lambda_z^{\mathcal{M}_0})$\hsp[-4]}
& {\hsp[2]MLE($\sigma_g,\sigma_z)$\hsp[-4]}\\
\midrule
$\hsp[3]a_{y,1} $ & 1.5295724886 & 1.5295724693 & 1.4944246197 & 1.4956671184 & 1.4956614728 \\
$\hsp[3]a_{y,2} $ & -0.5875641518 & -0.5875641351 & -0.5537026759 & -0.5544894229 & -0.5544821188 \\
$\hsp[3]a_{r} $ & -0.0711956862 & -0.0711956881 & -0.0794159824 & -0.0752549575 & -0.0752523989 \\
$\hsp[3]b_{\pi } $ & 0.6682070533 & 0.6682070526 & 0.6712819687 & 0.6691946828 & 0.6691999284 \\
$\hsp[3]b_{y} $ & 0.0789577832 & 0.0789577841 & 0.0759360415 & 0.0805490066 & 0.0805471559 \\
$\hsp[3]\sigma _{\tilde{y}}$ & 0.3534684542 & 0.3534684662 & 0.3604311438 & 0.3738137616 & 0.3738293536 \\
$\hsp[3]\sigma _{\pi } $ & 0.7891948659 & 0.7891948667 & 0.7902998169 & 0.7894892114 & 0.7894909384 \\
$\hsp[3]\sigma _{y^{\ast }}$ & 0.5724192458 & 0.5724192433 & 0.5591574254 & 0.5529381760 & 0.5529301787 \\
$\hsp[3]\sigma _{g}$ {(implied)} & (0.03083567) & (0.03083567) & 0.0458385200 & 0.0449745020 & 0.0449741366 \\
$\hsp[3]\sigma _{z}$ {(implied)} & (0.15002080) & (0.15002080) & (0.13714150) & (0.00374682) & 0.0000000051 \\
$\hsp[3]\lambda_g $ {(implied)} & 0.0538690379 & 0.0538690379 & (0.08197784) & (0.08133730) & (0.08133782) \\
$\hsp[3]\lambda_z $ {(implied)} & 0.0302172209 & 0.0302172209 & 0.0302172209 & 0.0007542990 & (0.00000000) \\ \cmidrule(ll){1-6}
{Log-likelihood} & -515.1447052780 & -515.1447059855 & -514.8307054362 & -514.2898742606 & -514.2895896936 \\
\bottomrule
\end{tabular*}\label{tab:Stage3}%
\tabnotes[-3mm][.99\columnwidth][-1.0mm]{This table reports replication
results for the Stage 3 model parameter vector $\boldsymbol{\theta }_{3}$ of
\cite{holston.etal:2017}. The first column (HLW.R-File) reports estimates
obtained by running \cites{holston.etal:2017} R-Code for the Stage 3 model.
The second column (Replicated) shows the replicated results using the same set-up as in %
\cites{holston.etal:2017}. The third column
(MLE($\sigma_g|\lambda_z^{\mathrm{HLW}})$) reports estimates when $\sigma
_{g}$ is directly estimated by MLE together with the other parameters of the
Stage 3 model, while $\lambda _{z}$ is held fixed at
$\lambda _{z}^{\mathrm{HLW}}=0.030217$ obtained from %
\cites{holston.etal:2017} \emph{"misspecified"} Stage 2 procedure. In the
fourth column (MLE($\sigma_g|\lambda_z^{\mathcal{M}_0})$), $\sigma _{g}$ is
again estimated directly by MLE together with the other parameters of the
Stage 3 model, but with $\lambda _{z}$ now fixed at $\lambda
_{z}^{\mathcal{M}_{0}}=0.000754$ obtained from the \emph{"correctly
specified"} Stage 2 model in \ref{S2full0}. The last column
(MLE($\sigma_g,\sigma_z)$) shows estimates when all parameters are estimated
by MLE. Values in round brackets give the implied $\{\sigma _{g}, \sigma
_{z}\}$ or $\{\lambda _{g},\lambda _{z}\}$ values when either is fixed or
estimated. The last row (Log-likelihood) reports the value of the
log-likelihood function at these parameter estimates. The Matlab file
\texttt{Stage3\_replication.m} replicates these results.} \ET
\cleardoublepage\newpage
\begin{figure}[p!]
\centering
\includegraphics[width=1\textwidth,trim={0 0 0 0},clip]{Stage3_estimates_filtered_HLW_prior_2017Q1.pdf}
\vspace{-3mm}
\caption{Filtered estimates of the natural rate $r^{\ast}_t$,
annualized trend growth $g_t$, \emph{`other factor'} $z_t$, and the output gap
(cycle) variable $\tilde{y}_t$.}
\label{fig:2017KF}
\end{figure}
\cleardoublepage\newpage
\begin{figure}[p!]
\centering
\includegraphics[width=1\textwidth,trim={0 0 0 0},clip]{Stage3_estimates_smoothed_HLW_prior_2017Q1.pdf}
\vspace{-3mm}
\caption{Smoothed estimates of the natural rate $r^{\ast}_t$,
annualized trend growth $g_t$, \emph{`other factor'} $z_t$, and the output gap
(cycle) variable $\tilde{y}_t$.}
\label{fig:2017KS}
\end{figure}
\cleardoublepage\newpage
\begin{figure}[p!]
\centering \vspace{-3mm}
\includegraphics[width=1\textwidth,trim={0 0 0 0},clip]{different_starting_dates_HLW_factors_filtered.pdf}
\vspace{-3mm}
\caption{Filtered estimates of annualized trend growth $g_t$, \emph{`other factor'} $z_t$
and the natural rate $r^{\ast}_t$ based on different starting dates.}
\label{fig:T0KF}
\end{figure}
\cleardoublepage\newpage
\begin{figure}[p!]
\centering \vspace{-3mm}
\includegraphics[width=1\textwidth,trim={0 0 0 0},clip]{different_starting_dates_HLW_factors_smoothed.pdf}
\vspace{-3mm}
\caption{Smoothed estimates of annualized trend growth $g_t$, \emph{`other factor'} $z_t$
and the natural rate $r^{\ast}_t$ based on different starting dates.}
\label{fig:T0KS}
\end{figure}
\cleardoublepage\newpage
\section{Introduction \label{sec:intro}}
Since the global financial crisis, nominal interest rates have declined
substantially to levels last witnessed in the early 1940s following the
Great Depression. The academic as well as policy literature has attributed
this decline in nominal interest rates to a decline in the natural rate of
interest; namely, the rate of interest consistent with employment at full
capacity and inflation at its target. In this literature, \citeauthor*
holston.etal:2017}' (\citeyear{holston.etal:2017}) estimates of the natural
rate have become particularly influential and are widely regarded as a
benchmark. The Federal Reserve Bank of New York (FRBNY) maintains an entire
website dedicated to providing updates to \cites{holston.etal:2017}
estimates of the natural rate, not only for the United States (U.S.), but
also for the Euro Area, Canada and the United Kingdom (U.K.) (see
\url{https://www.newyorkfed.org/research/policy/rstar}).
In \cites{holston.etal:2017} model, the natural rate of interest is defined
as the sum of trend growth of output $g_{t}$ and `\emph{other factor}'
$z_{t}$. This `\emph{other factor}' $z_{t}$ is meant to capture various
underlying structural factors such as savings/investment imbalances,
demographic changes, and fiscal imbalances that influence the natural rate,
but which are not captured by trend growth $g_{t}$. In \autoref{fig:HLW_zf}
below, I show filtered (as well as smoothed) estimates of
\cites{holston.etal:2017} `\emph{other factor}' $z_{t}$.\footnote{%
\cite{holston.etal:2017} do not show a plot of `\emph{other factor}' $z_{t}$ on
the FRBNY website (as of 22$^{nd}$ of June, 2020).}
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth,rotate=00,trim={0 0 0
0},clip]{other_factor_4.pdf} \vspace{-06mm}
\caption{Filtered and smoothed estimates of \cites{holston.etal:2017}
\emph{`other factor'} $z_{t}$.}
\label{fig:HLW_zf}
\end{figure}
\vsp[-2]
\noindent The dashed lines in \autoref{fig:HLW_zf} show estimates obtained
with data ending in 2017:Q1, while the solid lines are estimates based on
data extended to 2019:Q2. The strong and persistent downward trending
behaviour of `\emph{other factor}' $z_{t}$ is striking from
\autoref{fig:HLW_zf}, particularly from 2012:Q1 onwards. The two (black) dashed
vertical lines mark the periods 2012:Q1 and 2015:Q4. In 2015:Q4, the Federal
Reserve started the tightening cycle and raised nominal interest rates by 25
basis points. In 2012:Q1, real rates began to rise due to a (mild)
deterioration in inflation expectations.\footnote{See panel (a) of
\autoref{fig:HLW_factors}, which shows plots of the federal
funds rate, the real interest rate, as well as inflation and inflation
expectations.} Both led to an increase in the real rate. Yet,
\cites{holston.etal:2017} estimates of `\emph{other factor}' $z_{t}$
declined by about 50 basis points from 2012:Q1 to 2015:Q4, and then another
50 basis points from 2015:Q4 to 2019:Q2, reaching a value of $-1.58$ in
2019:Q2. Because $z_{t}$ evolves as a driftless random walk in the model,
the only parameter that \emph{`controls'} the influence of $z_{t}$ on the
natural rate is the `\emph{signal-to-noise ratio}' $\lambda _{z}$.\footnote{This
description is somewhat imprecise to avoid cumbersome language. Since
$z_{t}$ evolves as $z_{t}=z_{t-1}+\sigma _{z}\epsilon _{t}$, with $\epsilon
_{t}$ being standard normal, it is the standard deviation $\sigma _{z}$ that
is the only parameter that influences the evolution of $z_{t}$. However,
\cite{holston.etal:2017} determine $\sigma _{z}$ indirectly through the
`\emph{signal-to-noise ratio}' $\lambda _{z}$, so it is the size of $\lambda
_{z}$ that matters for the evolution of $z_{t}$.} Thus, how exactly this
parameter is estimated is of fundamental importance for the determination of
the natural rate of interest.
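To make the mapping from $\lambda _{z}$ to $\sigma _{z}$ concrete, the short
Matlab sketch below (my own illustration, not part of \cites{holston.etal:2017}
code) recovers the implied $\sigma _{z}=\lambda _{z}\sigma _{\tilde{y}}/|a_{r}|$
from the replicated point estimates reported later in \autoref{tab:Stage3},
taking the absolute value of $a_{r}$ (which is negative) so that $\sigma _{z}>0$:
\begin{verbatim}
% Implied sigma_z from lambda_z = a_r*sigma_z/sigma_ytilde, using the
% replicated Stage 3 point estimates (data ending in 2017:Q1).
lambda_z = 0.0302172209;    % HLW's Stage 2 MUE estimate of lambda_z
a_r      = -0.0711956862;   % slope on the lagged real rate gap
sig_ytil = 0.3534684542;    % std. dev. of the output gap innovation
sig_z    = lambda_z*sig_ytil/abs(a_r);  % = 0.1500, matching (0.15002080)
\end{verbatim}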
In this paper, I show that \cites{holston.etal:2017} implementation of
\cites{stock.watson:1998} Median Unbiased Estimation (MUE) is unsound. It
cannot recover the ratios of interest $\lambda _{g}=\sigma _{g}/\sigma
_{y^{\ast }}$ and $\lambda _{z}=a_{r}\sigma _{z}/\sigma _{\tilde{y}}$ from
Stages 1 and 2 of their three stage procedure needed for the estimation of
the full structural model. The implementation of MUE of $\lambda _{z}$ in
Stage~2 is particularly problematic, as \cites{holston.etal:2017} procedure
is based on an \emph{`unnecessarily'} misspecified Stage~2 model. This
misspecified Stage 2 model not only fails to identify the ratio of interest
$\lambda _{z}=a_{r}\sigma _{z}/\sigma _{\tilde{y}}$, but moreover, due to the
way \cite{holston.etal:2017} implement MUE in Stage 2, leads to spuriously
large and excessively amplified estimates of $\lambda _{z}$. Since the
magnitude of $\lambda _{z}$ determines and drives the downward trending
behaviour of `\emph{other factor}' $z_{t}$, this misspecification is
consequential. Correcting their Stage 2 model and the MUE\ implementation
results in a substantial quantitative reduction in the point estimate of
$\lambda _{z}$, and hence also $\sigma _{z}$. For instance, using data ending
in 2017:Q1, \cites{holston.etal:2017} estimate of $\lambda _{z}$ is
$0.030217$ and yields an implied value of $0.150021$ for $\sigma _{z}$.
After the correction, $\lambda _{z}$ is estimated to be $0.000754$ with an
implied value for $\sigma _{z}$ of $0.003746$.\footnote{%
These are my replicated estimates using data up to 2017:Q1, but they are
effectively identical to those listed in Table 1, column 1 for the U.S. on
page S60 in \cite{holston.etal:2017}.} The resulting filtered (and smoothed)
estimates of $z_{t}$ are markedly different, with the one from the correct
Stage 2 implementation not only being very close to zero, but also highly
insignificant statistically. The $p-$values corresponding to the structural
break statistics from which $\lambda _{z}$ is estimated are of the order of
0.5. These results highlight that there is no evidence of
`\emph{other factor}' $z_{t}$ in this model. The large and persistent downward
trend in \cites{holston.etal:2017} estimates thus appears to be spurious.
In \Sref{sec:S2}, I outline in detail the Stage 2 model and the MUE
procedure that \cite{holston.etal:2017} implement to estimate $\lambda _{z}$.
I show that their Stage 2 model is misspecified and that due to this,
their MUE procedure cannot identify the ratio of interest $a_{r}\sigma
_{z}/\sigma _{\tilde{y}}$ from $\lambda _{z}$. Instead, it recovers $\lambda
_{z}=a_{r}\sigma _{z}/(\sigma _{\tilde{y}}+0.5a_{g}\sigma _{g})$ if
$(a_{g}+4a_{r})=0$. If $(a_{g}+4a_{r})\neq 0$, then additional parameters
enter the denominator of $\lambda _{z}$, making it more intricate to recover
$\sigma _{z}$ from $\lambda _{z}$, as it will be necessary to make
additional assumptions about the time series properties of the nominal
interest rate, which is not explicitly modelled by \cite{holston.etal:2017},
but rather added as an exogenous variable. The terms $a_{r}$ and $a_{g}$ are
the parameters on the lagged real interest rate and lagged trend growth in
the Stage 2 model of the output gap equation (see \Sref{sec:S2} for more
details). In the full model, these are restricted so that $a_{g}=-4a_{r}$.
In their specification of the Stage 2 model, \cite{holston.etal:2017} do not
impose this restriction. Moreover, they include only one lag of trend growth
$g_{t}$ in the output gap equation and, curiously, further add an intercept
term to the specification that is not present in the full model (see
equation \ref{S2:ytilde}). Since \cites{stock.watson:1998} MUE relies upon
\cite{chow:1960} type structural break tests to estimate $\lambda _{z}$,
these differences in the output gap specification lead to substantially
larger $F$ statistics (see \autoref{fig:seqaF} for a visual presentation)
and therefore estimates of $\lambda _{z}$. To demonstrate that their
misspecified Stage 2 model and MUE procedure leads to spurious and
excessively large estimates of $\lambda _{z}$ when the true value is zero, I
implement a simulation experiment in \Sref{sec:S2}. This simulation
experiment shows that the mean estimate of $\lambda _{z}$ can be as high as
$0.028842$, with a $45.7\%$ probability (relative frequency) of observing a
value larger than estimated from the empirical data, when computed from
simulated data which were generated from a model with the true $\lambda
_{z}=0$. These simulation results are concerning, because they suggest that
it is \cites{holston.etal:2017} MUE procedure itself that leads to the
excessively large estimates of $\lambda _{z}$, rather than the size of the
true $\lambda _{z}$ in the data.
Although \Sref{sec:S2} describes the core problem with
\cites{holston.etal:2017} estimation procedure, there are other issues with
the model and how it is estimated. Some of these are outlined in
\Sref{sec:other}.\ For instance, \cites{holston.etal:2017} estimates of the
natural rate, trend growth, `\emph{other factor}' $z_{t}$ and the output gap
are extremely sensitive to the starting date of the sample used to estimate
the model. Estimating the model with data beginning in 1972:Q1 (or 1967:Q1)
leads to negative estimates of the natural rate of interest toward the end
of the sample period. These negative estimates are again driven purely by
the exaggerated downward trending behaviour of `\emph{other factor}' $z_{t}$.
The 1972:Q1 sample start was chosen to match the starting date used in the
estimation of this model for the Euro Area. Out of the four countries that
\cites{holston.etal:2017} model is fitted to, only the Euro Area estimates
of the natural rate turn negative in 2013.\footnote{Only the Euro Area
estimates are based on a sample that starts in 1972:Q1,
while the estimates for the U.K., Canada and the U.S. are based on samples
starting in 1961:Q1.} The fact that it is also possible to generate such
negative estimates of the natural rate from \cites{holston.etal:2017} model
for the U.S. by simply adjusting the start of the estimation period to match
that of the Euro Area data suggests that the model is far from robust, and
therefore inappropriate for use in policy analysis. Furthermore, because
Kalman Filtered estimates of the natural rate of interest will be moving
averages of the observed variables that enter the state-space model, a
circular or confounding relationship between the natural rate and the
(nominal)\ policy rate will arise: any central bank induced change
in the policy target will be mechanically transferred to the natural rate
via the Kalman Filtered estimate of the state vector. This makes it
impossible to address \textit{`causal'} questions regarding the relationship
between natural rates and policy rates.
Median Unbiased Estimation is neither well known nor widely used at policy
institutions. To give some background on the methodology, and to be able to
understand why \cites{holston.etal:2017} implementation of MUE\ in Stage 2
is unsound, I provide a concise but informative overview of
the methodology in \Sref{sec:MUE}.\ This section is essential for readers
unfamiliar with the estimator. It reviews and summarises the conditions under
which \emph{`pile-up'} at zero problems are likely to be encountered with Maximum
Likelihood Estimation (MLE) of such models. Namely, MLE is likely to
generate higher \emph{`pile-up'} at zero frequencies than MUE when the
initial conditions of the state vector are unknown and need to be estimated,
and when the true `\emph{signal-to-noise ratio}' is very small (close to
zero). Since \cite{holston.etal:2017} do not estimate the initial conditions
of the state vector, but instead use very tightly specified prior values,
and because their MUEs of the `\emph{signal-to-noise ratio}' are everything
else but very small in the context of MUE, it seems highly unlikely a priori
that MLE should generate higher \emph{`pile-up'} at zero probabilities than
MUE. From \cites{stock.watson:1998} simulation results we know that MLE
(with a diffuse prior) is substantially more efficient than MUE when the
\emph{signal-to-noise ratio}' is not extremely small. MLE should thus be
preferred as an estimator.
For reasons of completeness, I provide a comprehensive description of
\cites{holston.etal:2017} Stage 1 model and their first stage MUE
implementation in \Sref{sec:S1}. As in the Stage 2 model, I show
algebraically that their MUE procedure cannot recover the ratio $\sigma
_{g}/\sigma _{y^{\ast }}$ from $\lambda _{g}$ because the error term in the
first difference of the constructed trend variable $y_{t}^{\ast }$ in the
first stage model depends on the real interest rate, as well as `\emph{other
factor}' $z_{t}$ and trend growth $g_{t}$. This means that when the long-run
standard deviation from the MUE\ procedure is constructed, it will not only
equal $\sigma _{y^{\ast }}$ as required, but also depend on $\sigma _{z}$,
$\sigma _{g}$, as well as the long-run standard deviation of the real rate.
Rewriting a simpler version of the Stage 1 model in local level model form
also fails to identify the ratio of interest $\sigma _{g}/\sigma _{y^{\ast }}
$ from MUE of $\lambda _{g}$. The inability to recover the ratio $\sigma
_{g}/\sigma _{y^{\ast }}$ from the first stage model thus appears to be a
broader issue highlighting the unsuitability of MUE in this context. This
section also illustrates that it is empirically unnecessary to use MUE\ to
estimate $\sigma _{g}$ in the first stage model since MLE does not lead to
\emph{`pile-up'} at zero problems with $\sigma _{g}$, neither in the local
level model nor in the local linear trend (or unobserved component) model
form. Estimating $\sigma _{g}$ directly by MLE\ in the second and third
stages confirms this result, yielding in fact larger point estimates than
implied by the first stage MUE of $\lambda _{g}$ obtained from
\cites{holston.etal:2017} procedure. Readers not interested in the
computational intricacies and nuances of the Stage 1 model may skip this
section entirely, and only refer back to it as needed for clarification of
later results. The key contribution of this paper relates to the correct
estimation of $\lambda _{z}$ in \cites{holston.etal:2017} Stage 2 model and
its impact on the natural rate of interest through `\emph{other factor}'
$z_{t}$.
MUE of $\lambda _{z}$ based on the correctly specified Stage 2 model
suggests that there is no role for `\emph{other factor}' $z_{t}$ in this
model and given this data.\footnote{This result is in line with the MLE
based estimates of $\sigma _{z}$.
Furthermore, these results also carry over to the Euro Area, Canadian and
U.K. estimates of $z_{t}$ which are not reported here, but will be made
available on the author's webpage.} This brings the focus back to (the
estimates of) trend growth in this model. \cites{holston.etal:2017}
estimates give the impression that trend growth has markedly slowed since
the global financial crisis, particularly in the immediate aftermath of the
crisis. In panels (b) and (c) of \autoref{fig:HLW_factors}, I\ show plots of
\cites{holston.etal:2017} estimates of $g_{t}$ together with a few simple
alternative ones (annualized GDP growth is superimposed in panel (b)). Trend
growth is severely underestimated from 2009:Q3 onwards. From the robust
(median) estimates of average GDP growth over the various expansion periods
shown in \autoref{tab:sumstatGDP}, trend growth is only approximately 25
basis points lower at 2.25\% since 2009:Q3 than over the pre-financial
crisis expansion from 2002:Q1 to 2007:Q4.\footnote{GDP\ growth is close to
being serially uncorrelated over the last two
expansion periods, with low variances.} Survey-based 10-year-ahead
expectations of annualized real GDP\ growth plotted in
\autoref{Afig:SPF_GDP_growth} and \autoref{Afig:giglio_GDP_growth} also suggest that
trend growth remained stable (these plots are discussed further in
\Sref{sec:other}). The key point to take away from this discussion is that
\cites{holston.etal:2017} (one sided) Kalman Filter based estimate of
$g_{t}$ is excessively \emph{`pulled down'} by the large decline in GDP
during the financial crisis, and this strongly and adversely affects the
estimate of trend growth for many periods \emph{after} the crisis.
The rest of the paper is organised as follows. In \Sref{sec:model},
\cites{holston.etal:2017} structural model of the natural rate of interest
is described. \Sref{sec:MUE} gives a concise background to
\cites{stock.watson:1998} Median Unbiased Estimation. In \Sref{sec:HLW}, I
provide a detailed description of the Stage 1 and Stage 2 models, and report
the results of the full Stage 3 model estimates. Some additional issues with
the model are discussed in \Sref{sec:other}, and \Sref{sec:conclusion}
concludes the study.
\section{Holston, Laubach and Williams' (2017) Model \label{sec:model}}
\citeallauthors{holston.etal:2017} use the following \textit{`structural'}
model to estimate the natural rate of interest:\footnote{In what follows, I
use the same notation as in \cite{holston.etal:2017} (see
equations 3 to 9 on pages S61 to S63) to facilitate a direct comparison.
Also note that this model builds on an earlier specification of
\cite{laubach.williams:2003}, where trend growth $g_{t}$ is scaled by another
parameter $c$, and where also a stationary AR(2) process for the
\emph{`other factor'} $z_{t}$ was considered in addition to the $I(1)$
specification in \ref{z}.} \bsq\label{eq:hlw}\vsp[-0]
\begin{align}
\text{Output}& \text{:} & y_{t}& =y_{t}^{\ast }+\tilde{y}_{t} \label{gdp} \\
\text{Inflation}& \text{:} & \pi _{t}& =b_{\pi }\pi _{t-1}+(1-b_{\pi })\pi
_{t-2,4}+b_{y}\tilde{y}_{t-1}+\varepsilon _{t}^{\pi } \label{AS} \\
\text{Output gap}& \text{:} & \tilde{y}_{t}& =a_{y,1}\tilde{y}_{t-1}+a_{y,2}%
\tilde{y}_{t-2}+\tfrac{a_{r}}{2}[\left( r_{t-1}-r_{t-1}^{\ast }\right)
+\left( r_{t-2}-r_{t-2}^{\ast }\right) ]+\varepsilon _{t}^{\tilde{y}}
\label{IS} \\
\text{Output trend}& \text{:} & y_{t}^{\ast }& =y_{t-1}^{\ast
}+g_{t-1}+\varepsilon _{t}^{y^{\ast }} \label{y*} \\
\text{Trend growth}& \text{:} & g_{t}& =g_{t-1}+\varepsilon _{t}^{g}
\label{g} \\
\text{Other factor}& \text{:} & z_{t}& =z_{t-1}+\varepsilon _{t}^{z},
\label{z}
\end{align}
\esq where $y_{t}$ is 100 times the (natural) log of real GDP, $y_{t}^{\ast
} $ is the permanent or trend component of GDP, $\tilde{y}_{t}$ is its
cyclical component, $\pi _{t}$ is annualized quarter-on-quarter PCE
inflation, and $\pi _{t-2,4}=\left( \pi _{t-2}+\pi _{t-3}+\pi _{t-4}\right)
/3$. The real interest rate $r_{t}$ is computed as:
\begin{equation}
r_{t}=i_{t}-\pi _{t}^{e}, \label{r}
\end{equation}
where expected inflation is constructed as
\begin{equation}
\pi _{t}^{e}=(\pi _{t}+\pi _{t-1}+\pi _{t-2}+\pi _{t-3})/4 \label{pi}
\end{equation}
and $i_{t}$ is the \emph{exogenously} determined nominal interest rate, the
federal funds rate.
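As a minimal illustration (mine, not \cites{holston.etal:2017} code), the
constructions in \ref{r} and \ref{pi} amount to a few lines of Matlab, where
\texttt{infl} and \texttt{ffr} are assumed to be quarterly column vectors
holding $\pi _{t}$ and $i_{t}$:
\begin{verbatim}
% Expected inflation as a 4-quarter moving average of annualized inflation,
% and the ex ante real rate r_t = i_t - pi_e_t.
pi_e      = filter(ones(4,1)/4, 1, infl);  % (pi_t + ... + pi_{t-3})/4
pi_e(1:3) = NaN;                           % first 3 windows are incomplete
r         = ffr - pi_e;                    % real interest rate
\end{verbatim}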
The natural rate of interest $r_{t}^{\ast }$ is computed as the sum of trend
growth $g_{t}$ and \emph{`other factor'} $z_{t}$, both of which are $I(1)$
processes. The real interest rate gap is defined as $\tilde{r
_{t}=(r_{t}-r_{t}^{\ast })$. The error terms $\varepsilon _{t}^{\ell
},\forall \ell =\{\pi ,\tilde{y},y^{\ast }\hsp[-1],g,z\}$ are assumed to be
i.i.d$ normal distributed, mutually uncorrelated, and with time-invariant
variances denoted by $\sigma _{\ell }^{2}$. Notice from \ref{AS} that
inflation is restricted to follow an integrated AR(4) process. From the
description of the data, we can see that the nominal interest rate $i_{t}$
as well as inflation $\pi _{t}$ are defined in annual or annualized terms,
while output, and hence the output gap, trend and trend growth in output are
defined at a quarterly rate. Due to this measurement mismatch,
\cite{holston.etal:2017} adjust the calculation of the natural rate in their code
so that trend growth $g_{t}$ is scaled by 4 whenever it enters equations
that relate it to annualized variables. The natural rate is thus in fact
computed as $r_{t}^{\ast }=4g_{t}+z_{t}$.\footnote{This generates some
confusion when working with the model, as it is not
clear whether the estimated $z_{t}$ factor is to be interpreted at an annual
or quarterly rate.} In the descriptions that follow, I\ will use the
annualized $4g_{t}$ trend growth rate whenever it is important to highlight
a result or in some of the algebraic derivations, and will leave the
equations in \ref{eq:hlw} as in \cite{holston.etal:2017} otherwise for ease
of comparability.
\cite{holston.etal:2017} argue that due to \textit{`pile-up'} at zero
problems with Maximum Likelihood (ML) estimation of the variances of the
innovation terms $\varepsilon _{t}^{g}$ and $\varepsilon _{t}^{z}$ in
\ref{eq:hlw}, estimates of $\sigma _{g}^{2}$ and $\sigma _{z}^{2}$ are
\textquotedblleft \textit{likely to be biased towards zero}%
\textquotedblright\ (page S64). To avoid such \textit{`pile-up'} at zero
problems, they employ Median Unbiased Estimation (MUE)\ of
\cite{stock.watson:1998} in two preliminary steps --- Stage 1 and Stage 2 --- to
get estimates of what they refer to as `\emph{signal-to-noise
ratios}' defined as $\lambda _{g}=\sigma _{g}/\sigma _{_{y^{\ast }}}$ and
$\lambda _{z}=a_{r}\sigma _{z}/\sigma _{\tilde{y}}$. In Stage 3, the
remaining parameters of the full model in \ref{eq:hlw} are estimated,
conditional on the median unbiased estimates $\hat{\lambda}_{g}$ and
$\hat{\lambda}_{z}$ obtained in Stages 1 and 2, respectively.
In the above description, I\ intentionally differentiate between the
`\emph{signal-to-noise ratio}' terminology of \cite{holston.etal:2017} and
the one used in \cite{harvey:1989} and in the broader literature on
state-space models and exponential smoothing, where the signal-to-noise
ratio would be defined as $\sigma _{y^{\ast }}/\sigma _{\tilde{y}}$ or
$\left( \sigma _{g}/\sigma _{\tilde{y}}\right) $ from the relations in
\ref{eq:hlw}.\footnote{As noted on page 337 in \cite{harvey:2006}, the
signal-to-noise ratio
\textquotedblleft \emph{plays the key role in determining how observations
should be weighted for prediction and signal extraction.}\textquotedblright}
To be more explicit, in the context of the classic local level model of
\cite{muth:1960}:\bsq\label{eq:ll}
\begin{align}
y_{t}& =\mu _{t}+\varepsilon _{t} \\
\mu _{t}& =\mu _{t-1}+\eta _{t},
\end{align}
\esq the signal-to-noise ratio is computed as $\sigma _{\eta }/\sigma
_{\varepsilon }$. In the extended version of the model in \ref{eq:ll} known
as the local linear trend model: \bsq\label{eq:llt}
\begin{align}
y_{t}& =\mu _{t}+\varepsilon _{t} \\
\mu _{t}& =\mu _{t-1}+\tau _{t-1}+\eta _{t} \\
\tau _{t}& =\tau _{t-1}+\zeta _{t},
\end{align}
\esq two signal-to-noise ratios, namely $\sigma _{\eta }/\sigma
_{\varepsilon }$ and $\sigma _{\zeta }/\sigma _{\varepsilon }$, can be
formed.\footnote{The processes $\varepsilon _{t},\eta _{t}$ and $\zeta _{t}$ are uncorrelated
white noise. These two state-space formulations are described in more detail
in Chapters 2 and 4 of \cite{harvey:1989}. \cite{harvey:1989} also shows how
to derive their relation to simple and double exponential smoothing models.}
Note here that the model of \cite{holston.etal:2017} in \ref{eq:hlw} is
essentially an extended and more flexible version of the local linear trend
model in \ref{eq:llt}. Referring to $\lambda _{g}=\sigma _{g}/\sigma
_{_{y^{\ast }}}$ as a signal-to-noise ratio as \cite{holston.etal:2017} do
is thus rather misleading, since it would correspond to $\sigma _{\zeta
}/\sigma _{\eta }$ in the local linear trend model in \ref{eq:llt}, which
has no relation to the traditional signal-to-noise ratio terminology of
\cite{harvey:1989} and others in this literature.\footnote{Readers familiar
with the \cite{hodrick.prescott:1997} (HP) filter will
recognize that the local linear trend model in \ref{eq:llt} --- with the
extra \emph{`smoothness'} restriction $\sigma _{\eta }=0$ --- defines the
state-space model representation of the HP\ filter, where the square of the
inverse of the signal-to-noise ratio ($\sigma _{\varepsilon }^{2}/\sigma
_{\zeta }^{2}$ in \ref{eq:llt} or equivalently $\sigma _{\tilde{y}}^{2}/\sigma
_{g}^{2}$ in \ref{eq:hlw}) is the HP\ filter smoothing
parameter that is frequently set to $1600$ in applications involving
quarterly GDP data.}
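As a simple numerical companion to \ref{eq:ll} and \ref{eq:llt} (a sketch of
my own, not taken from any of the cited papers), the local linear trend model
with the \emph{`smoothness'} restriction $\sigma _{\eta }=0$ can be simulated
in a few lines of Matlab, with $(\sigma _{\varepsilon }/\sigma _{\zeta })^{2}$
giving the implied HP smoothing parameter:
\begin{verbatim}
% Simulate the smooth-trend local linear trend model (sigma_eta = 0).
rng(1); T = 200;
sig_eps = 1; sig_zeta = 1/40;        % (sig_eps/sig_zeta)^2 = 1600
zeta = sig_zeta*randn(T,1);
tau  = cumsum(zeta);                 % slope: tau_t = tau_{t-1} + zeta_t
mu   = cumsum([0; tau(1:end-1)]);    % level: mu_t = mu_{t-1} + tau_{t-1}
y    = mu + sig_eps*randn(T,1);      % observed series
HPlambda = (sig_eps/sig_zeta)^2;     % = 1600, the usual quarterly value
\end{verbatim}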
Before the three stage procedure of \cite{holston.etal:2017} is described, I
outline in detail how \cites{stock.watson:1998} median unbiased estimator is
implemented, what normalization assumptions it imposes, and how look-up
tables for the construction of the estimator are computed. I also include a
replication of \cites{stock.watson:1998} empirical estimation of trend
growth of U.S. real GDP per capita. Although the section that follows below
may seem excessively detailed, long, and perhaps unnecessary, the intention
here is to provide the reader with an overview of how median unbiased
estimation is implemented, what it is intended for, and when one can expect
to encounter \emph{`pile-up'} at zero problems to materialize. Most
importantly, it should highlight that \emph{`pile-up'} at zero is not a
problem in the general sense of the word, but rather only a nuisance in
situations when it is necessary to distinguish between \emph{very} small
variances and ones that are zero.
\section{\cites{stock.watson:1998} Median Unbiased Estimation \label{sec:MUE}}
\cite{stock.watson:1998} proposed Median Unbiased Estimation (MUE) in the
general setting of Time Varying Parameter (TVP) models. TVP models are
commonly specified in a way that allows their parameters to change gradually
or smoothly over time. This is achieved by defining the parameters to evolve
as driftless random walks (RWs), with the variances of the innovation terms
in the RW equations assumed to be small. One issue with Kalman Filter based
ML estimation of such models is that estimates of these variances can
frequently \emph{`pile-up'} at zero when the true error variances are
\emph{`very'} small, but nevertheless, non-zero.\footnote{See the discussion
in Section 1 of \cite{stock.watson:1998} for additional
motivation and explanations. As the title of \cites{stock.watson:1998} paper
suggests, MUE was introduced for \textquotedblleft \textit{coefficient
variance estimation in TVP models}\textquotedblright\ when this variance is
expected to be small.}
\cite{stock.watson:1998} show simulation evidence of \emph{`pile-up'} at
zero problems with Kalman Filter based ML estimation in Table 1 on page 353
of their paper. In their simulation set-up, they consider the following data
generating process for the series $GY_{t}$:\footnote{See their GAUSS\ files
\texttt{TESTCDF.GSS} and \texttt{ESTLAM.GSS} for
details on the data generating process, which are available from Mark
Watson's homepage at \url{http://www.princeton.edu/~mwatson/ddisk/tvpci.zip}.}
\bsq\label{eq:tvp_sim}
\begin{align}
GY_{t}& =\beta _{t}+\varepsilon _{t} \\
\beta _{t}& =\beta _{t-1}+(\lambda /T)\eta _{t},
\end{align}
\esq where $\varepsilon _{t}$ and $\eta _{t}$ are drawn from $i.i.d.$
standard normal distributions, $\beta _{00}$ is initialized at 0, and the
sample size is held fixed at $T=500$ observations, using $5000$
replications. The $\lambda $ values that determine the size of the variance
of $\Delta \beta _{t}$ are generated over a grid from 0 to 30, with unit
increments.\footnote{To be precise, in their GAUSS\ code, \cite{stock.watson:1998} use a range
from 0 to 80 for $\lambda $, with finer step sizes for lower $\lambda $
values (see, for instance, the file \texttt{TESTCDF.GSS}). That is, $\lambda
$ is a sequence from 0 to 30 with increments of 0.25, then 0.5 unit
increments from 30 to 60, and unit increments from 60 to 80. In Tables 1 to
3 of their paper, results are reported for $\lambda $ values from 0 up to
$30$ only, with unit increments.} Four median unbiased estimators relying on
four different structural break test statistics are compared to two ML
estimators. The first ML estimator, referred to as the maximum profile
likelihood estimator (MPLE), treats the initial state vector as an unknown
parameter to be estimated. The second estimator, the maximum marginal
likelihood estimator (MMLE), treats the initial state vector as a Gaussian
random variable with a given mean and variance. When the variance of the
integrated part of the initial state vector goes to infinity, MMLE produces
a likelihood with a diffuse prior.
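For concreteness, one draw from the DGP in \ref{eq:tvp_sim} can be generated
as follows (a Matlab sketch of the set-up just described, not a translation
of their GAUSS files):
\begin{verbatim}
% One simulated series from GY_t = beta_t + e_t,
% beta_t = beta_{t-1} + (lambda/T)*eta_t, with beta_0 = 0 and T = 500.
rng(0); T = 500; lambda = 10;
eta  = randn(T,1); e = randn(T,1);    % i.i.d. standard normal shocks
beta = cumsum((lambda/T)*eta);        % random walk with small innovations
GY   = beta + e;                      % observed series
\end{verbatim}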
How one treats the initial condition in the Kalman Filter recursions matters
substantially for the \emph{`pile-up'} at zero problem with MLE. This fact
has been known, at least, since the work of
\cite{shephard.harvey:1990}.\footnote{On page 340,
\cite{shephard.harvey:1990} write: \emph{%
\textquotedblleft \ldots we show that the results for the fixed and known
start-up and the diffuse prior are not too different. However, in Section 4
we demonstrate that the sampling distribution of the ML estimator will
change dramatically when we specify a fixed but unknown start-up
procedure.\textquotedblright } Their Tables II and III quantify how much
worse the ML estimator that attempts to estimate the initial condition in
the local level model performs compared to MLE with a diffuse prior.} The
simulation results reported in Table 1 on page 353 in
\cite{stock.watson:1998} show that \emph{`pile-up'} at zero frequencies are
\emph{considerably} lower when MMLE with a diffuse prior is used than for MPLE,
which estimates the initial state vector. For instance, for the smallest
considered non-zero population value of $\lambda =1$, which implies a
standard deviation of $\Delta \beta _{t}$ ($\sigma _{\Delta \beta }$
henceforth)\ of $\lambda /T=1/500=0.002$, MMLE produces an at most $14$
percentage points higher \emph{`pile-up'} at zero frequency than MUE (ie.,
$0.60$ or $60\%$ for MMLE versus $0.46$ or $46\%$ for MUE based on the
\cite{quandt:1960} Likelihood Ratio, henceforth QLR, structural break test
statistic).\footnote{The four different MUEs based on the different structural break tests appear
to perform equally well.} For MPLE, this frequency is $45$ percentage points
higher at $0.91$ ($91\%$). At $\lambda =5$ ($\sigma _{\Delta \beta }=0.01$)
and $\lambda =10$ ($\sigma _{\Delta \beta }=0.02$), these differences in the
\emph{`pile-up'} at zero frequencies reduce to $11$ and $4$ percentage
points, respectively, for MMLE, but remain still sizeable for MPLE. At
\lambda =20$ ($\sigma _{\Delta \beta }=0.04$), the \emph{`pile-up'} at zero
problem disappears nearly entirely for MMLE\ and MUE, with \emph{`pile-up'}
frequencies dropping to $2$ and $1$ percentage points, respectively, for
these two estimators, staying somewhat higher at $7$ percentage points for
MPLE.
Using MUE instead of MLE to mitigate \emph{`pile-up'} at zero problems
comes, nevertheless, at a cost; that is, a loss in estimator efficiency
whenever $\lambda $ (or $\sigma _{\Delta \beta }$) is not \emph{very} small.
From Table 2 on page 353 in \cite{stock.watson:1998}, which shows the
asymptotic relative efficiency of MUE (and MPLE) relative to MMLE, it is
evident that for true $\lambda $ values of $10$ or greater $(\sigma _{\Delta
\beta }\geq 0.02)$, the 4 different MUEs yield asymptotic relative
efficiencies (AREs) as low as 0.65 (see the results under the $L$ and
\textrm{MW} columns in Table 2).\footnote{The QLR structural break test seems to be the most efficient among the MUEs,
yielding the highest AREs across the various MUE implementations.} This
means that MMLE only needs $65\%$ of MUE's sample size to achieve the same
probability of falling into a given null set. Only for very small values of
$\lambda \leq 4$ $(\sigma _{\Delta \beta }\leq 0.008)$ are the AREs of MUE\
and MMLE of a similar magnitude, ie., close to 1, suggesting that both
estimators achieve approximately the same precision.
Three important points are to be taken away from this review of the
simulation results reported in \cite{stock.watson:1998}. First, with MLE,
\emph{`pile-up'} at zero frequencies are substantially smaller when the
initial state vector is treated as a known fixed quantity or when a diffuse
prior is used, which is the case with MMLE (but not with MPLE). Second,
\emph{`pile-up'} at zero frequencies of MMLE\ are at most $4$ percentage
points higher than those of MUE once $\lambda \geq 10$ ($\sigma _{\Delta
\beta }=0.02$). Third, MUE can be considerably less efficient than MMLE, in
particular for \emph{`larger'} values of $\lambda \geq 10$ ($\sigma _{\Delta
\beta }=0.02$). This suggests that MLE\ with a diffuse prior should be
preferred whenever MUE\ based estimates of $\lambda $ (or $\sigma _{\Delta
\beta }$) are \emph{`large'} enough to indicate that \emph{`pile-up'} at
zero problems are unlikely to materialize.
To provide the reader with an illustration of how MUE is implemented, and
how its estimates compare to the two maximum likelihood based procedures
(MPLE and MMLE), I replicate the empirical example in Section 4 of
\cite{stock.watson:1998} which provides estimates of trend growth of U.S. real
GDP per capita over the period from 1947:Q2 to 1995:Q4. Note that trend
growth in GDP is one of the two components that make up the real natural
rate $r_{t}^{\ast }$ in \cite{holston.etal:2017}. It is thus beneficial to
illustrate the implementation of MUE in this specific context, rather than
in the more general setting of time varying parameter models.
\subsection{Median unbiased estimation of U.S. trend growth \label{subsec:MUE}}
\cite{stock.watson:1998} use the following specification to model the
evolution of annualized trend growth in real per capita GDP for the U.S.,
denoted by $GY_{t}$ below:\footnote
That is, $GY_{t}=400\Delta \ln (\text{real per capita GDP}_{t})$, where
\Delta $ is the first difference operator (see Section 4 on page 354 in \cit
{stock.watson:1998}). I again follow their notation as closely as possible
for comparability reasons.} \bsq\label{eq:sw98}
\begin{align}
GY_{t}& =\beta _{t}+u_{t} \label{eq:sw1} \\
\Delta \beta _{t}& =(\lambda /T)\eta _{t} \label{eq:swRW} \\
a(L)u_{t}& =\varepsilon _{t}, \label{eq:sw3}
\end{align}
\esq where $a(L)$ is a (\emph{`stationary'}) lag polynomial with all roots
outside the unit circle, $\lambda $ is the parameter of interest, $T$ is the
sample size, and $\eta _{t}$ and $\varepsilon _{t}$ are two uncorrelated
disturbance terms, with variances $\sigma _{\eta }^{2}$ and $\sigma
_{\varepsilon }^{2}$, respectively. The growth rate of per capita GDP is
thus composed of a stationary component $u_{t}$ and a random walk component
$\beta _{t}$ for trend growth. \cite{stock.watson:1998} set $a(L)$ to a
$4^{th}$ order lag-polynomial, so that $u_{t}$ follows an AR(4) process. The
model in \ref{eq:sw98} can be recognized as the local level model of
\cite{muth:1960} defined earlier in \ref{eq:ll}, albeit with the generalisation
that $u_{t}$ follows an AR(4)\ process, rather than white noise. Being in
the class of local level models means that the estimate of trend growth will
be an exponentially weighted moving average (EWMA) of $GY_{t}$.\footnote{%
\cite{stock.watson:1998} offer a discussion of the rationale behind the
random walk specification of trend growth in $GY_{t}$ in the second
paragraph on the left of page 355. Without wanting to get into a technical
discussion, one might want to view the random walk specification of trend
growth $\beta _{t}$ as a purely statistical tool to allow for a slowly
changing mean, rather than interpreting trend growth as an $I(1)$ process.}
It is important to highlight here that \cites{stock.watson:1998} discussion
of the theoretical results of the estimator in Sections $2.2-2.3$ of their
paper emphasizes that MUE of $\lambda $ in the model in \ref{eq:sw98} is
only possible with the \textquotedblleft \textit{normalisation }$\mathbf{D}=1$%
\textquotedblright . They write at the top of page 351\ (right column):
\textquotedblleft \textit{Henceforth, when }$k=1$\textit{, we thus set }%
$\mathbf{D}=1$\textit{. When }$\mathbf{X}_{t}=1$\textit{, under this
normalization, }$\lambda $\textit{\ is }$T$\textit{\ times the ratio of the
long-run standard deviation of }$\Delta \beta _{t}$\textit{\ to the long run
standard deviation of }$u_{t}$\textit{.}\textquotedblright \footnote{The
parameter $k$ here refers to the column dimension of regressor vector
$\mathbf{X}_{t}$. When $k=1$, then only a model with an intercept is fitted,
ie., $\mathbf{X}_{t}$ contains only a unit constant and no other regressors.}
Denoting the long-run standard deviation of a stochastic process by $\bar{\sigma}(\cdot )$, this means that
\begin{equation}
\lambda =T\frac{\bar{\sigma}(\Delta \beta _{t})}{\bar{\sigma}(u_{t})}=T\frac{\sigma _{\Delta \beta }}{\sigma _{\varepsilon }/a(1)}, \label{eq:lambda}
\end{equation}
or alternatively, expressed in signal-to-noise ratio form as used by \cite{holston.etal:2017}
\begin{equation}
\frac{\lambda }{T}=\frac{\bar{\sigma}(\Delta \beta _{t})}{\bar{\sigma}(u_{t})}=\frac{\sigma _{\Delta \beta }}{\sigma _{\varepsilon }/a(1)},
\label{eq:s2n}
\end{equation}
where $\bar{\sigma}(u_{t})=\sigma _{\varepsilon }/a(1)$ since $u_{t}$
follows a stationary AR(4) process, $a(1)=(1-\sum_{i=1}^{4}a_{i})$, and
$\bar{\sigma}(\Delta \beta _{t})=\sigma _{\Delta \beta }$ due to $\eta _{t}$
being $i.i.d.$, yielding further the relation $\sigma _{\Delta \beta
}=(\lambda /T)\sigma _{\eta }$. As a result of the identifying
\textquotedblleft \textit{normalization }$\mathbf{D}=1$\textquotedblright\
of MUE, \ref{eq:s2n} implies that $\sigma _{\eta }=\sigma _{\varepsilon
}/a(1)$. That is, the long-run standard deviation of the stationary
component $u_{t}$ is equal to the standard deviation of the trend growth
innovations $\eta _{t}$.
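The mapping from a look-up table value of $\lambda $ to $\sigma _{\Delta \beta }$ implied by \ref{eq:lambda} is a one-line computation. The following R snippet illustrates it with purely hypothetical placeholder inputs (none of these numbers are estimates from \cite{stock.watson:1998}):
\begin{verbatim}
# Illustrative only: hypothetical inputs, not values from the paper.
T      <- 195     # sample size
sig.e  <- 0.90    # hat(sigma)_epsilon from the AR(4) residuals
a1     <- 0.60    # hat(a)(1) = 1 - sum of the AR(4) coefficients
lambda <- 4       # a value read off a look-up table
(sigma.db <- lambda / T * sig.e / a1)  # implied sigma_{Delta beta}
\end{verbatim}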
\cite{stock.watson:1998} write on page 354: \textquotedblleft \textit{Table
3 is a lookup table that permits computing median unbiased estimates, given
a value of the test statistic. The normalization used in Table 3 is that
$\mathbf{D}=1$, and users of this lookup table must be sure to impose this
normalization when using the resulting estimator of $\lambda $.}
\textquotedblright\ Moreover, the numerical results that are reported in\
Section 3, which is appropriately labelled \textquotedblleft \textit{Numerical Results for the univariate Local-Level Model}\textquotedblright ,
are obtained from simulations that employ the local level model of \ref{eq:tvp_sim} as the data generating process (see \cites{stock.watson:1998}
GAUSS\ programs \texttt{ESTLAM.GSS}, \texttt{TESTCDF.GSS}, and \texttt{LOOKUP.GSS} in the \texttt{tvpci.zip} file that accompanies their paper).
These numerical results include not only the simulations regarding \emph{`pile-up'} at zero frequencies reported in Table 1, the asymptotic power
functions plotted in Figure 1, and the AREs provided in Table 2 of \cite{stock.watson:1998}, but also the look-up tables for the construction of the
median unbiased estimator of $\lambda $ in Table 3. It must therefore be
kept in mind that these look-up table values are valid only for the
univariate local level model, or for models that can be (re-)written in
\textit{\textquotedblleft local level form\textquotedblright }.
\autoref{tab:sw98_T4} below reports the replication results of Tables 4 and
5 in \cite{stock.watson:1998}.\footnote{All computations are implemented in Matlab, using their GDP growth data
provided in the file \texttt{DYPC.ASC}. Note that I also obtained look-up
table values based on a finer grid of $\lambda $ values from their original
GAUSS file \texttt{LOOKUP.GSS} (commenting out the lines \texttt{if
(lamdat[i,1] .<= 30) .and (lamdat[i,1]-floor(lamdat[i,1]) .== 0);} in
\texttt{LOOKUP.GSS} to list look-up values for the entire grid of $\lambda $'s considered), rather than those listed in Table 3 on page 354 of their
paper, where the grid is based on unit increments in $\lambda $ from 0 to
30. I further changed the settings in the tolerance on the gradient in their
maximum likelihood (maxlik) library routine to \texttt{\_max\_GradTol = 1e-0
} and used the printing option \texttt{format /rd 14,14} for a more precise
printing of all results up to 14 decimal points. Lastly, there is a small
error in the construction of the lag matrix in the estimation of the AR(4)
model in file \texttt{TST\_GDP1.GSS} (see lines 40 to 47). The first column
in the \texttt{w} matrix is the first lag of the demeaned per capita trend
growth series, while columns 2 to 4 are the second to fourth lags of the
raw, that is, not demeaned per capita trend growth series. Correcting this
leads to mildly higher, yet still insignificant, point estimates of all
$\sigma_{\Delta\beta}$. For instance, the point estimate of
$\sigma_{\Delta\beta}$ based on \cites{nyblom:1989} $L$ statistic yields
$0.1501$, rather than $0.1303$, but remains statistically insignificant,
with the lower value of the confidence interval being 0. To exactly
replicate the results in \cites{stock.watson:1998}, I compute the lag matrix
as they do.} Columns one and two in the top half of \autoref{tab:sw98_T4}
show test statistics and $p-$values of the four structural break tests that
are considered: $i)$ \cites{nyblom:1989} $L$ test, $ii)$
\cites{andrews.ploberger:1994} mean Wald (MW) test, $iii)$
\cites{andrews.ploberger:1994} exponential Wald (EW) test, and $iv)$
\cites{quandt:1960} Likelihood ratio (QLR) test, together with corresponding
$p-$values.
As a reminder, the MW, EW and QLR tests are \cite{chow:1960} type structural
break tests, which test for a structural break in the unconditional mean of
a series at a given or known point in time. \cite{chow:1960} break tests
require a partitioning of the data into two sub-periods. When the break date
is unknown, these tests are implemented by rolling through the sample. To be
more concrete, denote by $\mathcal{Y}_{t}$ the series to be tested for a
structural break in the unconditional mean. Let the dummy variable
$D_{t}(\tau )=1$ if $t>\tau ,$ and $0$ otherwise, where $\tau =\{\tau
_{0},\tau _{0}+1,\tau _{0}+2,\ldots ,\tau _{1}\}$ is an index (or sequence)
of grid points between endpoints $\tau _{0}$ and $\tau _{1}$. As is common
in this literature, \cite{stock.watson:1998} set these endpoints at the
$15^{th}$ and $85^{th}$ percentiles of the sample size $T$, that is, $\tau
_{0}=0.15T$ and $\tau _{1}=0.85T$.\footnote{To be precise, $\tau _{0}$ is computed as $\mathtt{floor(0.15\ast T)}$ and
$\tau _{1}$ as $T-\tau _{0}$. Also, it is standard practice in the structural
break literature to trim out some upper/lower percentiles of the search
variable to avoid having too few observations at the beginning or at the end
of the sample in the 0 and 1 dummy regimes created by $D_{t}(\tau )$. In
fact, the large sample approximation of the distribution of the QLR test
statistic depends on $\tau _{0}$ and $\tau _{1}$. \cite{stock.watson:2011}
write on page 558: \textquotedblleft \emph{For the large-sample
approximation to the distribution of the QLR statistic to be a good one, the
sub-sample endpoints, }$\tau _{0}$ \emph{and} $\tau _{1}$\emph{, cannot be
too close to the beginning or the end of the sample}.\textquotedblright\
Employing endpoints other than the $15^{th}$ upper/lower percentile values
used by \cite{stock.watson:1998} in the simulation of the look-up table for
$\lambda $ is thus likely to affect the values provided in Table 3 of \cite{stock.watson:1998}, due to the endpoints' influence on the distribution of
the structural break test statistics.} For each $\tau \in \lbrack \tau
_{0},\tau _{1}]$, the following regression of $\mathcal{Y}_{t}$ on an
intercept term and $D_{t}(\tau )$ is estimated:
\begin{equation}
\mathcal{Y}_{t}=\zeta _{0}+\zeta _{1}D_{t}(\tau )+\epsilon _{t}, \label{Zt}
\end{equation}
and the $F$ statistic (the square of the $t-$statistic) on the point
estimate $\hat{\zeta}_{1}$ is constructed. The sequence $\{F(\tau )\}_{\tau
=\tau _{0}}^{\tau _{1}}$ of $F$ statistics is then utilized to compute the
MW, EW and QLR structural break test statistics needed in the implementation
of MUE. These are calculated as:\bsq\label{eq:breakTests}
\begin{align}
\mathrm{MW}& =\frac{1}{N_{\tau }}\sum\limits_{\tau =\tau _{0}}^{\tau
_{1}}F(\tau ) \\
\mathrm{EW}& =\ln \left( \frac{1}{N_{\tau }}\sum_{\tau =\tau _{0}}^{\tau
_{1}}\exp \left\{ \frac{1}{2}F(\tau )\right\} \right) \label{EW} \\
\mathrm{QLR}& =\max_{\tau \in \lbrack \tau _{0},\tau _{1}]}\{F(\tau
)\}_{\tau =\tau _{0}}^{\tau _{1}},
\end{align}
\esq where $N_{\tau }$ denotes the number of grid points in $\tau $.
\cites{nyblom:1989} $L$ test statistic is computed without sequentially
partitioning the data via the sum of squared cumulative sums of $\mathcal{Y}_{t}$. More specifically, let $\hat{\mu}_{\mathcal{Y}}$ denote the sample
mean of $\mathcal{Y}_{t}$, $\hat{\sigma}_{\mathcal{Y}}^{2}$ the sample
variance of $\mathcal{Y}_{t}$, and $\mathcal{\tilde{Y}}_{t}=\mathcal{Y}_{t}-\hat{\mu}_{\mathcal{Y}}$ the demeaned $\mathcal{Y}_{t}$ process.
\cites{nyblom:1989} $L$ statistic is then constructed as
\begin{equation}
L=T^{-1}\sum_{t=1}^{T}\vartheta _{t}^{2}/\hat{\sigma}_{\mathcal{Y}}^{2},
\label{eqL}
\end{equation}
where $\vartheta _{t}$ is the scaled cumulative sum of $\mathcal{\tilde{Y}}_{t}$, ie., $\vartheta _{t}=T^{-1/2}\sum_{s=1}^{t}\mathcal{\tilde{Y}}_{s} $.
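Since all four statistics are simple functions of the data, they can be computed in a few lines. The R function below is a hedged sketch written for this exposition (it is not an extract from either paper's code); its inputs are the series to be tested and the endpoint $\tau _{0}$:
\begin{verbatim}
# Sketch: MW, EW, QLR and Nyblom's L statistics for a series y.
break.stats <- function(y, tau0 = floor(0.15 * length(y))) {
  T    <- length(y)
  tau1 <- T - tau0
  Fstat <- sapply(tau0:tau1, function(tau) {
    D   <- as.numeric(seq_len(T) > tau)    # dummy D_t(tau)
    fit <- lm(y ~ D)                       # regression in (Zt)
    coef(summary(fit))["D", "t value"]^2   # F = squared t-statistic
  })
  vt <- cumsum(y - mean(y)) / sqrt(T)      # scaled cumulative sums
  s2 <- mean((y - mean(y))^2)              # sample variance
  c(MW  = mean(Fstat),
    EW  = log(mean(exp(Fstat / 2))),
    QLR = max(Fstat),
    L   = sum(vt^2) / (T * s2))
}
\end{verbatim}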
Median unbiased estimates of $\lambda $ based on \cites{stock.watson:1998}
look-up tables are reported in column 3 of \autoref{tab:sw98_T4}, followed
by respective 90\% confidence intervals (CIs) in square brackets. The last
two columns show estimates of $\sigma _{\Delta \beta }$ computed as $\hat{\sigma}_{\Delta \beta }=\hat{\lambda}/T\times \hat{\sigma}_{\varepsilon }/\hat{a}(1)$, with 90\% CIs also in square brackets.\ In the bottom half of
\autoref{tab:sw98_T4}, MLE\ and MUE based parameter estimates of the model
in \ref{eq:sw98} are reported. The columns under the MPLE and MMLE headings
show, respectively, MLE based results when the initial state vector is
estimated and when a diffuse prior is used. The diffuse prior for the $I(1)$
element of the state vector is centered at 0 with a variance of $10^{6}$.
The next two columns under the headings MUE(0.13) and MUE(0.62) report
parameter estimates of the model in \ref{eq:sw98} with $\sigma _{\Delta
\beta }$ held fixed at its MUE\ point estimate of $0.13$ and upper 90\% CI
value of $0.62$, respectively.
The last column under the heading SW.GAUSS lists the corresponding MUE(0.13)
estimates obtained from running \cites{stock.watson:1998} GAUSS\ code as
reference values.\footnote{
See the results reported in Table 5 on page 354 in \cite{stock.watson:1998},
where nevertheless only two decimal points are reported. MPLE and MMLE are
also replicated accurately to 6 decimal points.}
As can be seen from the results in \autoref{tab:sw98_T4}, consistent with
the \emph{`pile-up'} at zero problem documented in the simulations in \cite{stock.watson:1998} (and also \cite{shephard.harvey:1990}), the MPLE\
estimate of $\sigma _{\Delta \beta }$ goes numerically to zero (up to 11
decimal points), while MMLE produces a \textit{`sizeable'} point estimate of
$\sigma _{\Delta \beta }$ of $0.044$. Although \cite{stock.watson:1998} (and
also\ I) do not report a standard error for $\hat{\sigma}_{\Delta \beta }$
in the tables containing the estimation results, the estimate of $\mathrm{stderr}(\hat{\sigma}_{\Delta \beta })$ is $0.1520$, suggesting that $\hat{\sigma}_{\Delta \beta }$ is very imprecisely estimated.\footnote{\cite{stock.watson:1998} compute standard errors for the remaining MMLE\
parameters (see column three in the upper part of Table 5 on page 354 in
their paper). They write in the notes to Table 5: \textquotedblleft \textit{Because of the nonnormal distribution of the MLE of }$\lambda $, \textit{the
standard error for }$\sigma _{\Delta \beta }$ \textit{is not reported}.\textquotedblright\ Evidently, \textit{`testing'} the null hypothesis of
$\sigma _{\Delta \beta }=0$ using a standard $t-$ratio does not make any
sense statistically. Nevertheless, $\hat{\sigma}_{\Delta \beta }$ is very
imprecisely estimated, and highly likely to be \emph{`very'} close to zero.
The MMLE log-likelihood function with the restriction $\sigma _{\Delta \beta
}=0$ is $-547.5781$, while the (unrestricted) MMLE is $-547.4805$, with the
difference between the two being very small at about $0.10$.}
results reported in the first column of the top half of \autoref{tab:sw98_T4}
it is evident that all 4 structural break tests yield confidence intervals
for $\lambda $ and hence also $\sigma _{\Delta \beta }$ that include zero.
Thus, even when using MUE as the \textit{`preferred'} estimator, one would
conclude that $\hat{\lambda}$ and $\hat{\sigma}_{\Delta \beta }$ are \textit{not} statistically different from zero.
An evident practical problem with the use of \cites{stock.watson:1998} MUE\
is that the 4 different structural break tests can produce vastly different
point estimates of $\lambda $. This is clearly visible from \autoref{tab:sw98_T4}, where the 4 tests yield $\lambda $ estimates with an implied
$\hat{\sigma}_{\Delta \beta }$ range between $0.0250$ (for QLR)\ and
$0.1303$ (for $L$). From the simulation results in \cite{stock.watson:1998}
we know that all 4 tests seem to behave equally well in the \emph{`pile-up'}
at zero frequency simulations (see Table 1 in \cite{stock.watson:1998}).
However, the QLR test performed \emph{`best'} in the efficiency results,
producing the largest (closest to 1)\ asymptotic relative efficiencies in
Table 2 of \cite{stock.watson:1998}. Analysing these results in the context
of the empirical estimation of trend growth, the most accurate MUE estimator
based on the QLR\ structural break test produces an estimate of $\sigma
_{\Delta \beta }$ that is 5 times \textit{smaller} than the largest one
based on the $L$ structural break test, with the MMLE\ estimate of $\sigma
_{\Delta \beta } $ being approximately double the size of the QLR estimate.
To provide a visual feel of how different the MLE\ and MUE based estimates
of U.S. trend growth are, I show plots of the \emph{smoothed} estimates in
\autoref{fig:sw98_F4} (these correspond to Figures 4 and 3 in \cite{stock.watson:1998}). The top panel displays the MPLE, MMLE, MUE(0.13), and
MUE(0.62) estimates together with a 90\%\ CI of the MMLE estimate (shaded
area), as well as a dashed yellow line that shows \cites{stock.watson:1998}
GAUSS code based MUE(0.13) estimate for reference.\ The plot in the bottom
panel of \autoref{fig:sw98_F4} superimposes the actual $GY_{t}$ series to
portray the variability in the trend growth estimates relative to the
variation in the data from which these were extracted.\footnote{Notice from the top panel of \autoref{tab:sw98_T4} that there are four
different estimates of $\lambda $, and thus four $\hat{\sigma}_{\Delta \beta
}$. Rather than showing smoothed trend estimates for all four of these, I
follow \cite{stock.watson:1998} and only show estimates based on
\cites{nyblom:1989} $L$ statistic, which has the largest $\lambda $
estimate, and hence also the largest $\sigma _{\Delta \beta }$.} The $y-$axis range is
set as in Figures 4 and 3 in \cite{stock.watson:1998}. As can be seen from
\autoref{fig:sw98_F4}, there is only little variability in the MLE based
trend growth estimates, with somewhat more variation from MUE(0.13).
Nonetheless, all three trend growth point estimates stay within the 90\%
error bands of MMLE. Moreover, the plots in \autoref{fig:sw98_F4} confirm
the lack of precision of MUE. Trend growth could be anywhere between a
constant value of about $1.8\%$ ($\hat{\beta}_{00}$ from MPLE), which is a
flat line graphically when $\sigma _{\Delta \beta }$ is held fixed at its
lower $90\%$\ CI value of 0, and a rather volatile series which produces a
range between nearly $4.5\%$ in 1950 and less than $0.5\%$ in 1980 when
$\sigma _{\Delta \beta }$ is set at its upper $90\%$\ CI value of 0.62.
Given the previous results and discussion, one could argue that the
statistical evidence in support of any important time varying trend growth
in real U.S.\ GDP per capita is rather weak in this model and data set. As a
robustness check and in the context of a broader replication of the time
varying trend growth estimates of \cite{stock.watson:1998}, I obtain real
GDP per capita data from the Federal Reserve Economic Data (FRED2) database
and re-estimate the model. These results are reported in \autoref{tab:sw98_T4_2}, which is arranged in the same way as \autoref{tab:sw98_T4} (only the last
column with heading SW.GAUSS is removed). The sample period is again from
1947:Q2 to 1995:Q4, using an AR(4)\ model to approximate $u_{t}$ in \ref{eq:sw1}.\footnote{The results using an ARMA($2,2$) model for $u_{t}$ instead are qualitatively
the same.} From \autoref{tab:sw98_T4_2} it is clear that not only do the two
MLE based estimates of $\sigma _{\Delta \beta }$ yield point estimates that
are numerically equal to zero, but so do all 4 MUEs. Hence, trend growth may
well be constant. More importantly, it demonstrates that MUE\ can also lead
to zero estimates of $\sigma _{\Delta \beta }$ and that there is nothing
unusual about that.\footnote{I show later that the Stage 2 MUE procedure of \cite{holston.etal:2017} is
incorrectly implemented and based on a misspecified Stage 2 model. Once this
is corrected, the Stage 2 $\lambda _{z}$ that one obtains is very close to
zero, resulting in the full model MLE\ and MUE\ estimates being very similar.}
Before I proceed to describe how the three stage procedure of \cite{holston.etal:2017} is implemented, a brief procedural description of
\cites{stock.watson:1998} MUE\ that lists the main steps needed to replicate
the results reported in \autoref{tab:sw98_T4} and \autoref{fig:sw98_F4} is
provided below.
\begin{enum}
\item Fit an AR(4)\ model to $GY_{t}$, construct $\hat{a}(L)$ from the
estimated AR(4) coefficients $\left\{ \hat{a}_{j}\right\} _{j=1}^{4}$, and
filter the series to remove the AR(4) serial dependence. Let $\widetilde{GY}_{t}=\hat{a}(L)GY_{t}$ denote the AR(4)\ filtered series.\footnote{This is the generalized least squares (GLS) step in the original TVP model
description on page 350 in \cite{stock.watson:1998}.} Use the residuals
$\hat{\varepsilon}_{t}$ from the fitted AR(4)\ model for $GY_{t}$ to compute
an estimate of the standard deviation of $\varepsilon _{t}$ and denote it by
$\hat{\sigma}_{\varepsilon }$. Also, let $\hat{a}(1)=\big(1-\sum_{j=1}^{4}\hat{a}_{j}\big)$.
\item Test for a structural break in the unconditional mean of the AR(4)
filtered series $\widetilde{GY}_{t}$ using the four structural break tests
described above. That is, replace $\mathcal{Y}_{t}$ in \ref{Zt} with
$\widetilde{GY}_{t}$, run the dummy variable regression in \ref{Zt}, and
compute the structural break statistics as defined in \ref{eq:breakTests}
and \ref{eqL}.
\item Given these structural break test statistics, use the look-up values
provided in Table 3 on page 354 in \cite{stock.watson:1998} to find the
corresponding $\lambda $ value by interpolation. Once an estimate of
$\lambda $ is available, compute $\hat{\sigma}_{\Delta \beta }=T^{-1}\hat{\lambda}\hat{\sigma}_{\varepsilon }/\hat{a}(1)$, where $\hat{\sigma}_{\varepsilon }$ and $\hat{a}(1)$ are obtained from Step $(i)$ (see also the code sketch after this list).
\item With $\sigma _{\Delta \beta }$ held fixed at its median unbiased
estimate obtained in Step $(iii)$, estimate the remaining parameters of the
model in \ref{eq:sw98} using the Kalman Filter and MLE, namely, MPLE, where
the initial value is estimated as well. Finally, using the estimates of the
full set of parameters of the model in \ref{eq:sw98}, apply the Kalman
Smoother to extract an estimate of annualized trend growth of U.S. real per
capita GDP.
\end{enum}
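To make Steps $(i)$--$(iii)$ concrete, the R sketch below implements them under the assumption that the vector \texttt{GY} holds annualized per capita GDP growth; it reuses the \texttt{break.stats()} helper sketched earlier. Since \cites{stock.watson:1998} Table 3 values are not reproduced here, the interpolation in Step $(iii)$ is left as a comment:
\begin{verbatim}
# Step (i): AR(4) fit, filtered series, sigma_epsilon and a(1)
ar4   <- ar.ols(GY, aic = FALSE, order.max = 4)
a.hat <- as.numeric(ar4$ar)
GY.f  <- stats::filter(GY, c(1, -a.hat), method = "convolution",
                       sides = 1)          # GY.tilde_t = a(L) GY_t
GY.f  <- GY.f[!is.na(GY.f)]
sig.e <- sqrt(ar4$var.pred)                # hat(sigma)_epsilon
a1    <- 1 - sum(a.hat)                    # hat(a)(1)
# Step (ii): structural break tests on the filtered series
stats <- break.stats(GY.f)
# Step (iii): interpolate, e.g., stats["L"] in Table 3 of Stock and
# Watson (1998) to get lambda.hat, then:
sigma.db <- function(lambda.hat, T) lambda.hat / T * sig.e / a1
\end{verbatim}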
\section{Three stage estimation procedure of \protect\cite{holston.etal:2017}
\label{sec:HLW}}
\cite{holston.etal:2017} employ MUE in two preliminary stages that are based
on restricted versions of the full model in \ref{eq:hlw} to obtain estimates
of the \emph{`signal-to-noise ratios'} $\lambda _{g}=\sigma _{g}/\sigma
_{_{y^{\ast }}}$ and $\lambda _{z}=a_{r}\sigma _{z}/\sigma _{\tilde{y}}$.
These ratios are then held fixed in Stage 3 of their procedure, which
produces estimates of the remaining parameters of the model in \ref{eq:hlw}.
In order to conserve space in the main text, I provide all algebraic details
needed for the replication of the three individual stages in the
\aref{appendix}, which includes also some additional discussion as well as
R-Code extracts to show the exact computations. In the results that are
reported in this section, I have used their R-Code from the file \href{https://www.newyorkfed.org/medialibrary/media/research/economists/williams/data/HLW_Code.zip}{\texttt{HLW\_Code.zip}} made available on Williams' website at the New York
Fed to (numerically) accurately reproduce their results.\footnote{Williams' website at the Federal Reserve Bank of New York is at:
\url{https://www.newyorkfed.org/research/economists/williams/pub}. Their
R-Code is available from the website:
\url{https://www.newyorkfed.org/medialibrary/media/research/economists/williams/data/HLW_Code.zip}. The weblink to the file with their real time estimates is:
\url{https://www.newyorkfed.org/medialibrary/media/research/economists/williams/data/Holston_Laubach_Williams_real_time_estimates.xlsx}. Note here that all my results exactly match their estimates provided in
the \href{https://www.newyorkfed.org/medialibrary/media/research/economists/williams/data/Holston_Laubach_Williams_real_time_estimates.xlsx}{\texttt{Holston\_Laubach\_Williams\_real\_time\_estimates.xlsx}} file in
Sheet 2017Q1.} The sample period that I cover ends in 2017:Q1.\ The
beginning of the sample is the same as in \cite{holston.etal:2017}. That is,
it starts in 1960:Q1, where the first 4 quarters are used for initialisation
of the state vector, while the estimation period starts in 1961:Q1.
\cite{holston.etal:2017} adopt the general state-space model (SSM) notation
of \cite{hamilton:1994} in their three stage procedure. The SSM is
formulated as follows:\footnote{The state-space form that they use is described on pages 9 to 11 of their
online appendix that is included with the R-Code \texttt{HLW\_Code.zip} file
from Williams' website at the New York Fed. Note that I use exactly the same
state-space notation to facilitate the comparison to \cite{holston.etal:2017}, with the only exception being that I include one extra selection matrix
term $\mathbf{S}$ in front of $\boldsymbol{\epsilon }_{t}$ in \ref{eq:RQ} as
is common in the literature to match the dimension of the state vector to
$\boldsymbol{\epsilon }_{t}$ when there are identities due to lagged values.
I also prefer not to transpose the system matrices $\mathbf{A}$\textbf{\ }and\textbf{\ }$\mathbf{H}$ in \ref{eq:RQ}, as it is not necessary and does not
improve the readability.}
\begin{equation}
\begin{array}{l}
\mathbf{y}_{t}=\mathbf{Ax}_{t}+\mathbf{H}\boldsymbol{\xi }_{t}+\boldsymbol{\nu }_{t} \\
\boldsymbol{\xi }_{t}=\mathbf{F}\boldsymbol{\xi }_{t-1}+\mathbf{S}\boldsymbol{\varepsilon }_{t}
\end{array}
\text{, \ \ where }
\begin{bmatrix}
\boldsymbol{\nu }_{t} \\
\boldsymbol{\varepsilon }_{t}
\end{bmatrix}
\sim \mathsf{MNorm}\left(
\begin{bmatrix}
\boldsymbol{0} \\
\boldsymbol{0}
\end{bmatrix},
\begin{bmatrix}
\mathbf{R} & \boldsymbol{0} \\
\boldsymbol{0} & \mathbf{W}
\end{bmatrix}
\right) , \label{eq:RQ}
\end{equation}
where we can define $\boldsymbol{\epsilon }_{t}=\mathbf{S}\boldsymbol{\varepsilon }_{t}$, so that $\mathrm{Var}(\boldsymbol{\epsilon }_{t})=\mathrm{Var}(\mathbf{S}\boldsymbol{\varepsilon }_{t})=\mathbf{SWS}^{\prime }=\mathbf{Q}$ to make it consistent with the notation used in \cite{holston.etal:2017}. The (observed) measurement vector is denoted by
$\mathbf{y}_{t}$ in \ref{eq:RQ}, $\mathbf{x}_{t}$ is a vector of exogenous
variables, $\mathbf{A}$\textbf{, }$\mathbf{H}$ and $\mathbf{F}$ are
conformable system matrices, $\boldsymbol{\xi }_{t}$ is the latent state
vector, $\mathbf{S}$ is a selection matrix, and the notation $\mathsf{MNorm}\left( \boldsymbol{\mu },\boldsymbol{\Sigma }\right) $ denotes a
multivariate normal random variable with mean vector $\boldsymbol{\mu }$ and
covariance matrix $\boldsymbol{\Sigma }$. The disturbance terms $\boldsymbol{\nu }_{t}$ and $\boldsymbol{\varepsilon }_{t}$ are serially uncorrelated,
and the (individual) covariance matrices $\mathbf{R}$ and $\mathbf{W}$ are
assumed to be diagonal matrices, implying zero correlation between the
elements of the measurement and state vector disturbance terms. The
measurement vector $\mathbf{y}_{t}$ in \ref{eq:RQ} is the same for all three
stages and is defined as $\mathbf{y}_{t}=[y_{t},~\pi _{t}]^{\prime }$, where
$y_{t}$ and $\pi _{t}$ are the log of real GDP\ and annualized PCE
inflation, respectively, as defined in \Sref{sec:model}. The exact form of
the remaining components of the SSM\ in \ref{eq:RQ} changes with the
estimation stage that is considered, and is described in detail either in
the text below or in the \aref{appendix}.
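To fix the notation in \ref{eq:RQ}, a bare-bones filtering recursion is sketched below in R. This is a generic textbook Kalman filter written for this exposition, not an extract from \cites{holston.etal:2017} R-Code; all system matrices and the data (stored with one time period per column) are assumed given:
\begin{verbatim}
# Sketch: Kalman filter and log-likelihood for the SSM in (eq:RQ).
kalman.filter <- function(y, x, A, H, F, S, R, W, xi00, P00) {
  T <- ncol(y); Q <- S %*% W %*% t(S)
  xi <- xi00; P <- P00; ll <- 0
  for (t in 1:T) {
    xi.p <- F %*% xi                       # xi_{t|t-1}
    P.p  <- F %*% P %*% t(F) + Q           # P_{t|t-1}
    e    <- y[, t] - A %*% x[, t] - H %*% xi.p
    Sig  <- H %*% P.p %*% t(H) + R         # forecast error variance
    K    <- P.p %*% t(H) %*% solve(Sig)    # Kalman gain
    xi   <- xi.p + K %*% e                 # xi_{t|t}
    P    <- P.p - K %*% H %*% P.p          # P_{t|t}
    ll   <- ll - 0.5 * (log(det(2 * pi * Sig)) +
                        drop(t(e) %*% solve(Sig) %*% e))
  }
  list(xi.tt = xi, P.tt = P, loglik = ll)
}
\end{verbatim}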
As I have emphasized in the description of MUE in \Sref{sec:MUE}, the
simulation results of \cite{stock.watson:1998} show that \emph{`pile-up'} at
zero frequencies for MLE are not only a function of the size of the variance
of $\Delta \beta _{t}=(\lambda /T)\eta _{t}$ (or alternatively $\lambda $),
but also depend critically on whether the initial condition of the state
vector is estimated or not. Now \cite{holston.etal:2017} \emph{do not}
estimate the initial condition of the state vector in any of the three
stages that are implemented. Instead, they apply the HP filter to log GDP
data with the smoothing parameter set to $36000$ to get a preliminary
estimate of $y_{t}^{\ast }$ and trend growth $g_{t}$ (computed as the first
difference of the HP\ filter estimate of $y_{t}^{\ast }$) using data from
1960:Q1 onwards. \textit{`Other factor' }$z_{t}$ is initialized at 0.\footnote{See the listing in \coderef{R:stage3}{1} in the \hyperref[Rcode]{A.6~R-Code
Snippets} section of the \aref{appendix}, which shows the first 122 lines of
their R-file \texttt{rstar.stage3.R}. Line 30 shows the construction of the
initial state vector as $\boldsymbol{\xi }_{00}=[y_{0}^{\ast },y_{-1}^{\ast
},y_{-2}^{\ast },g_{-1},g_{-2},z_{-1},z_{-2}]^{\prime }$ where subscripts
$[0,-1,-2]$ refer to the time periods 1960:Q4, 1960:Q3, and 1960:Q2,
respectively. In terms of their R-Code, we have: \texttt{xi.00 <- c(100*g.pot[3:1],100*g.pot.diff[2:1],0,0)}, where \texttt{g.pot} is the
HP filtered trend and \texttt{g.pot.diff} is its first difference, ie.,
trend growth, with the two zeros at the end being the initialisation of
$z_{t}$. This yields the following numerical values: [806.45, 805.29, 804.12,
1.1604, 1.1603, 0, 0]. The same strategy is also used in the first two
stages (see their R-files \texttt{rstar.stage1.R} and \texttt{rstar.stage2.R}).\label{fn:1}} This means that $\boldsymbol{\xi }_{00}$ has known and fixed
quantities in all three stages.\ Given the simulation evidence provided in
Table 1 on page 353 in \cite{stock.watson:1998}, one may thus expect a
priori \emph{`pile-up'} at zero frequencies of MLE (without estimation of
the initial conditions) to be only marginally larger than those of MUE,
especially for everything but very small values of $\lambda $.
Also, \cite{holston.etal:2017} determine the covariance matrix of the
initial state vector in an unorthodox way. Even though every element of the
state vector $\boldsymbol{\xi }_{t}$ in all three estimation stages is an
$I(1)$ variable, they do not employ a diffuse prior on the state vector.
Instead, the covariance matrix is determined with a call to the function
\texttt{calculate.covariance.R} (see the code snippet in \coderef{R:covar}{2}
for details on this function, and also lines 66, 84, and 88, respectively,
in their R-files \texttt{rstar.stage1.R}, \texttt{rstar.stage2.R}, and
\texttt{rstar.stage3.R}, with line 88 in \texttt{rstar.stage3.R }also shown
on the second page of the code snippet in \coderef{R:stage3}{1}). To
summarize what this function does, consider the Stage 1 model, which is
estimated with a call to \texttt{rstar.stage1.R}. The function \texttt{calculate.covariance.R} first sets the initial covariance matrix to $0.2$
times a three dimensional identity matrix $\mathbf{I}_{3}$. Their procedure
then continues by using data from 1961:Q1 to the end of the sample to get an
estimate of $\sigma _{y^{\ast }}^{2}$ from the Stage 1 model. Lastly, the
initial covariance matrix $\mathbf{P}_{00}$ to be used in the \textit{`final'} estimation of the Stage 1 model is then computed as
\bsq\label{eq:P00S1}
\begin{align}
\mathbf{P}_{00}& =\mathbf{F}\,\mathrm{diag}([0.2,~0.2,~0.2])\,\mathbf{F}^{\prime }+\mathbf{\hat{Q}} \label{eq:P00S1a} \\
& =\begin{bmatrix}
1 & 0 & 0 \\
1 & 0 & 0 \\
0 & 1 & 0
\end{bmatrix}
\begin{bmatrix}
0.2 & 0 & 0 \\
0 & 0.2 & 0 \\
0 & 0 & 0.2
\end{bmatrix}
\begin{bmatrix}
1 & 0 & 0 \\
1 & 0 & 0 \\
0 & 1 & 0
\end{bmatrix}^{\prime }+
\begin{bmatrix}
\hat{\sigma}_{y^{\ast }}^{2} & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0
\end{bmatrix}
\label{eq:P00S1b} \\[2mm]
& =\begin{bmatrix}
0.4711 & 0.2 & 0.0 \\
0.2 & 0.2 & 0.0 \\
0.0 & 0.0 & 0.2
\end{bmatrix}, \label{eq:P00S1c}
\end{align}
\esq with $\mathbf{\hat{Q}}$ a $(3\times 3)$ dimensional zero matrix with
element $(1,1)$ set to $\hat{\sigma}_{y^{\ast }}^{2}=0.27113455739$ from the
initial run of the Stage 1 model. What this procedure effectively does is to
set $\mathbf{P}_{00}$ to the first time period's predicted state covariance
matrix, given an initial state covariance matrix of $0.2\times \mathbf{I}_{3}$ and the estimate $\hat{\sigma}_{y^{\ast }}^{2}$, where $\hat{\sigma}_{y^{\ast }}^{2}$ was obtained by MLE\ and the Kalman Filter using
$0.2\times \mathbf{I}_{3}$ as the initial state covariance. This way of
initialising $\mathbf{P}_{00}$ is rather circular, as it fundamentally
presets $\mathbf{P}_{00}$ at $0.2\times \mathbf{I}_{3}$.\footnote{In footnote 6 on page S64 in \cite{holston.etal:2017} (and also in the
description of the \texttt{calculate.covariance.R} file), they write:
\textquotedblleft \emph{We compute the covariance matrix of these states
from the gradients of the likelihood function.}\textquotedblright\ Given the
contents of the R-Code, it is unclear how and if this was implemented.}
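The circularity is easy to verify numerically: the short R sketch below reproduces \ref{eq:P00S1} using only the quantities quoted above.
\begin{verbatim}
# Reproduces (eq:P00S1): P00 = F diag(0.2) F' + Q.hat
F   <- rbind(c(1, 0, 0),
             c(1, 0, 0),
             c(0, 1, 0))
Q   <- diag(c(0.27113455739, 0, 0))  # hat(sigma)^2_{y*} in entry (1,1)
P00 <- F %*% diag(0.2, 3) %*% t(F) + Q
round(P00, 4)  # matches (eq:P00S1c), with 0.4711 in entry (1,1)
\end{verbatim}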
When the state vector contains $I(1)$ variables, it is not only standard
practice to use a diffuse prior, but it is highly recommended. For instance,
\cite{harvey:1989} writes on the bottom of page 121:
\textquotedblleft \textit{When the transition equation is non-stationary,
the unconditional distribution of the state vector is not defined. Unless
genuine prior information is available, therefore, the initial distribution
of }$\boldsymbol{\alpha }_{0}$\textit{\ \textbf{must} be specified in terms
of a diffuse or non-informative prior}.\textquotedblright\ (emphasis added,
$\boldsymbol{\alpha }_{t}$ is the state-vector in Harvey's notation). It is
not clear why \cite{holston.etal:2017} do not use a diffuse prior.\footnote{
In an earlier paper using a similar model for the NAIRU, \cite{laubach:2001}
discusses the use of diffuse priors. \cite{laubach:2001} writes on page
222:\ \textquotedblleft \emph{The most commonly used approach in the
presence of a nonstationary state is to integrate the initial value out of
the likelihood by specifying an (approximately) diffuse
prior.\textquotedblright\ }He then proceeds to describe an alternative
procedure that can be implemented by using: \textquotedblleft \emph{a few
initial observations to estimate the initial state by GLS, and use the
covariance matrix of the estimator as initial value for the conditional
covariance matrix of the state.}\textquotedblright\ The discussion is then
closed with the statement: \textquotedblleft \emph{This is the first
approach considered here. Because this estimate of the initial state and its
covariance matrix are functions of the model parameters, under certain
parameter choices the covariance matrix may be ill conditioned. The routines
then choose the diffuse prior described above as default}.\textquotedblright\
Thus even here, the diffuse prior is the \textquotedblleft \emph{safe}\textquotedblright\ default option.
Note that their current procedure does not use: \textquotedblleft \emph{a
few initial observations to estimate the initial state}\textquotedblright ,
but the same sample of data that are used in the final model, ie., with data
beginning in 1961:Q1.} However, one may conjecture that it could be due to
their preference for reporting Kalman Filtered (one-sided) rather than the
more efficient Kalman Smoothed (two-sided) estimates of the latent state
vector $\boldsymbol{\xi }_{t}$ which includes trend growth $g_{t}$ and \emph{`other factor'} $z_{t}$ needed to construct $r_{t}^{\ast }$.\footnote{
Note that Filtered estimates of $g_{t}$, $z_{t}$ and thus also $r_{t}^{\ast
} $ are very volatile at the beginning of the sample period (until about
1970) when $\mathbf{P}_{00}$ is initialized with a diffuse prior.}
As a final point in relation to the probability of \emph{`pile-up'} at zero
problems arising due to small variances of the state innovations, and hence
the rationale for employing MUE rather than MLE\ in the first place, one can
observe from the size of the $\sigma _{g}$ and $\sigma _{z}$ estimates for
the U.S. reported in Table 1 on page S60 in \cite{holston.etal:2017} that
these are rather \emph{`large'} at $0.122$ and $0.150$, respectively. The
simulation results in Table 1 in \cite{stock.watson:1998} show that \emph{`pile-up'} at zero frequencies drop to $0.01$ for both MMLE\ and MUE, when
the true population value of $\lambda $ is $30$ ($\sigma _{\Delta \beta
}=0.06$). Given the fact that \cite{holston.etal:2017} do not estimate the
initial value of the state vector, and that their median unbiased estimates
are about two times larger than $0.06$, it seems highly implausible that
\emph{`pile-up'} at zero problems should materialize with a higher
probability for MLE than for MUE.
\subsection{Stage 1 model \label{sec:S1}}
\cites{holston.etal:2017} first stage model takes the following restricted
form of the full model presented in equation \ref{eq:hlw}:\footnote{See \hyperref[sec:AS1]{Section A.1} in the \hyperref[appendix]{Appendix} for
the exact matrix expressions and expansions of the first stage SSM. Note
that one key difference of \cites{holston.etal:2017} SSM specification
described in equations \ref{AS1:m} and \ref{AS1:s} in the \hyperref[appendix]{Appendix} is that the expansion of the system matrices for the Stage 1
model does not include the drift term $g$ in the trend specification in \ref{S1d}, so that $y_{t}^{\ast }$ follows a random walk \emph{without} drift.
Evidently, such a specification cannot match the upward trend in the GDP
data. To resolve this mismatch, \cite{holston.etal:2017} `\textit{detrend'}
output $y_{t}$ in the estimation (see \hyperref[sec:AS1]{Section A.1} in the
\hyperref[appendix]{Appendix} which describes how this is done and also
shows snippets of their R-Code).}\bsq\label{eq:stag1}
\begin{align}
y_{t}& =y_{t}^{\ast }+\tilde{y}_{t} \label{S1a} \\
\pi _{t}& =b_{\pi }\pi _{t-1}+\left( 1-b_{\pi }\right) \pi _{t-2,4}+b_{y}\tilde{y}_{t-1}+\varepsilon _{t}^{\pi } \label{S1b} \\
\tilde{y}_{t}& =a_{y,1}\tilde{y}_{t-1}+a_{y,2}\tilde{y}_{t-2}+\mathring{\varepsilon}_{t}^{\tilde{y}} \label{S1c} \\
y_{t}^{\ast }& =g+y_{t-1}^{\ast }+\mathring{\varepsilon}_{t}^{y^{\ast }}\!\!,
\label{S1d}
\end{align}
\esq where the vector of Stage 1 parameters to be estimated is
\begin{equation}
\boldsymbol{\theta }_{1}=[a_{y,1},~a_{y,2},~b_{\pi },~b_{y},~g,~\sigma _{\tilde{y}},~\sigma _{\pi },~\sigma _{y^{\ast }}]^{\prime }. \label{S1theta1}
\end{equation}
To be able to distinguish the disturbance terms of the full model in \ref{eq:hlw} from the ones in the restricted Stage 1 model in \ref{eq:stag1}
above, I have placed a ring $(\mathring{\phantom{y}})$ symbol on the error
terms in \ref{S1c} and \ref{S1d}. These two disturbance terms from the
restricted model are defined as:
\begin{equation}
\mathring{\varepsilon}_{t}^{y^{\ast }}=g_{t-1}-g+\varepsilon _{t}^{y^{\ast }}
\label{S1eps_ystar0}
\end{equation}
and
\begin{equation}
\mathring{\varepsilon}_{t}^{\tilde{y}}=\tfrac{a_{r}}{2}[\left(
r_{t-1}-4g_{t-1}-z_{t-1}\right) +\left( r_{t-2}-4g_{t-2}-z_{t-2}\right)
]+\varepsilon _{t}^{\tilde{y}}. \label{S1eps_ytilde0}
\end{equation}
From the relations in \ref{S1eps_ystar0} and \ref{S1eps_ytilde0} it is clear
that, due to the restrictions in the Stage 1 model, the error terms
$\mathring{\varepsilon}_{t}^{\tilde{y}}$ and $\mathring{\varepsilon}_{t}^{y^{\ast }}$ in \ref{eq:stag1} will not be uncorrelated anymore, since
$\mathrm{Cov}(\mathring{\varepsilon}_{t}^{\tilde{y}},\mathring{\varepsilon}_{t}^{y^{\ast }})=-\tfrac{a_{r}}{2}4\sigma _{g}^{2}$ given the assumptions
of the full model in \ref{eq:hlw}.\ The separation of trend and cycle shocks
in this formulation of the Stage 1 model is thus more intricate, as both
shocks will respond to one common factor, the missing $g_{t-1}$.
In the implementation of the Stage 1 model, \cite{holston.etal:2017} make
two important modelling choices that have a substantial impact on the
$\boldsymbol{\theta }_{1}$ parameter estimates, and thus also the estimate of
the `\emph{signal-to-noise ratio}' $\lambda _{g}$ used in the later stages.
The first is the tight specification of the prior variance of the initial
state vector $\mathbf{P}_{00}$ discussed in the introduction of this
section. The second is a lower bound restriction on $b_{y}$ in the inflation
equation in \ref{S1b} ($b_{y}\geq 0.025$ in the estimation). The effects of
these two choices on the estimates of the Stage 1 model parameters are shown
in \autoref{tab:Stage1} below. The left block of the estimates in \autoref{tab:Stage1} (under the heading `HLW Prior') reports four sets of results
where the state vector was initialized using their values for $\boldsymbol{\xi }_{00}$ and $\mathbf{P}_{00}$. The first column of this block
(HLW.R-File) reports estimates from running \cites{holston.etal:2017} R-Code
for the first stage model. These are reported as reference values. The
second column ($b_{y}\geq 0.025$) shows my replication of
\cites{holston.etal:2017} results using the same initial values for
parameter vector $\boldsymbol{\theta }_{1}$ in the optimisation routine and
also the same lower bound constraint on $b_{y}$. The third column
(Alt.Init.Vals) displays the results I obtain when a different initial value
for $b_{y}$ is used, with the lower bound restriction $b_{y}\geq 0.025$
still in place. The fourth column ($b_{y} $ Free)\ reports results when the
lower bound constraint on $b_{y}$ is removed.\footnote{\label{FN:initVals}
To find the initial values for $\boldsymbol{\theta }_{1}$, \cite{holston.etal:2017} apply the HP filter to GDP\ to obtain an initial
estimate of the cycle and trend components of GDP. These estimates are then
used to find initial values for (some of) the components of parameter vector
$\boldsymbol{\theta }_{1}$ by running OLS\ regressions of the HP cycle
estimate on two of its own lags (an AR(2) essentially), and by running
regressions of inflation on its own lags and one lag of the HP cycle.
Interestingly, although readily available, rather than taking the
coefficient on the lagged value of the HP cycle in the initialization of
$b_{y}$, which yields a value of $0.0921$, \cite{holston.etal:2017} use the
lower bound value of $0.025$ for $b_{y}$ as the initial value. In the
optimisation, this has the effect that the estimate for $b_{y}$ is
effectively stuck at $0.025$, although it is not the global optimum in the
restricted model, which is at $b_{y}=0.097185$ (see also the values of the
log-likelihood function reported in the last row of \autoref{tab:Stage1}).}
The right block in \autoref{tab:Stage1} shows parameter estimates when a
diffuse prior for $\boldsymbol{\xi }_{t}$ is used, where $\mathbf{P}_{00}$
is set to $10^{6}$ times a three dimensional identity matrix, with the left
and right columns showing, respectively, the estimates with and without the
lower bound restriction on $b_{y}$ imposed.
Notice initially from the first two columns in the left block of \autoref{tab:Stage1} that their numerical results are accurately replicated up to 6
decimal points. From these results we also see that the lower bound
restriction on $b_{y}$ is binding. \cite{holston.etal:2017} set the initial
value for $b_{y}$ at $0.025$, and there is no movement away from this value
in the numerical routine. Specifying an alternative initial value for $b_{y}$, which is determined in the same way as for the remaining parameters in
$\boldsymbol{\theta }_{1}$, leads to markedly different estimates, while
removing the lower bound restriction on $b_{y}$ altogether results in the
ML estimate of $b_{y}$ converging to zero. Evidently, these three scenarios
also yield noticeably different values for ${\hat{\sigma}}_{y^{\ast }}$,
that is, values between $0.4190$ and $0.6177$. The diffuse prior based
results (with and without the lower bound restriction) in the right block of
\autoref{tab:Stage1} show somewhat less variability in ${\hat{\sigma}}_{y^{\ast }}$, but affect the persistence of the cycle variable $\tilde{y}_{t}$ in the model, with the smallest AR(2) lag polynomial root being 1.1190
when $b_{y}\geq 0.025$ is imposed, while it is only 1.0251 and thus closer
to the unit circle when $b_{y}$ is left unrestricted.
As a final comment, there is little variation in the likelihoods of the
different estimates that are reported in the respective left and right
blocks of \autoref{tab:Stage1}. For instance, the largest difference in
log-likelihoods is obtained from the diffuse prior results shown in the
right block of \autoref{tab:Stage1}. If we treat the lower bound as a
restriction, a Likelihood Ratio (LR) test of the null hypothesis of the
difference in these likelihoods being zero yields
$-2(-536.9803-(-535.9596))=2.0414$, which, with one degree of freedom has a
$p-$value of $0.1531$ and cannot be rejected at conventional significance
levels. Hence, there is only limited information in the data to compute a
precise estimate of $b_{y}$. This empirical fact is known in the literature
as a \emph{`flat Phillips curve'}.\footnote{
That the output gap is nearly uninformative for inflation (forecasting) once
structural break information is conditioned upon --- regardless of what
measure of the output gap is used or whether it is combined as an ensemble
from multiple measures --- is shown in \cite{buncic.muller:2017} for the
U.S. and for Switzerland.}
Given the Stage 1 estimate $\skew{0}\boldsymbol{\hat{\theta}}_{1}$, \cite{holston.etal:2017} use the following steps to implement median unbiased
estimation of their `\emph{signal-to-noise ratio}' $\lambda _{g}=\sigma
_{g}/\sigma _{y^{\ast }}$.
\begin{enuma}
\item Use the Stage 1 model to extract an estimate of $y_{t}^{\ast }$ from
the Kalman Smoother and construct annualised trend growth as $\Delta \hat{y}_{t|T}^{\ast }=400(\hat{y}_{t|T}^{\ast }-\hat{y}_{t-1|T}^{\ast })$, where
$\hat{y}_{t|T}^{\ast }$ here denotes the Kalman Smoothed estimate of
$y_{t}^{\ast }$.\footnote{Note that, although the series is annualised (scaled by 400), this does not
have an impact on the magnitude of the structural break tests. The numerical
values that one obtains for $\lambda _{g}$ are identical if scaled by 100
instead.}
\item Apply the three structural break tests described in \ref{eq:breakTests}
to the $\Delta \hat{y}_{t|T}^{\ast }$ series. Specifically, replace $\mathcal{Y}_{t}$
in \ref{Zt} with the constructed $\Delta \hat{y}_{t|T}^{\ast }$ series, run
the dummy variable regression in \ref{Zt}, and compute the structural break
statistics as defined in \ref{eq:breakTests} and \ref{eqL}. Note that \cite{holston.etal:2017} specify the endpoint values of the search-grid over
$\tau $ at $\tau _{0}=4$ and $\tau _{1}=T-4$.\footnote{This effectively tests for a structural break in nearly every time period in
the sample. Interestingly, adjusting the $\tau $ grid to cover the $15^{th}$
upper/lower percentiles of $T$ as in \cite{stock.watson:1998} leads to no
important differences in the structural break test statistics, or the size
of the $\lambda $ estimates that one obtains in Stage 1. Nevertheless, it
should be kept in mind that it is not clear what critical values the
structural break test statistics should be compared to and also what
$\lambda $ values for MUE are the appropriate ones to use with such endpoint
values. Also, \cite{holston.etal:2017} do not compute \cites{nyblom:1989} $L$
statistic.}
\item Given the structural break test statistics computed in Step ($b$),
find the corresponding $\lambda $ values in the look-up table of \cite{stock.watson:1998}. Return the ratio ${\lambda }/T=\sigma _{g}/\sigma
_{y^{\ast }}$, which \cite{holston.etal:2017} denote by $\lambda _{g}$,
where their preferred estimate of $\lambda $ is based on the EW statistic of
\cite{andrews.ploberger:1994} (see also the sketch after this list).
\end{enuma}
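A brief sketch of these steps, assuming the vector \texttt{y.star.smooth} holds the Kalman smoothed $\hat{y}_{t|T}^{\ast }$ from the Stage 1 model and reusing the \texttt{break.stats()} helper sketched earlier (with its default $\tau _{0}$ replaced by \cites{holston.etal:2017} choice of 4):
\begin{verbatim}
# Steps (a)-(b): annualized trend growth and break tests on it
dy.star <- 400 * diff(y.star.smooth)      # Delta y*_{t|T}, annualized
stats   <- break.stats(dy.star, tau0 = 4) # HLW grid: tau0 = 4, tau1 = T-4
# Step (c): interpolate stats["EW"] in SW98's look-up table to get
# lambda.hat; then lambda.g <- lambda.hat / length(dy.star)
\end{verbatim}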
\autoref{tab:Stage1_lambda_g} shows the range of $\lambda _{g}$ estimates
computed from the five sets of $\skew{0}\boldsymbol{\hat{\theta}}_{1}$
values reported in \autoref{tab:Stage1}, using all four structural break
tests of \cite{stock.watson:1998}. \autoref{tab:Stage1_lambda_g} is arranged
in the same format as \autoref{tab:Stage1}, again showing
\cites{holston.etal:2017} estimates of $\lambda _{g}$ obtained from running
their R-Code in the first column of the left block for reference. As can be
seen from \autoref{tab:Stage1_lambda_g}, the range of $\hat{\lambda}_{g}$
values one obtains from \cites{holston.etal:2017} MUE procedure is between
$0$ and $0.08945$ (if only the three structural break tests implemented by
\cite{holston.etal:2017} are considered), and up to $0.09419$ if the $L$
statistic is computed as well. Note that this range is not due to
statistical uncertainty, but simply due to the choice of structural break
test, which prior for $\mathbf{P}_{00}$ is used, and whether the lower bound
constraint on $b_{y}$ is imposed. Since these estimates determine the
relative variation in trend growth through the magnitude of $\sigma
_{y^{\ast }}$, they have a direct impact not only on the variation in the
permanent component of GDP, but also on the natural rate of interest through
the ratio $\lambda _{g}=\sigma _{g}/\sigma _{y^{\ast }}$ utilized in the
later stages of the three step procedure of \cite{holston.etal:2017}.
\subsubsection{\cites{holston.etal:2017} rationale for MUE in Stage 1 \label{S1MUE}}
Comparing the MUE procedure that \cite{holston.etal:2017} implement to the
one by \cite{stock.watson:1998}, it is evident that they are fundamentally
different. Instead of rewriting the true model of interest in local level
form to make it compatible with \cites{stock.watson:1998} look-up tables,
\cite{holston.etal:2017} instead formulate a restricted Stage 1 model that
not only sets $a_{r}$ in the output gap equation to zero, but also makes the
awkward assumption that trend growth is constant when computing the \emph{`preliminary'} estimate of $y_{t}^{\ast }$.
The rationale behind \cites{holston.etal:2017} implementation of MUE is as
follows. Suppose we observe trend $y_{t}^{\ast }$. Then, a local level model
for $\Delta y_{t}^{\ast }$ can be formulated as:\bsq\label{eq:hlwLL1}
\begin{align}
\Delta y_{t}^{\ast }& =g_{t}+\varepsilon _{t}^{y^{\ast }} \label{hlw_ll1a}
\\
\Delta g_{t}& =\varepsilon _{t}^{g}, \label{hlw_ll1b}
\end{align}
\esq where $\Delta y_{t}^{\ast }$, $g_{t}$ and $\varepsilon _{t}^{y^{\ast }}$
are the analogues to $GY_{t},\beta _{t}$ and $u_{t}$, respectively, in
\cites{stock.watson:1998} MUE in \ref{eq:sw98}, with $\varepsilon
_{t}^{y^{\ast }}$ in \ref{hlw_ll1a}, nonetheless, assumed to be $i.i.d.$
rather than an autocorrelated AR(4) process as $u_{t}$ in \ref{eq:sw1}.
Under \cites{stock.watson:1998} assumptions, MUE of the local level model in
\ref{eq:hlwLL1} yields $\lambda _{g}=\lambda /T$ defined as
\begin{equation}
\frac{\lambda }{T}=\frac{\bar{\sigma}(\varepsilon _{t}^{g})}{\bar{\sigma}(\varepsilon _{t}^{y^{\ast }})}=\frac{\sigma _{g}}{\sigma _{y^{\ast }}},
\label{hlw_rationale1}
\end{equation}
where $\bar{\sigma}(\cdot )$ denotes again the long-run standard deviation,
and the last equality in \ref{hlw_rationale1} follows because $\varepsilon
_{t}^{y^{\ast }}$ and $\varepsilon _{t}^{g}$ are assumed to be uncorrelated
white noise processes.
Since $\Delta y_{t}^{\ast }$ is not observed, \cite{holston.etal:2017}
replace it with the Kalman Smoother based estimate $\Delta \hat{y}_{t|T}^{\ast }$ obtained from the restricted Stage 1 model in \ref{eq:stag1}. To illustrate what impact this has on their MUE procedure, let
$a_{y}(L)=(1-a_{y,1}L-a_{y,2}L^{2})$ and $a_{r}(L)=\tfrac{a_{r}}{2}(L+L^{2})$
denote two lag polynomials that capture the dynamics in the output gap
$\tilde{y}_{t}$ and the real rate cycle $\tilde{r}_{t}=(r_{t}-r_{t}^{\ast
})=(r_{t}-4g_{t}-z_{t})$, respectively. Also, define $\psi
(L)=a_{y}(L)^{-1}a_{r}(L)$ and $\psi (1)=a_{r}/(1-a_{y,1}-a_{y,2})$. The
output gap equation of the true (full) model in \ref{eq:hlw} can then be
written compactly as
\begin{align}
a_{y}(L)\tilde{y}_{t}& =a_{r}(L)\tilde{r}_{t}+\varepsilon _{t}^{\tilde{y}},
\label{s2dya} \\
\intxt{or in differenced form and solved for $\Delta \tilde{y}_t$ as:}\Delta
\tilde{y}_{t}& =a_{y}(L)^{-1}\left[ a_{r}(L)\Delta \tilde{r}_{t}+\Delta
\varepsilon _{t}^{\tilde{y}}\right] . \label{s2dyb}
\end{align}
Observed output, and trend and cycle are related by the identity
\begin{align}
y_{t}& =y_{t}^{\ast }+\tilde{y}_{t} \notag \\
\therefore \Delta y_{t}& =\Delta y_{t}^{\ast }+\Delta \tilde{y}_{t}.
\label{trndCycl}
\end{align}
This relation, together with \ref{hlw_ll1a} and \ref{s2dyb}, can be written
as:
\begin{align}
\Delta y_{t}-\Delta \tilde{y}_{t}& =\Delta y_{t}^{\ast } \notag \\
\Delta y_{t}-\underbrace{a_{y}(L)^{-1}\left[ a_{r}(L)\Delta \tilde{r}_{t}+\Delta \varepsilon _{t}^{\tilde{y}}\right] }_{\Delta \tilde{y}_{t}}& =\underbrace{g_{t}+\varepsilon _{t}^{y^{\ast }}}_{_{\Delta y_{t}^{\ast }}}.
\label{eq:mis1}
\end{align}
Because the data $\Delta y_{t}$ are fixed, any restriction imposed on the
$\Delta \tilde{y}_{t}$ process translates directly into a misspecification of
the right hand side of \ref{eq:mis1}; the $\Delta y_{t}^{\ast }$ term. In
the Stage 1 model, $a_{r}$ is restricted to zero. For the relation in \ref{eq:mis1} to balance, $\Delta y_{t}^{\ast }$ effectively becomes:\footnote{Note that we need to formulate a local level model for trend growth as in
\ref{eq:hlwLL1} to be able to apply the MUE\ framework of \cite{stock.watson:1998}. To arrive at \ref{hlw_ll2a}, add
$[a_{y}(L)^{-1}a_{r}(L)\Delta \tilde{r}_{t}]$ to both sides of \ref{eq:mis1}.
The ring $(\mathring{\phantom{y}})$ symbol on $\mathring{\nu}_{t}^{y\ast }$
highlights again that it is obtained from the restricted model.} \bsq\label{S1LLfalse}
\begin{align}
\Delta y_{t}^{\ast }& =g_{t}+\mathring{\nu}_{t}^{y\ast } \label{hlw_ll2a} \\
\Delta g_{t}& =\varepsilon _{t}^{g}, \label{hlw_ll2b}
\end{align}
\esq where
\begin{equation}
\mathring{\nu}_{t}^{y\ast }=\varepsilon _{t}^{y^{\ast }}+\psi (L)\Delta
\tilde{r}_{t}.
\end{equation}
\cites{holston.etal:2017} implementation of MUE relies on the (constructed)
local level model relations from the restricted Stage 1 model in \ref{S1LLfalse} and requires us to evaluate the ratio of the long-run standard
deviations of $\varepsilon _{t}^{g}$ and $\mathring{\nu}_{t}^{y\ast }$:
\begin{equation}
\frac{\bar{\sigma}(\varepsilon _{t}^{g})}{\bar{\sigma}(\mathring{\nu}_{t}^{y\ast })}. \label{s2n_s1}
\end{equation}
Evidently, $\varepsilon _{t}^{g}$ in \ref{hlw_ll2b} has not changed, so the
numerator of the \emph{`signal-to-noise ratio'} in \ref{s2n_s1} is still
$\bar{\sigma}(\varepsilon _{t}^{g})=\sigma _{g}$, due to $\varepsilon
_{t}^{g}$ being an $i.i.d.$ process. However, the term $\mathring{\nu}_{t}^{y\ast }$ in \ref{hlw_ll2a} is not uncorrelated white noise anymore.
Moreover, the long-run standard deviation $\bar{\sigma}(\mathring{\nu}_{t}^{y\ast })$ in the denominator of \ref{s2n_s1} now also depends on the
(long-run) standard deviation of $\psi (L)\Delta \tilde{r}_{t}$, and will be
equal to $\sigma _{y^{\ast }}$ if and only if $a_{r}=0$ in the \textit{empirical data}.\footnote{If monetary policy is believed to be effective in cyclical aggregate demand
management, then $a_{r}$ cannot be 0; otherwise, one would not have formulated the
main model of interest assuming that $a_{r}$ is different from zero (viz,
negative). Also, this restriction cannot be enforced in the data.}
To see what the long-run standard deviation of $\mathring{\nu}_{t}^{y\ast }$
looks like, assume for simplicity that $\varepsilon _{t}^{y^{\ast }}$ and
$\Delta \tilde{r}_{t}$ are uncorrelated, so that the long-run standard
deviation calculation of $\mathring{\nu}_{t}^{y\ast }$ can be broken up into
a part involving $\varepsilon _{t}^{y^{\ast }}$ and another part involving
$\psi (L)\Delta \tilde{r}_{t}$, where the latter decomposes as
\begin{align*}
\psi (L)\Delta \tilde{r}_{t}& =\psi (L)[\Delta r_{t}-4\Delta g_{t}-\Delta
z_{t}] \\
& =\psi (L)[\Delta r_{t}-4\varepsilon _{t}^{g}-\varepsilon _{t}^{z}].
\end{align*}
Assuming that the shocks $\{\varepsilon _{t}^{g},\varepsilon _{t}^{z}\}$ are
uncorrelated with the (change in the) real rate $\Delta r_{t}$, the long-run
standard deviation of $\psi (L)\Delta \tilde{r}_{t}$ can be evaluated as
\begin{align}
\bar{\sigma}\left( \psi (L)\Delta \tilde{r}_{t}\right) & =\bar{\sigma}\left(
\psi (L)\Delta r_{t}\right) +\bar{\sigma}\left( \psi (L)4\varepsilon
_{t}^{g}\right) +\bar{\sigma}\left( \psi (L)\varepsilon _{t}^{z}\right)
\notag \\
& =\bar{\sigma}\left( \psi (L)\Delta r_{t}\right) +\psi (1)\left[ 4\sigma
_{g}+\sigma _{z}\right] , \label{LRv}
\end{align}
since $\varepsilon _{t}^{g}$ and $\varepsilon _{t}^{z}$ are uncorrelated in
the model. Because the nominal rate $i_{t}$ is exogenous, it will not be
possible to say more about the first term on the right hand side of \ref{LRv}
unless we assume some time series process for $\Delta r_{t}$. Suppose that
$r_{t}$ follows a random walk, so that $\Delta r_{t}=\varepsilon _{t}^{r}$,
with $\mathrm{Var}(\varepsilon _{t}^{r})=\sigma _{r}^{2}$. Then $\bar{\sigma}\left( \psi (L)\Delta \tilde{r}_{t}\right) =a_{r}/(1-a_{y,1}-a_{y,2})\left[
\sigma _{r}+4\sigma _{g}+\sigma _{z}\right] $, and we obtain $\bar{\sigma}(\mathring{\nu}_{t}^{y\ast })=\sigma _{y^{\ast }}+a_{r}/(1-a_{y,1}-a_{y,2})\left[ \sigma _{r}+4\sigma _{g}+\sigma _{z}\right] $. The MUE ratio in \ref{s2n_s1} based on the restricted Stage 1 model yields
\begin{equation}
\frac{\bar{\sigma}(\varepsilon _{t}^{g})}{\bar{\sigma}(\mathring{\nu}_{t}^{y\ast })}=\frac{\sigma _{g}}{\sigma _{y^{\ast
}}+a_{r}/(1-a_{y,1}-a_{y,2})\left[ \sigma _{r}+4\sigma _{g}+\sigma _{z}\right] }\neq \frac{\sigma _{g}}{\sigma _{y^{\ast }}}. \label{S1_Lg0}
\end{equation}
Thus, \cites{holston.etal:2017} implementation of MUE in Stage 1 cannot
recover the \emph{`signal-to-noise ratio'} of interest $\frac{\sigma _{g}}{\sigma _{y^{\ast }}}$ from $\lambda _{g}$.
Note here that the autocorrelation pattern in $\mathring{\nu}_{t}^{y\ast }$
is also reflected in the $\Delta \hat{y}_{t|T}^{\ast }$ series which is used
as the observable counterpart to $\Delta y_{t}^{\ast }$ in \ref{hlw_ll2a}.
That is, $\Delta \hat{y}_{t|T}^{\ast }$ has a significant and sizeable AR(1)
coefficient of $-0.2320$ (standard error $\approx 0.0649$). In line with Step
$(i)$ of \cites{stock.watson:1998} implementation of MUE (the GLS\ step),
one would thus need to AR(1)\ filter the constructed $\Delta \hat{y}_{t|T}^{\ast }$ series used in the local level model \emph{before}
implementing the structural break tests. Accounting for this autocorrelation
pattern in $\Delta \hat{y}_{t|T}^{\ast }$ leads to very different $\lambda
_{g}$ point estimates (see \autoref{tab:stage1_MUE_AR1}, which is arranged
in the same way as the top half of \autoref{tab:sw98_T4}, with the last
column showing $\lambda _{g}=\lambda /T$ rather than $\sigma _{g}$ to be
able to compare these to column one of \autoref{tab:Stage1_lambda_g}).
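A minimal sketch of this GLS-type correction, assuming \texttt{dy.star} holds the constructed $\Delta \hat{y}_{t|T}^{\ast }$ series and reusing \texttt{break.stats()} from earlier:
\begin{verbatim}
# AR(1)-filter Delta y*_{t|T} before the structural break tests
rho  <- as.numeric(ar.ols(dy.star, aic = FALSE, order.max = 1)$ar)
dy.f <- dy.star[-1] - rho * dy.star[-length(dy.star)]
break.stats(dy.f)  # MW, EW, QLR and L on the AR(1)-filtered series
\end{verbatim}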
\subsubsection{Rewriting the Stage 1 model in local level model form}
One nuisance with the Stage 1 model formulation of \cite{holston.etal:2017}
in \ref{eq:stag1} is that trend growth is initially assumed to be constant
to compute a first estimate of $y_{t}^{\ast }$. This estimate is then used
to construct the empirical counterpart of $\Delta y_{t}^{\ast }$ to which
MUE is applied.
A more coherent way to implement MUE in the context of the Stage 1 model is
to rewrite the local linear trend model in local level form. To see how this
could be done, we can simplify the Stage 1 model by excluding the inflation
equation \ref{S1b} and replacing the constant trend growth equation in \ref{S1d} with the original trend and trend growth equations in \ref{y*} and \ref{g}. Since the specification of the full model in \ref{eq:hlw} assumes that
the error terms $\varepsilon _{t}^{\ell },\forall \ell =\{\pi ,\tilde{y},y^{\ast }\hsp[-1],g,z\}$ are $i.i.d.$ Normal and mutually uncorrelated, and
$\hat{b}_{y}\approx 0$ in the unrestricted Stage 1 model (see the results
under the heading `$b_{y}$ Free' in \autoref{tab:Stage1}), this
simplification is unlikely to induce any additional misspecification into
the model.
The modified Stage 1 model we can work with thus takes the following form
\bsq\label{Stage1:mod}
\begin{align}
y_{t}& =y_{t}^{\ast }+\tilde{y}_{t} \label{S1M1a} \\
a_{y}(L)\tilde{y}_{t}& =\mathring{\varepsilon}_{t}^{\tilde{y}} \label{S1M1b}
\\
y_{t}^{\ast }& =y_{t-1}^{\ast }+g_{t-1}+\varepsilon _{t}^{y^{\ast }}
\label{S1M1c} \\
g_{t}& =g_{t-1}+\varepsilon _{t}^{g}, \label{S1M1d}
\end{align}
\esq where $\mathring{\varepsilon}_{t}^{\tilde{y}}=a_{r}(L)\tilde{r}_{t}+\varepsilon _{t}^{\tilde{y}}$ again due to the restriction of the
output gap equation of the full model in \ref{eq:hlw}.\footnote{If the disturbance term $\mathring{\varepsilon}_{t}^{\tilde{y}}$ is $i.i.d.$, then the model in \ref{Stage1:mod} can be recognized as \cites{clark:1987}
Unobserved Component (UC) model. However, $\mathring{\varepsilon}_{t}^{\tilde{y}}$ is not $i.i.d.$ and instead follows a general ARMA\ process with
non-zero autocovariances, which are functions of $\sigma _{g}^{2}$, $\sigma
_{z}^{2}$, the autocovariances of inflation $\pi _{t}$, as well as the
exogenously specified interest rate $i_{t}$. To see this, recall from
\Sref{sec:model} that the real interest rate gap $\tilde{r}_{t}$ is defined
as $\tilde{r}_{t}=\left[ i_{t}-\delta (L)\pi _{t}-4g_{t}-z_{t}\right] $,
where expected inflation $\pi _{t}^{e}=\delta (L)\pi _{t}$ and $\delta (L)=\tfrac{1}{4}\left( 1+L+L^{2}+L^{3}\right) $, so that we can re-express
$\mathring{\varepsilon}_{t}^{\tilde{y}}$ as
\begin{equation}
\mathring{\varepsilon}_{t}^{\tilde{y}}=a_{r}(L)\left[ i_{t}-\delta (L)\pi
_{t}-4g_{t}-z_{t}\right] +\varepsilon _{t}^{\tilde{y}}. \label{eps_tilde}
\end{equation}
The product of the two lag polynomials $a_{r}(L)\delta (L)$ in \ref{eps_tilde} yields a $5^{th}$ order lag polynomial for inflation. If $i_{t}$
and $\pi _{t}$ were uncorrelated white noise processes (which they are
clearly not), then we would obtain an MA(5)\ process for $\mathring{\varepsilon}_{t}^{\tilde{y}}$ when $a_{r}$ is non-zero. Since $\pi _{t}$ is
modelled as an integrated AR(4), the implied process for $\mathring{\varepsilon}_{t}^{\tilde{y}}$ is a higher order ARMA\ process, the exact
order of which depends on the assumptions one places on the exogenously
specified interest rate $i_{t}$. To determine this process exactly is of no
material interest here. However, the important point to take away from this
is that $\mathring{\varepsilon}_{t}^{\tilde{y}}$ is autocorrelated and
follows a higher order ARMA\ process. Moreover, if $i_{t},\pi _{t},g_{t}$
and $z_{t}$ do not co-integrate, then $\mathring{\varepsilon}_{t}^{\tilde{y}}$ will be an $I(1)$ process.} The local linear trend model in \ref{Stage1:mod} can now be rewritten in local level model form by differencing
\ref{S1M1a} and \ref{S1M1b}, and bringing $y_{t-1}^{\ast }$ to the left side
of \ref{S1M1c} to give the relations:\bsq\label{Stage1:mod2}
\begin{align}
\Delta y_{t}& =\Delta y_{t}^{\ast }+\Delta \tilde{y}_{t} \label{S1M2a} \\
a_{y}(L)\Delta \tilde{y}_{t}& =\Delta \mathring{\varepsilon}_{t}^{\tilde{y}}
\label{S1M2b} \\
\Delta y_{t}^{\ast }& =g_{t-1}+\varepsilon _{t}^{y^{\ast }} \label{S1M2c} \\
g_{t}& =g_{t-1}+\varepsilon _{t}^{g}. \notag
\end{align}
\esq
Substituting \ref{S1M2b} and \ref{S1M2c} into \ref{S1M2a} yields the local
level model
\begin{align}
\Delta y_{t}& =g_{t-1}+u_{t} \label{S1Mod2b} \\
\Delta g_{t}& =\varepsilon _{t}^{g}, \label{S1Mod2c}
\end{align}
where $u_{t}$ is defined as
\begin{align}
u_{t}& =\varepsilon _{t}^{y^{\ast }}+a_{y}(L)^{-1}\Delta \mathring{\varepsilon}_{t}^{\tilde{y}} \notag \\
\underbrace{a_{y}(L)u_{t}}_{\text{AR(2)}}& =\underbrace{a_{y}(L)\varepsilon
_{t}^{y^{\ast }}}_{\text{MA(2)}}+\Delta \mathring{\varepsilon}_{t}^{\tilde{y}} \notag \\
a_{y}(L)u_{t}& =b(L)\varepsilon _{t}, \label{eq:ut_arma}
\end{align}
with $b(L)\varepsilon _{t}=a_{y}(L)\varepsilon _{t}^{y^{\ast }}+\Delta
\mathring{\varepsilon}_{t}^{\tilde{y}}$ on the right hand side of \ref{eq:ut_arma} denoting a general MA process. The $u_{t}$ term in \ref{eq:ut_arma} thus follows a higher order ARMA model. If $a_{r}=0$, then
$\mathring{\varepsilon}_{t}^{\tilde{y}}=\varepsilon _{t}^{\tilde{y}}$ in \ref{eps_tilde} and $\Delta \mathring{\varepsilon}_{t}^{\tilde{y}}=\Delta
\varepsilon _{t}^{\tilde{y}}$, which is a (non-invertible) MA(1)\ process, so
that the right hand side would be the sum of an MA(2)\ and an MA(1),
yielding an overall MA(2) for $b(L)\varepsilon _{t}$. With $a_{y}(L)$ being
an AR(2) lag polynomial for the cycle component, we would then get an ARMA$(2,2)$ for $u_{t}$ in \ref{eq:ut_arma}. If $a_{r}\neq 0$, then $\Delta
\mathring{\varepsilon}_{t}^{\tilde{y}}$ follows a higher order ARMA process.
In the empirical implementation of MUE, I follow \cite{stock.watson:1998},
and use an AR(4) as an approximating model for $u_{t}$.\footnote{They also considered an ARMA($2,3$) model (see page 355 in their paper). It
is well known that higher order ARMA\ models can be difficult to estimate
numerically due to potential root cancellations in the AR and MA lag
polynomials. Inspection of the autocorrelation and partial autocorrelation
functions of $\Delta y_{t}$ indicates that an AR(4) model is more than
adequate to capture the time series dynamics of $\Delta y_{t}$. I have also
estimated an ARMA$(2,2)$\ model for $\Delta y_{t}$, with the overall
qualitative conclusions being the same and the quantitative results very
similar.}
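Concretely, the long-run standard deviation implied by the AR(4) approximation is $\hat{\sigma}_{\epsilon }/(1-\hat{a}_{1}-\cdots -\hat{a}_{4})$, which can be computed along the following lines (Python; the input data are randomly generated placeholders, and this is an illustrative sketch only rather than the code behind the reported results):
\begin{verbatim}
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
dy = rng.normal(0.8, 0.9, 225)          # placeholder for Delta y_t

ar4 = sm.tsa.AutoReg(dy, lags=4, trend="c").fit()
a_hat = ar4.params[1:]                  # AR(4) coefficients, intercept excluded
sigma_e = np.sqrt(ar4.sigma2)           # innovation standard deviation
lr_sd = sigma_e / (1.0 - a_hat.sum())   # long-run s.d. = sigma_e / a(1)
\end{verbatim}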
The relations in \ref{S1Mod2b} to \ref{eq:ut_arma} are now in local level
model form to which MUE\ can be applied as outlined in equations \ref{eq:sw98} to \ref{eq:s2n} in \Sref{subsec:MUE}.\footnote{I am grateful to James Stock for his email correspondence on this point.} To
examine if we can recover the \textit{`signal-to-noise ratio'} of interest
$\sigma _{g}/\sigma _{y^{\ast }}$ from this MUE procedure, we need to evaluate
\begin{equation}
\frac{\bar{\sigma}(\varepsilon _{t}^{g})}{\bar{\sigma}(u_{t})}.
\label{eq:S1Lambda}
\end{equation}
In the numerator of \ref{eq:S1Lambda}, the term $\bar{\sigma}(\varepsilon
_{t}^{g})=\sigma _{g}$ as before. However, the denominator term $\bar{\sigma}(u_{t})=\bar{\sigma}(\varepsilon _{t}^{y^{\ast }}+a_{y}(L)^{-1}\Delta
\mathring{\varepsilon}_{t}^{\tilde{y}})\neq \sigma _{y^{\ast }}$. With
$\mathring{\varepsilon}_{t}^{\tilde{y}}=a_{r}(L)\tilde{r}_{t}+\varepsilon
_{t}^{\tilde{y}}$, we have:
\begin{align}
\bar{\sigma}(u_{t})& =\bar{\sigma}(\varepsilon _{t}^{y^{\ast
}}+a_{y}(L)^{-1}\Delta \mathring{\varepsilon}_{t}^{\tilde{y}}) \notag \\
& =\bar{\sigma}(\varepsilon _{t}^{y^{\ast }}+\psi (L)\Delta \tilde{r}_{t}+a_{y}(L)^{-1}\Delta \varepsilon _{t}^{\tilde{y}}), \label{LRv2}
\end{align}
where the middle part in \ref{LRv2} (ie., $\psi (L)\Delta \tilde{r}_{t}$)
will again be as before in \ref{LRv} and therefore depend on $\Delta r_{t}$,
$\varepsilon _{t}^{g}$ and $\varepsilon _{t}^{z}$. Notice here also that
even if we knew $a_{r}=0$, so that the middle part in \ref{LRv2} is $0$,
there is no mechanism to enforce a zero correlation between $\varepsilon
_{t}^{y^{\ast }}$ and $\varepsilon _{t}^{\tilde{y}}$ in the data, because
$u_{t}$ appears in reduced form in the local level model. We would thus need
the empirical correlation between $\varepsilon _{t}^{y^{\ast }}$ and
$\varepsilon _{t}^{\tilde{y}}$ to be zero for the long-run standard deviation
$\bar{\sigma}(u_{t})$ to equal $\sigma _{y^{\ast }}$ even when the true
$a_{r}=0$. Estimates from the existing business cycle literature suggest that
trend and cycle shocks are negatively correlated (see for instance Table 3
in \cite{morley.etal:2003}, who estimate this correlation to be $-0.9062$,
or Table 1 in the more recent study by \cite{grant.chan:2017a} whose
estimate is $-0.87$). I obtain an estimate of $-0.9426$ (see \autoref{tab:clarkddUC} below).
For completeness, parameter estimates of MUE applied to the local level
transformed Stage 1 model defined in \ref{Stage1:mod} are reported in
\autoref{tab:MUE_S1}. This table is arranged in the same way as \autoref{tab:sw98_T4}, with all computations performed in exactly the same way as
before. The MUE results in the last two columns of the bottom part of the
table are based on the exponential Wald (EW) structural break test as used
in \cite{holston.etal:2017}. Overall, these estimates are very similar to
\cites{stock.watson:1998} estimates, despite different time periods and GDP
data being used. The $\lambda $ (and also $\sigma _{g}$) estimates are not
statistically different from 0, and the MMLE $\hat{\sigma}_{g}$ of $0.1062$
is rather sizeable and quite close to the one implied by MUE.
\subsubsection{Estimating the local linear trend version of the Stage 1 model}
So far, \textit{`pile-up'} at zero problems were examined in the local level
model form which is compatible with MUE. As a last exercise, I estimate the
modified Stage 1 model in \ref{Stage1:mod} in local linear trend model form.
Two different specifications of the model are estimated. The first assumes
all error terms to be uncorrelated. This version is referred to as
\cites{clark:1987} UC0 model. The second allows for a non-zero correlation
between $\varepsilon _{t}^{y^{\ast }}$ and $\mathring{\varepsilon}_{t}^{\tilde{y}}$. This version is labelled \cites{clark:1987} UC model. The aim
here is to not only examine empirically how valid the zero correlation
assumption is and to quantify its magnitude, but also to investigate whether
\textit{`pile-up'} at zero problems materialize more generally in UC\
models. In \autoref{tab:clarkddUC}, the parameter estimates of the two UC
models are reported, together with standard errors of the parameter
estimates (these are listed under the columns with the heading Std.error).
As can be seen from the estimates in \autoref{tab:clarkddUC}, there exists
no evidence of \textit{`pile-up'} at zero problems with MLE in either of
these two UC models.\footnote{I use a diffuse prior on the initial state vector in the estimation of both
UC models, and do not estimate the initial value. This is analogous to MMLE
in \cite{stock.watson:1998}. The input data are $100$ times the log of real
GDP.} The estimates of $\sigma _{g}$ from the two UC models are $0.0463$ and
$0.0322$, respectively, and are based on quarterly data. Expressed at an
annualized rate, they amount to approximately $0.1852$ and $0.1288$, and
hence are similar in magnitude to the corresponding MUE based estimates
obtained from the transformed model in \autoref{tab:MUE_S1}. Notice also
that the correlation between $\mathring{\varepsilon}_{t}^{\tilde{y}}$ and
$\varepsilon _{t}^{y^{\ast }}$ (denoted by $\mathrm{Corr}(\mathring{\varepsilon}_{t}^{\tilde{y}},\varepsilon _{t}^{y^{\ast }})$ in \autoref{tab:clarkddUC}) is estimated to be $-0.9426$ ($t-$statistic is
approximately $-10$). The magnitude of the $\hat{\sigma}_{y^{\ast }}$ and
$\hat{\sigma}_{\tilde{y}}$ coefficients nearly doubles when an allowance for
a non-zero correlation between $\mathring{\varepsilon}_{t}^{\tilde{y}}$ and
$\varepsilon _{t}^{y^{\ast }}$ is made.\footnote{%
As is common with UC models, the improvement in the log-likelihood due to
the addition of the extra correlation parameter is rather small. Although it
is important to empirically capture the correlation between $\mathring{\varepsilon}_{t}^{\tilde{y}}$ and $\varepsilon _{t}^{y^{\ast }}$ as it
affects the trend growth estimate (see \autoref{fig:MUE_S1}), the overall
level of information contained in the data appears to be limited and
therefore makes it difficult to decisively favour one model over the
other statistically. Also, one other aspect of the empirical GDP data that
both models fail to capture is the global financial crisis. The level of GDP
dropped substantially and in an unprecedented manner. Simply \emph{`smoothing'} the data to extract a trend as the UC models implicitly do may
thus not adequately capture this drop in the level of the series.}
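For readers who wish to replicate the flavour of this exercise, the UC0 variant can be cast as a small custom state-space model and estimated by MLE in a few lines. The Python sketch below (using \texttt{statsmodels}) is illustrative only: the starting values are placeholders, a careful implementation would in addition transform the standard deviation parameters to enforce positivity, and this is not the estimation code behind \autoref{tab:clarkddUC}.
\begin{verbatim}
import numpy as np
import statsmodels.api as sm

class ClarkUC0(sm.tsa.statespace.MLEModel):
    """Clark (1987)-type UC0 model: local linear trend plus AR(2)
    cycle, with all shocks mutually uncorrelated."""

    def __init__(self, endog):
        super().__init__(endog, k_states=4, k_posdef=3,
                         initialization='diffuse')
        self['design'] = np.array([[1., 0., 1., 0.]])  # y = y* + cycle
        T = np.zeros((4, 4))
        T[0, 0] = T[0, 1] = 1.0   # y*_t = y*_{t-1} + g_{t-1}
        T[1, 1] = 1.0             # g_t  = g_{t-1}
        T[3, 2] = 1.0             # carries the lagged cycle
        self['transition'] = T
        R = np.zeros((4, 3))
        R[0, 0] = R[1, 1] = R[2, 2] = 1.0
        self['selection'] = R

    @property
    def param_names(self):
        return ['a_y1', 'a_y2', 'sig_ystar', 'sig_g', 'sig_ytilde']

    @property
    def start_params(self):
        return [1.5, -0.6, 0.5, 0.05, 0.5]   # placeholder values

    def update(self, params, **kwargs):
        params = super().update(params, **kwargs)
        a1, a2, s_ys, s_g, s_c = params
        self['transition', 2, 2] = a1        # AR(2) cycle dynamics
        self['transition', 2, 3] = a2
        self['state_cov'] = np.diag([s_ys**2, s_g**2, s_c**2])

# usage: res = ClarkUC0(100 * np.log(gdp)).fit(); inspecting
# res.params then shows whether the MLE of sig_g piles up at zero
\end{verbatim}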
\autoref{fig:MUE_S1} shows plots of the various trend growth estimates from
the modified Stage 1 models reported in \autoref{tab:MUE_S1} and \autoref{tab:clarkddUC}. The plots are presented in the same way as in \autoref{fig:sw98_F4} earlier, with the (annualized) trend growth estimates from the
two UC models superimposed. Analogous to the results in \cite{stock.watson:1998}, the variation in the MUE based estimates is once again
large. Trend growth can be a flat line when the lower 90\% CI\ of MUE is
considered or rather variable when the upper CI bound is used.
Interestingly, the MMLE, Clark UC model (with non-zero $\mathrm{Corr}(\mathring{\varepsilon}_{t}^{\tilde{y}},\varepsilon _{t}^{y^{\ast }})$) and
MUE$(\hat{\lambda}_{EW})$ trend growth estimates are very similar visually.
More importantly, the effect of restricting $\mathrm{Corr}(\mathring{\varepsilon}_{t}^{\tilde{y}},\varepsilon _{t}^{y^{\ast }})$ to zero on the
trend growth estimate can be directly seen in \autoref{fig:MUE_S1}. The UC0
model produces a noticeably more variable trend growth estimate than the UC
model.
Two conclusions can be drawn from this section. Firstly,
\cites{holston.etal:2017} implementation of MUE in Stage 1 and the resulting
$\lambda _{g}$ estimate cannot recover the \textit{`signal-to-noise ratio'}
of interest $\sigma _{g}/\sigma _{y^{\ast }}$. Secondly, there is no
evidence of \textit{`pile-up'} at zero problems materializing when
estimating $\sigma _{g}$ directly by MLE. Replacing $\sigma _{g}$ in
$\mathbf{Q}$ by $\hat{\lambda}_{g}\sigma _{y^{\ast }}$ in the Stage 2 and
full model log-likelihood functions (see \ref{S2Q} and \ref{Q3}), where $\hat{\lambda}_{g}$ was obtained from MUE applied to the Stage 1 model, is therefore not only
unsound but also empirically entirely unnecessary.
\subsection{Stage 2 Model\label{sec:S2}}
The second stage model of \cite{holston.etal:2017} consists of the following
system of equations, which are again a restricted version of the full model
in \ref{eq:hlw}:\bsq\label{eq:stag2}
\begin{align}
y_{t}& =y_{t}^{\ast }+\tilde{y}_{t} \label{S2:y} \\
\pi _{t}& =b_{\pi }\pi _{t-1}+\left( 1-b_{\pi }\right) \pi _{t-2,4}+b_{y}\tilde{y}_{t-1}+\varepsilon _{t}^{\pi } \label{S2:pi} \\
a_{y}(L)\tilde{y}_{t}& =a_{0}+\tfrac{a_{r}}{2}(r_{t-1}+r_{t-2})+a_{g}g_{t-1}+\mathring{\varepsilon}_{t}^{\tilde{y}} \label{S2:ytilde} \\
y_{t}^{\ast }& =y_{t-1}^{\ast }+g_{t-2}+\mathring{\varepsilon}_{t}^{y^{\ast
}} \label{S2:ystar} \\
g_{t-1}& =g_{t-2}+\varepsilon _{t-1}^{g}. \label{S2:g}
\end{align}
\esq Given the estimate of $\lambda _{g}$ from Stage 1, the vector of Stage
2 parameters to be estimated by MLE is:\footnote{See \hyperref[sec:AS2]{Section A.2} in the \hyperref[appendix]{Appendix} for
the exact matrix expressions and expansions of the SSM of Stage 2. In the
$\mathbf{Q}$ matrix, $\sigma _{g}$ is replaced by $\hat{\lambda}_{g}\sigma
_{y^{\ast }}$, where $\hat{\lambda}_{g}$ is the estimate from the first
stage model (see \ref{S2Q}). The state vector $\boldsymbol{\xi }_{t}$ is
initialized using the same procedure as outlined in \ref{eq:P00S1a} and
\fnref{fn:1}, with the numerical values of $\boldsymbol{\xi }_{00}$ and
$\mathbf{P}_{00}$ given in \ref{AS2:xi00} and \ref{AS2:P00}.}
\begin{equation}
\boldsymbol{\theta }_{2}=[a_{y,1},~a_{y,2},~a_{r},~a_{0},~a_{g},~b_{\pi
},~b_{y},~\sigma _{\tilde{y}},~\sigma _{\pi },~\sigma _{y^{\ast }}]^{\prime
}. \label{S2theta2}
\end{equation}
As in the first stage model in \ref{eq:stag1}, I again use the ring symbol $(\mathring{\phantom{y}})$ on the disturbance terms in \ref{S2:ytilde} and \ref{S2:ystar} to distinguish them from the $i.i.d.$ error terms of the full
model in \ref{eq:hlw}.
Examining the formulation of the Stage 2 model in \ref{eq:stag2} and
comparing it to the full model in \ref{eq:hlw}, it is evident that \cite{holston.etal:2017} make two \emph{`misspecification'} choices that are
important to highlight. First, they include $g_{t-2}$ instead of $g_{t-1}$
in the trend equation in \ref{S2:ystar}, so that the $\mathring{\varepsilon}_{t}^{y^{\ast }}$ error term is in fact:\footnote{\cite{holston.etal:2017}
only report the $\mathbf{\mathbf{Q}}$ matrix in their documentation, which
is a diagonal matrix and takes the form given in \ref{S2Q}. In \hyperref[sec:AS2]{Section A.2} of the \hyperref[appendix]{Appendix}, I\ show how this matrix
is obtained. In \hyperref[sec:AS21]{Section A.2.1}, the correct Stage 2
model state-space form is provided, applying the same \emph{`trick'} as used
in the Stage 3 state-space model specification. The two $\mathbf{\mathbf{Q}}$
matrices are listed in \ref{S2Q} and \ref{S2Qcorrect}.}
\begin{align}
\mathring{\varepsilon}_{t}^{y^{\ast }}& =\varepsilon _{t}^{y^{\ast }}+\overbrace{g_{t-1}-g_{t-2}}^{\varepsilon _{t-1}^{g}\text{ from \ref{S2:g}}}
\notag \\
& =\varepsilon _{t}^{y^{\ast }}+\varepsilon _{t-1}^{g}. \label{e_ystar}
\end{align}
As a result of this, $\mathring{\varepsilon}_{t}^{y^{\ast }}$ in \ref{e_ystar} follows an MA(1)\ process, instead of white noise as $\varepsilon
_{t}^{y^{\ast }}$ in \ref{y*}. Moreover, due to the $\varepsilon _{t-1}^{g}$
term in \ref{e_ystar}, the covariance between the two error terms in \ref{S2:ystar} and \ref{S2:g} is no longer zero, but rather $\sigma _{g}^{2}$.
Thus, treating $\mathbf{W}$ in \ref{eq:RQ} as a diagonal variance-covariance
matrix in the estimation of the second stage model is incorrect.
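Written out, since $\varepsilon _{t}^{y^{\ast }}$ and $\varepsilon _{t-1}^{g}$ are mutually independent, the variance-covariance block of the two state disturbances in \ref{S2:ystar} and \ref{S2:g} is
\begin{equation*}
\mathrm{Var}\begin{bmatrix} \mathring{\varepsilon}_{t}^{y^{\ast }} \\ \varepsilon _{t-1}^{g}\end{bmatrix}=\begin{bmatrix} \sigma _{y^{\ast }}^{2}+\sigma _{g}^{2} & \sigma _{g}^{2} \\ \sigma _{g}^{2} & \sigma _{g}^{2}\end{bmatrix},
\end{equation*}
so that a diagonal $\mathbf{W}$ not only drops the $\sigma _{g}^{2}$ covariance term but also misstates the variance of $\mathring{\varepsilon}_{t}^{y^{\ast }}$.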
Second, \cite{holston.etal:2017} not only add an (unnecessary) intercept
term $a_{0}$ to the output gap equation in \ref{S2:ytilde}, but they also
account for only one lag in trend growth $g_{t}$, and further fail to impose
the $a_{g}=-4a_{r}$ restriction in the estimation of $a_{g}$. Due to this,
the error term $\mathring{\varepsilon}_{t}^{\tilde{y}}$ in \ref{S2:ytilde}
can be seen to consist of the following two components
\begin{align}
\mathring{\varepsilon}_{t}^{\tilde{y}}& =\overbrace{-a_{r}(L)4g_{t}-a_{r}(L)z_{t}+\varepsilon _{t}^{\tilde{y}}}^{\text{missing
true model part}}-\overbrace{(a_{0}+a_{g}g_{t-1})}^{\text{added Stage 2 part}} \notag \\
& =\underbrace{-a_{r}(L)z_{t}+\varepsilon _{t}^{\tilde{y}}}_{\text{desired
terms}}-\underbrace{\left[ a_{0}+a_{g}g_{t-1}+a_{r}(L)4g_{t}\right] }_{\text{unnecessary terms}}, \label{S2:eps_ytilde}
\end{align}
where the \emph{`desired terms'} on the right-hand side of \ref{S2:eps_ytilde} are needed for \cites{holston.etal:2017} implementation of
MUE in the second stage, whose logic I will explain momentarily, while the
\emph{`unnecessary terms'} are purely due to the \emph{ad hoc} addition of
an intercept term, the changed lag structure on $g_{t}$, and the failure to impose
the $a_{g}=-4a_{r}$ restriction.
To be consistent with the full model specification in \ref{eq:hlw}, the
relations in \ref{S2:ytilde} and \ref{S2:ystar} should have been formulated
as
\bsq\label{eq:stag2a}
\begin{align}
a_{y}(L)\tilde{y}_{t}& =a_{r}(L)[r_{t}-4g_{t}]+\mathring{\varepsilon}_{t}^{\tilde{y}} \label{S2a:ytilde} \\
y_{t}^{\ast }& =y_{t-1}^{\ast }+g_{t-1}+\varepsilon _{t}^{y^{\ast }}
\label{S2a:ystar}
\end{align}
\esq so that only the two missing lags of $z_{t}$ from \ref{S2a:ytilde}
appear in the error term $\mathring{\varepsilon}_{t}^{\tilde{y}}$,
specifically:
\begin{equation}
\mathring{\varepsilon}_{t}^{\tilde{y}}=-a_{r}(L)z_{t}+\varepsilon _{t}^{\tilde{y}}. \label{e_ytilde_true}
\end{equation}
Such a specification could have been easily obtained from the full Stage 3
state-space model form described in \hyperref[sec:AS3]{Section A.3} in the
\hyperref[appendix]{Appendix}, by simply removing the last two row entries
of the state vector $\boldsymbol{\xi }_{t}$ in \ref{AS3:xi}, and adjusting
the $\mathbf{H}$, $\mathbf{F}$, and $\mathbf{S}$ matrices in the state and
measurement equations to be conformable with this state vector. This is
illustrated in \hyperref[sec:AS21]{Section A.2.1} in the \hyperref[appendix]{Appendix}. The \emph{`correctly specified'} Stage 2 model should thus have
been:\bsq\label{S2full0}
\begin{align}
y_{t}& =y_{t}^{\ast }+\tilde{y}_{t} \\
\pi _{t}& =b_{\pi }\pi _{t-1}+\left( 1-b_{\pi }\right) \pi _{t-2,4}+b_{y}\tilde{y}_{t-1}+\varepsilon _{t}^{\pi } \\
a_{y}(L)\tilde{y}_{t}& =a_{r}(L)[r_{t}-4g_{t}]+\mathring{\varepsilon}_{t}^{\tilde{y}} \label{S2_ytilde0} \\
y_{t}^{\ast }& =y_{t-1}^{\ast }+g_{t-1}+\varepsilon _{t}^{y^{\ast }} \\
g_{t-1}& =g_{t-2}+\varepsilon _{t-1}^{g}.
\end{align}
\esq
To see why this matters, let us examine how one would implement MUE\ in the
Stage 2 model, following again \cites{holston.etal:2017} logic as applied in
Stage 1. That is, one would first need to define a local level model
involving $z_{t}$ to be in the same format as in \ref{eq:sw98}. If we assume
for the moment that the true state variables $\tilde{y}_{t}$ and $g_{t}$, as
well as parameters $a_{y,1},$ $a_{y,2}$ and $a_{r}$ are known, and we ignore
the econometric issues that arise when these are replaced by estimates, then
the following local level model from the \emph{`correctly specified'} Stage
2 model in \ref{S2_ytilde0} can be formed:\bsq\label{MUE2}
\begin{align}
\overbrace{a_{y}(L)\tilde{y}_{t}-a_{r}(L)[r_{t}-4g_{t}]}^{\text{analogue to }GY_{t}\text{ in \ref{eq:sw1}}}& =\underbrace{\overbrace{-a_{r}(L)z_{t}}^{\mathclap{\substack{\hsp[13]\text{analogue to } \beta _{t} \text{ in \ref{eq:sw1}}}}}+\varepsilon _{t}^{\tilde{y}}}_{\mathring{\varepsilon}_{t}^{\tilde{y}}\text{ in \ref{e_ytilde_true}}} \label{MUE2a} \\
\underbrace{-a_{r}(L)\Delta z_{t}}_{\mathclap{\substack{\text{analogue to }
\\[1pt] \Delta\beta _{t} \text{ in \ref{eq:swRW}}}}}& =\underbrace{-a_{r}(L)\varepsilon _{t}^{z}}_{\mathclap{\substack{\text{analogue to }
\\[1pt] (\lambda /T)\eta _{t} \text{ in \ref{eq:swRW}}}}}, \label{MUE2b}
\end{align}
\esq where $a_{y}(L)\tilde{y}_{t}-a_{r}(L)[r_{t}-4g_{t}]$ and
$-a_{r}(L)z_{t}$ in \ref{MUE2a} are the analogues to $GY_{t}$ and $\beta
_{t}$ in \ref{eq:sw1}, $\varepsilon _{t}^{\tilde{y}}$ corresponds to $u_{t}$
(but is $i.i.d.$ from the full model assumptions in \ref{eq:hlw} rather than
an autocorrelated time series process as $u_{t}$ in \ref{eq:sw1}), and
$-a_{r}(L)\Delta z_{t}$ and $-a_{r}(L)\varepsilon _{t}^{z}$ are the
counterparts to $\Delta \beta _{t}$ and $(\lambda /T)\eta _{t}$ in the state
equation in \ref{eq:swRW}.\footnote{To arrive at \ref{MUE2b}, simply multiply \ref{z} in the full model by
$-a_{r}(L)$.}
The equations in \ref{MUE2} are now in local level model form suitable for
MUE. The Stage 2 MUE procedure implemented on this constructed
$GY_{t}=a_{y}(L)\tilde{y}_{t}-a_{r}(L)[r_{t}-4g_{t}]$ series produces the
$\lambda _{z}=\lambda /T$ ratio corresponding to \ref{eq:s2n}, that is:\footnote{To make this clear, MUE returns an estimate of $\lambda $ by using the
look-up table on page 354 in \cite{stock.watson:1998} to find the closest
matching value of one of the four structural break test statistics defined
in \ref{eq:breakTests} and \ref{eqL}, which test for a structural break in
the unconditional mean of the constructed $GY_{t}$ series by running a dummy
variable regression of the form defined in \ref{Zt}.}
\begin{equation}
\frac{\lambda }{T}=\frac{\bar{\sigma}(\Delta \beta _{t})}{\bar{\sigma}(\varepsilon _{t}^{\tilde{y}})}=\frac{\bar{\sigma}(-a_{r}(L)\Delta z_{t})}{\sigma _{\tilde{y}}}=\frac{a_{r}(1)\sigma _{z}}{\sigma _{\tilde{y}}}=\frac{a_{r}\sigma _{z}}{\sigma _{\tilde{y}}}. \label{mue2_ratio}
\end{equation}
The last two steps in \ref{mue2_ratio} follow due to $a_{r}(1)=\frac{a_{r}}{2}(1+1^{2})=a_{r}$ and $\bar{\sigma}(\varepsilon _{t}^{\tilde{y}})=\sigma _{\tilde{y}}$, with $\bar{\sigma}(\cdot )$ denoting again the long-run
standard deviation. The final term in \ref{mue2_ratio} gives
\cites{holston.etal:2017} ratio $\lambda _{z}=a_{r}\sigma _{z}/\sigma _{\tilde{y}}$.\footnote{In \cite{laubach.williams:2003}, $\lambda _{z}$ is curiously defined as the
ratio $a_{r}\sigma _{z}/(\sigma _{\tilde{y}}\sqrt{2})$ (see page 1064,
second paragraph on the right). It is not clear where the extra $\sqrt{2}$
term comes from.} This is the logic behind \cites{holston.etal:2017}
implementation of MUE in Stage 2.
However, because \cite{holston.etal:2017} define the Stage 2 model in
\textit{`misspecified'} form in \ref{eq:stag2}, $\mathring{\varepsilon}_{t}^{\tilde{y}}$ is no longer simply equal to $-a_{r}(L)z_{t}+\varepsilon _{t}^{\tilde{y}}$ as needed for the right-hand side of \ref{MUE2a}, but now also
includes the \emph{`unnecessary terms'} $\left[
a_{0}+a_{g}g_{t-1}+a_{r}(L)4g_{t}\right] $ (see the decomposition in \ref{S2:eps_ytilde}). What effect this has on the Stage 2 MUE\ procedure can be
seen by first rewriting $a_{g}g_{t-1}$ as
\begin{align}
a_{g}g_{t-1}& =\tfrac{a_{g}}{2}(g_{t-1}+g_{t-1}) \notag \\
& =\tfrac{a_{g}}{2}(g_{t-1}+\underbrace{g_{t-2}+\varepsilon _{t-1}^{g}}_{g_{t-1}\text{ from \textrm{\ref{S2:g}}}}) \notag \\
& =a_{g}(L)g_{t}+\tfrac{a_{g}}{2}\varepsilon _{t-1}^{g}, \label{ag}
\end{align}
where $a_{g}(L)=\frac{a_{g}}{2}(L+L^{2})$. The additional \emph{`unnecessary
terms'} on the right-hand side of \ref{S2:eps_ytilde} become:\vsp[-2]
\begin{align}
-\left[ a_{0}+a_{g}g_{t-1}+a_{r}(L)4g_{t}\right] & =-[a_{0}+\overbrace{a_{g}(L)g_{t}+\tfrac{a_{g}}{2}\varepsilon _{t-1}^{g}}^{a_{g}g_{t-1}\text{
from \ref{ag}}}+a_{r}(L)4g_{t}] \notag \\
& =-[a_{0}+\tfrac{(a_{g}+4a_{r})}{2}(g_{t-1}+g_{t-2})+\tfrac{a_{g}}{2}\varepsilon _{t-1}^{g}]. \label{unTerms}
\end{align}
In \cites{holston.etal:2017} Stage 2 model in \ref{eq:stag2}, the
constructed local level model then takes the form:\bsq\label{S2wrong}
\begin{align}
\overbrace{a_{y}(L)\tilde{y}_{t}-a_{0}-a_{r}(L)r_{t}-a_{g}g_{t-1}}^{\text{misspecified analogue to }GY_{t}\text{ in \ref{MUE2a}}}& =\overbrace{-a_{r}(L)z_{t}}^{\mathclap{~\text{analogue to } \beta _{t} }}+\mathring{\nu}_{t}^{\tilde{y}} \label{S2wrong_a} \\
\underbrace{-a_{r}(L)\Delta z_{t}}_{\mathclap{\substack{\text{analogue}
\\[1pt] \text{to } \Delta\beta _{t}}}}& =\underbrace{-a_{r}(L)\varepsilon
_{t}^{z}}_{\mathclap{\substack{\text{analogue} \\[1pt] \text{to } (\lambda
/T)\eta _{t} }}}, \label{S2wrong_b}
\end{align}
\bigskip \esq where $\mathring{\nu}_{t}^{\tilde{y}}$ in \ref{S2wrong_a} is
the misspecified analogue to $\varepsilon _{t}^{\tilde{y}}$ in \ref{MUE2a}
and is defined as:
\begin{equation}
\mathring{\nu}_{t}^{\tilde{y}}=\varepsilon _{t}^{\tilde{y}}-[a_{0}+\tfrac{(a_{g}+4a_{r})}{2}(g_{t-1}+g_{t-2})+\tfrac{a_{g}}{2}\varepsilon _{t-1}^{g}].
\label{S2_nu_ring}
\end{equation}
As can be seen, the error term $\mathring{\nu}_{t}^{\tilde{y}}$ in \ref{S2_nu_ring} will not be white noise. Moreover, forming the MUE\ $\lambda /T$
ratio from the model in \ref{S2wrong} in the same way as in \ref{mue2_ratio}
leads to:
\begin{equation}
\frac{\lambda }{T}=\frac{\bar{\sigma}(-a_{r}(L)\Delta z_{t})}{\bar{\sigma}(\mathring{\nu}_{t}^{\tilde{y}})}=\frac{a_{r}(1)\sigma _{z}}{\bar{\sigma}(\mathring{\nu}_{t}^{\tilde{y}})}=\frac{a_{r}\sigma _{z}}{\bar{\sigma}(\mathring{\nu}_{t}^{\tilde{y}})}, \label{Lambda_z_correct}
\end{equation}
and now requires the evaluation of the long-run standard deviation of
$\mathring{\nu}_{t}^{\tilde{y}}$ in the denominator, which will not be equal
to $\sigma _{\tilde{y}}$ as from the \emph{`correctly'} specified Stage 2
model defined in \ref{S2full0}. Note here that, even in the unlikely
scenario that $(a_{g}+4a_{r})=0$ in the data, the long-run standard
deviation of $\mathring{\nu}_{t}^{\tilde{y}}$ will also depend on $\tfrac{a_{g}}{2}\sigma _{g}$ because of the $\tfrac{a_{g}}{2}\varepsilon _{t-1}^{g}$
term in $\mathring{\nu}_{t}^{\tilde{y}}$, so that one obtains:
\begin{equation}
\lambda _{z}=\frac{\lambda }{T}=\frac{a_{r}\sigma _{z}}{(\sigma _{\tilde{y}}+a_{g}\sigma _{g}/2)}. \label{Lz00}
\end{equation}
Thus, MUE applied to \cites{holston.etal:2017} \textit{`misspecified'} Stage
2 model as defined in \ref{eq:stag2} cannot recover the ratio of interest
$\lambda _{z}=a_{r}\sigma _{z}/\sigma _{\tilde{y}}$.\footnote{If $(a_{g}+4a_{r})\neq 0$, additional $\sigma _{g}$ terms enter the long-run
standard deviation in the denominator of $\lambda _{z}$.}
Before I\ discuss in the next section what effect the \emph{`misspecification'} of the Stage 2 model has on \cites{holston.etal:2017}
median unbiased estimates of $\lambda _{z}$, I\ report the estimates of the
two different Stage 2 models in \autoref{tab:Stage2}. The first and second
columns show replicated results which are based on \cites{holston.etal:2017}
R-Code as well as my own implementation and serve as reference values. In
the third column under the heading `MLE$(\sigma _{g})$', $\sigma _{g}$ is
estimated directly by MLE together with the other parameters of the model
without using $\hat{\lambda}_{g}$ from Stage 1.\footnote{I use the same initial values for the parameter and the state vector (mean
and variance) as in the exact replication of \cite{holston.etal:2017}. Using
a diffuse prior instead leads to only minor differences in the numerical
values. The implied $\lambda _{g}$ and $\sigma _{g}$ estimates are shown in
brackets and were computed from the \emph{`signal-to-noise ratio'} relation
$\lambda _{g}=\sigma _{g}/\sigma _{y^{\ast }}$.} The last column under the
heading `MLE$(\sigma _{g}).\mathcal{M}_{0}$' reports estimates obtained from
the \emph{`correctly specified'} Stage 2 model defined in \ref{S2full0},
where $\sigma _{g}$ is once again estimated directly by MLE.
The results in \autoref{tab:Stage2} can be summarized as follows. First,
there exists no evidence of \emph{`pile-up'} at zero problems materializing
when estimating $\sigma _{g}$ directly by MLE; not in the \textit{`misspecified'} Stage 2 model, nor in the \emph{`correctly specified'} one. This finding is consistent with the earlier results from the
first stage. The Stage 2 MLE of $\sigma _{g}$ is in fact nearly $50\%$
\textit{larger} than the estimate implied by $\hat{\lambda}_{g}$ from MUE in
Stage 1. MUE in Stage 1 thus seems to be redundant. Second, the estimate of
$a_{g}$ is about eight times the magnitude of $-a_{r}$, so that
$(a_{g}+4a_{r})\approx 0.3132\neq 0$. Therefore, the ratio in \ref{Lz00} will
have additional $\sigma _{g}$ terms in the denominator, making the
evaluation of this quantity more intricate. And third, despite the different
Stage 2 model specifications, the resulting parameter estimates as well as
the log-likelihood values across the three different models in columns two
to four of \autoref{tab:Stage2} are very similar. This suggests that,
overall, the data are uninformative about the model parameters.\footnote{These findings also hold when using data for the Euro Area, the U.K., and
Canada, but are not reported here.}
Note here that, although the results in \autoref{tab:Stage2} indicate that
\emph{`misspecifying'} the Stage 2 model does not have an important impact
on the parameter estimates that are obtained, I show below that it
substantially and spuriously amplifies the size of the $\lambda _{z}$
estimate.
\subsubsection{\cites{holston.etal:2017} implementation of MUE in Stage 2
\label{sec:MUE2}}
Recall again conceptually how MUE in Stage 2 would need to be implemented
following the same logic as in Stage 1 before.\ First, one needs to
construct an observable counterpart to $GY_{t}$ as given in \ref{MUE2a} from
the Stage 2 model estimates. Then, the four structural break tests described
in \Sref{subsec:MUE} are applied to test for a break in the unconditional
mean of (the AR filtered) $GY_{t}$ series. This corresponds to Step $(ii)$
in \cites{stock.watson:1998} procedural description. Constructing a local
level model of the form described in \ref{MUE2} enables us to implement MUE\
to yield the ratio $\lambda /T=\bar{\sigma}(\Delta \beta _{t})/\bar{\sigma}(\varepsilon _{t}^{\tilde{y}})$ as defined in \ref{mue2_ratio}.
\cites{holston.etal:2017} implementation of MUE in Stage 2, nonetheless,
departs from this description in two important ways. First, instead of using
the \emph{`correctly specified'} Stage 2 model defined in \ref{S2full0},
they work with the \emph{`misspecified'} model given in \ref{eq:stag2}.
Second, rather than leaving the $a_{y,1},$ $a_{y,2},$ $a_{r},$ $a_{g}$ and
$a_{0}$ parameters fixed at their Stage 2 estimates and constructing the
observable counterpart to $GY_{t}$ in \ref{S2wrong_a} only once outside the
dummy variable regression loop, \cite{holston.etal:2017} essentially \emph{`re-estimate'} these parameters by including the vector $\boldsymbol{\mathcal{X}}_{t}$ defined in \ref{XX} below as a regressor in the structural
break regression in \ref{eqS2regs}. For the \emph{`misspecified'} Stage 2
model, this has the effect of substantially increasing the size and
variability of not only the dummy variable coefficients $\hat{\zeta}_{1}$ in
\ref{eqS2regs}, but also the corresponding $F$ statistics used in the
computation of the \textrm{MW}, \textrm{EW}, and \textrm{QLR} structural
break tests needed for MUE\ of $\lambda _{z}$.
To illustrate how \cite{holston.etal:2017} implement MUE in the second
stage, I list below the main steps that they follow to compute $\lambda _{z}$.
\begin{enumI}
\item Given the Stage 2 estimate $\skew{0}\boldsymbol{\hat{\theta}}_{2}$
from the model in \ref{eq:stag2}, use the Kalman Smoother to obtain
(smoothed) estimates of the latent state vector $\boldsymbol{\xi}_{t}=[y_{t}^{\ast },~y_{t-1}^{\ast },~y_{t-2}^{\ast },~g_{t-1}]^{\prime }$.
Then form estimates of the cycle variable and its lags as $\hat{\tilde{y}}_{t-i|T}=(y_{t-i}-\hat{y}_{t-i|T}^{\ast }),\forall i=0,1,2$.
\item Construct
\begin{equation}
\mathcal{Y}_{t}=\hat{\tilde{y}}_{t|T} \label{YY}
\end{equation}
and the $(1\times 5)$ vector
\begin{equation}
\boldsymbol{\mathcal{X}}_{t}=[\hat{\tilde{y}}_{t-1|T},~\hat{\tilde{y}}_{t-2|T},~(r_{t-1}+r_{t-2})/2,~\hat{g}_{t-1|T},~1], \label{XX}
\end{equation}
where $r_{t}$ is the real interest rate, $\hat{g}_{t-1|T}$ is the Kalman
Smoothed estimate of $g_{t-1}$ and $1$ is a scalar to capture the constant
$a_{0}$ (intercept term).
\item For each $\tau \in \lbrack \tau _{0},\tau _{1}]$, run the following
dummy variable regression analogous to \ref{Zt}:
\begin{equation}
\mathcal{Y}_{t}=\boldsymbol{\mathcal{X}}_{t}\boldsymbol{\phi }+\zeta
_{1}D_{t}(\tau )+\epsilon _{t}, \label{eqS2regs}
\end{equation}
where $\boldsymbol{\mathcal{X}}_{t}$ is as defined in \ref{XX} and
$\boldsymbol{\phi }$ is a $(5\times 1)$ parameter vector. The structural
break dummy variable $D_{t}(\tau )$ takes the value $1$ if $t>\tau $ and $0$
otherwise, and $\tau =\{\tau _{0},\ldots ,\tau _{1}\}$ is an index of grid
points between $\tau _{0}=4$ and $\tau _{1}=T-4$. Use the sequence of $F$
statistics $\{F(\tau )\}_{\tau =\tau _{0}}^{\tau _{1}}$ on the dummy
variable coefficients to compute the \textrm{MW}, \textrm{EW}, and \textrm{QLR} structural break test statistics needed for MUE.
\item Given the structural break test statistics computed in Step (\emph{III}\hsp[.3]), find the corresponding $\lambda $ values in look-up Table 3 of
\cite{stock.watson:1998} and return the ratio ${\lambda }/T=\lambda _{z}$,
where the preferred estimate of $\lambda $ is again based on the \textrm{EW}
structural break statistic defined in \ref{EW} as in the Stage 1 MUE.
\end{enumI}
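To fix ideas, Steps (\emph{II}\hsp[.3]) and (\emph{III}\hsp[.3]) can be sketched in Python as follows. This is a minimal illustration only: \texttt{Y} and \texttt{X} stand for the constructed $\mathcal{Y}_{t}$ and $\boldsymbol{\mathcal{X}}_{t}$ series, a standard homoskedastic $F$ test is used, the Step (\emph{IV}\hsp[.3]) look-up against Table 3 of \cite{stock.watson:1998} is omitted, and the mean-, exponential- and sup-Wald formulas below may differ in small details from the conventions defined in \ref{eq:breakTests}.
\begin{verbatim}
import numpy as np

def f_sequence(Y, X, trim=4):
    """F statistics for a mean-shift dummy D_t(tau) in a regression
    of Y on X and D_t(tau); Y is (T,) and X is a (T x k) matrix."""
    T = len(Y)
    F = []
    for tau in range(trim, T - trim):
        D = (np.arange(T) > tau).astype(float)
        Z = np.column_stack([X, D])                        # unrestricted
        e1 = Y - Z @ np.linalg.lstsq(Z, Y, rcond=None)[0]
        e0 = Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]  # restricted
        ssr1, ssr0 = e1 @ e1, e0 @ e0
        F.append((ssr0 - ssr1) / (ssr1 / (T - Z.shape[1])))
    return np.array(F)

def break_stats(F):
    MW = F.mean()                           # mean Wald
    EW = np.log(np.mean(np.exp(F / 2.0)))   # exponential Wald
    QLR = F.max()                           # sup Wald (QLR)
    return MW, EW, QLR
\end{verbatim}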
In the top and bottom panels of \autoref{fig:seqaF} I show plots of the
sequences of $F$ statistics $\{F(\tau )\}_{\tau =\tau _{0}}^{\tau _{1}}$
computed from \cites{holston.etal:2017} \emph{`misspecified}\textit{'} Stage
2 model and the \emph{`correctly specified'} Stage 2 model defined in \ref{S2full0}, respectively. Two sets of sequences are drawn in each panel.\footnote{The same sequence computed from an updated data series up to 2019:Q2 is
shown in \autoref{fig:seqaF_2019Q2} in the \hyperref[appendix]{Appendix}.}
The first sequence, which I refer to as \emph{`time varying} $\boldsymbol{\phi }$\textit{'} (drawn as a red line in \autoref{fig:seqaF}) is
constructed by following
\cites{holston.etal:2017}
implementation outlined in Steps (\emph{I}\hsp[.3]) to (\emph{III}\hsp[.3])
above. I call this the \emph{`time varying} $\boldsymbol{\phi }$\textit{'}
sequence because the $a_{y,1},$ $a_{y,2},$ $a_{r},$ $a_{g}$ and $a_{0}$
parameters needed to \textit{`construct'} the observable counterpart to
$GY_{t}$ in \ref{S2wrong_a} are effectively \textit{`re-estimated'} for each
$\tau \in \lbrack \tau _{0},\tau _{1}]$ in the dummy variable regression loop
due to the inclusion of the extra $\boldsymbol{\mathcal{X}}_{t}\boldsymbol{\phi }$ term in \ref{eqS2regs}. For the \emph{`correctly specified}\textit{'}
Stage 2 model in \ref{S2full0}, $\boldsymbol{\mathcal{X}}_{t}$ in \ref{XX}
is replaced by the $(1\times 3)$ vector $[\hat{\tilde{y}}_{t-1|T},~\hat{\tilde{y}}_{t-2|T},~(r_{t-1}+r_{t-2}-4\{\hat{g}_{t-1|T}+\hat{g}_{t-2|T}\})/2]$.
In the second sequence, labelled \emph{`constant} $\boldsymbol{\phi }$\textit{' }in \autoref{fig:seqaF} and drawn as a blue line, the observable
counterpart to $GY_{t}$ is computed only once outside the structural break
regression loop, with the dummy variable regression performed without the
extra $\boldsymbol{\mathcal{X}}_{t}\boldsymbol{\phi }$ term in \ref{eqS2regs}, ie., it is computed in its \textit{`original'} form as given in \ref{Zt}.\footnote{Note that \cites{stock.watson:1998} MUE look-up table values for $\lambda $
were constructed by simulation with the structural break test testing the
unconditional mean of the $GY_{t}$ series for a break, without any other
variables being included in the regression. This form of the structural
break regression is thus compatible with
\cites{stock.watson:1998}
look-up table values.} More specifically, for the \emph{`misspecified}\textit{'} and \emph{`correctly specified}\textit{' }Stage 2 models, the
observable counterparts to the $GY_{t}$ series are constructed as
\begin{align}
GY_{t}& =\hat{\tilde{y}}_{t|T}-\hat{a}_{y,1}\hat{\tilde{y}}_{t-1|T}-\hat{a}_{y,2}\hat{\tilde{y}}_{t-2|T}-\hat{a}_{r}(r_{t-1}+r_{t-2})/2-\hat{a}_{g}\hat{g}_{t-1|T}-\hat{a}_{0}, \label{GY_HLW} \\
\intxt{and}GY_{t}& =\hat{\tilde{y}}_{t|T}-\hat{a}_{y,1}\hat{\tilde{y}}_{t-1|T}-\hat{a}_{y,2}\hat{\tilde{y}}_{t-2|T}-\hat{a}_{r}(r_{t-1}+r_{t-2}-4\{\hat{g}_{t-1|T}+\hat{g}_{t-2|T}\})/2, \label{GYcorr1}
\end{align}
respectively. The $\hat{a}_{y,1},\hat{a}_{y,2},\hat{a}_{r},\hat{a}_{g},$ and
$\hat{a}_{0}$ coefficients are the (full sample) estimates reported in
columns 2 and 4 of \autoref{tab:Stage2} under the headings `Replicated' and
`MLE$(\sigma _{g}).\mathcal{M}_{0}$', with the corresponding latent state
estimates from the respective models.\footnote{For instance, $\hat{g}_{t-1|T}$ in \ref{GY_HLW} is the Kalman Smoothed
estimate of trend growth from \cites{holston.etal:2017} \emph{`misspecified}\textit{'} Stage 2 model, while trend growth $\hat{g}_{t-1|T}$ in \ref{GYcorr1} is the corresponding estimate from the \emph{`correctly specified}\textit{' }Stage 2 model.}
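In code, the two constructions in \ref{GY_HLW} and \ref{GYcorr1} amount to the following (an illustrative Python sketch; \texttt{y\_tilde}, \texttt{g} and \texttt{r} are randomly generated placeholders for the smoothed cycle, smoothed trend growth and the real rate, and the \texttt{a\_*} values are placeholders for the estimates from \autoref{tab:Stage2}):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
y_tilde = rng.normal(size=225)        # placeholder smoothed cycle
g = 0.75 + rng.normal(0, 0.05, 225)   # placeholder trend growth
r = rng.normal(2.0, 1.0, 225)         # placeholder real rate
a_y1, a_y2, a_r, a_g, a_0 = 1.5, -0.6, -0.07, 0.6, -0.3  # placeholders

yt, y1, y2 = y_tilde[2:], y_tilde[1:-1], y_tilde[:-2]
r1, r2, g1, g2 = r[1:-1], r[:-2], g[1:-1], g[:-2]

# eq. (GY_HLW): 'misspecified' construction
GY_hlw  = yt - a_y1*y1 - a_y2*y2 - a_r*(r1 + r2)/2 - a_g*g1 - a_0
# eq. (GYcorr1): 'correctly specified' construction
GY_corr = yt - a_y1*y1 - a_y2*y2 - a_r*(r1 + r2 - 4*(g1 + g2))/2
\end{verbatim}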
As can be seen from \autoref{fig:seqaF}, the $\{F(\tau )\}_{\tau =\tau
_{0}}^{\tau _{1}}$ sequences from the \emph{`correctly specified}\textit{'}
Stage 2 models shown in the bottom panel are not only smaller overall, but
they are nearly unaffected by
\cites{holston.etal:2017}
approach to \emph{`re-estimate'} the parameters in the structural break
loop. Both the \emph{`constant} $\boldsymbol{\phi }$\textit{' }and the
\emph{`time varying} $\boldsymbol{\phi }$\textit{'} versions generate
$\{F(\tau )\}_{\tau =\tau _{0}}^{\tau _{1}}$ sequences that are overall very
similar, with their maximum values being around 4.5. For the \emph{`misspecified}\textit{'} Stage 2 model shown in the top panel, this is not
the case. The variation as well as the magnitude of $\{F(\tau )\}_{\tau
=\tau _{0}}^{\tau _{1}}$ from the \emph{`time varying} $\boldsymbol{\phi }$\textit{'} and \emph{`constant} $\boldsymbol{\phi }$\textit{'} implementations are vastly different, with the former having a much higher
mean and maximum value.
These large differences in the $\{F(\tau )\}_{\tau =\tau _{0}}^{\tau _{1}}$
sequences from the \emph{`misspecified}\textit{'} Stage 2 models also lead
to very different estimates of $\lambda _{z}$. This can be seen from \autoref{tab:Stage2_lambda_z}, which shows the resulting $\lambda _{z}$ estimates in
the top part with the corresponding $L$, \textrm{MW}, \textrm{EW}, and
\textrm{QLR} structural break test statistics in the bottom part. \autoref{tab:Stage2_lambda_z} is arranged further into a left and a right column
block, referring to the \emph{`time varying} $\boldsymbol{\phi }$\textit{'} and the \emph{`constant} $\boldsymbol{\phi }$\textit{' }MUE implementations
for the three different models reported in \ref{tab:Stage2}. `Replicated'
refers to the baseline replicated results, `MLE$(\sigma _{g})$' corresponds
to the \emph{`misspecified}\textit{'} Stage 2 model but with $\sigma _{g}$
estimated by MLE, and `MLE$(\sigma _{g}).\mathcal{M}_{0}$' is from the \emph{`correctly specified}\textit{'} Stage 2 model with $\sigma _{g}$ again
estimated by MLE. The `HLW.R-File' column lists the results from
\cites{holston.etal:2017}
R-Code. Note that \cite{holston.etal:2017} do not report estimates based on
\cites{nyblom:1989} $L$ statistic. The entries in the $L$ rows in \autoref{tab:Stage2_lambda_z} under `HLW.R-File' thus simply list `---'. 90\%
confidence intervals for $\lambda _{z}$ and $p-$values for the structural
break tests are reported in square and round brackets, respectively.\footnote{As in the replication of
\cites{stock.watson:1998}
results reported in \ref{tab:sw98_T4}, these were again obtained from their
GAUSS files.}
Consistent with the visual findings from \autoref{fig:seqaF}, the structural
break statistics from the \emph{`misspecified}\textit{'} Stage 2 model shown
under the `Replicated' heading for the \emph{`time varying} $\boldsymbol{\phi }$\textit{'} and \emph{`constant} $\boldsymbol{\phi }$\textit{'}
settings are very different. The \textrm{MW}, \textrm{EW}, and \textrm{QLR}
statistics are approximately 4 to 5 times larger under the \emph{`time
varying} $\boldsymbol{\phi }$\textit{' }setting than under the \emph{`constant} $\boldsymbol{\phi }$\textit{'} scenario. Because
\cites{nyblom:1989} $L$ statistic is constructed as the scaled cumulative
sum of the demeaned `$GY_{t}$' series and thus does not require the
partitioning of data, creation of dummy variables, or looping through
potential break dates, it is not affected by this choice, yielding the same
test statistic of about $0.05$ under both settings.
Under the \emph{`time varying} $\boldsymbol{\phi }$\textit{' }setting, the
\textrm{MW}, \textrm{EW}, and \textrm{QLR} statistics and \cites{nyblom:1989}
$L$ statistic generate vastly different $\lambda _{z}$ estimates.
\cites{nyblom:1989} $L$ statistic is highly insignificant with a $p-$value
of $0.87$, resulting in a $\lambda _{z}$ estimate of exactly $0$
(\cites{nyblom:1989} $L$ statistic is less than $0.118$, the smallest value
in
\cites{stock.watson:1998}
look-up Table 3 which corresponds to $\lambda =0$). The \textrm{MW}, \textrm{EW}, and \textrm{QLR} structural break statistics on the other hand are
either weakly significant or marginally insignificant, with $p-$values
between $0.045$ and $0.13$. These borderline significant structural break
statistics generate sizable $\lambda _{z}$ point estimates between $0.025$
and $0.034$. The resulting $90\%$ confidence intervals for $\lambda _{z}$
are, nonetheless, rather wide with $0$ as the lower bound, suggesting that
these point estimates are not significantly different from zero.\footnote{Given the earlier discussion in \Sref{subsec:MUE} and the ARE results in
Table 2 of \cite{stock.watson:1998}, we know that MUE\ can be a very
inefficient estimator.} Under the \emph{`constant} $\boldsymbol{\phi }$\textit{'} setting, the four structural break statistics and the resulting
$\lambda _{z}$ estimates tell a consistent story (see the `Replicated'
heading in the right column block). All structural break statistics are
highly insignificant, with their respective $\lambda _{z}$ point estimates
being equal to zero.
For the \emph{`correctly specified}\textit{'} Stage 2 models shown under the
headings `MLE$(\sigma _{g}).\mathcal{M}_{0}$' in \autoref{tab:Stage2_lambda_z}, the \emph{`time varying} $\boldsymbol{\phi }$\textit{'} and the \emph{`constant} $\boldsymbol{\phi }$\textit{'} estimates of
$\lambda _{z}$ reflect the visual similarity of the $\{F(\tau )\}_{\tau =\tau
_{0}}^{\tau _{1}}$ sequences shown in the bottom panel of \autoref{fig:seqaF}. The $\lambda _{z}$ point estimates are of the same order of magnitude,
very close to zero (they are exactly equal to zero for \cites{nyblom:1989}
$L$ statistic and \textrm{MW}\ under the \emph{`constant} $\boldsymbol{\phi }$\textit{'} setting), and most importantly, substantially smaller than those
constructed from
\cites{holston.etal:2017}
\emph{`misspecified}\textit{'} Stage 2 model.\footnote{In \autoref{Atab:Stage2_lambda_z_2019} in the \hyperref[appendix]{Appendix},
I present these Stage 2 MUE results for data that was updated to 2019:Q2.
The conclusion is the same.}
What is causing this large difference in the $\{F(\tau )\}_{\tau =\tau
_{0}}^{\tau _{1}}$ sequences between the \emph{`misspecified}\textit{'} and
\emph{`correctly specified}\textit{'} Stage 2 models in the \emph{`time
varying} $\boldsymbol{\phi }$\textit{'} setting? There are two components.
First, the Kalman Smoothed estimates of the output gap (cycle) $\hat{\tilde{y}}_{t|T}\ $and of (annualized) trend growth $\hat{g}_{t|T}$ can be quite
different across these two models, despite the parameter estimates and values
of the log-likelihoods being very similar. This difference is more
pronounced for the cycle estimate $\hat{\tilde{y}}_{t|T}$, particularly
towards the end of the sample period, than for the trend growth estimate
$\hat{g}_{t|T}$ (see \autoref{fig:MUE_comp_input} in the \hyperref[appendix]{Appendix} which shows a comparison of $\hat{\tilde{y}}_{t|T}\ $and $\hat{g}_{t|T}$ from the \emph{`misspecified}\textit{'} and \emph{`correctly
specified}\textit{'} Stage 2 models).
Second, the parameter restriction $(a_{g}+4a_{r})$ on the relationship
between the real rate and trend growth matters. More specifically, when
conditioning on $\boldsymbol{\mathcal{X}}_{t}$ in \ref{eqS2regs}, it is the
restriction $(r_{t-1}-4\hat{g}_{t-1|T})$ in $\boldsymbol{\mathcal{X}}_{t}$
that makes the largest difference to the $\{F(\tau )\}_{\tau =\tau
_{0}}^{\tau _{1}}$ sequence. To see this, I\ show plots of the $\{F(\tau
)\}_{\tau =\tau _{0}}^{\tau _{1}}$ sequences from various $\boldsymbol{\mathcal{X}}_{t}$ constructs corresponding to the different Stage 2 model
specifications in \autoref{fig:MUE_comp} in the \hyperref[appendix]{Appendix}. I use the \emph{`correctly specified}\textit{'} Stage 2 model's $\{\hat{\tilde{y}}_{t-i|T}\}_{i=1}^{2}$ and $\hat{g}_{t-1|T}$ estimates to form
three sets of $\boldsymbol{\mathcal{X}}_{t}$ vectors for the dummy variable
regressions in \ref{eqS2regs}. These are:\bsq\label{S2:compis}
\begin{align}
\boldsymbol{\mathcal{X}}_{t}& =[\hat{\tilde{y}}_{t-1|T},~\hat{\tilde{y}}_{t-2|T},~(r_{t-1}+r_{t-2})/2,~\hat{g}_{t-1|T},~1] \label{c1} \\
\boldsymbol{\mathcal{X}}_{t}& =[\hat{\tilde{y}}_{t-1|T},~\hat{\tilde{y}}_{t-2|T},~r_{t-1},~\hat{g}_{t-1|T},~1] \label{c2} \\
\boldsymbol{\mathcal{X}}_{t}& =[\hat{\tilde{y}}_{t-1|T},~\hat{\tilde{y}}_{t-2|T},~(r_{t-1}-4\hat{g}_{t-1|T})], \label{c3}
\end{align}
\esq and are labelled accordingly in \autoref{fig:MUE_comp} (the preceding
`MLE$(\sigma _{g}).\mathcal{M}_{0}$' signifies that these were constructed
using the $\{\hat{\tilde{y}}_{t-i|T}\}_{i=1}^{2}$ and $\hat{g}_{t-1|T}$
estimates from the \emph{`correctly specified}\textit{'} Stage 2 model). The
corresponding $\mathcal{Y}_{t}$ dependent variable for these structural
break regressions also uses the \emph{`correctly specified'} Stage 2 model's
output gap estimate $\hat{\tilde{y}}_{t|T}$. The $\{F(\tau )\}_{\tau =\tau
_{0}}^{\tau _{1}}$ sequences from
\cites{holston.etal:2017}
\emph{`misspecified}\textit{'} and the \emph{`correctly specified}\textit{'}
Stage 2 models are superimposed as reference values and are denoted by `HLW'
and `MLE$(\sigma _{g}).\mathcal{M}_{0}$'.
The plot corresponding to \ref{c1} (orange dashed line in \autoref{fig:MUE_comp}) shows a rather small difference relative to the `HLW'
benchmark (blue solid line). Thus, exchanging $\{\hat{\tilde{y}}_{t-i|T}\}_{i=1}^{2}$ and $\hat{g}_{t-1|T}$ from
\cites{holston.etal:2017}
\emph{`misspecified}\textit{'} Stage 2 model for those from the \emph{`correctly specified}\textit{'} one only has a small impact on the $\{F(\tau
)\}_{\tau =\tau _{0}}^{\tau _{1}}$ sequence and is most visible over the
1994 to 2000 period. Dropping the second lag in $r_{t}$ from $\boldsymbol{\mathcal{X}}_{t}$ in \ref{c2} (see the cyan dotted line in \autoref{fig:MUE_comp}) also has only a small impact on the $\{F(\tau )\}_{\tau
=\tau _{0}}^{\tau _{1}}$ sequence. The biggest effect on $\{F(\tau )\}_{\tau
=\tau _{0}}^{\tau _{1}}$ comes from the restriction $(r_{t-1}-4\hat{g}_{t-1|T})$ as
imposed in \ref{c3} (green dashed-dotted line in \autoref{fig:MUE_comp}). This
is evident from the near overlapping with the red solid line corresponding
to the \emph{`correctly specified}\textit{'} Stage 2 model's $\{F(\tau
)\}_{\tau =\tau _{0}}^{\tau _{1}}$ sequence. Recall that the only difference
between these two is that an extra lag of $(r_{t-1}-4\hat{g}_{t-1|T})$ is
added to $\boldsymbol{\mathcal{X}}_{t}$, and that these enter as an average,
viz., $\boldsymbol{\mathcal{X}}_{t}=[\hat{\tilde{y}}_{t-1|T},~\hat{\tilde{y}}_{t-2|T},~(r_{t-1}+r_{t-2}-4\{\hat{g}_{t-1|T}+\hat{g}_{t-2|T}\})/2]$.
\subsubsection{What does \cites{holston.etal:2017} Stage 2 MUE procedure
recover?}
\cites{holston.etal:2017}
Stage 2 MUE procedure implemented on the \emph{`misspecified}\textit{'}
Stage 2 model leads to spuriously large estimates of $\lambda _{z}$ when the
true value is zero. To show this, I\ perform two simple simulation
experiments.
In the first experiment, I\ simulate data from the full structural model in
\ref{eq:hlw} using the Stage 3 parameter estimates of \cite{holston.etal:2017} reported in column one of \autoref{tab:Stage3} as the
true values that generate the data, but with \textit{`other factor'} $z_{t}$
set to zero for all $t$. The natural rate $r_{t}^{\ast }$ in the output gap
equation in \ref{IS} is thus solely determined by (annualized) trend growth,
that is, $r_{t}^{\ast }=4g_{t}$, which implies that $\lambda _{z}$ is zero
in the simulated data.\footnote{To implement the simulations from the full Stage 3 model, I\ need to define
a process for the exogenously determined interest rate in
\cites{holston.etal:2017} model. For simplicity, I estimate a parsimonious,
but well fitting, ARMA($2,1$) model for the real interest rate series, and
then use the ARMA($2,1$) coefficients to generate a sequence of 229
simulated observations for $r_{t}$. Recall that \cite{holston.etal:2017} use
data from 1960:Q1, where the first 4 quarters are used for initialisation of
the state vector, so that in total $4+225=T$ observations are available. The
remaining series are simulated from the Stage 3 model given in \ref{eq:hlw}.
To get a realistic simulation path from the Stage 3 model, I\ initialize the
first four data points for the simulated inflation series at their observed
empirical values. For the $y_{t}^{\ast }$ series, the HP-filter based trend
estimates of GDP (also utilized in the initialisation of the state vector in
Stage 1) are used to set the first four observations. The cycle variable
$\tilde{y}_{t}$ is initialized at zero, while trend growth $g_{t}$ is
initialized at $0.75$, which corresponds to an annualized rate of $3$
percent. In the analysis that requires a simulated path of `\emph{other
factor}' $z_{t}$, ie., when the natural rate is generated from $r_{t}^{\ast
}=4g_{t}+z_{t}$, the first four entries in $z_{t}$ are initialized at zero.
A total of $S=1000$ sequences are simulated with a total sample size of $229$
observations, where the first four entries are discarded in later analysis.}
I then implement \cites{holston.etal:2017} Stage 2 MUE procedure on the
simulated data following steps (\emph{I}\hsp[.3]) to (\emph{IV}\hsp[.3])
outlined in \Sref{sec:MUE2}\ above to yield a sequence of $S=1000$ estimates
of $\lambda _{z}$ $\left( \{\hat{\lambda}_{z}^{s}\}_{s=1}^{S}\right) $.
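For illustration, a single simulated draw of the exogenous inputs under the $z_{t}=0$ DGP could be generated along the following lines (Python; all coefficient values are placeholders rather than the fitted ARMA($2,1$) and Stage 3 estimates described in the footnote above):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)
T = 229
# exogenous real rate from a placeholder ARMA(2,1)
ph1, ph2, th1, s_r = 1.30, -0.35, -0.50, 0.80
e = rng.normal(0.0, s_r, T + 2)
r = np.zeros(T + 2)
for t in range(2, T + 2):
    r[t] = ph1*r[t-1] + ph2*r[t-2] + e[t] + th1*e[t-1]
r = r[2:]
# trend growth random walk, initialized at 0.75 (3 percent annualized)
g = 0.75 + np.cumsum(rng.normal(0.0, 0.03, T))
z = np.zeros(T)          # 'other factor' shut off, so lambda_z = 0
r_star = 4*g + z         # natural rate in this DGP
\end{verbatim}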
I use two different scenarios for $\boldsymbol{\theta }_{2}$ in the Kalman
Smoother recursions described in Step $(I)$ to extract the latent cycle as
well as trend growth series needed for the construction of $\mathcal{Y}_{t}$
and $\boldsymbol{\mathcal{X}}_{t}$ in the dummy variable regression in \ref{eqS2regs}. The first scenario simply takes \cites{holston.etal:2017}
empirical Stage 2 estimate $\skew{0}\boldsymbol{\hat{\theta}}_{2}$ as
reported in column one of \autoref{tab:Stage2}, and keeps these values fixed
for all 1000 generated data sequences when applying the Kalman Smoother. In
the second scenario, I re-estimate the Stage 2 parameters for each simulated
sequence to obtain new estimates $\skew{0}\boldsymbol{\hat{\theta}}_{2}^{s},\forall s=1,\ldots ,S$. I then apply the Kalman Smoother using
these estimates to generate the $\mathcal{Y}_{t}$ and $\boldsymbol{\mathcal{X}}_{t}$ sequences for the regression in \ref{eqS2regs}.
Finally, I repeat the above computations on data that were generated from
the full model in \ref{eq:hlw} with the natural rate of interest determined
by both factors, namely, $r_{t}^{\ast }=4g_{t}+z_{t}$, where $z_{t}$ was
simulated as a pure random walk. The standard deviation of $z_{t}$ was set
at the implied value from the Stage 2 estimate of $\lambda _{z}$ and the
Stage 3 estimates of $\sigma _{\tilde{y}}$ and $a_{r}$, ie., at $\sigma
_{z}=\lambda _{z}\sigma _{\tilde{y}}/a_{r}\approx 0.15$ (see row $\sigma
_{z} $ (implied) of column one in \autoref{tab:Stage3}). The objective here
is to provide a comparison of the magnitudes of the $\lambda _{z}$ estimates
that are obtained when implementing \cites{holston.etal:2017} Stage 2 MUE
procedure on data that were generated with and without \textit{`other factor'}
$z_{t}$ in the natural rate.
In \autoref{tab:Stage2_lambda_z_o}, summary statistics of $\hat{\lambda}_{z}^{s}$ from the two different data generating processes (DGPs) are
reported. The left column block shows results for the two different DGPs
when the Stage 2 parameter vector $\boldsymbol{\theta }_{2}$ is held fixed
at the estimates reported in column one of \autoref{tab:Stage2}. The right
column block shows corresponding results when $\boldsymbol{\theta }_{2}$ is
re-estimated for each simulated data series. The summary statistics are the
minimum, maximum, standard deviation, mean, and median of $\hat{\lambda}_{z}^{s}$, as well as the relative frequency of obtaining a value larger
than the empirical point estimate of \cite{holston.etal:2017}.\ This point
estimate and the corresponding relative frequency are denoted by $\hat{\lambda}_{z}^{\mathrm{HLW}}$ and $\Pr (\hat{\lambda}_{z}^{s}>\hat{\lambda}_{z}^{\mathrm{HLW}})$, respectively. To complement the summary statistics in
\autoref{tab:Stage2_lambda_z_o}, histograms of $\hat{\lambda}_{z}^{s}$ are
shown in \autoref{fig:S2Lam_z_sim} to provide visual information about its
sampling distribution.
From the summary statistics in \autoref{tab:Stage2_lambda_z_o} as well as
the histograms in \autoref{fig:S2Lam_z_sim} we can see how similar the $\hat
\lambda}_{z}^{s}$ coefficients from these two different DGPs are. For
instance, when the data were simulated without \textit{`other factor'}
z_{t} $ (ie., $\lambda _{z}=0$), the sample mean of $\hat{\lambda}_{z}^{s}$
is $0.028842$. When the data were generated from the full model with
r_{t}^{\ast }=4g_{t}+z_{t}$, the sample mean of $\hat{\lambda}_{z}^{s}$ is
only $6.53\%$ higher at $0.030726$. Similarly, the relative frequencies $\Pr
(\hat{\lambda}_{z}^{s}>\hat{\lambda}_{z}^{\mathrm{HLW}})$ for these two DGPs
are $45.70\%$ and $49\%$, respectively. The inclusion of \textit{`other
factor'} $z_{t}$ in the DGP\ of the natural rate thus results in only a $3.3$
percentage points higher $\Pr (\hat{\lambda}_{z}^{s}>\hat{\lambda}_{z}^
\mathrm{HLW}})$.\footnote
When the Stage 2 parameter vector $\boldsymbol{\theta }_{2}$ is re-estimated
for each simulated sequence shown in the right column block in \autore
{tab:Stage2_lambda_z_o}, the sample means as well as the relative frequency
\Pr (\hat{\lambda}_{z}^{s}>\hat{\lambda}_{z}^{\mathrm{HLW}})$ are somewhat
lower at $0.025103$ and $0.027462$, and 33.90\% and 39.30\%, respectively.}
The histograms in \autoref{fig:S2Lam_z_sim} paint the same overall picture.
As can be seen, the Stage 2 MUE implementation has difficulty discriminating between these two DGPs. Moreover, it seems that it is \cites{holston.etal:2017} procedure itself that leads to the spuriously amplified estimates of $\lambda _{z}$, regardless of the data.
In a second experiment I simulate DGPs from entirely unrelated univariate ARMA processes of the individual components of the $\mathcal{Y}_{t}$ and $\boldsymbol{\mathcal{X}}_{t}$ series needed for the regressions in \ref{eqS2regs}. To match the time series properties of the $\mathcal{Y}_{t}$ and $\boldsymbol{\mathcal{X}}_{t}$ elements given in \ref{YY} and \ref{XX}, I fit simple low-order ARMA models to $\hat{\tilde{y}}_{t|T}$, $r_{t}$ and $\hat{g}_{t|T}$, and then use these ARMA estimates to simulate artificial data.\footnote{I use 4 different time series processes for $\hat{g}_{t|T}$ in these simulations. Complete details of the simulation design are given in \hyperref[sec:AS4]{Section A.4} of the \hyperref[appendix]{Appendix}.}
Finally, I apply \cites{holston.etal:2017} Stage 2 MUE procedure to the simulated data as before, though starting from Step (\emph{II}), and thereby skipping the Kalman Smoother step. The full results from the second experiment are reported in \autoref{tab:MUE2_Sim_extra} and \autoref{fig:MUE2_Sim_extra} in the \hyperref[appendix]{Appendix}. These yield magnitudes of $\hat{\lambda}_{z}^{s}$ that are similar to those from the first simulation experiment, with mean estimates being between $0.026117$ and $0.031798$, and relative frequencies corresponding to $\Pr(\hat{\lambda}_{z}^{s}>\hat{\lambda}_{z}^{\mathrm{HLW}})$ being between $38.40\%$ and $49.80\%$.
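A minimal Python sketch of this second experiment, using the statsmodels package, is given below; the ARMA orders shown are placeholders, with the actual low-order specifications described in \hyperref[sec:AS4]{Section A.4}, and the variable names in the commented usage lines are hypothetical.
\begin{verbatim}
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def fit_and_simulate(series, order, T=229):
    # Fit a low-order ARMA(p,q) model (order = (p, 0, q)) to the
    # observed series and simulate an artificial series of length T
    # from the fitted process.
    res = ARIMA(series, order=order, trend="c").fit()
    return np.asarray(res.simulate(nsimulations=T))

# e.g., an AR(1) for the smoothed output gap and an ARMA(1,1) for r_t;
# the simulated series are then fed into the Stage 2 MUE regressions.
# ygap_sim = fit_and_simulate(ygap_smoothed, (1, 0, 0))
# r_sim    = fit_and_simulate(r_observed,   (1, 0, 1))
\end{verbatim}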
\subsection{Stage 3 Model \label{sec:S3}}
The analysis so far has demonstrated that the ratios of interest $\lambda _{g}=\sigma _{g}/\sigma _{y^{\ast }}$ and $\lambda _{z}=a_{r}\sigma _{z}/\sigma _{\tilde{y}}$ required for the estimation of the full structural model in \ref{eq:hlw} cannot be recovered from \cites{holston.etal:2017} MUE procedure implemented in Stages 1 and 2. Moreover, since their procedure is based on the \emph{`misspecified}\textit{'} Stage 2 model in \ref{eq:stag2}, it results in a substantially larger estimate of $\lambda _{z}$ than when implemented on the \emph{`correctly specified}\textit{'} Stage 2 model in \ref{S2full0}. This substantially larger estimate of $\lambda _{z}$ in turn leads to a greatly amplified and strongly downward
trending \emph{`other factor'} $z_{t}$. To show the impact of this on
\cites{holston.etal:2017} estimate of the natural rate of interest, I initially report parameter estimates of the full Stage 3 model in \autoref{tab:Stage3}, followed by plots of filtered estimates of the natural rate $r_{t}^{\ast }$, trend growth $g_{t}$, \emph{`other factor'} $z_{t}$, and the output gap (cycle) variable $\tilde{y}_{t}$ in \autoref{fig:2017KF}.\footnote{Smoothed estimates are shown in \autoref{fig:2017KS}. In \hyperref[sec:AS3]{Section A.3} in the \hyperref[appendix]{Appendix}, the expansions of the system matrices are reported as for the earlier Stage 1 and Stage 2 models. These are in line with the full model reported in \ref{eq:hlw}. As before, the state vector $\boldsymbol{\xi }_{t}$ is initialized using the same procedure as outlined in \ref{eq:P00S1a} and \fnref{fn:1}, with the numerical values of $\boldsymbol{\xi }_{00}$ and $\mathbf{P}_{00}$ given in \ref{AS3:xi00} and \ref{AS3:P00}.}
Given estimates of the ratios $\lambda _{g}=\sigma _{g}/\sigma _{y^{\ast }}$ and $\lambda _{z}=a_{r}\sigma _{z}/\sigma _{\tilde{y}}$ from the previous two stages, the vector of Stage 3 parameters to be computed by MLE is
\begin{equation}
\boldsymbol{\theta }_{3}=[a_{y,1},~a_{y,2},~a_{r},~b_{\pi },~b_{y},~\sigma _{\tilde{y}},~\sigma _{\pi },~\sigma _{y^{\ast }}]^{\prime }.
\label{eq:theta3}
\end{equation}
In \autoref{tab:Stage3}, estimates of $\boldsymbol{\theta }_{3}$ are presented following the same format as in \autoref{tab:Stage1} and \autoref{tab:Stage2} previously. Since I also estimate $\sigma _{g}$ and $\sigma _{z}$ directly together with the other parameters by MLE without using the Stage 1 and Stage 2 estimates of $\lambda _{g}$ and $\lambda _{z}$, additional rows are inserted, with the values in brackets denoting implied estimates. The first two columns in \autoref{tab:Stage3} show estimates of $\boldsymbol{\theta }_{3}$ obtained from running \cites{holston.etal:2017} R-Code and my replication. The third and fourth columns (under headings `MLE($\sigma _{g}|\hat{\lambda}_{z}^{\mathrm{HLW}}$)' and `MLE($\sigma _{g}|\lambda _{z}^{\mathcal{M}_{0}}$)', respectively) report estimates when $\sigma _{g}$ is estimated freely by MLE, while $\lambda _{z}$ is held fixed at either $\hat{\lambda}_{z}^{\mathrm{HLW}}=0.030217$ obtained from \cites{holston.etal:2017} \emph{`misspecified}\textit{'} Stage 2 model under their \emph{`time varying} $\boldsymbol{\phi }$\textit{'} approach, or at $\hat{\lambda}_{z}^{\mathcal{M}_{0}}=0.000754$ computed from the \emph{`correctly specified}\textit{'} Stage 2 model in \ref{S2full0} with \emph{`constant} $\boldsymbol{\phi }$\textit{'}. The last column of \autoref{tab:Stage3} under heading `MLE($\sigma _{g},\sigma _{z}$)' lists the estimates of $\boldsymbol{\theta }_{3}$ when $\sigma _{g}$ and $\sigma _{z}$ are computed directly by MLE, with the implied values of $\lambda _{g}$ and $\lambda _{z}$ reported in brackets.
The Stage 3 results in \autoref{tab:Stage3} can be summarized as follows. The MLE of $\sigma _{g}$ does not \emph{`pile-up'} at zero and is again approximately $50\%$ larger than the estimate implied by the Stage 1 MUE of $\lambda _{g}$. That is, $\hat{\sigma}_{g}\approx 0.045$ in the last three columns of \autoref{tab:Stage3}, and thus very similar in size to the Stage 2 estimates of $0.044$ and $0.045$ shown in the last two columns of \autoref{tab:Stage2}. Computing $\sigma _{z}$ directly by MLE leads to a point estimate that shrinks numerically to zero, while the estimates of the other parameters remain largely unchanged. Notice again that the log-likelihood values of the last three models in \autoref{tab:Stage3} are very similar, i.e., between $-514.8307$ and $-514.2899$. Yet, the corresponding estimates of $\sigma _{z}$ are either very small at $0$ or comparatively large at $0.1371$ when implied from the \emph{`misspecified}\textit{'} Stage 2 model's $\hat{\lambda}_{z}^{\mathrm{HLW}}$ estimate. The $\hat{\sigma}_{z}$ coefficient from the \emph{`correctly specified}\textit{'} Stage 2 model is $0.0037$ and thereby nearly 40 times smaller than from the \emph{`misspecified}\textit{'} Stage 2 model.
The findings from \autoref{tab:Stage3} are mirrored in the filtered estimates of $r_{t}^{\ast }$, $g_{t}$, $z_{t}$ and $\tilde{y}_{t}$ plotted in \autoref{fig:2017KF}. The `MLE($\sigma _{g}|\lambda _{z}^{\mathcal{M}_{0}}$)' and `MLE($\sigma _{g},\sigma _{z}$)' estimates are visually indistinguishable. Unsurprisingly, out of the four estimates, \emph{`other factor'} $z_{t}$ is overall most strongly affected by the two different $\lambda _{z}$ values that are conditioned upon, showing either very large variability and a pronounced downward trend in $z_{t}$, or being close to zero with very little variation (see panel (c) in \autoref{fig:2017KF}). The effect on the estimate of the natural rate is largest in the immediate aftermath of the global financial crisis, namely, from 2010 onwards. Interestingly, the output gap estimates shown in panel (d) of \autoref{fig:2017KF} are quite similar, with the largest divergence occurring after 2012. The three trend growth estimates in panel (b) of \autoref{fig:2017KF} which estimate $\sigma _{g}$ directly by MLE are visually indistinguishable, despite having very different $\sigma _{z}$ values, namely, between $0$ and $0.1371$ (see the lines corresponding to `MLE($\sigma _{g}|\lambda _{z}^{\mathrm{HLW}}$)', `MLE($\sigma _{g}|\lambda _{z}^{\mathcal{M}_{0}}$)' and `MLE($\sigma _{g},\sigma _{z}$)'). Trend growth estimated from \cites{holston.etal:2017} Stage 1 MUE of $\lambda _{g}$ is noticeably larger from 2009 to 2014. In comparison to the plots shown in panel (c) of \autoref{fig:HLW_factors}, the drop in all four trend growth estimates following the financial crisis seems exaggerated. The purely backward-looking nature of the Kalman Filtered $g_{t}$ series exacerbates the effect of the decline in GDP during the financial crisis on trend growth estimates after the crisis.
A final point I would like to make here --- and without the intention to
engage in repetitive and unnecessary discussion --- is that extending the
sample period to 2019:Q2 produces the interesting empirical result that
estimating $\sigma _{z}$ directly by MLE does not lead to any \emph{`pile-up'} at zero problems. Moreover, the ML estimate of $\sigma _{z}$ is very similar to the one implied from the \emph{`correctly specified}\textit{'} Stage 2 model's $\lambda _{z}$, and thereby again in stark contrast to the oversized estimate obtained from \cites{holston.etal:2017} \emph{`misspecified}\textit{'} Stage 2 model's $\lambda _{z}$.\footnote{
These estimation results using data up to 2019:Q2 together with
corresponding plots of filtered (and smoothed) estimates are reported in
\autoref{Atab:S3_2019}, \autoref{Afig:2019KF} and \autoref{Afig:2019KS} in
\hyperref[sec:AS3]{Section A.3} of the \hyperref[appendix]{Appendix}.} Even
so, despite the fact that the point estimate of $\sigma _{z}$ does not
shrink to zero, it is highly insignificant, which suggests that there is
little evidence in the data of \textit{`other factor'} $z_{t}$ being
relevant for the model.
\section{Other issues\label{sec:other}}
There are other issues with
\cites{holston.etal:2017}
structural model in \ref{eq:hlw} that make it unsuitable for policy
analysis. For instance, the interest rate $i_{t}$ is included as an
exogenous variable, so that the model essentially tries to find the best
fitting natural rate $r_{t}^{\ast }$ for it. With $r_{t}^{\ast
}=4g_{t}+z_{t} $, and \textit{`other factor'} $z_{t}$ the \textit{`free'}
variable due to $g_{t}$ being driven by GDP, $z_{t}$ effectively matches the
\textit{`leftover' }movements in the interest rate to make it compatible
with trend growth in the model. Since the central bank has full control over
the (fed funds) interest rate, it can set $i_{t}$ to any desired level and
the model will produce a natural rate through \textit{`other factor'} $z_{t}$
that will match it. Also, there is nothing in the structural model of \ref{eq:hlw} that makes the system stable. For the output gap relation in \ref{IS} to be stationary, the real rate cycle $r_{t}-r_{t}^{\ast }=(i_{t}-\pi _{t}^{e})-(4g_{t}+z_{t})$ must be $I(0)$, yet there is no co-integrating relation imposed anywhere in the system to ensure that this holds in the model.\footnote{This insight is not new and has been discussed in, for instance, \cite{pagan.wickens:2019} (see pages 21--23).} When trying to simulate from such a model, with $\pi _{t}$ being integrated of order 1, the simulated paths of the real rate $r_{t}=i_{t}-\pi _{t}^{e}$ can frequently diverge to very large values, even with samples of size $T=229$ observations, which is the empirical sample size.
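The divergence is easy to reproduce in a few lines. The sketch below uses arbitrary illustrative parameter values (not estimates from the model): it simulates inflation as a pure random walk, holds the policy rate on a fixed path, and shows that the real rate then simply inherits the stochastic trend in inflation.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
T = 229
pi = np.cumsum(rng.normal(0.0, 0.8, T))   # pi_t simulated as I(1)
pi_e = np.convolve(pi, np.ones(4) / 4.0, mode="full")[:T]  # 4-quarter average
i = np.full(T, 2.0)                       # a fixed policy-rate path
r = i - pi_e                              # r_t = i_t - pi^e_t
print(np.abs(r).max())  # typically grows with T: nothing pulls r_t back
\end{verbatim}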
A broader concern for policy analysis is the fact that the filtered estimates of the state vector $\boldsymbol{\xi }_{t}$ will be (weighted combinations of the) one-sided moving averages of the three observed variables that enter the state-space model; namely, $i_{t}$, $y_{t}$, and $\pi _{t}$.\footnote{Smoothed estimates will be (weighted combinations of the) two-sided moving averages of the observables. See also \cite{durbin.koopman:2012}, who write on page 104: \emph{``It follows that these conditional means are weighted sums of past (filtering), of past and present (contemporaneous filtering) and of all (smoothing) observations. It is of interest to study these weights to gain a better understanding of the properties of the estimators as is argued in Koopman and Harvey (2003). \ldots\ In effect, the weights can be regarded as what are known as kernel functions in the field of nonparametric regression; \ldots''}} This can be seen by writing out the Kalman Filtered estimate of the state vector as:\footnote{I again follow the notation in \cite{hamilton:1994}, see pages 394--395, with the matrices $\mathbf{A}$ and $\mathbf{H}$ however not transposed, to be consistent with my earlier notation.}
\begin{align}
\skew{0}\boldsymbol{\hat{\xi}}_{t|t}& =\skew{0}\boldsymbol{\hat{\xi}}_{t|t-1}+\underbrace{\mathbf{P}_{t|t-1}\mathbf{H}^{\prime }(\mathbf{HP}_{t|t-1}\mathbf{H}^{\prime }+\mathbf{R})^{-1}}_{\mathbf{G}_{t}}(\mathbf{y}_{t}-\mathbf{Ax}_{t}-\mathbf{H}\skew{0}\boldsymbol{\hat{\xi}}_{t|t-1}) \notag \\
& =\skew{0}\boldsymbol{\hat{\xi}}_{t|t-1}+\mathbf{G}_{t}(\mathbf{y}_{t}-\mathbf{Ax}_{t}-\mathbf{H}\skew{0}\boldsymbol{\hat{\xi}}_{t|t-1}) \notag \\
& =(\mathbf{I}-\mathbf{G}_{t}\mathbf{H})\skew{0}\boldsymbol{\hat{\xi}}_{t|t-1}+\mathbf{G}_{t}(\mathbf{y}_{t}-\mathbf{Ax}_{t}) \notag \\
& =\underbrace{(\mathbf{I}-\mathbf{G}_{t}\mathbf{H})\mathbf{F}}_{\mathbf{\Phi }_{t}}\skew{0}\boldsymbol{\hat{\xi}}_{t-1|t-1}+\mathbf{G}_{t}\underbrace{(\mathbf{y}_{t}-\mathbf{Ax}_{t})}_{\mathbf{\bar{y}}_{t}} \notag \\
& =\mathbf{\Phi }_{t}\skew{0}\boldsymbol{\hat{\xi}}_{t-1|t-1}+\mathbf{G}_{t}\mathbf{\bar{y}}_{t}, \notag \\
\intxt{which is a (linear) recursion in $\skew{0}\boldsymbol{\hat{\xi}}_{t|t}$ and can be thus rewritten as:}
& =\boldsymbol{\Psi }_{t}\boldsymbol{\xi }_{0|0}+\sum_{i=0}^{t-1}\underbrace{\boldsymbol{\Psi }_{i}\mathbf{G}_{t-i}}_{\boldsymbol{\omega }_{ti}}\mathbf{\bar{y}}_{t-i} \notag \\
& =\boldsymbol{\Psi }_{t}\boldsymbol{\xi }_{0|0}+\sum_{i=0}^{t-1}\boldsymbol{\omega }_{ti}\mathbf{\bar{y}}_{t-i}, \label{xirec}
\end{align}
where $\boldsymbol{\Psi }_{i}=\prod_{n=0}^{i-1}\mathbf{\Phi }_{t-n},\forall i=1,2,\ldots ,$ $\boldsymbol{\Psi }_{0}=\mathbf{I}$, $\mathbf{I}$ is the identity matrix, $\skew{0}\boldsymbol{\hat{\xi}}_{t|t-1}=\mathbf{F}\skew{0}\boldsymbol{\hat{\xi}}_{t-1|t-1}$ is the predicted state vector, $\boldsymbol{\xi }_{0|0}$ is the prior mean, $\mathbf{P}_{t|t-1}=\mathbf{FP}_{t-1|t-1}\mathbf{F}^{\prime }+\mathbf{Q}$ is the predicted state variance, $\boldsymbol{\omega }_{ti}=\boldsymbol{\Psi }_{i}\mathbf{G}_{t-i}$ is a time varying weight matrix, and $\mathbf{\bar{y}}_{t}$ consists of the observed variables $y_{t}$, $\pi _{t}$, and $i_{t}$.\footnote{To understand what is driving the downward trend in `\emph{other factor}' $z_{t}$ since the early 2000s in the model, one could examine the weight matrix $\boldsymbol{\omega }_{ti}$ in \ref{xirec} more closely to see how it interacts with the observable vector $\mathbf{\bar{y}}_{t}=\mathbf{y}_{t}-\mathbf{Ax}_{t}=[a(L)y_{t}-a_{r}(L)r_{t};~b_{\pi }(L)\pi _{t}-b_{y}y_{t}]^{\prime }$, where $b_{\pi }(L)=1-b_{\pi }L-\frac{1}{3}(1-b_{\pi })(L^{2}+L^{3}+L^{4})$ is the lag polynomial capturing the dynamics of inflation. Alternatively, the steady-state $\mathbf{P}$ matrix could be computed recursively as in equation 13.5.3 in \cite{hamilton:1994} to replace $\mathbf{P}_{t|t-1}$ in the recursions for $\skew{0}\boldsymbol{\hat{\xi}}_{t|t}$. The relation in \ref{xirec} would then yield $\skew{0}\boldsymbol{\hat{\xi}}_{t|t}=\mathbf{\Phi }^{t}\boldsymbol{\xi }_{0|0}+\sum_{i=0}^{t-1}\mathbf{\Phi }^{i}\mathbf{G\bar{y}}_{t-i}$, where $\mathbf{\Phi =(I-GH)F}$ and $\mathbf{G=PH}^{\prime }(\mathbf{HPH}^{\prime }+\mathbf{R})^{-1}$ would be the steady-state analogues to $\mathbf{\Phi }_{t}$ and $\mathbf{G}_{t}$, with $\mathbf{P}_{t|t-1}$ replaced by the steady-state matrix $\mathbf{P}$.}
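To make the mechanics concrete, a minimal numpy sketch of the recursion in \ref{xirec} is given below; the system matrices $(\mathbf{F},\mathbf{H},\mathbf{Q},\mathbf{R})$ and the prior variance are taken as given, and the function returns the weight matrices $\boldsymbol{\omega }_{ti}$, so that one can inspect directly how each observation enters the filtered state vector.
\begin{verbatim}
import numpy as np

def filter_weights(F, H, Q, R, P0, T):
    # Run the Kalman gain/variance recursions and return omega[t][i]
    # such that xi_{t|t} = Psi_t xi_{0|0} + sum_i omega[t][i] ybar_{t-i}.
    n = F.shape[0]
    P, G, Phi = P0, [], []
    for _ in range(T):
        P_pred = F @ P @ F.T + Q                       # P_{t|t-1}
        Gt = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
        G.append(Gt)
        Phi.append((np.eye(n) - Gt @ H) @ F)           # Phi_t
        P = (np.eye(n) - Gt @ H) @ P_pred              # P_{t|t}
    omega = []
    for t in range(T):
        w, Psi = [], np.eye(n)                         # Psi_0 = I
        for i in range(t + 1):
            w.append(Psi @ G[t - i])                   # omega_{ti} = Psi_i G_{t-i}
            Psi = Psi @ Phi[t - i]                     # Psi_{i+1}
        omega.append(w)
    return omega
\end{verbatim}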
This creates the following two issues. First, since the nominal interest rate $i_{t}$ is directly controlled by the central bank, and the natural rate is constructed from the filtered estimate of the state vector $\boldsymbol{\xi }_{t}$, which itself is computed as a moving average of $i_{t}$ (and the other observable variables), a circular relationship can be seen to evolve. Any central bank induced change in the policy rate $i_{t}$ is mechanically transferred to the natural rate $r_{t}^{\ast }$ via the Kalman Filtered estimate of the state vector $\skew{0}\boldsymbol{\hat{\xi}}_{t|t}$ in \ref{xirec}. A confounding effect between $r_{t}^{\ast }$ and $i_{t}$ will arise, making it impossible to answer questions of interest such as: \textquotedblleft \textit{Is the natural rate low because }$i_{t}$\textit{ is low, or is }$i_{t}$\textit{ low because the natural rate is low?}\textquotedblright\ with this model, as one will follow as a direct consequence of the other.
Second, because of the one-sided moving average nature of the Kalman
Filtered estimates of the state vector, any outliers, structural breaks or
otherwise \emph{`extreme'} observations at the beginning (or end) of the
sample period can have a strong impact on these filtered estimates. For the
(two-sided) \cite{hodrick.prescott:1997} filter, such problems (and other
ones) are well known and have been discussed extensively in the literature
before.\footnote{There exists a large literature on the HP filter and its problems (one of the more recent papers is by \cite{hamilton:2018}), and it is not the goal to review or list them here. However, the study by \cite{phillips.jin:2015} is interesting to single out, in particular the introduction section on pages 2 to 9, as it highlights the recent public debates by James Bullard, Paul Krugman, Tim Duy and others on the use (and misuse) of the HP filter for the construction of output gaps for policy analysis. \cite{phillips.jin:2015} also show that the HP filter fails to recover the
underlying trend asymptotically in models with breaks (see section 4 in
their paper), and they further propose alternative filtering/smoothing
methods. In an earlier study, \cite{schlicht:2008} describes how to deal
with structural breaks and missing data.} However, (one-sided) Kalman Filter-based estimates will also be affected. This can be easily demonstrated here by re-estimating the model using four different starting dates, while keeping the end of the sample period the same at 2019:Q2. In \autoref{fig:T0KF} I show filtered estimates of $r_{t}^{\ast }$, $g_{t}$, $z_{t}$
and $\tilde{y}_{t}$ for the starting dates 1967:Q1, 1972:Q1, 1952:Q2 and
1947:Q1 (smoothed estimates are shown in \autoref{fig:T0KS}), together with
\cites{holston.etal:2017} estimates using 1961:Q1 as the starting date.\footnote{In all computations, I use \cites{holston.etal:2017} R-Code and follow exactly their three-stage procedure as before to estimate the factors of interest.}
Why are these starting dates chosen? The period following the April 1960 to
February 1961 recession was marked by temporarily unusually (and perhaps
misleadingly) high GDP\ growth, yielding an annualized mean of $6.07\%$
(median $6.47\%$), with a low standard deviation of $2.67\%$ from 1961:Q2 to
1966:Q1 (see panel (b) of \autoref{fig:HLW_factors}). Having such \textit{`excessive'} growth at the beginning of the sample period has an unduly
strong impact on the filtered (less so on the smoothed) estimate of trend
growth $g_{t}$ in the model. Since both $g_{t}$ and $z_{t}$ enter the
natural rate, this affects the estimate of $r_{t}^{\ast }$. To illustrate
the sensitivity of these estimates to this time period, I\ re-estimate the
model with data starting 6 years later in 1967:Q1. Also,
\cites{holston.etal:2017} Euro Area estimates of $r_{t}^{\ast }$ are
negative from around 2013 onwards (see the bottom panel of Figure 3 on page
S63 of their paper).\footnote{This negative estimate of $r_{t}^{\ast }$ is driven by an excessively large and volatile estimate of \textit{`other factor'} $z_{t}$. Some commentators
have attributed the larger decline in the natural rate to a stronger
manifestation of \textit{`}\emph{secular stagnation}\textit{'} in the Euro
Area than in the U.S.} To show that we can get the same negative estimates
of $r_{t}^{\ast }$ for the U.S., I re-estimate the model with data starting
in 1972:Q1 to match the sample period of the Euro Area in \cite{holston.etal:2017}. Lastly, I extend \cites{holston.etal:2017} data back
to 1947:Q1 to have estimates from a very long sample, using total PCE\
inflation prior to 1959:Q2 in place of Core PCE\ inflation and the Federal
Reserve Bank of New York discount rate from 1965:Q1 back to 1947:Q1 as a
proxy for the Federal Funds rate as was done in \cite{laubach.williams:2003}.\footnote{Note that from the quarterly Core PCE data it will be possible to construct annualized inflation only from 1947:Q2 onwards. To have an inflation data point for 1947:Q1, annual Core PCE data (BEA Series ID: DPCCRG3A086NBEA) that extends back to 1929 was interpolated to a quarterly frequency and subsequently used to compute (annualized) quarterly inflation data for 1947:Q1. Since \cites{holston.etal:2017} R-Code requires 4 quarters of GDP data prior to 1947:Q1 as initial values, annual GDP (BEA Series ID: GDPCA) was interpolated to quarterly data for the period 1946:Q1 to 1946:Q4.} Since
inflation was rather volatile from 1947 to 1952, I also re-estimate the
model with data beginning in 1952:Q2 to exclude this volatile inflation
period from the sample.
Panel (a) in \autoref{fig:T0KF} shows how sensitive the natural rate estimates are to the different starting dates, particularly at the beginning of \cites{holston.etal:2017} sample, namely, from 1961 until about 1980, and
at the end of the sample from 2009 onwards. Negative natural rate estimates
are now also obtained for the U.S. when the beginning of the sample period
is aligned with that of the Euro Area in 1972:Q1 (or 1967:Q1 which excludes
the high GDP growth period). Since the natural rate is defined as the sum of
trend growth $g_{t}$ and \textit{`}\emph{other factor}\textit{'} $z_{t}$, we
can separately examine the contribution of each of these factors to $r_{t}^{\ast }$. From panel (b) in \autoref{fig:T0KF} it is evident that the
filtered trend growth estimates\ are the primary driver of the excessive
sensitivity in $r_{t}^{\ast }$ over the 1961 to 1980 period. For instance,
in 1961:Q1, these estimates can be as high as 6 percent, or as low as 3
percent, depending on the starting date of the sample. Also, the differences
in these estimates stay sizeable until 1972:Q1, before converging to more
comparable magnitudes from approximately 1981 onwards. Apart from the
estimate using the very long sample beginning in 1947:Q1 (see the blue line
in panel (b) of \autoref{fig:T0KF}), the other four remain surprisingly
similar, even during and after the financial crisis period, that is, from
mid 2007 to the end of the sample in 2019:Q2. Thus, the \textit{`front-end'} variability of the natural rate estimates is driven by the \textit{`front-end'} variability in the estimates of trend growth $g_{t}$.
In panel (b) of \autoref{fig:T0KF}, I also superimpose MUE and MMLE
(smoothed) estimates of trend growth from \cites{stock.watson:1998} model in
\ref{eq:tvp_sim}, as well as (smoothed) estimates from the (correlated)\ UC
model in \ref{Stage1:mod} to provide long-sample benchmarks of trend growth
from simple univariate models which can be compared to
\cites{holston.etal:2017} estimates. These are the same estimates that are
plotted in panels (b) and (c) of \autoref{fig:HLW_factors}. To avoid
cluttering the plot with additional lines, I do not plot the mean and median
estimates computed over the more recent expansion periods as was done in
\autoref{fig:HLW_factors}. Note, however, that the MUE estimate overlaps
with the mean and median of GDP\ growth from 2009:Q3 onwards and can thus be
used as a representative for these model free \textit{`average'} estimates
of GDP growth since the end of the financial crisis. Comparing the Kalman
Filter based estimates from the various starting dates to the MUE, MMLE, and
UC (smoothed) ones shows how different these are, particularly, from 2009:Q3
until the end of the sample.\ In the immediate post-crisis period, the
(one-sided) filter based estimates are \emph{`pulled down'} excessively by
the sharp decline in GDP and \textit{`converge'} only slowly at the very end
of the sample period towards the three long-sample benchmarks. Trend growth
is severely underestimated from 2009:Q3 onwards, and this is reflected in
the estimate of $r_{t}^{\ast }$.
In \autoref{Afig:rec_mean} in the \hyperref[appendix]{Appendix}, I show
plots of (real) GDP\ growth and the recursively estimated mean of GDP growth
over the post financial crisis period from 2009:Q3 to 2019:Q2. Trend growth
stays rather stable between $2\%$ and $3\%$ over nearly the entire period,
settling at around $2.25\%$ in 2014:Q2 and remaining at that level.
Moreover, it is never close to the filtered estimate of \cite{holston.etal:2017} from 2009:Q3 to 2014:Q3. In \autoref{Afig:SPF_GDP_growth}, I plot the mean as well as the median 10 year ahead annual-average (real) GDP growth forecasts from the Survey of Professional Forecasters (SPF) from 1992 to 2020.\footnote{The data were downloaded from:
\url{https://www.philadelphiafed.org/research-and-data/real-time-center/survey-of-professional-forecasters/data-files/rgdp10}
(accessed on the $27^{th}$ of July, 2020).} These forecasts also remain fairly stable between $2\%$ and $3\%$ from 2008 until 2017, and drift only marginally lower towards the very end of the sample. In \autoref{Afig:giglio_GDP_growth}, Vanguard investor survey based 3 year and 10 year ahead expectations of (real) GDP growth from February 2017 to April 2020 are plotted. These are taken from Figure II on page 5 in \cite{giglio.etal:2020}. The 10 year expected growth rate shown in the right panel of \autoref{Afig:giglio_GDP_growth} fluctuates (mainly) between $2.8\%$ and $3.2\%$
(the 3 year expected growth rate on the left is somewhat lower). All three
plots suggest that following the financial crisis, trend growth in GDP\ is
unlikely to have dropped to the 1.3\% estimate of \cite{holston.etal:2017}.
Looking at the estimates of \emph{`other factor'} $z_{t}$ in panel (c) of
\autoref{fig:T0KF}, we can see that it is the end of the sample, namely,
from 2009:Q1 to 2019:Q2, that is most strongly affected by the different
starting dates.\footnote{There is also some variability from the 1970s until the 1980s, but this variation seems to be largely due to the noisier nature of the filtered
estimates and is not visible from the more efficient smoothed estimates
shown in panel\ (c) of \autoref{fig:T0KS}. The differentiation here is not
important. The point to take away from this discussion is that the period
following the financial crisis yields very different estimates from the two
shorter samples, irrespective of whether smoothed or filtered estimates are
used in the construction of the natural rate.} In particular the two $z_{t}$
estimates that are based on the shorter samples starting in 1967:Q1 and
1972:Q1, which exclude the \textit{`excessive'} GDP growth period at the
beginning of \cites{holston.etal:2017} sample, generate substantially more
negative $z_{t}$ estimates. For instance, in 2009:Q1, the 1972:Q1 based $z_{t}$ estimate is $-2.87$ while \cites{holston.etal:2017} is $-1.22$. Also,
the $z_{t}$ estimates from the shorter samples are well below $-2$ over
nearly the entire 2014:Q4 to 2019:Q2 period.\footnote{This is even more pronounced in the smoothed estimates of $z_{t}$ shown in panel (c) of \autoref{fig:T0KS}.} What is particularly interesting to highlight here is how stable (and very close to zero) the estimates of $z_{t}$ are from the four earlier sample starts from 1947:Q1 to about
1971:Q3. Given the rapid change in demographics and population growth, as
well as factors related to savings and investment following the end of World
War II, one would expect $z_{t}$ to capture this change. Even if we look at
the period until 1990:Q1, apart from the noise in the estimates, no apparent
upward or downward trend in $z_{t}$ is visible from panel (c) of \autoref{fig:T0KF}. Thus, the Baby Boomer generation entering the workforce shows no
effect on $z_{t}$. Only from 1990:Q2 onwards is a decisive downward trend in
the estimates of $z_{t}$ visible.
\cite{holston.etal:2017} initialize the state vector at zero for the $z_{t}$
elements of $\boldsymbol{\xi }_{t}$. This evidently has an anchoring effect
on \emph{`other factor'} $z_{t}$ at the beginning of the sample. In the
model, it acts like a normalisation, as it implies that the natural rate is
driven solely by trend growth $g_{t}$ at sample start. Although $z_{t}$
follows a (zero mean) random walk, so that an initialisation at zero seems
appealing from an econometric perspective, this initialisation has an
important impact on the economic interpretation of $z_{t}$ that should be
more openly discussed if one is to view \emph{`other factor'} $z_{t}$ as a
factor relating to structural changes in an economy. Due to its large impact
on the downward trend in the estimates of the natural rate, understanding
exactly what $z_{t}$ captures and how the zero initialisation affects these
estimates is crucial from a policy perspective.
One final point that needs to be raised relates to \cites{holston.etal:2017} preference for reporting filtered estimates of the latent states, as opposed to smoothed ones. It is well known that the mean squared error (MSE) of the filtered states will in general be larger than the MSE of the smoothed states (see the discussion on page 151 in \cite{harvey:1989}). This is not surprising, as the smoothed estimates use the full sample --- and therefore more information --- to estimate the latent states, leading to more efficient estimates. Moreover, reporting filtered estimates \textit{`precludes'} the use of a diffuse prior for the $I(1)$ state vector, since it generates extreme volatility in the filtered estimates of the states at the beginning of the sample period. This is not the case with the smoothed estimates. The large variability in the filtered states is particularly visible from the three quantities of interest, i.e., the estimates of $r_{t}^{\ast }$, $g_{t}$ and $z_{t}$, and less so from the output gap (cycle) estimates.
While it is frequently claimed that the filtered states are \emph{`real time'} estimates, and are thus more relevant for policy analysis, one can see that this cannot be a valid argument in the given context. Not only are the parameter estimates of the model (i.e., the estimates of $\boldsymbol{\theta }_{3}$ in \ref{eq:theta3}) based on full sample information, the GDP and PCE
inflation data that go into the model are also not real time data, that is,
data that were available to policy makers at time $t<T$. Reporting filtered
(one-sided) estimates of the states as in \cite{holston.etal:2017} or as on
the FRBNY website where updates are provided is undesirable from an
estimator efficiency perspective.
\section{Conclusion \label{sec:conclusion}}
\cites{holston.etal:2017} implementation of \cites{stock.watson:1998} Median
Unbiased Estimation (MUE) in Stages 1 and 2 of their procedure to estimate
the natural rate of interest from a larger structural model is unsound. I
show algebraically that their procedure cannot recover the ratios of
interest $\lambda _{g}=\sigma _{g}/\sigma _{y^{\ast }}$ and $\lambda
_{z}=a_{r}\sigma _{z}/\sigma _{\tilde{y}}$ needed for the estimation of the
full structural model of interest. \cites{holston.etal:2017} implementation
of MUE\ in Stage 2 of their procedure is particularly problematic, because
it is based on an \emph{`unnecessarily'} misspecified model as well as an
incorrect MUE procedure that spuriously amplifies their estimate of $\lambda _{z}$. This has a direct and consequential effect on the severity of
the downward trending behaviour of \emph{`other factor'} $z_{t}$ and thereby
the magnitude of the estimate of the natural rate.
Correcting their Stage 2 model and the implementation of MUE\ leads to a
substantially smaller estimate of $\lambda _{z}$ of close to zero, and
an elimination of the downward trending influence of \emph{`other factor'} $z_{t}$ on the natural rate of interest. The correction that is applied is
quantitatively important. It shows that the estimate of $\lambda _{z}$ based
on the correctly specified Stage 2 model is statistically highly
insignificant. The resulting filtered estimates of $z_{t}$ are very close to
zero for the entire sample period, highlighting the lack of evidence of
\textit{`}\emph{other factor'} $z_{t}$ being important for the determination
of the natural rate in this model. Obtaining an accurate estimate of trend
growth for the measurement of the natural rate is therefore imperative. To
provide other benchmark estimates of trend growth, I construct various
simple alternative $g_{t}$ estimates and compare those to the estimate from
\cite{holston.etal:2017}. I find the latter one to be too small,
particularly in the immediate aftermath of the global financial crisis.
Lastly, I\ discuss various other issues with \cites{holston.etal:2017} model
that make it unsuitable for policy analysis. For instance,
\cites{holston.etal:2017} estimates of the natural rate, trend growth, `\emph{other factor}' $z_{t}$ and the output gap are extremely sensitive to
the starting date of the sample used to estimate the model. Using data
beginning in 1972:Q1 (or 1967:Q1) leads to negative estimates of the natural
rate as is the case for their Euro Area estimates. These negative estimates
are again driven purely by the exaggerated downward trending behaviour of
`\emph{other factor}' $z_{t}$. The 1972:Q1 date was chosen to match the
sample used in the estimation of the Euro Area model. Only the Euro Area
estimates of the natural rate turn negative in 2013, and only the Euro Area
sample starts in 1972:Q1 (the others start in 1961:Q1). The fact that it is
possible to generate such negative estimates of the natural rate from
\cites{holston.etal:2017} model for the U.S. as well by simply adjusting the
start of the estimation period suggests that the model is far from robust,
and therefore inappropriate for use in policy analysis.
Also, any Kalman Filtered (or Smoothed) estimates of the state vector will
be a function of the observable variables that enter into the model. If the
central bank controlled nominal interest rate is one of these observables, a
confounding effect between $r_{t}^{\ast }$ and $i_{t}$ will arise, because
any central bank induced change in the policy rate $i_{t}$ is mechanically
transferred to the natural rate via the estimate of the state vector
\skew{0}\boldsymbol{\hat{\xi}}_{t|t}$. This makes it impossible to answer
\textit{`causal'} questions regarding the relationship between $r_{t}^{\ast
} $ and $i_{t}$, as one responds as a direct consequence to changes in the
other.
\bigskip
\setlength{\oddsidemargin}{-5.4mm}
\bibliographystyle{LongBibStyleFile}
\section*{Acknowledgements}
The authors wish to thank Ericsson for providing the aggregated phone activity records. We also thank Zsófia Kallus at Ericsson Research for stimulating discussions. We further thank Ericsson, MIT SMART Program, Accenture, Air Liquide, BBVA, The Coca Cola Company, Emirates Integrated Telecommunications Company, The ENEL foundation, Expo 2015, Ferrovial, Liberty Mutual, The Regional Municipality of Wood Buffalo, Volkswagen Electronics Research Lab, UBER and all the members of the MIT Senseable City Lab Consortium for supporting the research.
\section{Introduction}
It is well-known that every finite graph $G=(V,E)$ has an {\em external partition}, i.e., a splitting of $V$ into two parts such that each vertex has at least half of its neighbors in the other part. This is, e.g., true for $G$'s max-cut partition. Much less is known about the {\em internal partition} problem in which $V$ is split into two non-empty parts, such that each vertex has at least half of its neighbors in its own part. Not all graphs have an internal partition and their existence is proved only for certain classes of graphs. Several investigators have raised the conjecture that for every $d$ there is an $n_0$ such that every $d$-regular graph with at least $n_0$ vertices has an internal partition. Here we prove the case $d=6$ of this conjecture.
A related intriguing concept in this area is the notion of {\em external bisection}. This is an external partition in which the two parts have the same cardinality. We conjecture that the Petersen graph is the only connected cubic graph with no external bisection. We take some steps in resolving this problem.
These concepts have emerged in several different areas and as a result there is an abundance of terminologies here. Thus Gerber and Kobler~\cite{Gerber} used the term {\em satisfactory partition} for internal partitions. Internal/external partitions are sometimes called {\em friendly} and {\em unfriendly} partitions. Morris~\cite{Morris} studied social learning, and considered a more general problem: now we want to partition $V=A \dot\cup B$ with $A, B \neq \emptyset$ such that every $x\in A$ (resp. $y \in B$) has at least $qd(x)$ of its neighbors in $A$ (resp. at least $(1-q)d(y)$ neighbors in $B$). He refers to such sets as {\em ($q$/$1-q$)-cohesive}. Here we use the term {\em $q$-internal partitions}. The complementary notion of {\em $q$-external partitions} is considered as well.
\tikzstyle{gray}=[circle, draw, fill=gray!50, inner sep=0pt, minimum width=6pt]
\tikzstyle{green}=[circle, draw, fill=green!50, inner sep=0pt, minimum width=6pt]
\tikzstyle{orange}=[circle, draw, fill=orange!50, inner sep=0pt, minimum width=6pt]
\begin{figure}[tbp]
\begin{minipage}[t]{0.5\textwidth}
\begin{tikzpicture}[thick,scale=0.6]
\draw \foreach \x in {0,36,...,324}
{
(\x:2) node [orange] {} -- (\x+108:2)
(\x-10:3) node [green] {} -- (\x+5:4)
(\x-10:3) -- (\x+36:2)
(\x-10:3) --(\x+170:3)
(\x+5:4) node [green] {} -- (\x+41:4)
};
\end{tikzpicture}\quad
\end{minipage}
\begin{minipage}[t]{0.5\textwidth}
\begin{tikzpicture}[thick,scale=0.8]%
\draw \foreach \x in {18,90,...,306} {
(\x:4) node [orange] {} -- (\x+72:4)
(\x:4) -- (\x:3) node [green] {}
(\x:3) -- (\x+15:2) node [green] {}
(\x:3) -- (\x-15:2) node [green] {}
(\x+15:2) -- (\x+144-15:2)
(\x-15:2) -- (\x+144+15:2)
};
\end{tikzpicture}
\end{minipage}
\caption[Examples of internal partitions]{Examples of internal partitions}
\label{cubic-examples}
\end{figure}
Figure \ref{cubic-examples} shows examples of internal partitions of regular cubic graphs.
Bazgan, Tuza and Vanderpooten have written several papers~\cite{Bazgan2003,Bazgan2006} on internal partitions. In~\cite{Bazgan2010} they give a survey of this area. Much of their work concerns the complexity of finding such partitions, a problem which we do not address here.
Our own interest in this subject arose in our studies of learning in social or geographical networks. Vertices in these graphs represent individuals and edges stand for social connection or geographical proximity. The individuals adopt one of two choices of a social attribute (e.g. PC or Mac user). Society evolves over time, with each individual adopting the choice of the majority of her neighbors. We asked whether a stable, diverse assignment of choices is possible in such a society. This amounts to finding an internal partition if the social choices are equally persuasive. It is also of interest to consider the problem when choices carry different persuasive power (say a neighbor who is a Mac user is more persuasive than a PC neighbor). If the merits are in proportion $q : 1-q$, this leads to the problem of finding a {\em $q$-internal partition}.
Thomassen \cite{Thomassen} showed that for every two integers $s, t > 0$ there is a $g=g(s,t)$ such that every graph $G=(V,E)$ of minimum degree at least $g$ has a partition $V=V_1\dot\cup V_2$ so that the induced subgraphs $G(V_1), G(V_2)$ have minimum degree at least $s, t$, respectively. He conjectured that the same holds with $g(s,t) = s + t + 1$, which would be tight for complete graphs. Stiebitz \cite{Stiebitz} proved this conjecture, and extended it as follows: For every $a, b: V \mapsto \mathbb{Z}_{+}$ such that $\forall v \in V, d_G(v) \geq a(v) + b(v) + 1$, there exists a partition of $V=A\dot\cup B$, such that $\forall v \in A, d_A(v) \geq a(v)$ and $\forall v \in B, d_B(v) \geq b(v)$. Kaneko~\cite{Kaneko} showed that in triangle-free graphs the same conclusion holds under the weaker assumption $d_G(v) \geq a(v) + b(v)$.
Stiebitz's result shows that, given $q \in (0,1)$, every graph has a non-trivial partition which is at most one edge (for each vertex) short of being a $q$-internal partition. Shafique and Dutton~\cite{Shafique} showed the existence of internal partitions in all cubic graphs except $K_4$ and $K_{3,3}$ and in all 4-regular graphs except $K_5$. In this paper, we settle the problem for 6-regular graphs.
Shafique and Dutton also conjectured that $K_{2k+1}$ is the only $d=2k$-regular graph with no internal partition. We disprove this and present a number of counterexamples. Many of these exceptions are with $d \geq n-4$. This range turns out to be of interest and we discuss it as well. As we show, there exist $d$-regular $n$-vertex graphs with no internal partitions with both $d$ and $n-d$ arbitrarily large. We conjecture that every $2k$-regular graph with $n \geq 4k$ has an internal partition. In the process, we consider external bisections of regular graphs, and especially cubic graphs. We note that all class-I cubic graphs have an external bisection, and speculate that for class-II cubic graphs, only graphs that have the Petersen graph as a component do {\em not} have such a bisection.
Finally, we conjecture that there is a function $\mu=\mu(d,q)$ such that if $qd$ is an integer, then every $d$-regular graph with at least $\mu(d,q)$ vertices has a $q$-internal partition. We also conjecture this for $q = 1/2$ and $d$ odd. As we show, for $d$ fixed and large $n$, every $n$-vertex $d$-regular graph has {\sl many} $q$-internal partitions for {\em some} $q$. This lends some support to our conjecture. We also discuss an algorithm that generates $q$-internal partitions of a graph for many, and plausibly all, values of $q$. This sheds light on what causes a graph to be non-partitionable.
\section{Terminology}
We consider undirected graphs $G=(V,E)$ with $n$ vertices. For $S \subset V$, we denote by $G(S)$ the induced subgraph of $S$. The degree of $x\in V$ is denoted by $d(v)=d_G(v)$ and the number of neighbors that $v$ has in $S\subseteq V$ is called $d_S(v)$. The complement of $G$ is denoted by $\bar{G}$.
A {\em bisection} of $V=A\dot\cup B$ is a partition with $|A| = |B|$. If $||A| - |B|| \leq 1$, then we call it a near-bisection. Corresponding to the partition $(A,B)$ of $V$ is the {\em cut} $E(A,B)=E_G(A,B)=\{xy\in E|x\in A, y\in B\}$. For $x \in A$ and $y \in B$ we call $d_A(x), d_B(y)$, respectively, the vertices' {\em indegrees}, and $d_B(x), d_A(y)$ the {\em outdegrees}. These terms usually refer to directed graphs, but we could not resist the convenience of using them in the present context.
A subset $S \subseteq V$ is called {\em $p$-cohesive} if $\forall x \in S, d_S(x) \geq p$. It is called a {\em $p$-crumble} if no $S' \subseteq S$ is $p$-cohesive. (Note that our notion of cohesion differs from that of Morris~\cite{Morris}).
A partition $(A,B)$ is {\em $q$-internal} for $q \in (0,1)$ if $\forall x \in A, d_A(x) \geq qd_G$ and $\forall x \in B, d_B(x) \geq (1-q)d_G(x)$. A $\frac{1}{2}$-internal partition is simply {\em internal}.
If $\forall x \in A, d_B(x) \geq qd_G$ and $\forall x \in B, d_A(x) \geq (1-q)d_G(x)$ we call the partition {\em $q$-external}. A $\frac{1}{2}$-external partition is {\em external}.
A $q$-internal or a $q$-external partition is called {\em integral} if for every $v \in V$, $qd_G(v)$ is an integer.
A $q$-internal or a $q$-external partition $(A,B)$ is called {\em exact} if $|A| = qn$, and {\em near-exact} if $||A| - qn| < 1$. A $\frac{1}{2}$-exact partition is a {\em bisection}. For $q = \frac{1}{2}$, near-exact partitions are {\em near-bisections}.
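These definitions translate directly into code. The following Python sketch, using the networkx package, checks whether a given vertex set induces a $q$-internal partition; it merely mirrors the definitions above and is not a procedure for finding partitions.
\begin{verbatim}
import networkx as nx

def is_q_internal(G, A, q=0.5):
    # (A, V \ A) is q-internal if every vertex of A has at least
    # q*deg(v) neighbours in A, and every vertex of B = V \ A has
    # at least (1-q)*deg(v) neighbours in B.
    A = set(A)
    B = set(G.nodes) - A
    if not A or not B:
        return False
    for v in G.nodes:
        same_side = sum(1 for u in G[v] if (u in A) == (v in A))
        need = q * G.degree(v) if v in A else (1 - q) * G.degree(v)
        if same_side < need:
            return False
    return True

# Example: splitting the cube graph into two opposite faces is an
# internal partition (each vertex keeps 2 of its 3 neighbours).
cube = nx.hypercube_graph(3)
face = {v for v in cube.nodes if v[0] == 0}
assert is_q_internal(cube, face)
\end{verbatim}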
\section{Internal Partitions of 6-Regular Graphs}
\begin{lemma}
\label{l}
Let $G=(V,E)$ be a graph with minimal degree $d$. For $0 < k < |V|$, let $(A,B)$ be a partition of $V$ that attains $\min |E(A,B)|$ over all partitions with $|A|=k$ or $|B|=k$. Then, either:
\begin{enumerate}
\item \label{l1} $A$ is $l$-cohesive and $B$ is $m$-cohesive for some integers $l,m$ with $l + m = d$, or:
\item \begin{enumerate}
\item \label{l2a} $A$ is $l$-cohesive and $B$ is $m$-cohesive for some integers $l,m$ with $l + m = d - 1$, and:
\item \label{l2b} The vertices in $A$ with indegree $l$ and the vertices in $B$ with indegree $m$ form a complete bipartite subgraph in $G$, and:
\item \label{l2c} For every $x \in A$ with indegree $l$, $B \cup \{x\}$ is $(m+1)$-cohesive. Similarly, $A \cup \{x\}$ is $(l+1)$-cohesive for every $x \in B$ with indegree $m$.
\end{enumerate}
\end{enumerate}
\end{lemma}
\begin{proof}
Let $x \in A, y \in B$. If $xy \notin E$ then
\begin{align*}
|E((A \backslash \{x\}) \cup \{y\}, (B \backslash \{y\}) \cup \{x\})| - |E(A,B)| & = d_A(x) - d_B(x) + d_B(y) - d_A(y) \\
& \leq 2[d_A(x) + d_B(y) - d]
\end{align*}
If $xy\in E$, then
\begin{align*}
|E((A \backslash \{x\}) \cup \{y\}, (B \backslash \{y\}) \cup \{x\})| - |E(A,B)| & = d_A(x) - d_B(x) + (d_B(y) + 1) - (d_A(y) - 1) \\
& \leq 2[d_A(x) + d_B(y) - (d - 1)]
\end{align*}
Since $E(A,B)$ is minimal, it follows that the sum of indegrees is at least $d-1$ if $x, y$ are adjacent, and $d$ otherwise.
Let us apply this for $x,y$ of minimum indegree. Then (\ref{l1}) follows if there is such a pair with $xy \notin E$. On the other hand, if $xy\in E$ for all such pairs, then (\ref{l2a}) and (\ref{l2b}) follow. We obtain (\ref{l2c}) by observing that increasing by one the indegree of all minimum indegree vertices in a subset, increases the minimum indegree of the subset by one.
\qed
\end{proof}
\begin{corollary}
Every $n$-vertex $d$-regular graph has a $\lceil \frac{d}{2} \rceil$-cohesive set of at most $\lceil \frac{n}{2} \rceil$ vertices if $d$ is even, and of at most $\frac{n}{2} + 1$ vertices if $d$ is odd.
\end{corollary}
\begin{proof}
Consider a near-bisection of $G$ that minimizes $|E(A,B)|$. By Lemma \ref{l} if $d$ is even, at least one of $A, B$ is $\frac{d}{2}$-cohesive. If $d$ is odd, and if neither $A$ nor $B$ are $\lceil \frac{d}{2} \rceil$-cohesive, then by (\ref{l2a}) both are $\lfloor \frac{d}{2} \rfloor$-cohesive, and by (\ref{l2c}) each can be made $\lceil \frac{d}{2} \rceil$-cohesive by adding a vertex of the other.
\qed
\end{proof}
\begin{theorem}
\label{6regular}
Every $6$-regular graph with at least $14$ vertices has an internal partition.
\end{theorem}
\begin{proof}
We argue by contradiction and consider an $n$-vertex $6$-regular graph $G=(V,E)$ with no internal partition. Let $(A, B)$ be the near-bisection of $V$ that attains $\min|E(A,B)|$ over all near-bisections.
By Lemma \ref{l} either $A$ or $B$ must be 3-cohesive. We may assume $A$ is 3-cohesive while $B$ is not, for else $(A,B)$ is an internal partition.
We repeatedly carry out the following step:
As long as there is some $y \in B$ with outdegree $d_A(y)> 3$, we move that vertex from $B$ to $A$. If $A$ is 3-cohesive then clearly so is $A\cup \{y\}$, while if $B$ is a 3-crumble, so is $B \backslash \{y\}$. By assumption no internal partition exists, so this process must terminate with a trivial partition, i.e., $B$ must be a 3-crumble. The move of $y$ from $B$ to $A$ decreases $|E(A,B)|$ by $2d_A(y)-6 \geq 2$. Every step of the process therefore decreases the cut by at least 2, while $|B|$ decreases by 1. Moreover, in the last two moves $|E(A,B)|$ decreases by at least $4$ and by $6$, in this order, and at termination $E(A,B)=\emptyset$. We conclude that $|E(A,B)| \geq 2|B| + 6$.
On the other hand $|E(A,B)| \leq 2|A| + 4$:
By Lemma \ref{l}, all vertices in $A$ have outdegree $\le 2$, except for at most 4 vertices with outdegree 3 (those adjacent to a vertex in $B$ with outdegree $\le 4$). Therefore $2|A| + 4 \ge |E(A,B)| \geq 2|B| + 6$, so that $|A| \ge |B|+1$.
It follows that $|A| = |B| + 1$, $n$ is odd and $B$ is a ``tight'' 3-crumble. Namely, exactly 4 vertices in $A$ have outdegree 3, and in all moves (except the last two) $|E(A,B)|$ is reduced by exactly 2. If $n \geq 9$ then $|B| \geq 4$, so the first two vertex moves are of outdegree 4. Let $y', y'' \in B$ be these first two vertices, let $(A',B') = (A \cup \{y'\},B \backslash \{y'\})$ be the partition after the first move, and let $(A'',B'') = (A \cup \{y',y''\},B \backslash \{y',y''\})$ be the partition after the second move. By the above $|E(A',B')| = |E(A,B)| - 2$ and $|E(A'',B'')| = |E(A,B)| - 4$.
By Lemma \ref{l} (\ref{l2c}) all vertices in $A'$ have outdegree 2. Therefore, in $A''$, all vertices have outdegree 2 except 4 with outdegree 1. Suppose that some pair of these outdegree-2 vertices in $A''$, say $x', x''$ are adjacent. Then it would be possible to move both vertices to $B''$ while increasing the cut size by only 3. Namely, $|E(A'' \backslash \{x',x''\},B'' \cup \{x',x''\})| = |E(A'',B'')| + 3 < |E(A,B)|$. This yields a near-bisection, that contradicts the minimality of $|E(A,B)|$. Alternatively, if the outdegree-2 vertices in $A''$ form an independent set, then all their neighbors in $A''$ must have outdegree 1 and indegree 5. It follows that there are at most 5 vertices in $A''$ of outdegree-2. Therefore $|A''| \leq 9 \Rightarrow |A| \leq 7 \Rightarrow n \leq 13$.
\qed
\end{proof}
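The vertex-moving process used in the proof is straightforward to implement. The following Python/networkx sketch of this step (for the 6-regular case, $k=3$) takes a partition with $A$ assumed 3-cohesive and greedily moves vertices of $B$ with outdegree above $k$ into $A$, as in the proof above.
\begin{verbatim}
import networkx as nx

def greedy_moves(G, A, B, k=3):
    # Repeatedly move any y in B with more than k neighbours in A over
    # to A. If A is k-cohesive it stays k-cohesive; if B empties out,
    # B was a k-crumble and this path yields no internal partition.
    A, B = set(A), set(B)
    moved = True
    while moved and B:
        moved = False
        for y in list(B):
            if sum(1 for u in G[y] if u in A) > k:
                B.remove(y)
                A.add(y)
                moved = True
    return A, B
\end{verbatim}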
\begin{remark}
We now comment on the range $n \le 13$. Note that the proof covers all even $n$. The complete graph $K_7$ is an exception with $n=7$.
\begin{figure}[tbp]
\centering
\begin{tikzpicture}[thick,scale=0.6]
\draw \foreach \x in {0,120,240}
{
(\x:4) node [green] {} -- (\x+100:4)
(\x-20:4) node [green] {} -- (\x+100:4)
(\x+20:4) node [red] {} -- (\x+100:4)
(\x:4) node [green] {} -- (\x+120:4)
(\x-20:4) node [green] {} -- (\x+120:4)
(\x+20:4) node [red] {} -- (\x+120:4)
(\x:4) node [green] {} -- (\x+140:4)
(\x-20:4) node [green] {} -- (\x+140:4)
(\x+20:4) node [green] {} -- (\x+140:4)
};
\end{tikzpicture}\quad
\caption[$K_{3,3,3}$: A 6-regular graph with no internal partition]{$K_{3,3,3}$: A 6-regular graph with no internal partition}
\label{d=n-3 example}
\end{figure}
For $n=9$, there is a unique unpartitionable 6-regular graph (see Figure \ref{d=n-3 example}). We prove this statement when we discuss the case $d = n -3$ in the following section.
For $n=11$, there exist 6-regular graphs with no internal partition. One such example, $Q_3$, is a member of a class of unpartitionable graphs we construct in Section \ref{general_case}.
The case $n=13$ remains unsettled. Our Conjecture \ref{conj2d} would imply that all such graphs have an internal partition.
\end{remark}
\section{Partitions of Complementary Graphs}
\begin{proposition}
\label{coexist}
For every $q \in (0,1)$, every graph $G$ has a $q$-external partition.
\end{proposition}
\begin{proof}
For a partition $(A,B)$ define
\begin{equation}
w(A,B) := |E(A,B)| - q\sum\limits_{x \in A} d_G(x) - (1-q)\sum\limits_{x \in B} d_G(x)
\end{equation}
The partition that maximizes $w(A,B)$ is non-trivial, since for every non-isolated vertex $x$ there holds $w(V \backslash \{x\}, \{x\}) > w(V,\emptyset)$ and $w(\{x\}, V \backslash \{x\}) > w(\emptyset,V)$. Furthermore $w(A,B) - w(A \backslash \{x\}, B \cup \{x\}) = d_B(x) - d_A(x) + (1-q) d_G(x) - qd_G(x) = 2d_B(x) - 2qd_G(x)$ and $w(A,B) - w(A \cup \{x\}, B \backslash \{x\}) = d_A(x) - d_B(x) -(1- q) d_G(x) + qd_G(x) = 2d_A(x) - 2(1-q)d_G(x)$, so the maximality of $(A,B)$ implies that it is $q$-external.
\qed
\end{proof}
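The proof is constructive: any vertex violating its outdegree requirement can be moved to the other side, strictly increasing $w(A,B)$. The following Python sketch implements this local search; since $w$ increases at every move, the loop terminates, and for a graph with at least one edge the resulting $q$-external partition is non-trivial.
\begin{verbatim}
import networkx as nx

def q_external_partition(G, q=0.5):
    # Local search maximizing
    #   w(A,B) = |E(A,B)| - q*sum_{A} deg - (1-q)*sum_{B} deg.
    # Each move below strictly increases w, so the loop terminates,
    # and at termination no vertex violates the q-external condition.
    nodes = list(G.nodes)
    A = set(nodes[: len(nodes) // 2])          # any starting split works
    improved = True
    while improved:
        improved = False
        for v in nodes:
            out = sum(1 for u in G[v] if (u in A) != (v in A))
            need = q * G.degree(v) if v in A else (1 - q) * G.degree(v)
            if out < need:                     # moving v raises w
                if v in A:
                    A.remove(v)
                else:
                    A.add(v)
                improved = True
    return A, set(nodes) - A
\end{verbatim}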
\begin{proposition}
\label{dual}
For $q \in (0,1)$ every exact $q$-internal partition of $G=(V,E)$ is an exact $(1-q)$-external partition of $\bar{G}$.
\end{proposition}
\begin{proof}
Let $|V|=n$ and let $(A,B)$ be an exact $q$-internal partition of $G$. Namely, $|A| = qn, |B|=(1-q)n$ and $\forall x \in A, d_A(x) \geq qd_G(x)$ and $\forall x \in B, d_B(x) \geq (1-q)d_G(x)$. To indicate that we work in $\bar{G}$ we denote by $\bar{A}, \bar{B}$ the subgraphs of $\bar{G}$ induced by $A, B$. Then:
\begin{align*}
&\forall x \in V,&d_{\bar{G}}(x) = n - d_G(x) - 1 \\
&\forall x \in A,&d_{\bar{B}}(x) = |B| - d_B(x) = (1-q)n - (d_G(x) - d_A(x)) \geq \\
&&\geq (1-q)(n - d_G(x)) > (1-q)d_{\bar{G}}(x) \\
&\forall x \in B,&d_{\bar{A}}(x) = |A| - d_A(x) = qn - (d_G(x) - d_B(x)) \geq \\
&& \geq q(n - d_G(x)) > qd_{\bar{G}}(x)
\end{align*}
So $(A,B)$ is a $(1-q)$-external partition.
\qed
\end{proof}
\begin{proposition}
For $q \in (0,1)$ every exact $(1-q)$-external partition of $G=(V,E)$ is an exact $q$-internal partition of $\bar{G}$, provided the partition of $\bar{G}$ is integral.
\end{proposition}
\begin{proof}
Maintaining the notation of Proposition \ref{dual}, consider an exact $(1-q)$-external partition $(A,B)$ of $G$. Namely $|A| = qn, |B|=(1-q)n$ and $\forall x \in B, d_A(x) \geq qd_G(x)$ and $\forall x \in A, d_B(x) \geq (1-q)d_G(x)$. Then:
\begin{align*}
&\forall x \in V,&d_{\bar{G}}(x) = n - d_G(x) - 1 \\
&\forall x \in A,&d_{\bar{A}}(x) = |A| - d_A(x) - 1 = qn - (d_G(x) - d_B(x)) - 1 \geq \\
&&\geq q(n - d_G(x)) - 1 = qd_{\bar{G}}(x) - (1-q).
\end{align*}
By rounding up we conclude that $d_{\bar{A}}(x) \geq qd_{\bar{G}}(x)$. (Note that $d_{\bar{A}}(x)$ and $qd_{\bar{G}}(x)$ are integers and $1>q>0$).
\begin{align*}
&\forall x \in B,&d_{\bar{B}}(x) = |B| - d_B(x) = (1-q)n - (d_G(x) - d_A(x)) - 1 \geq \\
&& \geq (1-q)(n - d_G(x)) - 1 = (1-q)d_{\bar{G}}(x) - q.
\end{align*}
By a similar argument $d_{\bar{B}}(x) \geq (1-q)d_{\bar{G}}(x)$,
so $(A,B)$ is a $q$-internal partition.
\qed
\end{proof}
\begin{corollary}
If $G$ has an internal bisection, then $\bar{G}$ has an external bisection.
\end{corollary}
\begin{corollary}
\label{dual-bisection}
If all degrees in $G$ are even and $\bar{G}$ has an external bisection, then $G$ has an internal bisection.
\end{corollary}
\begin{theorem}
For even $n$, every ($n-2$)-regular graph has an internal bisection.
\end{theorem}
\begin{proof}
The complement of an ($n-2$)-regular graph is a perfect matching. Split each matched pair between sides of a partition to obtain an external bisection. The theorem follows from Corollary \ref{dual-bisection}.
\qed
\end{proof}
\begin{theorem}
An ($n-3$)-regular graph $G$ has an internal partition if and only if its complementary graph $\bar G$ has at most one odd cycle. Furthermore this partition is a near-bisection.
\end{theorem}
\begin{proof}
Clearly $\bar{G}$ is 2-regular, i.e. it is comprised of vertex disjoint cycles. For every cycle, place the vertices alternately in $A$ and in $B$. If at most one cycle is odd, then $||A| - |B|| \leq 1$, so the partition is a near-bisection. It is also an internal partition of $G$, since the smaller side, say $B$, is a clique. Also, $A$ spans a clique if $|A| = |B|$ , or a clique minus one edge if $|A| = |B| + 1$, so its minimum indegree is also $|B| - 1$. As $|B| - 1 \geq (n-3)/2$, the partition is internal.
Let $G$ have an internal partition $(A,B)$. If $n$ is even, every vertex must have indegree $\geq n/2 - 1$. Therefore $|A| = |B| = n/2$ and the complementary graph $\bar{G}$ is bipartite so has no odd cycles. If $n$ is odd, assume $|A| > |B|$. $B$'s minimum indegree is $(n-3)/2$ so $|B| = (n-1)/2, |A| = (n+1)/2$ and the partition is a near-bisection. In $\bar{G}$, $|E(A,B)|=2|B|=n-1$ so $E(A)=(2|A|-|E(A,B)|)/2 = 1$. Therefore $(A,B)$ is bipartite in $\bar{G}$ except for a single edge internal to $A$. Therefore $\bar{G}$ has only one odd cycle.
\qed
\end{proof}
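The proof yields a simple construction, sketched below in Python/networkx: compute the 2-regular complement, and place the vertices of each of its cycles alternately in the two sides (assuming, as in the theorem, that the complement has at most one odd cycle).
\begin{verbatim}
import networkx as nx

def internal_partition_n_minus_3(G):
    # The complement of an (n-3)-regular graph is 2-regular, i.e., a
    # disjoint union of cycles. Place the vertices of every complement
    # cycle alternately in A and in B, as in the proof above.
    H = nx.complement(G)
    A, B = set(), set()
    for comp in nx.connected_components(H):
        cycle = nx.cycle_basis(H.subgraph(comp))[0]  # the component's cycle
        for i, v in enumerate(cycle):
            (A if i % 2 == 0 else B).add(v)
    return A, B
\end{verbatim}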
We can now confirm that $K_{3,3,3}$, the graph in Figure \ref{d=n-3 example}, has no internal partition, as it is the complement of three disjoint triangles. Furthermore, since three disjoint triangles form the only 2-regular graph on 9 vertices with more than one odd cycle, this is the only $n=9$, $d=6$ graph with this property.
\section{The Case $d = n - 4$ and Cubic Graphs}
Let $G$ be a $d$-regular graph on $n$ vertices with $d = n - 4$. Clearly $n$ must be even, and its complement $\bar{G}$ is a cubic graph.
\begin{proposition}
\label{n-4}
If an ($n-4$)-regular graph $G$ has an internal partition then either
\begin{itemize}
\item $\bar{G}$ has an external bisection, or
\item $\bar{G}$ has an independent set of size at least $n/2-1$.
\end{itemize}
\end{proposition}
\begin{proof}
By Corollary \ref{dual-bisection} if $\bar{G}$ has an external bisection, $G$ has an internal bisection. If not, to be internal a partition must have minimum indegree $n/2-2$, so each part must have size $\ge n/2-1$. Therefore $|A| = |B|+2$, where $B$ is a clique in $G$ and an anticlique in $\bar{G}$.
\qed
\end{proof}
\begin{figure}[tbp]
\begin{center}
\begin{tikzpicture}[style=thick]
\draw (18:2cm) -- (90:2cm) -- (162:2cm) -- (234:2cm) --
(306:2cm) -- cycle;
\draw (18:1cm) -- (162:1cm) -- (306:1cm) -- (90:1cm) --
(234:1cm) -- cycle;
\foreach \x in {18,90,162,234,306}{
\draw (\x:1cm) -- (\x:2cm);
\draw (\x:2cm) [green] circle (3pt);
\draw (\x:1cm) [green] circle (3pt);
}
\draw (18:2cm) [orange] circle (3pt);
\draw (162:2cm) [orange] circle (3pt);
\draw (234:1cm) [orange] circle (3pt);
\draw (306:1cm) [orange] circle (3pt);
\end{tikzpicture}
\end{center}
\caption[External partition of the Petersen graph]{External partition of the Petersen graph}
\label{petersen}
\end{figure}
The Petersen graph (see Figure \ref{petersen}) has no external bisection, but it has an independent set of size 4. Its complement is 6-regular, and in fact has an internal partition (but not a bisection), as already proved in Theorem \ref{6regular}.
The requirement of an independent set of size $n/2-1$ means that, save for 3 edges, the cubic graph is bipartite. Clearly this is a rare phenomenon among cubic graphs, so our quest for graphs with internal partitions boils down to asking which cubic graphs have an external bisection.
We show next:
\begin{theorem}
\label{class-1}
Every class-1 3- or 4-regular graph $G$ has an external bisection.
\end{theorem}
\begin{proof}
Pick some $d$-edge coloring of $G$ and choose any two of the colors. The edges of these two color classes form a $2$-factor of $G$ consisting of even (alternating) cycles. Assign the vertices of each cycle alternately to the two sides of a partition; since all cycles are even, this is clearly a bisection. For $d \leq 4$, the partition is external, since every vertex has its two cycle neighbors, i.e., at least half of its neighbors, in the opposite part.
\qed
\end{proof}
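The proof is constructive. The following Python sketch (assuming the \texttt{networkx} package; names are ours) extracts an external bisection from a given proper edge coloring:
\begin{verbatim}
import networkx as nx

def external_bisection(G, coloring, c1, c2):
    """coloring: dict mapping each edge (u, v) of G to a color. Returns one
    side A of the bisection obtained by alternating along the 2-factor
    formed by the color classes c1 and c2 (even alternating cycles)."""
    F = nx.Graph()
    F.add_edges_from(e for e, c in coloring.items() if c in (c1, c2))
    A = set()
    for comp in nx.connected_components(F):  # each component is an even cycle
        start = next(iter(comp))
        prev, cur, side = None, start, True
        while True:
            if side:
                A.add(cur)
            nxt = next(x for x in F[cur] if x != prev)  # walk along the cycle
            prev, cur, side = cur, nxt, not side
            if cur == start:
                break
    return A
\end{verbatim}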
While all class-1 cubic graphs thus have an external bisection, the same question for class-2 cubic graphs remains open, though below we present a partial result. As noted, the Petersen graph, the smallest {\em snark}, has no external bisection. We checked a substantial number of larger snarks and found external bisections in all of them. Our computer experiments also suggest that all cubic graphs with bridges have external bisections, so we make the following conjecture:
\begin{conjecture}
\label{cubic}
The Petersen graph is the only connected cubic graph that has no external bisection.
\end{conjecture}
Note that disconnected cubic graphs with no external bisection do exist: take, for example, a graph whose components are an odd number of Petersen graphs together with any number of $K_4$ components.
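These checks require only a straightforward enumeration; for small graphs a brute-force test suffices, as in the following Python sketch (assuming \texttt{networkx}; the function name is ours):
\begin{verbatim}
from itertools import combinations
import networkx as nx

def has_external_bisection(G):
    """Return True iff the cubic graph G has a bisection in which every
    vertex has at least half (here >= 2) of its neighbors in the other part."""
    V = list(G.nodes())
    n = len(V)
    for A in combinations(V, n // 2):
        A = set(A)
        if all(sum((u in A) != (v in A) for u in G[v]) >= 2 for v in V):
            return True
    return False

print(has_external_bisection(nx.petersen_graph()))  # False
\end{verbatim}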
\begin{figure}[tbp]
\begin{minipage}[t]{0.5\textwidth}
\begin{tikzpicture}[style=thick]
\foreach \pos/\name in {{(18:2cm)/a}, {(90:2cm)/b}, {(162:2cm)/c}, {(234:2cm)/d}, {(306:2cm)/e}}
\node[green] (\name) at \pos {};
\foreach \pos/\name in {{(18:1cm)/f}, {(90:1cm)/g}, {(162:1cm)/h}, {(234:1cm)/i}, {(306:1cm)/j}}
\node[green] (\name) at \pos {};
\draw (a) -- (b) -- (c) -- (d) -- (e) -- (a);
\draw (f) -- (h) -- (j) -- (g) -- (i) -- (f);
\draw (a) -- (f);
\draw (b) -- (g);
\draw (c) -- (h);
\draw (d) -- (i);
\draw (e) -- (j);
\foreach \pos/\name in {{(-1,-4)/k}, {(-2,-3)/l}, {(2,-3)/m}, {(1,-4)/n}}
\node[green] (\name) at \pos {};
\draw (k) -- (l) -- (m) -- (n) -- (k);
\draw (k) -- (m);
\draw (l) -- (n);
\end{tikzpicture}\quad
\end{minipage}
\begin{minipage}[t]{0.5\textwidth}
\begin{tikzpicture}[style=thick]
\foreach \pos/\name in {{(18:2cm)/a}, {(90:2cm)/b}, {(162:2cm)/c}, {(234:2cm)/d}, {(306:2cm)/e}}
\node[green] (\name) at \pos {};
\foreach \pos/\name in {{(18:1cm)/f}, {(90:1cm)/g}, {(162:1cm)/h}, {(234:1cm)/i}, {(306:1cm)/j}}
\node[green] (\name) at \pos {};
\draw (a) -- (c);
\draw (a) -- (d);
\draw (a) -- (g);
\draw (a) -- (h);
\draw (a) -- (i);
\draw (a) -- (j);
\draw (b) -- (d);
\draw (b) -- (e);
\draw (b) -- (f);
\draw (b) -- (h);
\draw (b) -- (i);
\draw (b) -- (j);
\draw (c) -- (e);
\draw (c) -- (f);
\draw (c) -- (g);
\draw (c) -- (i);
\draw (c) -- (j);
\draw (d) -- (f);
\draw (d) -- (g);
\draw (d) -- (h);
\draw (d) -- (j);
\draw (e) -- (f);
\draw (e) -- (g);
\draw (e) -- (h);
\draw (e) -- (i);
\foreach \pos/\name in {{(-1,-4)/k}, {(-2,-3)/l}, {(2,-3)/m}, {(1,-4)/n}}
\node[green] (\name) at \pos {};
\foreach \a in {k, l, m, n}
\foreach \b in {a, b, c, d, e, f, g, h, i, j}
\draw (\a) -- (\b);
\end{tikzpicture}
\end{minipage}
\caption[Smallest $(n-4)$-regular graph with no internal partition (right) is the complement of the cubic graph on the left]{Smallest $(n-4)$-regular graph with no internal partition (right) is the complement of the cubic graph on the left}
\label{petersen-k4}
\end{figure}
As mentioned above, the complement of the Petersen graph has an internal partition, by virtue of the Petersen graph having an independent set of size $n/2-1$ (as required by Proposition \ref{n-4}). But the above-mentioned disconnected cubic graphs do not meet that requirement, and so their complements have no internal partition. The smallest of these is a 10-regular graph of order 14, whose complement is a Petersen graph plus a $K_4$ component (see Figure \ref{petersen-k4}). This is the smallest member of an infinite class of $(n-4)$-regular graphs with no internal partition. If Conjecture \ref{cubic} is true, these are the only exceptions, as stated in the following:
\begin{conjecture}\label{cnj2}
If $G$ is $(n-4)$-regular and has no internal partition, then $\bar G$ is a disconnected cubic graph that has an odd number of components that are Petersen graphs. All other components of $\bar G$ have the property that all their external partitions are bisections.
\end{conjecture}
Another consequence of Conjecture~\ref{cubic} is:
\begin{conjecture}\label{cnj3}
Every cubic graph has an external partition $(A,B)$ with $||A| - |B|| \leq 2$.
\end{conjecture}
\begin{figure}[tbp]
\centering
\begin{tikzpicture}[style=thick]
\foreach \pos/\name in {{(-4,0)/1}, {(-1,0)/3}, {(1,0)/5}, {(3,0)/7}, {(4,3)/9}, {(2,3)/10}, {(0,3)/12},
{(-2,4)/14}, {(-3,1.5)/17}, {(-3,3.5)/19}, {(-1,2)/20}, {(-2,1)/25}, {(0,1)/22}, {(2,1)/27}}
\node[green] (\name) at \pos {};
\foreach \pos/\name in {{(-2,0)/2}, {(0,0)/4}, {(2,0)/6}, {(4,1)/8}, {(1,3)/11}, {(-1,3)/13}, {(-4,4)/15},
{(-3,0.5)/16}, {(-3,2.5)/18}, {(0,2)/21}, {(-2,2)/24}, {(-1,1)/23}, {(1,1)/26}, {(3,2)/28}}
\node[orange] (\name) at \pos {};
\draw (1) -- (2) -- (3) -- (4) -- (5) -- (6) -- (7) -- (8) -- (9) -- (10) -- (11) -- (12) -- (13) -- (14) -- (15) -- (1);
\draw (1) -- (16) -- (17) -- (18) -- (19) -- (20) -- (21) -- (22) -- (23) -- (24) -- (17);
\draw (5) -- (26) -- (11);
\draw (26) -- (27) -- (8);
\draw (7) -- (28) -- (10);
\draw (28) -- (9);
\draw (6) -- (27);
\draw (2) -- (25) -- (16);
\draw (25) -- (18);
\draw (3) -- (23);
\draw (4) -- (22);
\draw (15) -- (19);
\draw (14) -- (24);
\draw (13) -- (20);
\draw (12) -- (21);
\end{tikzpicture}
\caption[Possibly largest ($n=28$) connected cubic graph with no uneven external partition]{Possibly largest ($n=28$) connected cubic graph with no uneven external partition}
\label{all-bisection}
\end{figure}
There exist graphs other than $K_4$ all of whose external partitions are bisections. Every cubic graph of order 6 or 8 has this property, since in an external partition of a cubic graph the proportion between the sides is at most $3:2$, which rules out uneven partitions at these orders. There are, however, larger connected cubic graphs with this property. The graph in Figure \ref{all-bisection} has order 28, and it may be the largest such graph.
An obvious first step in proving Conjecture~\ref{cubic} would be to show that the smallest counterexample to this conjecture must be bridgeless. We are presently unable to establish even that, but the following is a partial result in that direction:
Every bridge in a cubic graph $G = (V,E)$ may be eliminated, resulting in two smaller cubic graphs, by the following procedure. The reader may find it useful to follow Figure \ref{bridge}, where the procedure is illustrated.
Start by deleting the two endpoints of the bridge ($b_1, b_2$). In each of the two resulting components all vertices then have degree $3$, except for two vertices of degree $2$. The following is repeated in a loop for each component until a cubic graph remains:
\begin{itemize}
\item
If the two degree-2 vertices are not adjacent, add an edge between them. This yields a cubic graph, and the procedure is terminated. Otherwise remove them both. The continuation depends on whether the two vertices share a neighbor:
\item
If the removed degree-2 vertices had a common neighbor (such as $p_1, p_2$ and their common neighbor $p_3$), delete that neighbor and its remaining neighbor (in the example: $p_4$). There remain exactly two vertices of degree 2 ($x_1, y_1$), and the loop is repeated.
\item
Otherwise (as with $q_1, q_2$), their other neighbors ($q_3, q_4$) are distinct. Again, exactly two vertices of degree 2 remain, and the loop is repeated.
\end{itemize}
The terminal components $G_1 = (V_1,E_1)$ and $G_2 = (V_2,E_2)$ are nonempty and cubic, since throughout the run of the procedure each component retains exactly two vertices of degree 2.
Each contains a single edge that is not in $E$, namely $x_1 y_1 \in E_1$ and $x_2 y_2 \in E_2$.
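The procedure may be sketched in Python as follows (assuming \texttt{networkx}; the function name is ours, and the invariant of exactly two degree-2 vertices is taken for granted, as argued above):
\begin{verbatim}
import networkx as nx

def eliminate_bridge(G, b1, b2):
    """G is assumed cubic with bridge (b1, b2). Returns the two terminal
    cubic graphs G1, G2, each containing one new edge x_i y_i not in E."""
    H = G.copy()
    H.remove_nodes_from([b1, b2])
    terminal = []
    for nodes in nx.connected_components(H):
        comp = H.subgraph(nodes).copy()
        while True:
            u, v = [x for x in comp if comp.degree(x) == 2]
            if not comp.has_edge(u, v):
                comp.add_edge(u, v)        # the single non-E edge (x_i, y_i)
                break
            common = (set(comp[u]) & set(comp[v])) - {u, v}
            comp.remove_nodes_from([u, v])
            if common:                     # shared neighbor p3: remove it
                p3 = common.pop()          # together with its remaining
                p4 = next(iter(comp[p3]))  # neighbor p4
                comp.remove_nodes_from([p3, p4])
            # otherwise the former third neighbors of u, v now have degree 2
        terminal.append(comp)
    return terminal
\end{verbatim}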
\begin{figure}[tbp]
\centering
\begin{tikzpicture}[style=thick]
\foreach \pos/\name in {{(-5,0)/x_1}, {(-5,2)/y_1}, {(4,0)/x_2}, {(4,2)/y_2}}
\node[gray] (\name) at \pos {$\name$};
\foreach \pos/\name in {{(-1,1)/b_1}, {(-4,1)/p_4}, {(-2,0)/p_2}, {(2,2)/q_1}, {(3,0)/q_4}}
\node[green] (\name) at \pos {$\name$};
\foreach \pos/\name in {{(-3,1)/p_3}, {(-2,2)/p_1}, {(1,1)/b_2}, {(2,0)/q_2}, {(3,2)/q_3}}
\node[orange] (\name) at \pos {$\name$};
\draw (0,1.2) node {$bridge$};
\draw (x_1) -- (p_4) -- (p_3) -- (p_2) -- (b_1) -- (b_2) -- (q_2) -- (q_4) -- (x_2);
\draw (y_1) -- (p_4);
\draw (b_2) -- (q_1) -- (q_3) -- (y_2);
\draw (p_3) -- (p_1) -- (b_1);
\draw (p_1) -- (p_2);
\draw (q_1) -- (q_2);
\draw (q_3) -- (q_4);
\draw [dashed] (x_1) -- (y_1);
\draw [dashed] (x_2) -- (y_2);
\draw (-6,0) -- (x_1);
\draw (-6,-0.5) -- (x_1);
\draw (-6,2) -- (y_1);
\draw (-6,2.5) -- (y_1);
\draw (5,0) -- (x_2);
\draw (5,-0.5) -- (x_2);
\draw (5,2) -- (y_2);
\draw (5,2.5) -- (y_2);
\draw (-6,3.5) node {$G_1$};
\draw [dotted] (-5.5,-1.5) arc (-40:40:4);
\draw (5,3.5) node {$G_2$};
\draw [dotted] (4.5,3.5) arc (140:220:4);
\end{tikzpicture}
\caption[Cubic graph bridge decomposition]{Cubic graph bridge decomposition}
\label{bridge}
\end{figure}
We now note that if $G_1$ and $G_2$ are both class-1, then $G$ has an external bisection, constructed as follows: Bisect the vertices in $V_1$ as in the proof of Theorem \ref{class-1}, taking care to choose the two colors other than $x_1 y_1$'s color. This creates an external bisection of $G_1$ in which $x_1 y_1$ may be removed and replaced by other edges without disturbing the fact that the partition is external. Similarly derive an external bisection of $G_2$, using two colors other than $x_2 y_2$'s color. Finally assign the bridge vertices to different sides of the partition, and do the same with any non-bridge vertex pair that was deleted to obtain $G_1$ and $G_2$. The result is an external bisection of $G$.
Much remains to be done here, since this construction does not work if either $G_1$ or $G_2$ is class-2. It may fail because the graph at hand is a snark, having no 3-edge-coloring; but it may also fail when the graph contains a bridge, due to the requirement pertaining to the color of the non-$E$ edge: if there is more than one such edge, it is not necessarily the case that we can satisfy all the color requirements simultaneously.
\section{The General Case}
\label{general_case}
The existence of internal partitions for $d$-regular graphs with $d=5$ and with $7 \leq d \leq n-5$ remains unsettled, as is the existence of $q$-internal partitions for $q \neq \frac{1}{2}$.
\begin{figure}[tbp]
\begin{minipage}[t]{0.5\textwidth}
\begin{tikzpicture}[style=thick]
\foreach \pos/\name in {{(18:1.5cm)/a}, {(90:1.5cm)/b}, {(162:1.5cm)/c}, {(234:1.5cm)/d}, {(306:1.5cm)/e}}
\node[green] (\name) at \pos {};
\draw (a) -- (b) -- (c) -- (d) -- (e) -- (a);
\draw [dotted] (0,0) ellipse (1.7cm and 1.7cm);
\draw (-0.8,1.2) node {$X_2$};
\foreach \pos/\name in {{(-1,-4)/n}, {(0,-2.5)/l}, {(1,-4)/m}}
\node[green] (\name) at \pos {};
\draw (n) -- (l) -- (m) -- (n);
\draw [dotted] (0,-3.3) ellipse (1.5cm and 1.5cm);
\draw (-0.8,-2.5) node {$X_1$};
\draw [dotted] (0,-1.4) ellipse (2cm and 4cm);
\draw (0,2) node {$X$};
\foreach \pos/\name in {{(3,-3.7)/f}, {(3,-2.7)/g}, {(3,-1.7)/h}, {(3,-0.7)/i}, {(3,0.3)/j}, {(3,1.3)/k}}
\node[green] (\name) at \pos {};
\draw [dotted] (3,-1.2) ellipse (0.9cm and 3.6cm);
\draw (3,2) node {$Y$};
\end{tikzpicture}\quad
\end{minipage}
\begin{minipage}[t]{0.5\textwidth}
\begin{tikzpicture}[style=thick]
\foreach \pos/\name in {{(18:1.5cm)/a}, {(90:1.5cm)/b}, {(162:1.5cm)/c}, {(234:1.5cm)/d}, {(306:1.5cm)/e}}
\node[green] (\name) at \pos {};
\draw (a) -- (b) -- (c) -- (d) -- (e) -- (a);
\foreach \pos/\name in {{(-1,-4)/n}, {(0,-2.5)/l}, {(1,-4)/m}}
\node[green] (\name) at \pos {};
\draw (n) -- (l) -- (m) -- (n);
\foreach \pos/\name in {{(3,-3.7)/f}, {(3,-2.7)/g}, {(3,-1.7)/h}, {(3,-0.7)/i}, {(3,0.3)/j}, {(3,1.3)/k}}
\node[green] (\name) at \pos {};
\foreach \a in {f, g, h, i, j, k}
\foreach \b in {a, b, c, d, e, l, m, n}
\draw (\a) -- (\b);
\draw [dotted] (0.9,-1.4) ellipse (3.6cm and 4cm);
\draw (0.9,2) node {$Q_4$};
\end{tikzpicture}
\end{minipage}
\caption[$Q_4$, an 8-regular graph with no internal partition (right), composed from the components on the left]{$Q_4$, an 8-regular graph with no internal partition (right), is composed from the components on the left}
\label{q4}
\end{figure}
We construct a class of graphs without an internal partition in which both $d$ and $n-d$ are unbounded.
Given an integer $m > 2$, construct the graph $Q_m$ as follows (see Figure~\ref{q4}):
\begin{enumerate}
\item Start with a component $X_1 := K_{m-1}$.
\item Let $X_2$ be an $(m+1)$-vertex, $(m-2)$-regular graph, and let $X$ be the graph with components $X_1, X_2$.
\item Let $Y := \bar{K}_{m+2}$ (i.e. $Y$ has $m+2$ isolated vertices).
\item Finally, $Q_m$ is obtained by adding to $X \cup Y$ the complete bipartite graph between $V(X)$ and $V(Y)$.
\end{enumerate}
$Q_m$ is $2m$-regular with $3m+2$ vertices. The first few such graphs are $Q_3$ ($n=11$, $d=6$), $Q_4$ ($n=14$, $d=8$), $Q_5$ ($n=17$, $d=10$), \ldots
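The construction is easily realised in code; in the following Python sketch (assuming \texttt{networkx}) we take for $X_2$ the complement of the cycle $C_{m+1}$, which is one possible $(m+1)$-vertex, $(m-2)$-regular graph:
\begin{verbatim}
import networkx as nx

def build_Qm(m):
    X1 = nx.complete_graph(m - 1)                 # K_{m-1}
    X2 = nx.complement(nx.cycle_graph(m + 1))     # (m-2)-regular, m+1 vertices
    X = nx.disjoint_union(X1, X2)                 # nodes 0, ..., 2m-1
    Q = nx.disjoint_union(X, nx.empty_graph(m + 2))  # Y: m+2 isolated vertices
    nX = 2 * m                                    # |V(X)| = (m-1) + (m+1)
    Q.add_edges_from((x, y) for x in range(nX)
                            for y in range(nX, nX + m + 2))
    return Q

Q4 = build_Qm(4)
assert Q4.number_of_nodes() == 14 and all(d == 8 for _, d in Q4.degree())
\end{verbatim}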
\begin{proposition}
$Q_m$ has no internal partition.
\end{proposition}
\begin{proof}
Suppose to the contrary that $(A,B)$ is an internal partition of $Q_m$ with $|A|=a$ and $|B|=b$. In the complementary graph $\bar{Q}_m$, the vertices of $Y$ form a $K_{m+2}$ component. In the partition $(A,B)$ of $\bar{Q}_m$, each vertex in $A$ (resp. $B$) has outdegree at least $b-m$ (resp. $a-m$). The only way to partition $K_{m+2}$ to meet these requirements is to have $a-m$ of its vertices in $A$ and the other $b-m$ vertices in $B$.
Therefore $|V(X) \cap A| = |V(X) \cap B| = m$. Also, $\forall{x \in V(X) \cap A},\ d_{V(X) \cap A}(x) \geq m - (a-m) = 2m-a$, and $\forall{x \in V(X) \cap B},\ d_{V(X) \cap B}(x) \geq m - (b-m) = 2m-b$. Therefore $(V(X) \cap A, V(X) \cap B)$ is a $q$-internal partition of $X$ for $q = \frac{2m-a}{m-2}$. Now $X_1$, being complete, has no $q$-internal partition for any $q$. Therefore its vertices are either all in $A$ or all in $B$; say in $B$. Then $|V(X_2) \cap B| = m - |V(X_1)| = 1$, so there is a single $B$-vertex in the $X_2$ component, but a partition of a connected graph into a single vertex and its complement is not $q$-internal for any $q$, a contradiction.
\qed
\end{proof}
The reader will note that, for all known examples $G$ of even-degree regular graphs with no internal partition, the complement $\bar G$ is disconnected. We do not know whether this is true in general, but we observe that, if true, it implies $2d > n$. To the best of our knowledge, this may hold in general:
\begin{conjecture}
\label{conj2d}
For every even $d$, every $d$-regular graph with no internal partition has fewer than $2d$ vertices.
\end{conjecture}
We return to the problem of the existence of a $q$-internal partition for arbitrary regular graphs. There is a distinction between integral and non-integral partitions. Non-integral partitions are rarer than integral partitions, since every $q$-internal partition of a $d$-regular graph $G$ is also an integral $q'$-internal partition of $G$ for $q' = \lfloor qd \rfloor/d$ as well as for $q' = \lceil qd \rceil/d$. We make the following conjecture:
\begin{conjecture}
For every integer $d$ and $1>q>0$ such that either (i) $q = \frac{1}{2}$ or (ii) $qd$ is an integer, there is an integer $\mu$ such that every $d$-regular graph of order $\geq \mu$ has a $q$-internal partition.
\end{conjecture}
As already noted, $\mu=8$ for $d=3$, $q=\frac{1}{2}$.
Numerical experiments suggest that for $q=\frac{1}{2}$ and $d=5, 7$ we have $\mu=18$ and $\mu=26$, respectively.
In fact, the following stronger statement appears to be true: there exists an integer $\mu'$ that depends only on $d_{\min}(G)$, $d_{\max}(G)$ and on $q$, such that every graph $G=(V,E)$ of order at least $\mu'$ has a $q$-internal partition if (i) $q = \frac{1}{2}$ or (ii) $qd_G(v)$ is a positive integer for all $v \in V$.
For other values of $q$ (i.e. with non-integral values of $qd$ other than $q = \frac{1}{2}$), we make no guesses. We note that, for example, a connected graph cannot have a $q$-internal partition for $0 < q < \frac{1}{d}$. On the other hand, for $\frac{1}{d} < q < \frac{2}{d}$, a shortest cycle and its complement often yield a $q$-internal partition (e.g., when the girth is $\ge 5$).
Although the above conjecture remains open, the following theorem shows that every incomplete regular graph has an integral $q$-internal partition for {\em some} $q$. Moreover, for $d$ fixed and growing $n$, the number of such distinct partitions tends to $\infty$.
\begin{theorem}
\label{existq}
A $d$-regular graph $G$ of order $n>d+1$ has a $q$-internal partition $(A,B)$ for some $q \in (0,1)$ with $qd$ an integer. Such partitions exist for at least $\frac{n-d-1}{d}$ different values of $|A|$.
\end{theorem}
\begin{proof}
$\bar{G}$ is ($n-d-1$)-regular. Select $r \in (0,1)$ such that $r(n-d-1)$ is not an integer; this is always possible since $n-d-1 \ne 0$. By Proposition \ref{coexist}, $\bar{G}$ has a $(1-r)$-external partition $(A,B)$.
In this partition of $\bar{G}$, $\forall x \in A, d_{\bar{A}}(x) < r(n-d-1)$. The inequality is strict since $r(n-d-1)$ is not an integer. Similarly $\forall x \in B, d_{\bar{B}}(x) < (1-r)(n-d-1)$.
Considering the partition $(A,B)$ in $G$, we have $\forall x \in A, d_A(x) > |A| - 1 - r(n-d-1)$, and $\forall x \in B, d_B(x) > |B| - 1 - (1-r)(n-d-1)$. Therefore
\begin{align}
\label{eq1}
\forall x \in A, & d_A(x) \geq |A| - 1 - \lfloor{r(n-d-1)}\rfloor = |A| - \lceil{r(n-d-1)}\rceil \\
\label{eq2}
\forall x \in B, & d_B(x) \geq |B| - 1 - \lfloor{(1-r)(n-d-1)}\rfloor = |B| - \lceil{(1-r)(n-d-1)}\rceil
\end{align}
Set $q = (|A| - \lceil{r(n-d-1)}\rceil) / d$. By \eqref{eq1} the minimal indegree of $A$ is suitable for a $q$-internal partition. As for $B$, note that $\lfloor{(1-r)(n-d-1)}\rfloor + \lceil{r(n-d-1)}\rceil = n-d-1$. So:
\begin{equation}
|B| - 1 - \lfloor{(1-r)(n-d-1)}\rfloor = n - |A| - 1 - (n-d-1) + \lceil{r(n-d-1)}\rceil = (1 - q)d
\end{equation}
Therefore by \eqref{eq2} the minimal indegree of $B$ is also suitable, and $(A,B)$ is a $q$-internal partition.
From \eqref{eq1} we see that:
\begin{equation}
\label{boundA}
\lceil{r(n-d-1)}\rceil \leq |A| \leq \lceil{r(n-d-1)}\rceil + d
\end{equation}
So for any given $r$, $|A|$ has a range of at most $d$. Since $\lceil{r(n-d-1)}\rceil$ can take on $n-d-1$ values, $|A|$ takes on at least $\frac{n-d-1}{d}$ different values. The number of distinct $q$-internal partitions is at least as many.
\qed
\end{proof}
For $d$ fixed there are just $d-1$ values of $q \in (0,1)$ for which $qd$ is integral. By Theorem \ref{existq} every $d$-regular graph has $\Omega(n)$ distinct integral $q$-internal partitions. While this does not prove the existence of a $q$-internal partition for any {\em specific} $q$, it suggests that this becomes more likely as $n$ grows.
From Theorem \ref{existq} we derive an efficient algorithm that generates integral $q$-internal partitions for many and, for $n \gg d$, often {\em all} possible values of $q$:
\begin{algorithm}
\label{generate}
Given a $d$-regular graph $G=(V,E)$ with $n = |V|$:
\begin{enumerate}
\item Set $A \leftarrow \emptyset, B \leftarrow V$.
\item For $p = 1, \ldots, n - d - 1$
\begin{enumerate}
\item Repeat while $\exists{x \in B}, d_{\bar{A}}(x) < p$ or $\exists {x \in A}, d_{\bar{B}}(x) < n - d - p$
\begin{enumerate}
\item If $x \in A$ set $A \leftarrow A \setminus \{x\}, B \leftarrow B \cup \{x\}$
\item else set $A \leftarrow A \cup \{x\}, B \leftarrow B \setminus \{x\}$
\end{enumerate}
\item Set $A_p \leftarrow A, B_p \leftarrow B$
\end{enumerate}
\end{enumerate}
\end{algorithm}
This algorithm generates the partitions $(A_p,B_p)$, $p \in [n-d-1]$, of $\bar{G}$, each of which is $q$-external for $q = p/(n-d-1)$, by greedily moving vertices. When $p > 1$, the starting point for $(A_p,B_p)$ is $(A_{p-1},B_{p-1})$.
From Theorem \ref{existq} and its proof, $(A_p,B_p)$ is also a $q$-internal partition of $G$ for $qd = |A_p| - p$. Note that $A_1$ is a maximal independent set in $\bar{G}$, and so is $B_{n-d-1}$. Now, when $n \gg d$, a maximal independent set in $\bar{G}$ typically has size 2. Then $|A_1| = 2$, $|A_{n-d-1}| = n-2$, and so $(A_1,B_1)$ is a $\frac{1}{d}$-internal partition of $G$ and $(A_{n-d-1},B_{n-d-1})$ is a $\frac{d-1}{d}$-internal partition of $G$.
Additionally, from \eqref{boundA}, $p \leq |A_p| \leq p+d$, so $|A_p|$ generally grows from $2$ to $n-2$ as $p$ grows from $1$ to $n-d-1$. The {\em average} of $|A_p| - |A_{p-1}|$ is $(n-4)/(n-d-2) \simeq 1$. Now, since $(A_p,B_p)$ is a $q$-internal partition of $G$ for $q = \frac{|A_p| - p}{d}$, if it turns out that $|A_{p+1}| - |A_p| < 3$ for all $p \in [n-d-2]$, then the algorithm generates integral $q$-internal partitions of $G$ for every possible value of $q$.
Conversely, if for some graph $G$ some integral $q$-internal partition does not exist, then any sequence of partitions $(A_p,B_p)$, $p \in [n-d-1]$, whether generated by Algorithm \ref{generate} or by any other means, will exhibit a gap $|A_p| - |A_{p-1}| \geq 3$ for some $p > 1$. For example, consider the graph $K_{3,3,3}$ (Figure \ref{d=n-3 example}), shown above not to have an internal partition: here $n-d-1 = 2$, $|A_1| = 3$ and $|A_2| = 6$.
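For reference, Algorithm \ref{generate} admits the following direct Python implementation (assuming \texttt{networkx}; termination of the inner loop is taken for granted). Applied to $K_{3,3,3}$ with $d=6$, it returns the sizes $|A_1| = 3$ and $|A_2| = 6$, exhibiting the gap just described:
\begin{verbatim}
import networkx as nx

def generate_partitions(G, d):
    """Algorithm 1: returns A_p for p = 1, ..., n-d-1; the pair
    (A_p, V - A_p) is q-external in the complement for q = p/(n-d-1)."""
    n = G.number_of_nodes()
    Gbar = nx.complement(G)
    A, out = set(), []
    for p in range(1, n - d):
        moved = True
        while moved:                       # greedy vertex moves
            moved = False
            for x in Gbar:
                a = sum(1 for y in Gbar[x] if y in A)      # d_{bar A}(x)
                if x not in A and a < p:
                    A.add(x); moved = True
                elif x in A and (n - d - 1 - a) < n - d - p:
                    A.discard(x); moved = True
        out.append(set(A))
    return out

K333 = nx.complete_multipartite_graph(3, 3, 3)
print([len(Ap) for Ap in generate_partitions(K333, d=6)])   # [3, 6]
\end{verbatim}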
\section{Introduction}
Gravitational-wave radiometry is a technique by which two or more detectors are cross-correlated in order to identify signals in the form of excess coherence~\cite{radiometer,radio_method}.
It is especially well-suited to situations where it is either impossible or impractical to carry out a matched filter search due to theoretical uncertainty and/or the vastness of the signal parameter space.
LIGO~\cite{aligo} and Virgo~\cite{virgo} radiometer searches for persistent gravitational waves have yielded limits on the gravitational-wave strain from targets including Scorpius X-1, the Galactic Center, and Supernova 1987A~\cite{sph_results}.
Radiometer searches are also sensitive to hot spots created from the superposition of many persistent sources~\cite{Dhurandhar,Mazumder,ns_sgwb}.
By restricting the timescale of integration, the same method has been applied to search for relatively long-lived $\sim$$10$--$\unit[1000]{s}$ gravitational-wave transients~\cite{stamp}, e.g., associated with long gamma-ray bursts~\cite{lgrb}.
Radiometry has also been proposed in the context of gravitational-wave astronomy with pulsar timing arrays~\cite{anholm}.
Due to its robustness and efficiency, there is strong motivation to extend the parameter space of radiometer searches.
In this paper, we propose a method to extend the search for persistent narrowband signals to target all frequencies {\em and} all directions on the sky---not just ones associated with targets such as Scorpius X-1.
An efficient all-sky search for persistent narrowband gravitational waves could facilitate the detection, e.g., of electromagnetically quiet spinning neutron stars in binary systems.
The challenge is computational: searching all frequencies and all sky directions imposes a significant burden from both data storage and computation time.
In previous {\em broadband} radiometer searches for persistent gravitational waves~\cite{sph_results}, a Fisher matrix was used to characterize the covariance between different patches of sky~\cite{sph}.
However, saving Fisher matrices for many frequency bins (and for many ``jobs'' associated with stretches of science quality data) creates a burdensome data storage problem.
Analyzing a full year of data in thousands of frequency bins is both inefficient and potentially prohibitive in terms of computation time.
Recent work~\cite{folding} shows that by ``folding'' data into a single sidereal day, it is possible to represent an entire year of radiometer data with just one sidereal-day-long spectrogram.
We show that this several-hundred-fold reduction in data volume facilitates efficient searches for persistent narrowband signals.
The remainder of this paper is organized as follows.
In Section~\ref{motivation}, we discuss the motivation for narrowband all-sky searches.
Section~\ref{method} describes how previous work on folded data~\cite{folding} and searches for $\approx$day-long signals~\cite{verylong} can be combined to design an efficient all-sky narrowband radiometer.
In Section~\ref{demonstration}, we demonstrate an all-sky narrowband radiometer search using folded data.
Finally, in Section~\ref{conclusions}, we discuss the astrophysical implications of these results.
\section{Motivation}\label{motivation}
One of the primary targets for an all-sky narrowband search for persistent gravitational waves is spinning neutron stars in binary systems.
The motivation for such a search was recently laid out in~\cite{twospect}.
In a variety of scenarios, accretion from a companion star can induce a time-varying quadrupole moment, which, in some cases, may persist after accretion abates.
For example, persistent localized mass accumulation may occur due to magnetic fields~\cite{bildsten} depending on unknown details of neutron star physics~\cite{vigelius1,priymak,wette,vigelius2,vigelius3}.
Magnetic fields may also induce deformations in the stellar interior~\cite{melatos2}.
Rotational instabilities such as $r$-modes may be sustained through accretion~\cite{reisenegger,ushomirsky}.
When the neutron star in a binary is a pulsar, it is possible to carry out an optimal search using matched filtering.
However, if it is electromagnetically quiet, then the space of possible signals becomes prohibitively large for a fully coherent search.
When at least the sky location is known, one can apply a targeted radiometer search~\cite{radiometer,radio_method,sph_results} or other semicoherent methods~\cite{twospect,twospect_method,sideband_method,crosscorr,polynomial}.
If the sky location is unknown, the problem is significantly more challenging and the signal might be missed or vetoed as a noise artifact by non-specialized pipelines~\cite{twospect}.
The search technique proposed here is highly complementary to the semicoherent method described in~\cite{twospect,twospect_method}.
The ``TwoSpect'' algorithm~\cite{twospect,twospect_method} employs a canonical signal model in which a neutron star emits periodic waves, modulated by binary motion.
The radiometer, on the other hand, employs only minimal assumptions; namely, that the signal is persistent and narrowband.
By incorporating additional assumptions about the signal model, a tuned search gains sensitivity.
The advantage of the radiometer, in contrast, is that it is extremely robust since it relies only on excess coherence in two or more detectors.
Indeed, while we focus on spinning neutron stars in binary systems, the narrowband radiometer is sensitive to {\em any} persistent narrowband source, whether or not it is modulated by binary motion.
\section{Method}\label{method}
\subsection{Folding}
We utilize the idea of folded data as described in~\cite{folding}, which we reformulate in the language of recent radiometer developments~\cite{stamp,stochtrack,lgrb,stochsky,stochtrack_cbc,stochtrack_ecbc,verylong}.
During an observing run, typically lasting a few months to a few years, data is collected from two spatially separated detectors.
(This discussion straightforwardly extends to three or more detectors, but our presentation focuses on the two-detector case for simplicity.)
We parse the data for which both detectors are simultaneously operational into sidereal days ($\unit[23]{hr}$, $\unit[56]{min}$, $\unit[4]{s}$).
Every sidereal day of radiometer data can be represented using a complex-valued estimator~\cite{verylong}:
\begin{equation}\label{eq:Y}
\widehat{\mathfrak{Y}}(t;f) \equiv \frac{2}{\cal N}
\tilde{s}_1^*(t;f) \tilde{s}_2(t;f)
\end{equation}
\begin{equation}\label{eq:sigma}
\sigma_{\mathfrak{Y}}(t;f) \equiv \frac{1}{2} \sqrt{P'_1(t;f) P'_2(t;f)}.
\end{equation}
Here, $(t;f)$ are spectrogram indices: $t$ is the start time of each data segment and $f$ is the frequency of the Fourier transformed data associated with that segment.
Each $\tilde{s}_I(t;f)$ refers to the discrete Fourier transform of the strain data from detector $I$.
The variable ${\cal N}$ is a Fourier normalization constant, $\sigma_{\mathfrak{Y}}(t;f)$ is an estimator for the uncertainty associated with $ \widehat{\mathfrak{Y}}$, and each $P'_I(t;f)$ represents the strain auto-power measured in detector $I$.
(The prime denotes that the auto-power is calculated as an average of neighboring segments.)
The direction of the source is encoded in the phase of $\mathfrak{Y}(t;f)$, which depends on the time delay between the two detectors~\cite{stochsky}.
Since the signal-induced phase delay and detectors' antenna factors are periodic on the timescale of a sidereal day, it is possible to define folded estimators by summing data over many sidereal days:
\begin{equation}
\widehat{\mathfrak{Y}}^\text{fold}(t;f) = \sum_k
\widehat{\mathfrak{Y}}(t;f|k) \sigma_{\mathfrak{Y}}^{-2}(t;f|k) \Big/
\sum_k \sigma_{\mathfrak{Y}}^{-2}(t;f|k)
\end{equation}
\begin{equation}
\sigma_{\mathfrak{Y}}^\text{fold}(t;f) =
\left( \sum_k \sigma_{\mathfrak{Y}}^{-2}(t;f|k) \right)^{-1/2} .
\end{equation}
Here, $k$ runs over all the sidereal days of the observing run.
Following~\cite{stochsky}, we define the complex signal-to-noise ratio for folded data:
\begin{equation}\label{eq:rho_fold}
\mathfrak{p}^\text{fold}(t;f) \equiv \widehat{\mathfrak{Y}}^\text{fold}(t;f)/
\sigma_{\mathfrak{Y}}^\text{fold}(t;f) .
\end{equation}
The variable $\mathfrak{p}^\text{fold}(t;f)$ is a complex-valued spectrogram; see Fig.~\ref{fig:add_days}.
Gravitational waves induce excess $\mathfrak{p}(t;f)$, which appears visually as brighter-than-usual spectrogram pixels.
By folding a year of data into a sidereal day, the volume of data is reduced by a factor of $\approx$$365$, which dramatically reduces both storage needs and computation time.
And, if we restrict our attention to persistent narrowband signals, then the folding operation is {\em lossless} in the sense that there is no more information (useful in a search for persistent signals) in a year of cross-correlated data than there is in a folded day.
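In practice, the folding operation amounts to an inverse-variance-weighted average of the day-long complex spectrograms, followed by Eq.~\ref{eq:rho_fold}. A \texttt{numpy} sketch (array names are ours) reads:
\begin{verbatim}
import numpy as np

def fold(Y_days, sigma_days):
    """Y_days, sigma_days: complex/real arrays of shape
    (n_days, n_times, n_freqs) holding the per-day estimator Y-hat and its
    uncertainty sigma. Returns the folded estimator, its uncertainty, and
    the complex signal-to-noise ratio p_fold."""
    w = sigma_days ** -2                       # inverse-variance weights
    Y_fold = np.sum(Y_days * w, axis=0) / np.sum(w, axis=0)
    sigma_fold = np.sum(w, axis=0) ** -0.5
    return Y_fold, sigma_fold, Y_fold / sigma_fold
\end{verbatim}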
\subsection{Radiometry}
By specializing the methods from~\cite{verylong} to consider approximately monochromatic signals lasting exactly one sidereal day, and then applying these methods to folded data~\cite{folding}, it is possible to construct a very efficient algorithm for the detection of electromagnetically quiet binary neutron stars (and other narrowband sources not matching a standard continuous wave model).
Note that when we refer in this paper to signals as ``approximately monochromatic,'' we mean that they fit within a single $\approx$$\unit[1]{Hz}$ frequency bin.
Such signals include neutron stars in binary systems, which are relatively broadband compared to isolated neutron stars.
However, the signal is narrowband in the sense that it is contained within just one radiometer frequency bin.
In reality, the modulation depth of neutron stars in binaries can range from $\approx$$0.1$--$\unit[2]{Hz}$~\cite{twospect}, so a more nuanced definition of ``narrowband,'' employing variable frequency bin sizes, is appropriate.
However, this is beyond our present scope.
Radiometer searches can be cast as pattern recognition problems in which an algorithm looks for statistically significant clusters of pixels~\cite{stamp}.
As we are focused here on approximately monochromatic signals, the optimal pattern recognition algorithm is a simple sum over time along a single frequency bin of $\mathfrak{p}^\text{fold}$.
In the language of seedless clustering~\cite{stochtrack,stochsky}, each signal template corresponds to a different row in the $\mathfrak{p}^\text{fold}$ spectrogram.
We assume that the signal starts at the beginning of the spectrogram and finishes at the end.
To assume otherwise would imply an unphysical signal that starts and stops with a period matching Earth's sidereal day.
Likewise, it would not make sense to allow for any frequency evolution since this would imply that the signal repeats this evolution with the period of a sidereal day.
The following expression for the detection statistic $\text{SNR}_\text{tot}(f|\hat\Omega)$ is specialized from~\cite{verylong} to focus on monochromatic signals:
\begin{equation}\label{eq:vlong_as}
\text{SNR}_\text{tot}(f|\hat\Omega) = \frac{\text{Re}\left[
\sum_{t}
e^{\left(2\pi i f \hat\Omega \cdot \Delta \vec{x}(t)/c\right)}
\mathfrak{p}^\text{fold}(t;f) \, \epsilon_{12}(t|\hat\Omega)
\right]
}{
\Big(\sum_{t}\epsilon_{12}^2(t|\hat\Omega)\Big)^{1/2}
} .
\end{equation}
Here, $\hat\Omega$ is the direction of the source, $\Delta \vec{x}$ is the difference in detector position, and $c$ is the speed of light.
The phase factor $e^{\left(2\pi i f \hat\Omega \cdot \Delta \vec{x}(t)/c\right)}$ ``points'' the spectrogram toward the source by rotating the phase angle of $\mathfrak{p}^\text{fold}$ to zero so that the observed signal-to-noise ratio is real and positive~\cite{stochsky,verylong}.
The variable $\epsilon_{12}(t|\hat\Omega)$ is a time-dependent efficiency factor characterizing the fraction of gravitational-wave power measured due to the non-unity antenna factors of gravitational-wave interferometers:
\begin{equation}\label{eq:epsilon}
\epsilon_{12}(t|\hat\Omega) \equiv \frac{1}{2}
\sum_A F_1^A(t|\hat\Omega) F_2^A(t|\hat\Omega) .
\end{equation}
Here, $F_I^A(t|\hat\Omega)$ is the antenna factor~\cite{300years} for detector $I$ and $A=+,\times$ are the different polarization states.
As the Earth rotates, the detectors become more/less favorably aligned relative to the source.
The sum over $t$ is carried out over all the (typically $\approx$$\unit[60]{s}$) segments in a sidereal day.
For additional information pertaining to Eq.~\ref{eq:vlong_as}, the interested reader is referred to~\cite{verylong} and references therein.
For the sake of compact notation, Eq.~\ref{eq:vlong_as} assumes that detector noise is approximately constant over the span of a sidereal day.
However, we note that it is straightforward to generalize to the case of non-stationary noise by weighting each data segment appropriately using a standard inverse variance weighting method~\cite{stoch_allenromano}.
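A direct \texttt{numpy} transcription of Eq.~\ref{eq:vlong_as} (variable names are ours; the time delays and efficiency factor for the chosen direction are assumed to be precomputed) reads:
\begin{verbatim}
import numpy as np

def snr_tot(p_fold, f, delay, eps12):
    """p_fold: complex array (n_times, n_freqs); f: frequencies (n_freqs,);
    delay: Omega . dx(t)/c for each segment, shape (n_times,);
    eps12: efficiency factor epsilon_12(t|Omega), shape (n_times,).
    Returns SNR_tot(f|Omega) as an array over frequency."""
    phase = np.exp(2j * np.pi * np.outer(delay, f))   # (n_times, n_freqs)
    num = np.real(np.sum(phase * p_fold * eps12[:, None], axis=0))
    return num / np.sqrt(np.sum(eps12 ** 2))
\end{verbatim}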
Previous work~\cite{stochsky,verylong,stochtrack_cbc,stochtrack_ecbc} has shown that Eq.~\ref{eq:rho_fold} and Eq.~\ref{eq:vlong_as} can serve as the basis for ``all-sky'' searches for which the signal time and direction are unknown a priori.
Many directions in the sky can be considered simultaneously using parallelized computer code run on suitable processors, e.g., GPUs and multi-core CPUs~\cite{stochsky}.
For each observation, we record $\text{SNR}_\text{tot}^\text{max}$, the maximal value of $\text{SNR}_\text{tot}$ over all directions:
\begin{equation}\label{eq:max}
\text{SNR}_\text{tot}^\text{max}(f) = \max_{\hat\Omega}
\left[ \text{SNR}_\text{tot}(f|\hat\Omega) \right] .
\end{equation}
Eq.~\ref{eq:vlong_as} assumes a fairly constrained signal model: approximately monochromatic signals, which start at the beginning and finish at the end of a sidereal day.
Thus, it is possible to search every sky location (given some resolution) using a HEALPix~\cite{healpix} grid.
In this work, we consider a number of sky locations chosen so as to be spaced at angular separations smaller than the diffraction-limited resolution of the detector network.
For the LIGO network,
\begin{equation}
\theta \approx \frac{c}{f} \frac{1}{\left|\Delta\vec{x}\right|}
\approx \left(\frac{\unit[1000]{Hz}}{f}\right) 5^\circ .
\end{equation}
In the demonstration described below, we use HEALPix to efficiently sample the sky with 3072 tiles.
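The all-sky scan, Eq.~\ref{eq:max}, is then a loop over pixels, as in the following sketch using the \texttt{healpy} package and the \texttt{snr\_tot} function above; \texttt{delays\_toward} and \texttt{efficiency\_toward} stand for detector-geometry helpers that we do not spell out here:
\begin{verbatim}
import numpy as np
import healpy as hp

# p_fold and freqs as in the folding sketch above
nside = 16                               # 12 * nside**2 = 3072 pixels
best = None
for ipix in range(hp.nside2npix(nside)):
    theta, phi = hp.pix2ang(nside, ipix) # sky direction of this pixel
    snr_f = snr_tot(p_fold, freqs,
                    delays_toward(theta, phi),      # hypothetical helpers
                    efficiency_toward(theta, phi))
    best = snr_f if best is None else np.maximum(best, snr_f)
# best[k] is now SNR_tot^max at frequency freqs[k]
\end{verbatim}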
\subsection{Comparison with targeted searches}
In order to highlight the usefulness of this framework, it is useful to compare it to the procedure used for previous narrowband {\em targeted} radiometer searches~\cite{radiometer,sph_results}, applied, e.g., to a single direction such as Scorpius X-1.
In such searches, one analyzes data in $\approx$$500$--$\unit[10000]{s}$-long ``jobs,'' defined as data stretches during which coincident data is available for both detectors.
During initial LIGO's fifth science run, there were $\approx$$20000$ jobs.
For each job, one calculates cross-power (Eq.~\ref{eq:Y}) and auto-powers (Eq.~\ref{eq:sigma}) by optimally combining the results from each of many (typically) $\unit[60]{s}$ segments in each job.
Then, during post-processing, the results from each job are optimally combined to produce estimators analogous to $\widehat{\mathfrak{Y}}$ and $\sigma_{\mathfrak{Y}}$, but obtained by integrating over the entire observing run.
This procedure is repeated separately for each sky location.
Using this scheme, each sky location requires the generation of tens of thousands of output files, and so there is significant overhead associated with the analysis of more than a few sky locations.
Moreover, covariances between different patches on the sky---characterized by a Fisher matrix---lead to subtleties when interpreting the significance of an outlier~\cite{sph_results}.
In order to understand these covariances in this framework, one must therefore also produce a Fisher matrix for every frequency bin.
In the new paradigm we propose here, the data are first combined into a single sidereal day, which can be used to look in every sky direction.
These data are small enough to be easily manipulated on a typical computer.
Once the data are loaded, we can search over every sky location all at once without having to read or write additional files, thereby eliminating a significant bottleneck.
Also, this method eliminates the need to store Fisher matrices.
This is because the covariances between different sky locations are automatically encoded into the factors of $e^{\left(2\pi i f \hat\Omega \cdot \Delta \vec{x}/c\right)}$ and $\epsilon(t|\hat\Omega)$.
Thus, one can use numerical simulations, in which $\mathfrak{p}$ is drawn from a Gaussian distribution, in order to determine the false-alarm probability of obtaining a given value of $\text{SNR}_\text{tot}^\text{max}$.
(The assumption of Gaussianity is justified by invoking the central limit theorem, and it has been borne out in previous measurements~\cite{stoch-S5,sph_results}.)
An example search, analyzing $\unit[750]{Hz}$ of bandwidth with $\Delta f=\unit[1]{Hz}$ resolution (see Fig.~\ref{fig:add_days}), and scanning over 3072 Healpixels, can be carried out in $\lesssim\unit[100]{s}$ using a single eight-core CPU.
This computation time does not include the modest time required for folding the data.
\section{Demonstration}\label{demonstration}
Our goal now is to demonstrate the recovery of a persistent narrowband signal in folded data without a priori knowledge of the sky location or frequency of the signal.
To begin, we generate twenty days of simulated data for the LIGO Hanford and LIGO Livingston detectors operating at design sensitivity.
The data consists of Gaussian noise plus a simulated monochromatic signal with frequency $f=\unit[600]{Hz}$ and strain amplitude $h_0=1.5\times10^{-24}$, located at $(\text{ra},\text{dec})=(\unit[18.5]{hr},39^\circ)$.
The signal is circularly polarized.
For our purposes, a monochromatic signal is a reasonable proxy for a signal modulated by binary motion (discussed in Section~\ref{motivation}) so long as the binary modulation is $\lesssim\unit[1]{Hz}$.
We generate coarse-grained spectrograms with a resolution of $(\unit[79]{s},\unit[1]{Hz})$ (with non-overlapping segments).
This frequency bin width is relatively well-matched to the observed modulation depth of known neutron stars in binary systems, which can range from~$\approx$$0.1$--$\unit[2]{Hz}$~\cite{twospect}.
(A more systematic study should employ variable bin widths to fully cover parameter space.)
We study a band ranging from $500$--$\unit[1250]{Hz}$, where many neutron stars in binary systems are expected to rotate~\cite{chakrabarty}.
Next, the data are folded into one sidereal day following the procedure outlined in Section~\ref{method}.
As the data are folded, we analyze the integrated spectrogram after each accumulated day in order to study how the signal grows with time.
Following the procedure described in Eqs.~\ref{eq:vlong_as} and~\ref{eq:max}, we scan 3072 Healpixels and record the maximum signal-to-noise ratio at each frequency $\text{SNR}_\text{tot}^\text{max}(f)$.
The results are summarized in Fig.~\ref{fig:add_days}.
In Fig.~\ref{fig:add_days}a, we show the real part of $e^{\left(2\pi i f \hat\Omega \cdot \Delta \vec{x}/c\right)}\mathfrak{p}^\text{fold}(t;f)$ representing twenty days of data folded into a single sidereal day.
Though the signal is virtually impossible to see with the naked eye in just one day of data, it is visible in this image at $\unit[600]{Hz}$ since the signal-to-noise ratio grows with continued integration.
Fig.~\ref{fig:add_days}b shows the recovered signal ($\text{SNR}_\text{tot}^\text{max}\approx12$) using just one day of data.
The excellent agreement of the time-dependent modulation suggests that the sky location is well determined.
Indeed, the best fit direction $(\text{ra},\text{dec})=(\unit[18.3]{hr},38.9^\circ)$ is the closest possible pixel to the true source location.
\begin{figure*}[hbtp!]
\subfigure[]{\psfig{file=twenty_days.eps, width=3in}}
\subfigure[]{\psfig{file=rmap0_sidereal.eps, width=3in}}
\subfigure[]{\psfig{file=snr_vs_time.eps, width=2.8in}}
\subfigure[]{\psfig{file=snr_vs_freq.eps, width=2.8in}}
\caption{
Narrowband signals in folded data.
Top-left: a $(\delta t, \Delta f)=(\unit[79]{s},\unit[1]{Hz})$ spectrogram of the real part of $e^{\left(2\pi i f \hat\Omega \cdot \Delta \vec{x}/c\right)}\mathfrak{p}^\text{fold}(t;f)$ consisting of twenty days of data, folded into one sidereal day.
The data consist of Advanced LIGO~\cite{aligo} Monte Carlo noise plus a simulated signal with amplitude $h_0=1.5\times10^{-24}$ and frequency $\unit[600]{Hz}$.
The source is assumed to be face-on.
The spectrogram is ``pointed'' in the correct direction using the appropriate phase factor to make the signal positive; see Eq.~\ref{eq:vlong_as}.
The simulated signal is detectable in just one day of data, but we show the result of 20 days of folding to make the signal visible by eye.
The top-right panel shows the recovered signal in just one day of data.
The signal is easily identified without any prior assumption about the source location or frequency.
The bottom-left panel shows how the folded SNR grows with the addition of folded data.
Blue shows the result for the bin with the signal while red is the next loudest bin.
The bottom-right panel shows the maximum $\text{SNR}$ (scanning over the entire sky) as a function of frequency for one day of data (red).
The injected signal is evident as a $\text{SNR}\approx12$ spike at $\unit[600]{Hz}$.
Typical fluctuations due to pure noise are indicated with blue.
It is interesting to note that there is a slight trend toward higher $\text{SNR}$ with increasing frequency due to increasing angular resolution.
\label{fig:add_days}
}
\end{figure*}
In Fig.~\ref{fig:add_days}c, we show how the signal-to-noise ratio of the $\unit[600]{Hz}$ signal (blue) grows as expected like the square root of the total observation time.
The dashed red curve shows the next-loudest frequency bin, which exhibits no tendency to grow or decline with continued integration.
In Fig.~\ref{fig:add_days}d, we show the signal-to-noise ratio spectrum for the spectrogram in Fig.~\ref{fig:add_days}a, maximized over search directions.
That is, for each frequency bin, we scanned the entire sky and plot (in red) $\text{SNR}_\text{tot}^\text{max}$ for the brightest patch of sky as a function of frequency.
The blue curves indicate typical one-sigma fluctuations for noise.
The $\unit[600]{Hz}$ signal is visible as a dramatic spike.
There is a slight tendency toward higher values of $\text{SNR}$ with increasing frequency as angular resolution improves and there are more independent patches of sky.
Fig.~\ref{fig:add_days} demonstrates that it is possible to carry out a computationally efficient search for persistent narrowband signals from all directions in the sky and at all frequencies.
Once the data have been folded, the entire search takes less than two minutes to carry out using a single eight-core CPU.
By studying the background distribution of $\text{SNR}_\text{tot}^\text{max}(f)$, we can estimate how the sensitivity of the radiometer search differs for a directed and an all-sky search.
Naively, we expect the all-sky sensitivity to be worse, but only slightly, since the additional trial factors incurred by looking in many directions will only slightly increase the detection threshold given the expected rapidly falling background distribution.
In order to identify a detection candidate with false alarm probability $1\%$, the targeted search requires $\text{SNR}_\text{tot}^\text{max}(f)\gtrsim4.5$ while the all-sky search requires $\text{SNR}_\text{tot}^\text{max}(f)\gtrsim5.8$ for the bandwidth and spectral resolution considered here.
This corresponds to a change in strain sensitivity of just $(5.8/4.5)^{1/2}-1\approx14\%$.
Upper limits can be set for each frequency bin independently, and so there are fewer trial factors to apply compared to the detection calculation, which must account for the existence of hundreds of frequency bins.
On average, the targeted search will set limits corresponding to $\text{SNR}_\text{tot}^\text{max}(f)\approx1.7$ whereas the all-sky search will set limits corresponding to $\text{SNR}_\text{tot}^\text{max}(f)\approx4$ (around $f\approx\unit[1000]{Hz}$ where we scan $\approx$$3072$ sky positions).
Thus, the all-sky limits are expected to be $\approx$$50\%$ higher in strain than limits obtained from a targeted search.
These estimates support the expectation that the all-sky search is only slightly less sensitive than the targeted search.
\section{Conclusions}\label{conclusions}
Rotating neutron stars in binaries are a very promising candidate for gravitational-wave detection by second-generation detectors like Advanced LIGO/Virgo.
Previous work~\cite{twospect_method} has enabled the exploration of these promising sources, though no detection candidates have been found with initial LIGO data~\cite{twospect}.
Our presentation of an all-sky, narrowband radiometer using folded data is complementary since it makes only minimal assumptions about the source.
In this way, it should be sensitive not only to neutron stars in binary systems, but also to other sources that emit approximately narrowband gravitational waves and may or may not conform to the canonical model of an isolated neutron star.
A back-of-the-envelope calculation provides promising results.
In Section~\ref{demonstration}, we showed that an $h_0=1.5\times10^{-24}$ signal at $f=\unit[600]{Hz}$ can be easily recovered in one day of data.
Scaling to one year of data, this becomes $h_0\approx2\times10^{-25}$.
In terms of neutron-star ellipticity, this sensitivity can be restated~\cite{known_pulsars} as
\begin{equation}
\begin{split}
\epsilon \approx 1\times10^{-5} \left(\frac{0.4}{\beta}\right)
\left(\frac{\unit[10^{45}]{g\,cm^2}}{I}\right)
\left(\frac{r}{\unit[10]{kpc}}\right)
\left(\frac{\unit[600]{Hz}}{f}\right)^2 ,
\end{split}
\end{equation}
where $\beta$ is an orientation factor, $r$ is the distance to the source, and $I$ is the moment of inertia.
(One arrives at a similar rough estimate by scaling the upper limits from targeted radiometer analyses with initial LIGO data~\cite{sph_results} to Advanced LIGO sensitivity.)
This is an order of magnitude above the strain expected from Scorpius X-1 at $f=\unit[600]{Hz}$ based on accretion-torque balance~\cite{sideband_method}, suggesting that a signal might be detectable from another, somewhat closer neutron star. This is especially so given that additional optimizations are possible, and that only a hint of a signal is needed to trigger a coherent matched-filtering search, which can boost the significance.
\section{Acknowledgements}
VM's work was supported by NSF grant PHY1204944.
NC's work was supported by NSF grant PHY-1204371.
This is LIGO document P1500015.
\section{Introduction}
\label{sec:intro}
Evolution is naturally a multiscale phenomenon~\citep{Keller,Metz_mdl}. The choice of the right scale to describe a particular problem involves as much art as science. For some populations
(e.g., those with non-overlapping generations) a discrete-time description is adequate; for others, it is excessively simplifying. Large populations can be described
as infinite (in order to use differential equations, for example), but this imposes limitations on the time validity of the model~\citep{Chalub_Souza:TPB_2009}. On the other hand, some finite-population effects,
like, for example, the bottleneck effect, will be missing from any description relying on infinite populations~\citep{Hartl_Clark}.
In this vein, diffusion approximations, frequently used for large populations and long time scales, enjoy a long tradition in population genetics. This tradition dates back at least to the work of \cite{Feller1951} and references therein. In particular, diffusion approximations were implicitly used in the pioneering works of \cite{Wright1,Wright2} and \cite{Fisher1,Fisher2}. These efforts have been further developed in a number of directions as, for instance, in the studies on multispecies models in \cite{Sato1976a,Sato1983}; see
also the review in \cite{Sato1978}. Subsequently, \cite{EthierKurtz} systematically studied the approximation of finite Markov chain models by diffusions. In particular, they showed the validity of a diffusion approximation to a multidimensional Wright-Fisher model, in the regime of weak selection and linear fitness. This led to notable progress in diffusion theory, as reported for instance in \citep{EthierKurtz,StrockVarandhan:1997}. This considerable progress, in turn, led to a wide use of diffusion theory in population genetics, as can be verified in contemporary introductions to the subject~\citep[see][]{Ewens,EtheridgeLNM}.
There is also a more heuristic approach, called the Kramers-Moyal expansion, where the kernel of the master equation of the stochastic
process is fully expanded in a series. The diffusion approximation
can be viewed as a Kramers-Moyal expansion truncated at second order.
Although it is commonly claimed that the full expansion
is needed in order to obtain a continuous approximation of discrete processes,
it is known that under various conditions discrete Markov chains can be approximated by diffusions; cf. \cite{EthierKurtz} and \cite{StrockVarandhan:1997}
for instance.
In this work, we shall show that under a number of conditions similar results hold for
the discrete processes considered.
See~\citep{VanKampen} for a discussion about this and other techniques for continuous approximations of discrete processes.
As observed above, results along similar lines had been obtained earlier by a number of authors \citep{Feller1951,EthierKurtz,Ewens}.
These works approach the problem mostly within a probabilistic framework, while here we take a purely analytical setting, and this brings two immediate consequences: firstly, we are able to directly derive a weak formulation for the forward Kolmogorov equation, assuming only continuity of the fitness functions, in contrast to the weak formulation for the backward problem for Feller processes; see~\cite{RogersWilliams:2000a,RogersWilliams:2000b}. The second, and possibly most important, consequence is that we are able to deal with a variety of scalings for the evolution problem. This yields a full family of evolution problems: genetic-drift-dominated evolution, which is described by a diffusion equation; selection-dominated evolution, which is governed by a hyperbolic equation; and an evolutionary dynamics where the two forces are balanced, which is governed by a convection-diffusion equation that we term replicator-diffusion.
If we assume some more regularity of the fitness functions, we can then recast the weak formulation as a strong formulation. In this case, we cannot impose any boundary conditions, but we must supplement the equation with a number of conservation laws, namely that the probability of fixation of each type, for a given probability density of the population, must be the same at any time as at the initial time. The conservation laws are used to circumvent the impossibility of imposing boundary conditions when the boundaries are absorbing.
Furthermore, by a duality argument we obtain the backward equation formulation. For the particular case of linear fitness and balanced scalings, we then recover the classical result by \cite{EthierKurtz}. Additionally, by an appropriate combination of the weak and strong formulations, we are able to give a complete description of the forward solution.
A complementary approach to the study of evolution, based on evolutionary game theory, has also been developed \citep[cf.][]{JMS}, with conclusions that are not always compatible with results from diffusion theory.
As an example, diffusion models without mutation lead to the fixation of a homogeneous population, while frequency dependent models associated to the replicator dynamics\footnote{In this work, we will use the expressions ``replicator dynamics'', ``replicator equation'' and ``replicator system'' indistinctly.}
may lead to stable mixed populations. For an introduction to evolutionary game theory and replicator dynamics, we refer the reader to \cite{HofbauerSigmund} and \cite{Weilbull}.
The replicator equation has also been modified to introduce stochasticity at the population level~\citep{Fudenberg_Harris,Foster_Young}.
Relations between the matching scheme in a population and the deterministic approximation of its stochastic evolution are studied in~\citep{Boylan_1992,Boylan_1995}.
Consistent interaction between these two modelling schools has been attempted by a number of authors, with
different degrees of success~\citep[see][]{Traulsen_etal_2006,LessardLadret,Lessard_2005,McKaneWaxman07,Waxman2011,ChampagnatFerriereMeleard_TPB2006,ChampagnatFerriereMeleard_SM2008,Fournier_Meleard_AAP2004,Molzon_2009,BenaimWeibull_2003,CorradiSarin_2000,TraulsenClaussenHauert_PRE2012}.
We will show, as in many of these works, that both descriptions --- the one based on the diffusion approximation and the one based on the replicator dynamics --- are correct as models for the evolutionary dynamics of a given trait, but in
different scalings. As a by-product, we will provide a generalisation of the Kimura equation valid for an arbitrary number of types and general fitness functions.
The long-time asymptotics of both descriptions will suggest that the replicator equation is
a model with limited time validity, given a certain maximum admissible error. Such a limitation is naturally anticipated on the grounds that the diffusion process will eventually be absorbed, while the replicator dynamics might converge to an equilibrium in the interior.
We also confirm that the solution of the replicator equation indicates the most probable state (mode) of the population, conditional on it not having been absorbed. Hence, it does not necessarily indicate the expected value of the
trait.
The work presented here is a development of earlier work in \citep{Chalub_Souza:TPB_2009,Chalub_Souza:CMS_2009}: the former studying the derivation and convergence of the Moran model with two types to the 1-d version of the replicator-diffusion equation discussed here, and the latter giving a comprehensive analytical study of the 1-d replicator-diffusion equation. The derivation of the continuous model in \cite{Chalub_Souza:TPB_2009} hinged on the idea that a formal expansion of the master equation with control of the local error, together with results on the well-posedness of the continuous classical problem, can be brought together via numerical-analysis approximation results. This combination then yielded uniform convergence, in any proper closed subinterval of $[0,1]$, of the rescaled probabilities of the discrete model to the continuous probability density. This convergence result, combined with the analytical results in \cite{Chalub_Souza:CMS_2009} on a weak formulation that satisfies the conservation laws, provided
a continuous measure solution. The discrete process then converges only weakly towards such a solution in a neighbourhood of each endpoint, but uniformly in the interior, as described above. To study the Wright-Fisher continuous limits, however, we take a different route here.
This allows us to derive an approximate discrete weak formulation of the discrete process, with global error control. Further, by embedding the discrete probabilities in an appropriate measure space, we can use compactness arguments to obtain the continuous limit. Thus, in this setting, both the weak formulation and the weak convergence of the discrete model to the continuous one follow with considerably less effort, but we do not obtain the improved convergence in the interior.
\subsection{Scalings, limits and approximations}
\label{ssec:DCA}
In order to be able to study more general models, we follow the approach used by the authors in \cite{Chalub_Souza:TPB_2009}. In particular, we are interested not only in diffusion approximations, but in approximations that can be consistent with the dynamics of the corresponding discrete process.
We begin with a definition:
\begin{deff}\label{def:DCA}
We shall say that a simplified model $\mathcal{M}_0$
is an approximation
of the family of detailed models $\mathcal{M}_\gamma$, $\gamma>0$, in a sense $\chi$ --- where $\chi$ is
an appropriate notion of distance, for instance, a norm in a suitable space of functions (e.g., $L^1$, $L^2$, $L^\infty$) --- if the following holds:
\begin{enumerate}
\item Consider a certain family of initial conditions $h^{\mathrm{I}}_\gamma$ such that
$\lim_{\gamma\to0}h^{\mathrm{I}}_\gamma=h^{\mathrm{I}}_0$, in the sense $\chi$;
\item Evolve through the model $\mathcal{M}_\gamma$ the initial condition $h^{\mathrm{I}}_\gamma$
and through the model $\mathcal{M}_0$ the initial condition $h^{\mathrm{I}}_0$ until the time
$t<\infty$ obtaining $h_\gamma(t)$ and $h_0(t)$ respectively;
\end{enumerate}
If for all $t<\infty$ we have that $\lim_{\gamma\to0} h_\gamma(t)=h_0(t)$, in the sense $\chi$, then we say that the model $\mathcal{M}_\gamma$ converges, in
the limit $\gamma\to0$ and in the sense $\chi$, to the model $\mathcal{M}_0$. If, furthermore, this convergence is uniform in $t\in[0,\infty)$, then we say that the model $\mathcal{M}_\gamma$ converges in
the limit $\gamma\to0$ to the model $\mathcal{M}_0$ uniformly in time.
\end{deff}
Some examples of the relation between detailed and simplified models are listed in Table~\ref{table:drm}.
In general, extra assumptions are required to allow the passage to the limit. If, for example, there is more than one small parameter in the detailed model, it
is natural to assume a relationship among them, called a \textsl{scaling}, as, in general, the limit model will depend on how these
parameters approach zero. Other assumptions may also be necessary, as will be discussed in the next paragraph.
The process of taking the limit of a family of models,
considering a given scaling, will be called ``the thermodynamical limit''; by extension,
we shall also call the limit model the \textsl{thermodynamical limit}.
In this work, depending on the precise choice of the scaling, the limit equation
can be of drift-type (a partial differential equation fully equivalent to the
replicator equation or system), of purely diffusion type, or, in a delicate
balance, of drift-diffusion type.
\begin{table}
{\footnotesize
\begin{center}
\begin{tabular}{c|c|c}
Detailed model&Meaning of parameter $\gamma$&Simplified model\\
\hline
Kinetic models&mean free path&hydrodynamical models\\
Othmer-Dunbar-Alt model&mean free path&Keller-Segel model\\
Quantum Mechanics&rescaled Planck constant&Classical Mechanics\\
Relativistic mechanics&(rescaled light velocity)${}^{-1}$&Non-relativistic Mechanics\\
Moran process&inverse of population size&replicator-diffusion equation\\
Moran process&inverse of population size&replicator equation
\end{tabular}
\end{center}
\caption{Detailed and simplified models. The last two lines state that both the replicator equation and the replicator-diffusion
equation approximate the Moran process. References for these works are
\citep{Bardos_Golse_Levermore_I,Bardos_Golse_Levermore_II,Cercignani,Othmer_Hillen_I,Othmer_Hillen_II,CMPS,Stevens_2000,
Hepp,Cirincione,Bjorken_Drell,Chalub_Souza:CMS_2009,Chalub_Souza:TPB_2009}.}\label{table:drm}
}
\end{table}
In what follows, an important and natural assumption that must be introduced in order to have an approximation in the sense of Definition~\ref{def:DCA} is the so-called \textsl{weak selection principle}, to be precisely stated in equation~(\ref{WSP}). Generally speaking, we assume that the \textsl{fitness} of a given
individual
converges to 1 as the time separation between two successive generations, $\Delta t$, approaches zero. This is a
natural assumption when we consider that two successive generations collapse into a
single one. However, in most of the literature, the weak selection principle
is assumed in the limit $N\to\infty$, where $N$ is the population size.
Although the two formulations are equivalent (as we shall
assume a certain scaling relation between $N$ and $\Delta t$), we consider our approach more natural.
In this work, we will consider as the detailed model the Wright-Fisher process, to be studied in detail for finite populations in Section~\ref{sec:finite}: an evolutionary process for an asexual population of $N$ individuals, constant in size, divided into $n$ different types, that evolves according to a specific rule, with fixed time separation between generations of $\Delta t>0$ (the detailed model in the discussion above, where $\gamma$ is the inverse of the population size --- or, as we shall see, equivalently, the inter-generation time).
In short, given the weak-selection principle, we are able to find a precise scaling that yields, as the thermodynamical limit, a parabolic equation with degenerate diffusion at the boundaries: namely, the replicator-diffusion equation. Under a range of different scalings, however, we shall also obtain a simpler first order differential system: the replicator equation. This simpler model turns out to be a good approximation for the detailed model over a short time scale, if genetic drift is weak or selection is strong\footnote{Strong selection in this context is
not directly related or opposed to weak selection as introduced before.}. For the replicator-diffusion approximation, nevertheless, we shall be able to show that $\lim_{\gamma\to 0}\lim_{t\to\infty}h_\gamma(t)=\lim_{t\to\infty}\lim_{\gamma\to 0}h_\gamma(t)$, and hence we conjecture that such an approximation should be uniform in time.
\subsection{Outline}
Section~\ref{sec:prelim} introduces the basic notation and provides an extended abstract of our main results. In Section~\ref{sec:finite}, we review some classical results about the discrete process (the finite population Wright-Fisher process); we also show the existence of a number of associated conservation laws,
and explicitly obtain the first moments of the Wright-Fisher process.
In Section~\ref{sec:infinite}, with the assumption of weak-selection,
we obtain a family of continuous limits of the Wright-Fisher process depending on the scalings; these limits are derived within a weak formulation, with solutions in appropriate measure spaces. In particular, we derive the replicator-diffusion equation, and show that it satisfies continuous counterparts of the conservation laws for the discrete process. We then continue the study of the replicator-diffusion equation in Section~\ref{sec:forward}, where we derive the main properties of its solutions, including a description of the solution structure as a regular part plus a sum of singular measures over the subsimplices, and the large time convergence to a sum of Dirac measures over the vertices of the simplex.
We also show that the probability distribution associated with all types in the
population concentrates along the evolutionary stable states.
Additionally, in Subsection~\ref{ssec:duality}, we obtain the backward equation as the proper dual of the replicator-diffusion equation,
providing a consistent generalisation of the Kimura equation for the $n$ types and arbitrary fitness functions.
In Section~\ref{sec:replicator}, we study the replicator equation and show that, in the regime of strong selection, the solutions to the replicator-diffusion equation are well approximated by the
solutions to the replicator equation within a finite time interval.
Numerical examples are given in Section~\ref{sec:numerics}, where we also point out that, for intermediate times and large but finite populations, the replicator equation will approximate the mode of the discrete evolution, but not the expected value of a given trait. Conclusions are presented in Section~\ref{sec:conclusion}.
\section{Preliminaries and main results}
\label{sec:prelim}
We begin by introducing the space of states for the evolution:
\begin{deff}\label{def:simplex}
Let $\mathbb{R}_+=[0,\infty)$. We define the $n-1$ dimensional simplex
\[
S^{n-1}:=\left\{\mathbf{x}\in\mathbb{R}_+^n\left|\, |\mathbf{x}|:=\sum_{i=1}^nx_i=1\right.\right\}.
\]
We also define the set of vertices of the simplex
$\Delta S^{n-1}:=\{\mathbf{x}\in S^{n-1}|\exists i, x_i=1\}$,
its interior $\mathrm{int}\,{S^{n-1}}:=\{\mathbf{x}\in S^{n-1}|\forall i, x_i>0\}$
and its boundary $\partial S^{n-1}=S^{n-1}\backslash\mathrm{int}\, S^{n-1}$.
The \textsl{state} of the population is a vector $\mathbf{x}\in S^{n-1}$.
The elements of $\Delta S^{n-1}$ are denoted $\mathbf{e}_i$, $i=1,\dots,n$ and called
``homogeneous states''. A vector $\mathbf{x}\in S^{n-1}\backslash\Delta S^{n-1}$ is a ``mixed state''.
\end{deff}
In what follows, we let $p(\mathbf{x},t)$ be the probability density of finding the population at state $\mathbf{x}\in S^{n-1}$ at time $t\ge 0$.
\begin{deff}
The fitness of type $i$, $i=1,\dots,n$, is a continuous function
$\psi^{(i)}:S^{n-1}\to\mathbb{R}$, and the average fitness in a given population
is given by $\bar\psi(\mathbf{x}):=\sum_{i=1}^nx_i\psi^{(i)}(\mathbf{x})$. Note that we assume
that the fitness functions do not depend explicitly on time.
\end{deff}
In this work, we derive a family of continuous models associated to the detailed one; the central member of this family is described by a parabolic equation of
drift-diffusion type, with degenerate coefficients~\citep{DiBenedetto93,CS76},
defined on the simplex $S^{n-1}$ and called \textsl{the replicator-diffusion equation}, namely:
\begin{equation}\label{replicator_diffusion_eps}
\left\{
\begin{array}{l}
\partial_t p=\mathcal{L}_{n-1,x}p:=\frac{\kappa}{2}\sum_{i,j=1}^{n-1}\partial_{ij}^2\left(D_{ij}p\right)
-\sum_{i=1}^{n-1}\partial_{i}\left(\Omega_ip\right)\ ,\\
D_{ij}:= x_i\delta_{ij}-x_ix_j\ ,\\
\Omega_i:= x_i\left(\psi^{(i)}(\mathbf{x})-\bar\psi(\mathbf{x})\right)\ ,
\end{array}\right.
\end{equation}
with $i,j=1,\dots,n-1$, $\kappa> 0$, and
where $\delta_{ij}=1$ if $i=j$ and 0 otherwise is the Kronecker delta. The above equation has a solution in the classical sense (i.e., everywhere differentiable).
Furthermore, in the classical sense, it is a well-posed problem, without any boundary conditions. However, this classical solution is not
the correct limit of the discrete process. In order to find the correct limit,
equation (\ref{replicator_diffusion_eps}) is to be supplemented with
$n$ conservation laws. From now on, whenever we refer to the replicator-diffusion equation~(\ref{replicator_diffusion_eps}), we are implicitly assuming
these conservation laws.
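To fix ideas, the following minimal numerical sketch illustrates the interplay of drift and diffusion in equation~(\ref{replicator_diffusion_eps}) in the simplest case $n=2$, where $D_{11}=x(1-x)$ and $\Omega_1=x(1-x)\left(\psi^{(1)}(x)-\psi^{(2)}(x)\right)$. All parameter values, including the constant selection difference, are illustrative choices only; moreover, the naive explicit scheme below resolves only the regular part of the solution and does not capture the singular measures at the vertices discussed later on.
\begin{verbatim}
import numpy as np

# Explicit finite-difference sketch of the 1-d (n = 2) replicator-diffusion
# equation dp/dt = (kappa/2) (x(1-x) p)'' - (x(1-x) s p)', with s constant.
kappa, s = 0.05, 0.3                  # illustrative parameters
x = np.linspace(0.0, 1.0, 202)
dx = x[1] - x[0]
dt = 0.2 * dx**2 / kappa              # conservative CFL-type restriction

p = np.exp(-((x - 0.5) / 0.1) ** 2)   # smooth initial bump
p /= np.trapz(p, x)

D = x * (1.0 - x)                     # degenerate diffusion coefficient
Omega = D * s                         # drift (replicator) term

for _ in range(2000):
    Dp, Op = D * p, Omega * p
    lap = (Dp[2:] - 2.0 * Dp[1:-1] + Dp[:-2]) / dx**2
    div = (Op[2:] - Op[:-2]) / (2.0 * dx)
    p[1:-1] += dt * (0.5 * kappa * lap - div)

# Expected to drop below 1 as probability mass drifts towards the vertices:
print("mass in the interior:", np.trapz(p, x))
\end{verbatim}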
Our main conclusions are:
\begin{enumerate}
\item An analysis of equation (\ref{replicator_diffusion_eps}) leads to a unique solution of measure type; this will require the definition of appropriate functional spaces.
\item This unique solution approximates, in the thermodynamical limit, the evolution of a discrete population by the Wright-Fisher process
pointwise for any time. In addition, the large time asymptotics is consistent with the discrete model.
\item A reduced model, obtained by setting $\kappa=0$ in (\ref{replicator_diffusion_eps}) (with only one conservation law), is
shown to be equivalent to the replicator dynamics. This will suggest that the replicator dynamics approximates the discrete process for any $t$, however with an error that increases in $t$ along a fixed discretisation;
\item Furthermore, the
solution of the replicator equation models the time evolution of the mode of the probability distribution associated to the discrete process (and not the \textsl{expected value}
of the same distribution);
\item A frequency dependent generalisation of the Kimura equation for an arbitrary number of types is
obtained by looking at the dual problem for (\ref{replicator_diffusion_eps}).
\end{enumerate}
Before going into the technical details, we comment a little further on these conclusions.
Equation~(\ref{replicator_diffusion_eps}) has two natural time scales, one for the
natural selection (the mathematical drift and, as we shall see, fully compatible with
the replicator equation), the second for the genetic drift
(the mathematical diffusion). That is why we call equation~(\ref{replicator_diffusion_eps})
together with the conservation laws to be introduced in Subsection~\ref{ssec:conservation},
the ``replicator-diffusion equation''.
More precisely, the solution of the replicator-diffusion
equation when $\kappa=0$ (which is of hyperbolic type) is the leading order term
of the solution $p_\kappa$ of the replicator-diffusion equation for small $\kappa$ (i.e., large fitness
and/or short times). The replicator-diffusion equation with zero diffusion ($\kappa=0$) happens
to be the replicator equation (or system)~\citep{HofbauerSigmund}.
In an appropriate sense, to be made precise in Section~\ref{ssec:local} (Theorem~\ref{thm:conv_to_replicator}), we have
$p_\kappa\stackrel{\kappa\to 0}{\longrightarrow} p_0$, pointwise, but not uniformly in time.
Theorem~\ref{thm:conv_to_replicator} cannot be made uniform in time, for general fitness functions and initial conditions,
as the Wright-Fisher process always converges, as $t\to\infty$,
to a linear combination of homogeneous states, while it is possible that the solution of the
replicator equation converges to a stable mixed state.
The former statement is the mathematical formulation of a known principle in evolutionary biology that states that ``given enough time every
mutant gene will be fixed or extinct.''~\citep{Kimura}. This means that the final state of the replicator-diffusion equation
with any $\kappa>0$ will be a linear combination of Dirac deltas at the vertices of the simplex $S^{n-1}$. Actually, for any
positive time, the solution of equation~(\ref{replicator_diffusion_eps}) with the conservation laws described above is the sum of
a classical function in the simplex plus a sum of singular measures over all the subsimplices on $\partial S^{n-1}$ and, inductively,
on their boundary subsimplices. In particular, we shall also have Dirac measures supported on the vertices of the simplex.
These measures appear immediately, i.e., for any
$t>0$. This represents the fact that in a single step there is a non-zero --- however small --- probability that the population reaches
fixation through Wright-Fisher evolution. The full evolution and the final states of the replicator-diffusion equation will
be studied in Section~\ref{sec:forward}.
From the practical point of view, we are, however, often interested in transient states (``in the long run, we are
all dead'', said John Maynard Keynes), especially because the transient states become more and more
important for the discrete evolution as the population size increases. Heuristically, when the
population is large the stochastic fluctuations
decrease in importance and, therefore, its evolution is effectively deterministic. The associated limit will be given by
equation~(\ref{replicator_diffusion_eps}), with $\kappa=0$, i.e., the hyperbolic limit of equation~(\ref{replicator_diffusion_eps}).
This equation does not develop finite-time singularities.
The relationships between the three models are summarised in Figure~\ref{fig:relations}.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{Fig1.png}
\caption{The boxes in the figure represent the solutions of three different models: the Wright-Fisher
process (finite population $N$), the replicator-diffusion equation (positive diffusion $\kappa$)
and the replicator equation. The vertical axis indicates the arrow of time (top-down), and the horizontal
axis indicates, first, the large-population limit and, second, the no-diffusion limit.
Consider that there is
a maximum acceptable error $\epsilon$ (in the $L^\infty$ norm) between the Wright-Fisher model (suitably \emph{Radonmised} --- see Subsection~\ref{ssec:continuos_rep}) and
the continuous approximation. Therefore, there is a population size $N_0$ such that for $N>N_0$ the difference between the
replicator-diffusion equation and the discrete model is less than $\epsilon$. For the replicator approximation, for any $N$, there may exist a maximal time $t_{\max}(N)<\infty$ such that for $t>t_{\max}(N)$
the error is too large.
}
\label{fig:relations}
\end{figure}
Finally, we observe that the natural formulation for the continuous limit is the weak one. For such a formulation, we only require the fitness functions to be continuous. If, in addition, these functions are also Lipschitz, we can then recast the problem in a strong sense, provided that we supplement it with the conservation laws. Moreover, requiring the fitness functions to be smooth allows a number of results about the solutions to be easily derived. In particular, one can prove a structure theorem that shows that the problem is equivalent to a hierarchy of classical degenerate problems, provided that some members are interpreted as densities for singular measures.
\section{The discrete model}
\label{sec:finite}
In this section, we study the discrete model, i.e., the Wright-Fisher model for a constant population, an arbitrary number
of types and arbitrary fitness functions. We start, in Subsection~\ref{ssec:prelim}, with basic definitions; in
Subsection~\ref{ssec:sstates_fstates_claws} we briefly review some important results in the literature.
We also prove that the discrete process has as many linear conservation laws as types. Additionally, we show that the
final state is a linear superposition of the independent stationary (homogeneous) states, with coefficients that depend on the initial condition
and that can be calculated from a set of $n$ linearly independent conservation laws. All these results will be useful in
the correct determination of the continuous process, to be done in Sections~\ref{sec:infinite} and~\ref{sec:forward}.
The discrete Wright-Fisher process was studied, with different levels of detail, in, for
example,~\citep{Ewens,Nowak_2006,Imhof_Nowak_2006}, but, to the best of our knowledge, the conservation laws associated to the process
were overlooked.
The fact that the final state in the Wright-Fisher process, among others, for a finite population is always homogeneous was also
a matter of dispute with respect to the validity of the modelling~\citep{Vickery_1988,Smith_1988}. As we will shortly see in this work, this dispute is basically a consequence of the existence of two different time scales hidden in the model: the non-diffusive (drift) one and the diffusive one. See also~\cite{EthierKurtz} and \cite{EtheridgeLNM}, and references therein, for a discussion on the r\^ole of time scales.
\subsection{Preliminaries}
\label{ssec:prelim}
We consider a
fixed size population of $N$ individuals
at time $t$ consisting of a fraction $x_i\in\left\{0,\frac{1}{N},\frac{2}{N},\dots,1\right\}$ of
individuals of type $i=1,2,\cdots,n$.
The population evolves in discrete generations with time-step separation of $\Delta t$.
We introduce the following notation:
\begin{deff}
The \textsl{state} of a population is defined by a vector in
the $N$-discrete $n-1$-dimensional simplex
\[
S^{n-1}_N:=\left\{\mathbf{x}=(x_1,\cdots,x_n)\big||\mathbf{x}|:=\sum_{i=1}^nx_i=1,
x_i\in\left\{0,\frac{1}{N},\frac{2}{N},\cdots,1\right\}\right\}\ .
\]
We also define the set of vertices of the $n-1$-dimensional simplex
\[
\Delta S^{n-1}_N:=\{\mathbf{x}\in S^{n-1}_N|\exists i, x_i=1\}=\left\{\mathbf{e}_i|i=1,\dots,n\right\}\ .
\]
The elements of $\Delta S^{n-1}_N$ are called \textsl{the homogeneous states}.
To each type we attribute a
function, called fitness, $\Psi^{(i)}_{\Delta t}:S^{n-1}_N\to(0,\infty)$.
It is convenient to assume that $\Psi^{(i)}_{\Delta t}$ is a discretisation
of a continuous function on the simplex $S^{n-1}$; more assumptions on $\Psi^{(i)}_{\Delta t}$ will be introduced in
Section~\ref{sec:infinite}.
\end{deff}
A population at time $t+\Delta t$ is obtained from
the population at time $t$ by sampling, with replacement, $N$ individuals with
probability proportional to the fitness.
More precisely, we define the average fitness $\bar\Psi_{\Delta t}(\mathbf{x})=\sum_{i=1}^nx_i\Psi^{(i)}_{\Delta t}(\mathbf{x})$ and
then the transition probability from a population
at state $\mathbf{y}$ to a population at state $\mathbf{x}$ is given by
\begin{equation}\label{tran_prob_WF}
\Theta_{N,\Delta t}(\mathbf{y}\to\mathbf{x})=\frac{N!}{(Nx_1)!(Nx_2)!\cdots (Nx_n)!}
\prod_{i=1}^n\left(\frac{y_i\Psi^{(i)}_{\Delta t}(\mathbf{y})}{\bar\Psi_{\Delta t}(\mathbf{y})}\right)^{Nx_i}\ .
\end{equation}
The evolutionary process given by a Markov chain with transition probabilities given by
equation~(\ref{tran_prob_WF}) is called \textsl{the (frequency dependent) Wright-Fisher process}.
Let $\mathcal{P}(t)=\left(P(\mathbf{x},t)\right)_{\mathbf{x}\in S^{n-1}_N}$, with
\[
\mathcal{P}\in\left\{P:S^{n-1}_N\times\mathbb{R}_+\to\mathbb{R}_+|\sum_{\mathbf{x}\in S^{n-1}_N}P(\mathbf{x},\cdot)=1\right\}\ ,
\]
where $P(\mathbf{x},t)$
is the probability of finding the population at a given
state $\mathbf{x}\in S^{n-1}_N$ at time $t$.
Then, the evolution is given by the so called ``master equation'':
\begin{equation}\label{discrete_evolution}
P(\mathbf{x},t+\Delta t)=
(\mathcal{T}\mathcal{P}(t))(\mathbf{x}):=
\sum_{\mathbf{y}\in S^{n-1}_N}\Theta_{N,\Delta t}(\mathbf{y}\to\mathbf{x})P(\mathbf{y},t)\ .
\end{equation}
\subsection{Stationary states, final states and conservation laws}
\label{ssec:sstates_fstates_claws}
We call a homogeneous population a population of a single type, i.e., $P(\mathbf{x},t)=\hat P_{\mathbf{v}}(\mathbf{x})$
for $\mathbf{v}\in\Delta S^{n-1}_N$,
where
\[
\hat P_{\mathbf{x}}(\mathbf{y})=\left\{
\begin{array}{ll}
1\ ,&\qquad \mathbf{y}=\mathbf{x}\ ,\\
0\ ,&\qquad\mathbf{y}\ne\mathbf{x}\ .
\end{array}
\right.
\]
From the inner product definition:
\[
\langle v,w\rangle:=\sum_{\mathbf{x}\in S^{n-1}_N}v(\mathbf{x})w(\mathbf{x})\ ,
\]
it follows immediately that
$\langle\hat P_{\mathbf{x}},\hat P_{\mathbf{y}}\rangle=\delta_{\mathbf{x},\mathbf{y}}=1$ if $\mathbf{x}=\mathbf{y}$ and 0 otherwise.
Now, we state classical results for the Wright-Fisher process that will be useful in the sequel. Firstly, from the Perron-Frobenius theorem, the operator $\mathcal{T}^\infty$, defined on probability distributions over $S^{n-1}_N$ by $\mathcal{T}^\infty P:=\lim_{m\to\infty}\mathcal{T}^m P$, is
well defined. For details on the lemma below, the interested reader should consult~\cite{Karlin_Taylor_first}.
\begin{lem}\label{lem:classical}
A function $f$ defined on $S^{n-1}_N$ is a fixed point of the operator $\mathcal{T}$
if, and only if, $f$ is a linear combination of homogeneous states. In particular, $\mathcal{T}$ has exactly $n$ linearly independent eigenfunctions associated to the eigenvalue $\lambda=1$.
For any non-negative initial condition $P^{\mathrm{I}}$, the final state is
a linear combination of homogeneous states,
\[
\mathcal{T}^\infty P^{\mathrm{I}}:=\lim_{t\to\infty}P(\cdot,t)=
\sum_{i=1}^n F^{(i)}_{P^{\mathrm{I}}}\hat P_{\mathbf{e}_i}\ ,
\]
where $F^{(i)}_{P}:=\lim_{m\to\infty}\langle\hat P_{\mathbf{e}_i},\mathcal{T}^m P\rangle$ is the fixation probability of
type $i$ in a population initially described by the probability distribution $P$.
\end{lem}
\begin{deff}
We define a \textit{linear conservation law} as one given by a linear functional $\mathsf{L}$ over the functions of $S^{n-1}_N$
such that $\mathsf{L}\left(\mathcal{P}(t+\Delta t)\right)=\mathsf{L}\left(\mathcal{P}(t)\right)$. A set of linear conservation laws is linearly independent
if the only linear combination of its members yielding the trivial conservation law $\mathsf{L}(\mathcal{P}(t))=0$ is the trivial one.
\end{deff}
\begin{prop}
Define $\mathsf{F}^{(i)}:=\sum_{\mathbf{x}\in S^{n-1}_N}F^{(i)}_{\hat P_{\mathbf{x}}}\hat P_{\mathbf{x}}$, $i=1,\dots, n$, a function over $S^{n-1}_N$. Then $\mathsf{F}^{(i)}(\mathbf{x})$ is
the fixation probability of type $i$ associated to the initial condition $\mathbf{x}$.
The
set $\{\mathsf{F}^{(1)},\dots,\mathsf{F}^{(n)}\}$ is a basis for the set of linear conservation laws associated to the operator $\mathcal{T}$.
\end{prop}
\begin{proof}
From the fact that
\[
\mathcal{T}^\infty P=\sum_{i=1}^n F^{(i)}_P\hat P_{\mathbf{e}_i}
\]
we find
\begin{equation*}
F^{(i)}_{P}=\left(\mathcal{T}^\infty P\right)\left(\mathbf{e}_i\right)=\langle\mathcal{T}^\infty P,\hat P_{\mathbf{e}_i}\rangle
=\langle P,\left(\mathcal{T}^\dagger\right)^\infty\hat P_{\mathbf{e}_i}\rangle\ .
\end{equation*}
In particular
\[
\sum_{\mathbf{x}\in S^{n-1}_N}F^{(i)}_{\hat P_{\mathbf{x}}}\hat P_{\mathbf{x}}=\sum_{\mathbf{x}\in S^{n-1}_N}
\langle\hat P_{\mathbf{x}},\left(\mathcal{T}^\dagger\right)^\infty\hat P_{\mathbf{e}_i}\rangle\hat P_{\mathbf{x}}.
\]
Finally,
\[
\left(\mathcal{T}^\dagger\right)^\infty\hat P_{\mathbf{e}_i}=\sum_{\mathbf{x}\in S^{n-1}_N}F^{(i)}_{\hat P_{\mathbf{x}}}\hat P_{\mathbf{x}}=\mathsf{F}^{(i)}\ .
\]
Therefore, $\mathsf{F}^{(i)}$ is an eigenvector of $\mathcal{T}^\dagger$.
In particular,
\[
\mathsf{F}^{(i)}(\mathbf{e}_j)=\langle\left(\mathcal{T}^\dagger\right)^{\infty}\hat P_{\mathbf{e}_i},\hat P_{\mathbf{e}_j}\rangle
=\langle\hat P_{\mathbf{e}_i},\mathcal{T}^\infty\hat P_{\mathbf{e}_j}\rangle=
\langle\hat P_{\mathbf{e}_i},\hat P_{\mathbf{e}_j}\rangle=\delta_{ij}\ .
\]
It is immediate to prove that they are linearly independent; let $\alpha_1,\dots,\alpha_n$ be such
that $\sum_{i=1}^{n}\alpha_i\mathsf{F}^{(i)}=0$, i.e.,
for every $\mathbf{x}\in S^{n-1}_N$,
$
\sum_{i=1}^{n}\alpha_i\mathsf{F}^{(i)}(\mathbf{x})=0
$.
Using $\mathbf{x}=\mathbf{e}_i$, we conclude that $\alpha_i=0$, and then $\{\mathsf{F}^{(1)},\dots,\mathsf{F}^{(n)}\}$ is a basis for the eigenspace of $\mathcal{T}^\dagger$ associated to
$\lambda=1$.
Now, consider a linear conservation law $\mathsf{L}$. From standard representation theorems, there is a function $w$ on $S^{n-1}_N$ such that
\begin{equation}\label{discrete_conservation_laws}
\langle\mathcal{P}(t),w\rangle=\mathsf{L}(\mathcal{P}(t))=\mathsf{L}(\mathcal{P}(t+\Delta t))
=\langle\mathcal{T}\mathcal{P}(t),w\rangle=\langle\mathcal{P}(t),\mathcal{T}^\dagger w\rangle\ .
\end{equation}
Therefore, $w$ is an eigenvector of $\mathcal{T}^\dagger$ associated to $\lambda=1$ and then it is a linear combination of $\mathsf{F}^{(i)}$, $i=1,\dots,n$.
\end{proof}
\begin{rmk}
The conservation of probability (the most natural conservation law), follows directly from the equation
\[
\sum_{i=1}^n\mathsf{F}^{(i)}(\mathbf{x})=\sum_{i=1}^nF^{(i)}_{\hat P_{\mathbf{x}}}=1\ ,\ \forall\mathbf{x}\in S^{n-1}_N\ .
\]
\end{rmk}
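The proposition, and the remark above, can be verified numerically in small cases. In the sketch below (for $n=2$ types and, for simplicity, the neutral case, where all fitnesses are equal and the fixation probability of type 1 from state $k/N$ is $k/N$), the transition matrix is built exactly and $\mathsf{F}^{(1)}$ is recovered as a left eigenvector associated to $\lambda=1$; the parameters are illustrative.
\begin{verbatim}
import numpy as np
from scipy.stats import binom

N = 20
k = np.arange(N + 1)
# Theta[j, i] = probability of moving from state j/N to state i/N
# for the neutral Wright-Fisher process (binomial resampling).
Theta = np.array([binom.pmf(k, N, j / N) for j in range(N + 1)])

Tinf = np.linalg.matrix_power(Theta, 10_000)   # proxy for T^infinity
F1 = Tinf[:, N]     # fixation probability of type 1 from each initial state
assert np.allclose(F1, k / N, atol=1e-8)       # neutral case: F^(1)(k/N) = k/N
assert np.allclose(Theta @ F1, F1)             # F^(1) is conserved by T
\end{verbatim}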
\subsection{Properties of the transition kernel}
The probability conservation is a consequence of the definition~(\ref{tran_prob_WF}) and reads
\begin{equation}\label{conservation_of_probability}
\sum_{\mathbf{x}\in S^{n-1}_N}\Theta_{N,\Delta t}(\mathbf{y}\to\mathbf{x})=1, \qquad\forall\mathbf{y}\in S^{n-1}_N\ .
\end{equation}
It also follows from the definition that
\begin{equation}\label{transition_stationary}
\Theta_{N,\Delta t}(\mathbf{y}\to\mathbf{x})=\left\{
\begin{array}{ll}
1&\quad\text{if}\quad\mathbf{x}=\mathbf{y}\in\Delta S^{n-1}_N\ ,\\
0&\quad\text{if}\quad\mathbf{x}\ne\mathbf{y}\in\Delta S^{n-1}_N\ ,
\end{array}
\right.
\end{equation}
which can be readily interpreted as the absence of mutations in the model.
It will also be convenient to write
\[
S^{n-1}_{N,\mathbf{x}^\pm}=\left\{\mathbf{y}\in\mathbb{R}^{n-1} | \mathbf{x}\pm\mathbf{y} \in S^{n-1}_N\right\},
\]
and to introduce
\[
z\tau_i=y_i,\quad z=\frac{1}{\sqrt{N}},
\quad\text{and}\quad
\mathcal{S}_{\mathbf{x},z}=\{\boldsymbol{\tau}\in\mathbb{R}^n\big|\sum_{i=1}^n\tau_i=0\ \text{and}\ |\tau_i|<x_i/z\}.
\]
\begin{lem}\label{theta_moments}
Define
\[
\widetilde{x}_i=\frac{x_i\Psi^{(i)}_{\Delta t}(\mathbf{x})}{\bar\Psi_{\Delta t}(\mathbf{x})}
\]
and
\[
\mathbb{G}_\Theta[h]=\sum_{z\boldsymbol{\tau}\in S^{n-1}_{N,\mathbf{x}^+}}\Theta(\mathbf{x}\to\mathbf{x}+z\boldsymbol{\tau})h(\boldsymbol{\tau})\ ,
\]
where $h:\mathcal{S}_{\mathbf{x},z}\to\mathbb{R}$.
For any $N$, we have
\begin{align*}
\mathbb{G}_\Theta[1]&=1\\
z\mathbb{G}_\Theta[\tau_i]&=\widetilde{x}_i-x_i\\
z^2\mathbb{G}_\Theta[\tau_i\tau_j]&=(\widetilde{x}_i-x_i)(\widetilde{x}_j-x_j)+z^2\left(\delta_{ij}\widetilde{x}_i-\widetilde{x}_i\widetilde{x}_j\right)\\
z^3\mathbb{G}_\Theta[\tau_i\tau_j\tau_k]&=(\widetilde{x}_i-x_i)(\widetilde{x}_j-x_j)(\widetilde{x}_k-x_k)\\
&\qquad + z^2\left[(\delta_{ij}\widetilde{x}_i-\widetilde{x}_i\widetilde{x}_j)(\widetilde{x}_k-x_k)+(\delta_{ik}\widetilde{x}_i-\widetilde{x}_i\widetilde{x}_k)(\widetilde{x}_j-x_j) + (\delta_{jk}\widetilde{x}_j-\widetilde{x}_j\widetilde{x}_k)(\widetilde{x}_i-x_i)\right]\\
&\qquad +
z^4\left[2\widetilde{x}_i\widetilde{x}_j\widetilde{x}_k-(\delta_{ij}\widetilde{x}_j\widetilde{x}_k+\delta_{ik}\widetilde{x}_i\widetilde{x}_j+\delta_{kj}\widetilde{x}_i\widetilde{x}_j)+
\delta_{ij}\delta_{ik}\delta_{jk}\widetilde{x}_i\right]
\end{align*}
\end{lem}
\begin{proof}
Let $\mathbf{q}\in S^{n-1}$, $\boldsymbol{\alpha}\in N S^{n-1}_N$ and consider the multinomial distribution given by
\begin{equation*}
f(\mathbf{q},\boldsymbol{\alpha},N)=\frac{N!}{\alpha_1!\cdots \alpha_n!}\prod_{k=1}^nq_k^{\alpha_k},\quad \boldsymbol{\alpha}=(\alpha_1,\ldots,\alpha_n),\quad\sum_i\alpha_i=N\ .
\label{mlt_pmf}
\end{equation*}
Then, $\boldsymbol{\alpha}$ is a vector of random variables with first moments given by
\begin{align*}
& \mathbb{E}[1]=1\ ,\\
& \mathbb{E}[\alpha_i]=Nq_i\ ,\\
& \mathbb{E}[(\alpha_i-Nq_i)(\alpha_j-Nq_j)]=\mathop{\mathrm{Cov}}(\alpha_i,\alpha_j)=N(\delta_{ij}q_i-q_iq_j)\ ,\\
& \mathbb{E}[(\alpha_i-Nq_i)(\alpha_j-Nq_j)(\alpha_k-Nq_k)]= N\left[q_i\delta_{ij}\delta_{kj}-\left(q_iq_k\delta_{ij}+q_iq_j\delta_{kj}+q_kq_j\delta_{ik}\right)+2q_iq_jq_k\right],
\end{align*}
where $\mathbb{E}[\cdot]$ is the expected value under the multinomial distribution.
See~\cite{Karlin_Taylor_intro} for the mean and covariance; for the sake of completeness, we provide a derivation of the third moment in Appendix~\ref{ap:thirdmomentum}.
Now, note that $\Theta_{N,\Delta t}(\mathbf{x}\to\mathbf{x}+z\boldsymbol{\tau})=f(\widetilde{\mathbf{x}},N(\mathbf{x}+z\boldsymbol{\tau}),N)$.
Therefore, $\boldsymbol{\alpha}=N(\mathbf{x}+z\boldsymbol{\tau})$ is a random vector that is multinomially distributed and, upon substituting $\boldsymbol{\alpha}$ in the multinomial moments --- with $\mathbf{q}=\widetilde{\mathbf{x}}$ --- all the identities follow after some manipulation.
\ifmmode\else\unskip\quad\fi\squareforqed
\end{proof}
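A quick Monte Carlo check of the first two identities of Lemma~\ref{theta_moments} (recall $z^2=1/N$) can be carried out as follows; the fitness values are an arbitrary, hypothetical assignment.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N = 400
x = np.array([0.25, 0.35, 0.40])
psi = np.array([1.10, 1.00, 0.95])             # hypothetical Psi^(i)
xt = x * psi / np.dot(x, psi)                  # tilde x

samples = rng.multinomial(N, xt, size=200_000) / N
ztau = samples - x                             # displacements z * tau

# z G[tau_i] = xt_i - x_i
print(np.abs(ztau.mean(axis=0) - (xt - x)).max())
# z^2 G[tau_i tau_j] = (xt - x)(xt - x)^T + (diag(xt) - xt xt^T) / N
emp = np.einsum('ki,kj->ij', ztau, ztau) / len(ztau)
thy = np.outer(xt - x, xt - x) + (np.diag(xt) - np.outer(xt, xt)) / N
print(np.abs(emp - thy).max())                 # both errors should be small
\end{verbatim}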
\section{Continuations of the discrete model}
\label{sec:infinite}
The aim of this section is to obtain a differential equation that approximates the discrete evolution,
when the population is large ($N\to\infty$) and there is no time-separation between successive
generations ($\Delta t\to 0$). The relevant variables, $\mathbf{x}\in S^{n-1}$ and $t>0$ will
be forced to be continuous.
The first four subsections will be devoted to the
development of three models, based on partial differential equations obtained from the Wright-Fisher process, when
$N\to\infty$ and $\Delta t\to 0$ (see equations~(\ref{convective_approximation}), (\ref{diffusive_approximation})
and (\ref{replicator_diffusion}), respectively). There is no ``right choice'' of the simplified model. As one
could expect, simpler models have a more restricted range of application. For example,
the model given by equation (\ref{convective_approximation}) is equivalent to a system of ordinary differential equations;
actually, it is exactly equivalent to the well-known replicator dynamics~\citep[see][]{HofbauerSigmund}.
On the other hand, the diffusive approximation, given by equation~(\ref{diffusive_approximation}),
is a parabolic partial differential equation that is much simpler to
solve than the full model; in fact, explicit solutions are known in terms of Gegenbauer polynomials~\citep{Ewens}.
Our focus will be on the replicator-diffusion approximation, equation~(\ref{replicator_diffusion}), which we
expect to be valid uniformly in time.
Results known for the Wright-Fisher process, and
stated in Section~\ref{sec:finite} will guide the derivation, i.e., the choice of the
right thermodynamical limit. We start in Subsection~\ref{ssec:prelimcont} with the asymptotic expansion of the
transition kernel in the small parameters (suitable combinations of $N$ and $\Delta t$); we plug this expansion into
the master equation~(\ref{discrete_evolution}) in Subsection~\ref{ssec:weak-discrete}. In Subsection~\ref{ssec:continuos_rep}, we construct
the continuous version of the discrete probability densities; in particular, we interpolate the discrete probabilities in order to represent them by probability
measures defined for continuous time; these measures will be central when we finally pass to the limits in Subsection~\ref{ssec:passage_to_the_limit}, obtaining the various continuous
approximations of the discrete model. Finally, in Subsection~\ref{ssec:conservation}, we show that,
for every conservation law of the discrete process,
there exists a corresponding conservation law in the continuous model.
As a by-product, the final state of the continuous model shall be a linear superposition of homogeneous states
(see Lemma~\ref{lem:classical} and compare it with Theorem~\ref{thm:final_state}).
\subsection{Preliminaries}
\label{ssec:prelimcont}
From a biological point of view, the most important assumption in this derivation is the so-called weak selection principle
\begin{equation}\label{WSP}
\Psi^{(i)}_{\Delta t}(\mathbf{y})=1+\left(\Delta t\right)^\nu\psi^{(i)}(\mathbf{y}) ,
\end{equation}
where $\psi^{(i)}:S^{n-1}\to\mathbb{R}$ is a continuous function, and $\nu>0$ is a parameter yet to be specified.
In this case, we also have
\[
\bar\Psi_{\Delta t}(\mathbf{y})=1+\left(\Delta t\right)^\nu\bar\psi(\mathbf{y})\ ,
\]
where $\bar\psi(\mathbf{x}):=\sum_{i=1}^nx_i\psi^{(i)}(\mathbf{x})$.
As an immediate corollary of Lemma~\ref{theta_moments} we have
\begin{cor}\label{cor:theta_moments_asymp}
Assume the weak-selection principle given by equation~(\ref{WSP}). Then the moments $\mathbb{G}_\Theta$ in Lemma~\ref{theta_moments} satisfy
\begin{align*}
z\mathbb{G}_\Theta[\tau_i]&=x_i\left(\Delta t\right)^\nu\left(\psi^{(i)}(\mathbf{x})-\bar\psi(\mathbf{x})\right)+O\left(\left(\Delta t\right)^{2\nu}\right)\ ,\\
z^2\mathbb{G}_\Theta[\tau_i\tau_j]&=
\frac{1}{N}\left(x_i\delta_{ij}-x_ix_j\right)+O\left(N^{-1}\left(\Delta t\right)^\nu,\left(\Delta t\right)^{2\nu}\right)\\
z^3\mathbb{G}_\Theta[\tau_i\tau_j\tau_k]&=O\left(\left(\Delta t\right)^{3\nu},N^{-1}\left(\Delta t\right)^\nu,N^{-2}\right)\ .
\end{align*}
\end{cor}
\begin{proof}
All equations follow from Lemma~\ref{theta_moments} and from the fact that
\[
\widetilde{x}_i=x_i\frac{1+\left(\Delta t\right)^\nu\psi^{(i)}(\mathbf{x})}{1+\left(\Delta t\right)^\nu\bar\psi(\mathbf{x})}=
x_i\left[1+\left(\Delta t\right)^\nu\left(\psi^{(i)}(\mathbf{x})-\bar\psi(\mathbf{x})\right)+O\left(\left(\Delta t\right)^{2\nu}\right)\right]\ .
\]
\ifmmode\else\unskip\quad\fi\squareforqed
\end{proof}
\subsection{An asymptotic weak-discrete formulation}
\label{ssec:weak-discrete}
We rewrite the master equation~\eqref{discrete_evolution} using displacements:
\begin{equation}\label{master_new}
p(\mathbf{x},t+\Delta t,N)=\sum_{\mathbf{y} \in S^{n-1}_{N,\mathbf{x}^-}}\Theta_{N,\Delta t}(\mathbf{x}-\mathbf{y}\to\mathbf{x})p(\mathbf{x}-\mathbf{y},t,N).
\end{equation}
We use our information on moments of the process as follows:
\begin{prop}
Let $g\in C^{3,1}(\Upsilon)$, where $\Upsilon$ is any open set such that $S^{n-1}\subset\Upsilon$, and consider its restriction to the simplex $S^{n-1}$.
Then, we have
\begin{align}\label{eq:weconcludethat}
& \sum_{\mathbf{x}\in S^{n-1}_N}\left(p(\mathbf{x},t+\Delta t,N)-p(\mathbf{x},t,N)\right)g(\mathbf{x},t)\\
\nonumber
&\quad=
\sum_{\mathbf{x}\in S^{n-1}_N}p(\mathbf{x},t,N)\left[\frac{1}{2N}\sum_{i,j=1}^{n-1}x_i(\delta_{ij}-x_j)\partial^2_{ij}g(\mathbf{x},t)
+\left(\Delta t\right)^\nu\sum_{j=1}^{n-1}x_j\partial_{x_j}g(\mathbf{x},t)(\psi^{(j)}(\mathbf{x})-\bar\psi(\mathbf{x}))\right]\\
\nonumber
&\qquad\qquad+O\left(N^{-2},\left(\Delta t\right)^{2\nu},N^{-1}\left(\Delta t\right)^\nu \right).
\end{align}
\end{prop}
\begin{proof}
On multiplying equation~(\ref{master_new}) by $g(\mathbf{x},t)$ and summing over $S^{n-1}_N$, we find that:
\begin{align*}
& \sum_{\mathbf{x}\in S^{n-1}_N}p(\mathbf{x},t+\Delta t,N)g(\mathbf{x},t)=
\sum_{\mathbf{x}\in S^{n-1}_N}\sum_{\mathbf{y}\in S^{n-1}_{N,\mathbf{x}^-}}\Theta_{N,\Delta t}(\mathbf{x}-\mathbf{y}\to\mathbf{x})p(\mathbf{x}-\mathbf{y},t,N)g(\mathbf{x},t)\\
&\quad= \sum_{\mathbf{x}\in S^{n-1}_N}\sum_{\mathbf{y}\in S^{n-1}_{N,\mathbf{x}^+}}\Theta_{N,\Delta t}(\mathbf{x}\to\mathbf{x}+\mathbf{y})p(\mathbf{x},t,N)g(\mathbf{x}+\mathbf{y},t)\\
&\quad=\sum_{\mathbf{x}\in S^{n-1}_N}p(\mathbf{x},t,N)\sum_{z\boldsymbol{\tau}\in S^{n-1}_{N,\mathbf{x}^+}}\Theta(\mathbf{x}\to\mathbf{x} +z\boldsymbol{\tau})g(\mathbf{x}+z\boldsymbol{\tau},t)\\
&\quad=\sum_{\mathbf{x}\in S^{n-1}_N}p(\mathbf{x},t,N)\sum_{z\boldsymbol{\tau}\in S^{n-1}_{N,\mathbf{x}^+}}\Theta(\mathbf{x}\to\mathbf{x} +z\boldsymbol{\tau})
\left[ g(\mathbf{x},t)+z\sum_{j=1}^{n-1}\tau_j\partial_{x_j}g(\mathbf{x},t)+
\frac{z^2}{2}\sum_{k,l=1}^{n-1}\tau_k\tau_l\partial_{x_kx_l}^2g(\mathbf{x},t)+z^3R(\mathbf{x},\boldsymbol{\tau},t,z)
\right],
\end{align*}
where there is a constant $C$, depending only on $g$, such that
\[
|R(\mathbf{x},\boldsymbol{\tau},t,z)|\leq C\|\tau\|^3.
\]
Using Corollary~\ref{cor:theta_moments_asymp}, we obtain the result.
\ifmmode\else\unskip\quad\fi\squareforqed
\end{proof}
Equation~\eqref{eq:weconcludethat} can be seen as a discrete weak formulation for $p(\mathbf{x},t,N)$ in space only, and thus
any limiting argument would require some regularity assumption on $p(\mathbf{x},t,N)$ in $t$. In order to circumvent such assumptions,
we need a full discrete weak formulation:
\begin{prop}
Let $T=M\Delta t$, where $M$ is some fixed positive integer, and let $g$ be an admissible test function,
with support in $S^{n-1}\times[0,T]$. Let
\[
\mathsf{T}=\{k\Delta t\},\quad k=0,\ldots,M-1.
\]
Then we have that
\begin{align}\label{eq:weak_discrete}
& -\sum_{t\in\mathsf{T}}\sum_{\mathbf{x}\in S^{n-1}_N}p(\mathbf{x},t,N)\left(g(\mathbf{x},t+\Delta t)-g(\mathbf{x},t)\right)
-\sum_{\mathbf{x}\in S^{n-1}_N}p(\mathbf{x},0,N)g(\mathbf{x},0)\\
\nonumber
&\quad=
\sum_{t\in\mathsf{T}}\sum_{\mathbf{x}\in S^{n-1}_N}p(\mathbf{x},t,N)\left[\frac{1}{2N}\sum_{i,j=1}^{n-1}x_i(\delta_{ij}-x_j)\partial^2_{ij}g(\mathbf{x},t)
+\left(\Delta t\right)^\nu\sum_{j=1}^{n-1}x_j\partial_{x_j}g(\mathbf{x},t)(\psi^{(j)}(\mathbf{x})-\bar\psi(\mathbf{x}))\right]\\
\nonumber
&\qquad\qquad+O\left(N^{-2}\left(\Delta t\right)^{-1},\left(\Delta t\right)^{2\nu-1},N^{-1}\left(\Delta t\right)^{\nu-1}\right)\nonumber.
\end{align}
\end{prop}
\begin{proof}
Sum \eqref{eq:weconcludethat} over $\mathsf{T}$, and estimate the error term by its total sum, taking into account that
there are $O((\Delta t)^{-1})$ terms in this sum. This shows the right hand side of \eqref{eq:weak_discrete}.
To obtain the left hand side, we perform a summation by parts and use that $g(\mathbf{x},T)=0$. See appendix~\ref{ap:A} for details.
\ifmmode\else\unskip\quad\fi\squareforqed
\end{proof}
\subsection{Continuous representation}
\label{ssec:continuos_rep}
The aim is now to obtain a continuous version of \eqref{eq:weak_discrete}, but \textit{without taking any limits yet}.
We first need some preliminary definitions:
\begin{deff}[Piecewise time interpolation]\label{def:pti}
Let $\mathsf{T}$ be a set of sampling times as above, and let $\mathsf{T}_0$ be a set of times such that
for each $\bar{t}\in\mathsf{T}$, there exists a unique $\xi\in\mathsf{T}_0$ such that $\xi\in(\bar{t},\bar{t}+\Delta t)$.
Let $g$ be an admissible test function with support in $S^{n-1}\times[0,T]$. Observe that under the assumptions on the
sets $\mathsf{T}$ and $\mathsf{T}_0$, for each $t\in[0,T]$ there exists
a unique $\bar{t}\in\mathsf{T}$ such that $t\in[\bar{t},\bar{t}+\Delta t)$, and a unique $\xi\in (\bar{t},\bar{t}+\Delta t)$.
With this in mind, we define:
\[
\hat{g}(\mathbf{x},t)=g(\mathbf{x},\bar{t}),\quad t\in[\bar{t},\bar{t}+\Delta t),\quad \bar{t}\in\mathsf{T},
\]
and
\[
\dhat {g}(\mathbf{x},t)=g(\mathbf{x},\xi),\quad t\in[\bar{t},\bar{t}+\Delta t),\quad \xi\in(\bar{t},\bar{t}+\Delta t),\quad \bar{t}\in\mathsf{T}\text{ and }\xi\in\mathsf{T}_0.
\]
\end{deff}
\begin{rmk}
For fixed $\mathbf{x}$, we have on one hand that $\hat{g}(\mathbf{x},t)$ is just freezing the value of $g$ on $[\bar{t},\bar{t}+\Delta t)$ to be the
value of $g(\mathbf{x},\bar{t})$. On the other hand, $\dhat{g}(\mathbf{x},t)$ is freezing the value of $g$ on the same interval to be the value
of $g(\mathbf{x},\xi)$, with $\xi\in(\bar{t},\bar{t}+\Delta t)$. The natural choice for $\xi$ will arise, in the present
context, from applications of the mean value theorem to $g$ over the interval $[\bar{t},\bar{t}+\Delta t]$.
\end{rmk}
\begin{deff}[Radonmisation (sic) of discrete densities]\label{def:rdd}
Let $p(\mathbf{x},t,N)$ be a probability density defined on $S^{n-1}_N\times\mathsf{T}$. Let $\delta_{\mathbf{x}}$ denote
the atomic measure at $\mathbf{x}$. We define
\[
p_N(\mathbf{x},t)=\sum_{\mathbf{y}\in S^{n-1}_N}p(\mathbf{y},\bar{t},N)\delta_{\mathbf{y}}(\mathbf{x}),\quad t\in[\bar{t},\bar{t}+\Delta t).
\]
\end{deff}
With these definitions we have the following result
\begin{prop}\label{thm:cont_discr}
Let $g$ be an admissible test function, let $N^{-1}=\kappa\left(\Delta t\right)^\mu$, where $\mu>0$ is a second parameter
yet to be specified, and let
$p_{\Delta t}(\mathbf{x},t)=p_{\kappa^{-1}\left(\Delta t\right)^{-\mu}}(\mathbf{x},t)$.
Then there exists a set $\mathsf{T}_0$ as in Definition~\ref{def:pti}, such that
\begin{align}
&-\int_0^\infty\int_{S^{n-1}} p_{\Delta t}(\mathbf{x},t)\partial_t\dhat{g}(\mathbf{x},t)\,\d\mathbf{x}\,\d t-\int_{S^{n-1}}p_{\Delta t}(\mathbf{x},0)\hat{g}(\mathbf{x},0)\,\d\mathbf{x}\nonumber\\
&\quad=\frac{\kappa\left(\Delta t\right)^{\mu-1}}{2}\int_0^{\infty}\int_{S^{n-1}}p_{\Delta t}(\mathbf{x},t)\left(\sum_{i,j=1}^{n-1}x_i(\delta_{ij}-x_j)\partial^2_{ij}\hat{g}(\mathbf{x},t)\right)\,\d\mathbf{x}\,\d t \nonumber \\
&\qquad+\left(\Delta t\right)^{\nu-1} \int_0^{\infty}\int_{S^{n-1}}p_{\Delta t}(\mathbf{x},t)\left[\sum_{j=1}^{n-1}x_j\left(\psi^{(j)}(\mathbf{x})-\bar\psi(\mathbf{x})\right)\partial_{j}\hat{g}(\mathbf{x},t)\right]\d\mathbf{x} \,\d t\label{eqn:cont_discr}\\
&\qquad+O\left(\left(\Delta t\right)^{2\mu-1},\left(\Delta t\right)^{\nu+\mu-1},\left(\Delta t\right)^{2\nu-1}\right).\nonumber
\end{align}
\end{prop}
\begin{proof}
For the right hand side, we observe that $\hat{g}(\mathbf{x},t)=g(\mathbf{x},t)$ for $\mathbf{x}\in S_N^{n-1}$ and $k\Delta t \leq t < (k+1)\Delta t$,
$k=0,1,\ldots$, and that this also holds for all partial derivatives of $g$ not involving $t$. On using the definition of $p_{\Delta t}$, we readily obtain the equivalence between the sums over $S_N^{n-1}$ and the integrals in $\mathbf{x}$. For the time integrals, we point out that $p_{\Delta t}(\mathbf{x},t)$ and $\hat{g}(\mathbf{x},t)$ (and similarly the derivatives of $g$) are piecewise constant in $t$. Hence the summation over time can be exactly converted into a time integral with a factor of $(\Delta t)^{-1}$.
As for the left hand side, apply the mean value theorem to $g(\mathbf{x},\cdot)$ to get the result and the set $\mathsf{T}_0$.
\ifmmode\else\unskip\quad\fi\squareforqed
\end{proof}
\begin{rmk}
The reader is cautioned that, although \eqref{eqn:cont_discr} bears a remarkable resemblance to a weak formulation,
it is not quite one, since the prospective test functions $\hat{g}$ and $\dhat{g}$ are not test functions in the usual sense.
\end{rmk}
\subsection{Passage to the limit}
\label{ssec:passage_to_the_limit}
We now deal with the limit $\Delta t\to0$ in \eqref{eqn:cont_discr}.
\begin{thm}\label{thm:weak_conv}
Under the same assumptions of Proposition~\ref{thm:cont_discr}, we have that, for any choice of parameters $\mu$ and $\nu$,
there exists $p\in L^\infty([0,T],\mathsf{BM}^+\left(S^{n-1}\right))$, where $\mathsf{BM}^+\left(S^{n-1}\right)$ is the set of positive measures of bounded variation on $S^{n-1}$, such that $p_{\Delta t}(\mathbf{x},t)\to p(\mathbf{x},t)$ weakly as $\Delta t\to0$. Moreover, the following
limits also hold:
\begin{align*}
& \int_0^\infty\int_{S^{n-1}} p_{\Delta t}(\mathbf{x},t)\partial_t\dhat{g}(\mathbf{x},t)\,\d\mathbf{x}\,\d t \to \int_0^\infty\int_{S^{n-1}} p(\mathbf{x},t)\partial_tg(\mathbf{x},t)\,\d\mathbf{x}\,\d t\\
&\int_0^{\infty}\int_{S^{n-1}}p_{\Delta t}(\mathbf{x},t)\left(\sum_{i,j=1}^{n-1}x_i(\delta_{ij}-x_j)\partial^2_{ij}\hat{g}(\mathbf{x},t)\right)\,\d\mathbf{x}\,\d t\\
&\qquad\qquad\to
\int_0^{\infty}\int_{S^{n-1}}p(\mathbf{x},t)\left(\sum_{i,j=1}^{n-1}x_i(\delta_{ij}-x_j)\partial^2_{ij}g(\mathbf{x},t)\right)\,\d\mathbf{x}\,\d t\\
& \int_0^{\infty}\int_{S^{n-1}}p_{\Delta t}(\mathbf{x},t)\left[\sum_{j=1}^{n-1}x_j\left(\psi^{(j)}(\mathbf{x})-\bar\psi(\mathbf{x})\right)\partial_{j}\hat{g}(\mathbf{x},t)\right]\d\mathbf{x}\,\d t \\
&\qquad\qquad\to
\int_0^{\infty}\int_{S^{n-1}}p(\mathbf{x},t)\left[\sum_{j=1}^{n-1}x_j\left(\psi^{(j)}(\mathbf{x})-\bar\psi(\mathbf{x})\right)\partial_{j}g(\mathbf{x},t)\right]\d\mathbf{x}\,\d t
\end{align*}
\end{thm}
\begin{proof}
From the tightness of Radon measures, cf. \cite{Billingsley:1999}, we have that there exists a sequence $\Delta t_n>0$, with $\Delta t_n \downarrow0$ as $n\to\infty$, and $p\in L^\infty([0,T],\mathsf{BM}^+\left(S^{n-1}\right))$, such that
%
\[
\lim_{n\to\infty}p_{\Delta t_n}(\mathbf{x},t)=p(\mathbf{x},t).
\]
The convergence of the integrals follows from the weak convergence of $p_{\Delta t_n}\to p$, and from the fact that for a
continuous function $h$, we have
\[
\lim_{\Delta t\to0}\|h-\hat{h}\|_\infty=\lim_{\Delta t\to0}\|h-\dhat{h}\|_\infty=0.
\]
\ifmmode\else\unskip\quad\fi\squareforqed
\end{proof}
If either $\mu<1$ or $\nu<1$, we can multiply \eqref{eqn:cont_discr} by $(\Delta t)^{-\min(\nu-1,\mu-1)}$. It is then
easily verified that the error term vanishes in the limit, as well as the term with a time derivative. Thus, in this case,
we obtain stationary limits governed by the steady version of the equations derived below.
Now let us assume that $\mu,\nu\ge 1$. It is easily verified that the error term vanishes in the limit. If both $\mu,\nu>1$, we obtain stationary solutions given by the initial condition.
The other cases are as follows:
\begin{thm}\label{thm:weak_limits}
There exists $p\in L^\infty([0,T];\mathsf{BM}^+\left(S^{n-1}\right))$ such that
\begin{description}
\item If $\mu>1$, $\nu=1$, the \textsl{convective or drift approximation}:
\begin{align}
&-\int_0^{\infty}\int_{S^{n-1}}p(\mathbf{x},t)\partial_tg(\mathbf{x},t)\,\d\mathbf{x}\,\d t - \int_{S^{n-1}}p(\mathbf{x},t_0)g(\mathbf{x},t_0)\,\d\mathbf{x} \nonumber\\
&\quad=\int_0^{\infty}\int_{S^{n-1}}p(\mathbf{x},t)\left[\sum_{j=1}^{n-1}x_j\left(\psi^{(j)}(\mathbf{x})-\bar\psi(\mathbf{x})\right)\partial_{j}g(\mathbf{x},t)\right]\d\mathbf{x}\,\d t.\label{weak:convective}
\end{align}
\item If $\mu=1$, $\nu>1$, the \textsl{diffusive approximation}
\begin{align}
&-\int_0^{\infty}\int_{S^{n-1}}p(\mathbf{x},t)\partial_tg(\mathbf{x},t)\,\d\mathbf{x}\,\d t - \int_{S^{n-1}}p(\mathbf{x},t_0)g(\mathbf{x},t_0)\,\d\mathbf{x} \nonumber\\
&\quad=\frac{\kappa}{2}\int_0^{\infty}\int_{S^{n-1}}p(\mathbf{x},t)\left(\sum_{i,j=1}^{n-1}x_i(\delta_{ij}-x_j)\partial^2_{ij}g(\mathbf{x},t)\right)\,\d\mathbf{x}\,\d t.\label{weak:diffusion}
\end{align}
\item If $\mu=1$, $\nu=1$, the case where there is a maximal balance of selection and genetic drift; we find the \textsl{replicator-diffusion equation}
\begin{align}
&-\int_0^{\infty}\int_{S^{n-1}}p(\mathbf{x},t)\partial_tg(\mathbf{x},t)\,\d\mathbf{x}\,\d t - \int_{S^{n-1}}p(\mathbf{x},t_0)g(\mathbf{x},t_0)\,\d\mathbf{x} \nonumber\\
&\quad=\frac{\kappa}{2}\int_0^{\infty}\int_{S^{n-1}}p(\mathbf{x},t)\left(\sum_{i,j=1}^{n-1}x_i(\delta_{ij}-x_j)\partial^2_{ij}g(\mathbf{x},t)\right)\,\d\mathbf{x}\,\d t \label{weak:replicator_diffusion}\\
&\qquad+ \int_0^{\infty}\int_{S^{n-1}}p(\mathbf{x},t)\left[\sum_{j=1}^{n-1}x_j\left(\psi^{(j)}(\mathbf{x})-\bar\psi(\mathbf{x})\right)\partial_{j}g(\mathbf{x},t)\right]\d\mathbf{x}\,\d t.\nonumber
\end{align}
\end{description}
\end{thm}
\begin{proof}
The result follows from Theorem~\ref{thm:weak_conv}, and from straightforward bookkeeping of the $\Delta t$ orders
of the terms in \eqref{eqn:cont_discr}. \ifmmode\else\unskip\quad\fi\squareforqed
\end{proof}
\begin{rmk}
Alternatively, Theorems~\ref{thm:weak_conv} and \ref{thm:weak_limits} can be seen together as an existence theorem for equations~\eqref{weak:convective}, \eqref{weak:diffusion} and \eqref{weak:replicator_diffusion}. Under additional regularity hypotheses on the fitness functions we have uniqueness --- see Section~\ref{sec:forward} --- and then the limit is unique.
\end{rmk}
Equations~(\ref{weak:convective}), (\ref{weak:diffusion}) and~(\ref{weak:replicator_diffusion}) are written in the weak form.
In population dynamics, as well as in other contexts, they are usually cast into the strong formulation (or standard PDE formulation) as follows (see, however, Remark~\ref{rmk:weak_to_pde}):
\begin{itemize}
\item If $\mu>1$ and $\nu=1$, the \textsl{convective or drift approximation}:
\begin{equation}\label{convective_approximation}
\partial_tp=-\sum_{i=1}^{n-1}\partial_i\left[x_i\left(\psi^{(i)}(\mathbf{x})-\bar\psi(\mathbf{x})\right)p\right] .
\end{equation}
This equation is equivalent to the replicator dynamics, showing that the Wright-Fisher process is equivalent to
the replicator dynamics, in the limit of large population and small time-steps, if the population increases faster than
the time-step decreases (a numerical sketch is given after this list).
\item If $\mu=1$ and $\nu>1$, the \textsl{diffusive approximation}
\begin{equation}\label{diffusive_approximation}
\partial_t p=\frac{\kappa}{2}\sum_{i,j=1}^{n-1}\partial_{ij}\left((x_i\delta_{ij}-x_ix_j)p\right) ,
\end{equation}
which is relevant when the fitness converges to 1 (as $\Delta t\to 0$) faster than the rate at which $N\to\infty$.
\item When there is a perfect balance between population size and time step, i.e., $\mu=\nu=1$, we find the \textsl{replicator-diffusion approximation}, given by equation~(\ref{replicator_diffusion_eps}), which we repeat here for convenience:
\begin{equation}\label{replicator_diffusion}\tag{\ref{replicator_diffusion_eps}'}
\partial_t p=\frac{\kappa}{2}\sum_{i,j=1}^{n-1}\partial_{ij}\left((x_i\delta_{ij}-x_ix_j)p\right)
-\sum_{i=1}^{n-1}\partial_i\left[x_i\left(\psi^{(i)}(\mathbf{x})-\bar\psi(\mathbf{x})\right)p\right] .
\end{equation}
\end{itemize}
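The characteristics of the convective approximation~(\ref{convective_approximation}) are the trajectories of the replicator ODE $\dot x_i=x_i\left(\psi^{(i)}(\mathbf{x})-\bar\psi(\mathbf{x})\right)$. The following sketch integrates this ODE for a hypothetical $2\times2$ payoff matrix with a stable interior equilibrium --- precisely the situation in which, as discussed in Section~\ref{sec:prelim}, the approximation of the discrete process cannot be uniform in time.
\begin{verbatim}
import numpy as np

A = np.array([[0.0, 2.0], [1.0, 0.0]])     # hypothetical payoff matrix

def replicator_rhs(x):
    psi = A @ x                            # psi^(i)(x), linear fitness
    return x * (psi - x @ psi)             # x_i (psi^(i) - psi_bar)

x, dt = np.array([0.9, 0.1]), 0.01
for _ in range(5000):                      # forward Euler: enough for a sketch
    x = x + dt * replicator_rhs(x)
    x = np.clip(x, 0.0, None); x /= x.sum()   # guard against round-off drift
print(x)   # approaches the mixed equilibrium (2/3, 1/3)
\end{verbatim}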
We shall focus on the last equation and on its weak formulation~(\ref{weak:replicator_diffusion}).
\begin{rmk}\label{rmk:weak_to_pde}
We shall see in Section~\ref{sec:forward} that the weak and the PDE formulations are not equivalent, and that the correct formulation is actually the weak one.
\end{rmk}
\subsection{Conservation laws from the discrete process}
\label{ssec:conservation}
Let us write $\mathfrak{S}$ for the set of all functions $g:S^{n-1}\times[0,+\infty)\to\mathbb{R}$ such that there exists an open set
$\bar\Upsilon \supset S^{n-1}$ and a function $G:\bar\Upsilon\times[0,+\infty)\to\mathbb{R}$ such that $g$ is the restriction of $G$ to
$S^{n-1}$ and $G\in C^{2,1}(\bar\Upsilon)$.
Notice that in the right hand side of \eqref{weak:replicator_diffusion}, $p$ multiplies an expression that vanishes whenever $g(\cdot,t)$ satisfies:
\begin{equation}\label{adjoint}
\frac{\kappa}{2}\sum_{i,j=1}^{n-1}D_{ij}\partial_{ij}^2g+
\sum_{i=1}^{n-1}\Omega_i\partial_ig=0.
\end{equation}
Equation~\eqref{adjoint} is readily seen to be a steady backward equation.
We now show that the weak solutions also have conservation laws.
\begin{thm}
Let $p$ be a solution to \eqref{weak:replicator_diffusion} (we shall take \eqref{weak:diffusion} as a special case). Let $\varphi\in\mathfrak{S}$ be in the kernel of \eqref{adjoint}.
Then
\[
\int_{S^{n-1}}p(\mathbf{x},t)\varphi(\mathbf{x})\,\d\mathbf{x} = \int_{S^{n-1}}p(\mathbf{x},0)\varphi(\mathbf{x})\,\d\mathbf{x},
\]
for almost every $t\in[0,\infty)$.
\end{thm}
\begin{proof}
Let $\eta(t) \in C_c^1([0,\infty))$, with $\eta(0)=1$. Then
\[
g(\mathbf{x},t)=\eta(t)\varphi(\mathbf{x})
\]
is an admissible test function. On substituting in \eqref{weak:replicator_diffusion}, we find that
\[
\int_0^\infty\int_{S^{n-1}}p(\mathbf{x},t)\varphi(\mathbf{x})\eta'(t)\,\d\mathbf{x}\,\d t + \int_{S^{n-1}}p(\mathbf{x},0)\varphi(\mathbf{x})\,\d\mathbf{x}=0.
\]
Since $\eta$ is an arbitrary function with compact support in $[0,\infty)$, the result follows. \ifmmode\else\unskip\quad\fi\squareforqed
\end{proof}
A similar argument shows also the following
\begin{thm}
Let $p$ be a solution to \eqref{weak:convective}.
Then
\[
\int_{S^{n-1}}p(\mathbf{x},t)\,\d\mathbf{x} = \int_{S^{n-1}}p(\mathbf{x},0)\,\d\mathbf{x},
\]
for almost every $t\in[0,\infty)$.
\end{thm}
Therefore, the conservation laws given by equation~(\ref{discrete_conservation_laws}) now become
\begin{equation}\label{eqn:tobe_cons}
\frac{\d}{\d t}\int_{S^{n-1}}p(t,x)\varphi(x)\d x=0,
\end{equation}
where $\varphi$ satisfies \eqref{adjoint}.
In principle, the condition set out by \eqref{eqn:tobe_cons} seems to imply an infinite (likely uncountable)
number of conservation laws. The following result shows that the set of conservation laws is actually finite dimensional:
\begin{thm}
\label{thm:finite_cons}
Let $\mathbf{e}_i$ denote the vertices of $S^{n-1}$. Then there exist unique $\rho_i$, $i=1,\ldots,n$, with $\rho_i(\mathbf{e}_j)=\delta_{ij}$, that are solutions to \eqref{adjoint}. Any solution to \eqref{adjoint} in $\mathfrak{S}$ can be written as a linear combination of the $\rho_i$, $i=1,\dots,n$; in particular, the constant solution satisfies $1\equiv\sum_{i=1}^n\rho_i$, and the kernel of \eqref{adjoint}, for solutions in $\mathfrak{S}$, has dimension $n$.
\end{thm}
\begin{proof}
Given a vertex $\mathbf{e}_i$, let $\mathbf{e}_j$ be an adjacent vertex. Now we solve \eqref{adjoint} in the segment $\overline{\mathbf{e}_j\mathbf{e}_i}$ with boundary values $\delta_{ij}$. In the segments not adjacent to $\mathbf{e}_i$ define the solution to be zero. This defines the solution in all one-dimensional simplices. For each two-dimensional subsimplex, we now solve the Dirichlet problem with the data from the previous step. Now, assume that we have the solution uniquely defined in all subsimplices of dimension $m$. Repeating the construction above yields the solution in all subsimplices of dimension $m+1$. Proceeding inductively, this yields a solution in $S^{n-1}$ that is unique and admissible. Uniqueness follows from the maximum principle applied at each subsimplex level. Let $\varphi\in\mathfrak{S}$, and let
\[
\Phi(\mathbf{x})=\sum_{i=1}^n\varphi(\mathbf{e}_i)\rho_i(\mathbf{x}).
\]
Then $\Phi\in\mathfrak{S}$. By the preceding argument, $\Phi$ and $\varphi$ must agree at all edges of $S^{n-1}$. Proceeding inductively once again yields that $\varphi=\Phi$ in $S^{n-1}$. \ifmmode\else\unskip\quad\fi\squareforqed
\end{proof}
Thus, the solutions to \eqref{weak:replicator_diffusion} must satisfy:
\begin{equation}\label{continuous_conservation_laws}
\frac{\d}{\d t} \int_{S^{n-1}}\rho_i(\mathbf{x})p(\mathbf{x},t)\d\mathbf{x}=0\ ,\quad i=1,\cdots,n\ .
\end{equation}
From a probabilistic viewpoint, the $\rho_i$, $i=1,\ldots,n$, are naturally identified with the fixation probabilities of the corresponding types.
We now give a purely analytical argument for this fact. In Section~\ref{sec:forward}, we shall prove Theorem~\ref{thm:final_state},
which shows that the final state is given by
\begin{equation*}
p^\infty[p^{\mathrm{I}}]=\lim_{t\to\infty}p(\cdot,t)=\sum_{i=1}^n\pi_i[p^{\mathrm{I}}]\delta_{\mathbf{e}_i}\ ,
\end{equation*}
where $\delta_{\mathbf{e}_i}$ is a Dirac measure supported on the vertex $\mathbf{e}_i\in S^{n-1}$.
Clearly, $\pi_i[p^{\mathrm{I}}]$ is the fixation probability of type $i$ in a population initially described by
a probability distribution $p^{\mathrm{I}}$.
Therefore,
\[
\pi_i[\delta_{\mathbf{x}_0}]=\int\rho_i(\mathbf{x})p^\infty(\mathbf{x})\d\mathbf{x}=\int\rho_i(\mathbf{x})p^{\mathrm{I}}(\mathbf{x})\d\mathbf{x}
=\int\rho_i(\mathbf{x})\delta_{\mathbf{x}_0}(\mathbf{x})\d\mathbf{x}=\rho_i(\mathbf{x}_0).
\]
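In the two-type case, this identification can be made fully explicit: in the interior of $S^1$, equation~\eqref{adjoint} reduces to $\frac{\kappa}{2}\rho''+\left(\psi^{(1)}-\psi^{(2)}\right)\rho'=0$, with $\rho(0)=0$ and $\rho(1)=1$, whose solution is given by a Kimura-type quadrature. A minimal numerical sketch, with a hypothetical selection difference:
\begin{verbatim}
import numpy as np

kappa = 0.1
dpsi = lambda x: 0.2 * (1.0 - 2.0 * x)     # psi^(1) - psi^(2), illustrative

x = np.linspace(0.0, 1.0, 2001)

def cumtrapz(y):
    """Cumulative trapezoidal integral of y on the grid x, starting at 0."""
    return np.concatenate(([0.0],
        np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))))

inner = cumtrapz(dpsi(x))                   # int_0^y dpsi(u) du
rho = cumtrapz(np.exp(-(2.0 / kappa) * inner))
rho /= rho[-1]                              # fixation probability of type 1
print(rho[len(x) // 2])                     # fixation starting from x_0 = 1/2
\end{verbatim}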
\begin{rmk}
In the neutral case, i.e., $\psi^{(i)}(\mathbf{x})=\psi^{(j)}(\mathbf{x})$ for all $i,j=1,\dots,n$ and $\mathbf{x}\in S^{n-1}$,
we define the \textsl{neutral fixation probability} $\pi_i^{\mathrm{N}}[\delta_{\mathbf{x}}]=x_i$, which follows from the fact
that in the neutral case, $\rho_i(\mathbf{x})=x_i$.
\end{rmk}
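The identification of the $\rho_i$ with fixation probabilities can be probed directly on the discrete process. The following is a minimal sketch (not the code used elsewhere in this paper), assuming standard neutral Wright-Fisher multinomial resampling; by the remark above, the estimates should approach $\rho_i(\mathbf{x})=x_i$ in the neutral case:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def neutral_fixation_estimate(x0, N=100, trials=2000):
    # Monte Carlo estimate of the fixation probabilities pi_i for a
    # neutral Wright-Fisher population started at frequencies x0
    fixed = np.zeros(len(x0))
    for _ in range(trials):
        x = np.array(x0, dtype=float)
        while x.max() < 1.0:
            x = rng.multinomial(N, x) / N   # neutral resampling step
        fixed[x.argmax()] += 1.0
    return fixed / trials

x0 = [0.5, 0.3, 0.2]
print("estimated pi:", neutral_fixation_estimate(x0))
print("predicted rho_i(x) = x_i:", x0)
\end{verbatim}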
\section{The Replicator-Diffusion approximation}
\label{sec:forward}
We now discuss the nature of solutions $p$ to (\ref{replicator_diffusion}) together with the conservation
laws (\ref{continuous_conservation_laws}). The main result of this section is Theorem~\ref{thm:final_state}.
This must be understood as the continuous counterpart of
Lemma~\ref{lem:classical}. We do not refer to the discrete model to prove this result.
Our approach is based solely on the properties of the partial differential equation~(\ref{replicator_diffusion}), its restriction
to the domain of interest, and
the associated conservation laws~(\ref{continuous_conservation_laws}).
An outline of the proof of Theorem~\ref{thm:final_state} is as follows:
First, we show that a solution to \eqref{weak:replicator_diffusion} can be written as a regular part plus a singular measure over the boundary. Moreover, the regular part vanishes for large time. Repeating these arguments over the lower dimensional subsimplices, and using the projection result in Proposition~\ref{prop:face_proj}, we arrive at a representation of $p$ as the sum of its classical part and
a sum of singular measures that are uniformly supported on the descending chain of subsimplices of $S^{n-1}$ down to dimension zero.
Since the solutions over the subsimplices also have a regular part that vanishes, we can show that all measures that are not atomically supported at the vertices must vanish for large time. Thus, conservation of probability implies that the steady state of \eqref{weak:replicator_diffusion} is a sum of Dirac deltas.
Finally, we provide two applications.
In Subsection~\ref{ssec:duality}, we study the dual equation. This is the continuous limit of the
evolution by the dual (backward) equation of the discrete process, and therefore its solution $f(\mathbf{k},t)$ gives the fixation probability at time $t$ of a given type (to be prescribed by the boundary conditions in the dual process) for a population initially at state $\mathbf{k}$. This generalizes the celebrated Kimura equation with reversed time~\citep{Kimura} to an arbitrary number of types and to arbitrary fitnesses. In the sequel, Subsection~\ref{ssec:strat_domin},
we will show that if one type dominates all other types then, for any initial condition, the fixation probability of this
type will be larger than the neutral fixation probability. This shows, in particular, that for large populations, the
most probable type to fixate will be the one playing the Nash-equilibrium strategy of the game (assuming the identity
between fitness and pay-offs, which is standard in this framework). This is not true in general for small populations~\citep{Nowak_2006}.
\subsection{Solution of the replicator-diffusion equation}
\label{ssec:solution}
We now study in more detail the features of the solution to (\ref{weak:replicator_diffusion}) and show two important results: first that in the interior of the simplex, the solution must satisfy (\ref{replicator_diffusion}) in the classical sense; second, no classical solution to (\ref{replicator_diffusion}) can satisfy the conservation laws.
Throughout this section, we shall have the further assumption that the fitnesses are smooth.
We start by showing that, in the interior, a weak solution is regular enough to be a classical solution.
\begin{lem}
\label{lem:classical_int}
Let $p$ be a solution to (\ref{weak:replicator_diffusion}). Let $K\subset S^{n-1}$ be a proper compact subset. Then, in $K$, $p$ satisfies (\ref{replicator_diffusion}) in the classical sense. In particular, $p\in C([0,T];C^{\infty}(\mathop{\mathrm{int}}(S^{n-1})))$.
\end{lem}
\begin{proof}
Let $g \in C_c^\infty(K)$; we then have the standard weak formulation of (\ref{replicator_diffusion}) in $K$. On the other hand, (\ref{replicator_diffusion}) is uniformly parabolic in any proper compact subset. Hence the weak and strong formulations coincide --- cf. \citep{Evans,Taylor96a}.
The last statement follows from $\mathop{\mathrm{int}}(S)=\cup_{K\subset S}K$, with $K$ compact and $K\cap\partial S^{n-1}=\emptyset$. \ifmmode\else\unskip\quad\fi\squareforqed
\end{proof}
The next two Lemmas show existence of a unique classical solution, and that such a solution decays to zero for large time.
\begin{lem}
\label{lem:ncons_pde}
Let $p$ be a classical solution to (\ref{replicator_diffusion}). Then
\[
\lim_{t\to\infty} p(\mathbf{x},t)=0,\quad \mathbf{x}\in \mathop{\mathrm{int}} S^{n-1}.
\]
\end{lem}
\begin{proof}
We define $\mu_{\mathrm{S}}(\mathbf{x})=x_1x_2\cdots x_n$
(such that $\mu_{\mathrm{S}}(\mathbf{x})\ge 0$ in $S^{n-1}$ with $\mu_{\mathrm{S}}=0$ if and only if $\mathbf{x}\in\partial S^{n-1}$).
Note that
\begin{equation}
\sum_{j=1}^{n-1}\partial_j\left(\frac{D_{ij}}{\mu_{\mathrm{S}}}\right)=\mu_{\mathrm{S}}^{-1}\left[\sum_{j=1}^{n-1}\left(\delta_{ij}-\delta_{ij}x_j-x_i\right)
-\sum_{j=1}^{n-1}\left(x_j\delta_{ij}-x_ix_j\right)\left(\frac{1}{x_j}-\frac{1}{x_n}\right)\right]=0\ .
\label{change_zero}
\end{equation}
We introduce the new variable $u=\mu_{\mathrm{S}}p$ and
after some manipulations, we find
\begin{equation}
\partial_t u=\mu_{\mathrm{S}}\nabla\cdot\left[\mu_{\mathrm{S}}^{-1}\left(\frac{\kappa}{2}D\nabla u-\boldsymbol{\Omega} u\right)\right]\ ,
\label{RD_chang_var}
\end{equation}
with $D=\left(D_{ij}\right)_{i,j=1,\dots,n-1}$ and $\boldsymbol{\Omega}=\sum_{i=1}^{n-1}\Omega_i\mathbf{e}_i$.
We now show that the last equation is well defined in $S^{n-1}$. For the second order term, this follows from a new application
of equation~(\ref{change_zero}). For the first order term, note that
\[
\mu_{\mathrm{S}}\nabla\cdot\left(\frac{\boldsymbol{\Omega}u}{\mu_{\mathrm{S}}}\right)=
\nabla\cdot\boldsymbol{\Omega} u-\frac{\boldsymbol{\Omega}\cdot\nabla\mu_{\mathrm{S}}}{\mu_{\mathrm{S}}}u+\boldsymbol{\Omega}\cdot\nabla u\ .
\]
Furthermore
\[
\frac{\boldsymbol{\Omega}\cdot\nabla\mu_{\mathrm{S}}}{\mu_{\mathrm{S}}}=\sum_{i=1}^{n-1}\Omega_i\left(\frac{1}{x_i}-\frac{1}{x_n}\right)=
\sum_{i=1}^n\left(\psi^{(i)}(\mathbf{x})-\bar\psi(\mathbf{x})\right)\ .
\]
We shall now study the eigenvalue problem associated to \eqref{RD_chang_var} by
considering the dual problem, with respect to the measure $(\mu_S)^{-1}\d\mathbf{x} $, and with regularised coefficients:
\begin{equation}
\left\{\begin{array}{ll}
&\mu_{\mathrm{S}}^{(\varepsilon)}\nabla\cdot\left[\frac{\kappa}{2\mu_{\mathrm{S}}^{(\varepsilon)}}D^{(\varepsilon)}\nabla \varphi^{(\varepsilon)}\right] + s \boldsymbol{\Omega}\cdot\nabla\varphi^{(\varepsilon)}=
\lambda^{(\varepsilon)}\varphi^{(\varepsilon)},\\
&\varphi^{(\varepsilon)}=0\text{ in }\partial S^{n-1}\ ,
\end{array}\right.
\label{for:evp2}
\end{equation}
where $D^{(\varepsilon)}(\mathbf{x})$ is a positive definite matrix in $S^{n-1}$, with $D^{(\varepsilon)}\stackrel{\varepsilon\to0^+}{\longrightarrow}D$ uniformly in $\mathbf{x}$, and $\mu^{(\varepsilon)}_{\mathrm{S}}>0$ in $S^{n-1}$, $\mu^{(\varepsilon)}_{\mathrm{S}}\stackrel{\varepsilon\to0^+}{\longrightarrow}\mu_{\mathrm{S}}$ uniformly in $\mathbf{x}$, and $s$ is a real parameter.
First we observe that, for $\varepsilon\geq0$, equation \eqref{for:evp2} satisfies a maximum principle for solutions in $C^2(\mathop{\mathrm{int}}(S^{n-1}))$; therefore if $\lambda^{(\varepsilon)}=0$ we have $\varphi^{(\varepsilon)}=0$ in $S^{n-1}$; see \cite{Crandalletal1992}. We conclude that $\lambda^{(\varepsilon)}\not=0$, for $s\in\mathbb{R}$ and $\varepsilon\geq0$. Additionally, since the coefficients are smooth, the solution to equation \eqref{for:evp2} is smooth in the interior by standard elliptic regularity.
For $\varepsilon>0$, the dominant eigenvalue $\lambda_0^{(\varepsilon)}$ is real and from the maximum principle it follows that
$\lambda_0^{(\varepsilon)}\not=0$. For $s=0$, $\lambda_0^{(\varepsilon)}$ is negative and therefore from its continuity in $s$, we conclude that $\lambda^{(\varepsilon)}_0<0$ for any $s$. Therefore,
for any other eigenvalue $\mathrm{Re}\left(\lambda^{(\varepsilon)}\right)\le\lambda^{(\varepsilon)}_0<0$ (see~\citep{Evans} for further details).
Moreover, let $\varepsilon_k\to0$ be a decreasing sequence of positive numbers, and $\varphi^{(\varepsilon_k)}\ge 0$ be the normalised eigenfunctions for the corresponding leading eigenvalues. Since the coefficients are assumed smooth, the eigenfunctions are also smooth. Hence, by the Rellich theorem, there is a subsequence $\varepsilon_{k_j}$ such that $\varphi^{(\varepsilon_{k_j})}$ converges in $L^2(S^{n-1})$. By considering the weak formulation for equation~\eqref{for:evp2}, we immediately see that, for this subsequence, we must also have $\lambda^{(\varepsilon_{k_j})}_0\to\lambda_0$.
We thus have obtained a real negative eigenvalue $\lambda_0$, with a real eigenfunction that is single signed. Using the same argument as in \cite{Evans}, we conclude that $\lambda_0$ is the principal eigenvalue for the non-regularised problem. Hence, any other eigenvalue will satisfy $\mathop{\mathrm{Re}}(\lambda)\leq\lambda_0$.
This also shows that there exists $\alpha>0$, such that
\[
\frac{1}{2}\partial_t\int_{S^{n-1}} u^2\mu_{\mathrm{S}}^{-1}\d\mathbf{x}=
\int_{S^{n-1}}\mu_{\mathrm{S}}\nabla\cdot\left[\mu_{\mathrm{S}}^{-1}\left(\frac{\kappa}{2}D\nabla u - \boldsymbol{\Omega}u\right)\right] u \,\mu_{\mathrm{S}}^{-1}\d\mathbf{x}
<-\alpha \int_{S^{n-1}}u^2\,\mu_{\mathrm{S}}^{-1}\d\mathbf{x}.
\]
Therefore
\[
\int p^2\mu_{\mathrm{S}}\d\mathbf{x}=\int u^2\mu_{\mathrm{S}}^{-1}\d\mathbf{x}\stackrel{t\to\infty}{\rightarrow}0\ .
\]
\ifmmode\else\unskip\quad\fi\squareforqed
\end{proof}
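As an aside, the two algebraic identities used in the proof above --- equation~(\ref{change_zero}) and the expression for $\boldsymbol{\Omega}\cdot\nabla\mu_{\mathrm{S}}/\mu_{\mathrm{S}}$ --- can be verified symbolically. A minimal sketch for $n=3$ (illustration only; the fitness functions are kept symbolic):
\begin{verbatim}
import sympy as sp

x1, x2 = sp.symbols('x1 x2', positive=True)
x3 = 1 - x1 - x2
x, coords = [x1, x2, x3], [x1, x2]
mu = x1 * x2 * x3                       # mu_S for n = 3

# D_ij = x_i (delta_ij - x_j) in the (n-1) simplex coordinates
D = [[x[i] * (sp.KroneckerDelta(i, j) - x[j]) for j in range(2)]
     for i in range(2)]

# identity (change_zero): sum_j d/dx_j (D_ij / mu_S) = 0
for i in range(2):
    print(sp.simplify(sum(sp.diff(D[i][j] / mu, coords[j])
                          for j in range(2))))          # -> 0

# Omega . grad(mu_S) / mu_S = sum_i (psi_i - psibar)
psi = [sp.Function('psi%d' % i)(x1, x2) for i in range(3)]
psibar = sum(x[i] * psi[i] for i in range(3))
Omega = [x[i] * (psi[i] - psibar) for i in range(2)]
lhs = sum(Omega[j] * sp.diff(mu, coords[j]) for j in range(2)) / mu
rhs = sum(psi[i] - psibar for i in range(3))
print(sp.simplify(lhs - rhs))                           # -> 0
\end{verbatim}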
\begin{lem}\label{lem:unique}
Equation~(\ref{RD_chang_var}) has a unique solution $u\in C\left([0,\infty);C^{\infty}(\mathop{\mathrm{int}}(S^{n-1}))\right)$.
\end{lem}
\begin{proof}
Consider equation~(\ref{RD_chang_var}) with $D=D^{(\varepsilon)}$ and $\mu_{\mathrm{S}}=\mu_{\mathrm{S}}^{(\varepsilon)}$, with $D^{(\varepsilon)}$ and $\mu_{\mathrm{S}}^{(\varepsilon)}$ as in Lemma~\ref{lem:ncons_pde}.
For $\varepsilon>0$, it is uniformly parabolic, and hence it has a unique solution with the required regularity.
We write (\ref{RD_chang_var}) in weak form as
\begin{align*}
& \int_0^\infty\int_{S^{n-1}}u^{(\varepsilon)}(t,x)\partial_t\phi(t,x)\left(\mu_{\mathrm{S}}^{(\varepsilon)}\right)^{-1}\d\mathbf{x}\d t\\
&\qquad
+\int_0^\infty\int_{S^{n-1}}\left(\mu_{\mathrm{S}}^{(\varepsilon)}\right)^{-1}\left(\frac{\kappa}{2}D\nabla u^{(\varepsilon)}-\boldsymbol{\Omega}u^{(\varepsilon)}\right)\cdot\nabla\phi(t,x)\d\mathbf{x}\d t\\
&\qquad+\int_{S^{n-1}}u^{(\varepsilon)}(0,x)\phi(0,x)\left(\mu_{\mathrm{S}}^{(\varepsilon)}\right)^{-1}\d\mathbf{x}=0.
\end{align*}
We now observe that any such solution is bounded in $\aleph=L^2((0,T);W^{1,2}_0)$. Hence, one can select a sequence $\varepsilon_k\downarrow0$ such that $u^{(\varepsilon_k)}\to u^{*}\in\aleph$. Since \eqref{RD_chang_var} is weakly parabolic, such a solution must be unique --- \cite{Lieberman1996}. Finally, regularity follows from Lemma~\ref{lem:classical_int}.
\ifmmode\else\unskip\quad\fi\squareforqed
\end{proof}
The preceding two lemmas have an important consequence:
\begin{cor}
\label{cor:no_cons}
No solution to (\ref{replicator_diffusion}) in the classical sense can satisfy the required conservation laws. In particular, this shows that the weak-formulation presented in Theorem~\ref{thm:weak_limits} is not only a device for obtaining the continuous limit, but it turns out to be the correct formulation.
\end{cor}
In view of Corollary~\ref{cor:no_cons}, we turn back to the weak formulation given by \eqref{weak:replicator_diffusion} or, equivalently, to \eqref{replicator_diffusion} together with the conservation laws \eqref{continuous_conservation_laws}. In what follows, we obtain some more information about such solutions.
\begin{rmk}\label{rmk:decomp}
As an extension of Lemma~4.1 in \cite{Chalub_Souza:CMS_2009}, we observe that if $p$ is a Radon measure in $S^{n-1}$,
we can write $p=q+r$, with $\mathop{\mathrm{sing\ supp}}(q)\subset\partial S^{n-1}$ and $\mathop{\mathrm{sing\ supp}}(r)\subset \mathop{\mathrm{int}} S^{n-1}$.
\end{rmk}
\begin{prop}[Face Projections]
\label{prop:face_proj}
Let $1\leq k<n$ and let $p$ be a solution to \eqref{weak:replicator_diffusion}. Let $S^{k-1}$ be a face of $S^{k}$. Assume that $\mathop{\mathrm{sing\ supp}}(p)\cap S^{k-1}\not=\emptyset$. Then, over $S^{k-1}$, $p$ satisfies \eqref{weak:replicator_diffusion} in one less dimension
with forcing given by the regular part of $p$ evaluated at $x_i=0$, for a certain value of $i$.
\end{prop}
\begin{proof}
Assume, without loss of generality, that $i=1$.
In view of remark~\ref{rmk:decomp}, we can write $p=q+r$, where $\mathop{\mathrm{sing\ supp}}(q)\subset S^{n-2}$ with the singular support of $r$ lying in the complement with respect to the full simplex. Moreover, we can also assume, without loss of generality, that $S^{n-2}$ is given by the intersection of the hyperplane $x_1=0$ with $S^{n-1}$. Let us write $\mathbf{x}=(x_2,\ldots,x_n)$. Let $h$ be an appropriate test function in $S^{n-2}$, satisfying $h(\mathbf{x},0)=0$ and let $\eta(x_1)\in C_c([0,1])$, with $\eta(0)=1$. Then $g=\eta h$ is an appropriate test function for $S^{n-1}$ and a direct computation with \eqref{weak:replicator_diffusion} then yields
%
\begin{align*}
&-\int_0^{\infty}\int_{S^{n-1}}p(x_1,\mathbf{x},t)\partial_tg(x_1,\mathbf{x},t)\,\d\mathbf{x}\,\d t\\
&\quad=\frac{\kappa}{2}\int_0^{\infty}\int_{S^{n-1}}p(x_1,\mathbf{x},t)\left(\sum_{i,j=1}^{n-1}x_i(\delta_{ij}-x_j)\partial^2_{ij}g(x_1,\mathbf{x},t)\right)\,\d\mathbf{x}\,\d t \\
&\qquad+ \int_0^{\infty}\int_{S^{n-1}}p(x_1,\mathbf{x},t)\left[\sum_{j=1}^{n-1}x_j\left(\psi^{(j)}(x_1,\mathbf{x})-\bar\psi(x_1,\mathbf{x})\right)\partial_{j}g(x_1,\mathbf{x},t)\right]\d\mathbf{x}\,\d t.
\end{align*}
Over $x_1=0$, on using the definition of $g$, we find that
\begin{align*}
-\int_0^{\infty}\int_{S^{n-2}}q(\mathbf{x},t)\partial_th(\mathbf{x},t)\,\d\mathbf{x}\,\d t
&=\frac{\kappa}{2}\int_0^{\infty}\int_{S^{n-2}}q(\mathbf{x},t)\left(\sum_{i,j=2}^{n-1}x_i(\delta_{ij}-x_j)\partial^2_{ij}h(\mathbf{x},t)\right)\,\d\mathbf{x}\,\d t \\
&\quad+\int_0^{\infty}\int_{S^{n-2}}q(\mathbf{x},t)\left[\sum_{j=2}^{n-1}x_j\left(\psi^{(j)}(\mathbf{x})-\bar\psi(\mathbf{x})\right)\partial_{j}h(\mathbf{x},t)\right]\d\mathbf{x}\,\d t.
\end{align*}
For $r$, we have
\begin{align*}
&-\int_0^{\infty}\int_{S^{n-1}}r(x_1,\mathbf{x},t)\partial_tg(x_1,\mathbf{x},t)\,\d\mathbf{x}\,\d t\\
&\quad=\frac{\kappa}{2}\int_0^{\infty}\int_{S^{n-1}}r(x_1,\mathbf{x},t)\left(\sum_{i,j=1}^{n-1}x_i(\delta_{ij}-x_j)\partial^2_{ij}g(x_1,\mathbf{x},t)\right)\,\d\mathbf{x}\,\d t \\
&\qquad+ \int_0^{\infty}\int_{S^{n-1}}r(x_1,\mathbf{x},t)\left[\sum_{j=1}^{n-1}x_j\left(\psi^{(j)}(x_1,\mathbf{x})-\bar\psi(x_1,\mathbf{x})\right)\partial_{j}g(x_1,\mathbf{x},t)\right]\d\mathbf{x}\,\d t.
\end{align*}
By Lemma~\ref{lem:classical_int}, $r$ is smooth. Therefore, the above equation can be integrated by parts to yield
an integral on $S^{n-1}$ that cancels out identically, since $r$ is a classical solution to \eqref{replicator_diffusion}, plus
a number of integrals over the various faces of $S^{n-1}$. In particular, at $x_1=0$, we find that
\[
0=-\frac{\kappa}{2}\int_0^{\infty}\int_{S^{n-2}}r(0,\mathbf{x},t)h(\mathbf{x},t)\,\d\mathbf{x}\,\d t.
\]
By collecting together the two calculations on $x_1=0$, we obtain the result.
\ifmmode\else\unskip\quad\fi\squareforqed
\end{proof}
In what follows, we shall need some preliminaries. Recall --- see \cite{Stanley1996} --- that to the simplex $S^{n-1}$ is associated a corresponding $f$-vector, whose entry $i+1$ ($f_{i+1}$) is the number of $i$-dimensional subsimplices of $S^{n-1}$. We shall assume that, for each dimension $i$, there is a definite order of the subsimplices $S^{i,j}$, with $i=0,\ldots,n-1$ and $j=1,\ldots,f_{i+1}$. Moreover, we define the adjacency operator $\mathrm{ad}(j,k)$, which denotes the $k$th subsimplex of dimension $i+1$ adjacent to $S^{i,j}$. Notice that there are $n-i$ such simplices.
\begin{thm}[Solution Structure]
\label{thm:soln_struct}
Equation \eqref{weak:replicator_diffusion}, with a given initial condition $p^{\mathrm{I}}\in\mathsf{BM}^+\left(S^{n-1}\right)$, has a unique solution $p\in L^\infty\left([0,T];\mathsf{BM}^+\left(S^{n-1}\right)\right)$. Moreover, let $\delta^{ij}$ be the Radon measure with unit mass uniformly supported on $S^{ij}$. Then the solution $p$ can be written as
\begin{equation}
p(t,x)=p_{n1}+\sum_{\left(i,j\right)\in\mathcal{I}}p_{ij}\delta^{ij},\quad \mathcal{I}=\bigcup_{i=0}^{n-1}\left(\{i\}\times\{1,\ldots,f_{i+1}\}\right),
\label{strut_soln}
\end{equation}
where $p_{ij}$ satisfies
\begin{align}
&-\int_0^{\infty}\int_{S^{ij}}p_{ij}(\mathbf{x},t)\partial_tg(\mathbf{x},t)\,\d\mathbf{x}\,\d t - \int_{S^{ij}}p_{ij}(\mathbf{x},t_0)g(\mathbf{x},t_0)\,\d\mathbf{x} \nonumber\\
&\quad=\frac{\kappa}{2}\int_0^{\infty}\int_{S^{ij}}p_{ij}(\mathbf{x},t)\left(\sum_{r,s=1}^{n-1}x_r(\delta_{rs}-x_s)\partial^2_{rs}g(\mathbf{x},t)\right)\,\d\mathbf{x}\,\d t \label{weak:rd_forcing}\\
&\qquad+ \int_0^{\infty}\int_{S^{ij}}p_{ij}(\mathbf{x},t)\left[\sum_{s=1}^{n-1}x_s\left(\psi^{(s)}(\mathbf{x})-\bar\psi(\mathbf{x})\right)\partial_{s}g(\mathbf{x},t)\right]\d\mathbf{x}\,\d t\nonumber\\
&\qquad \quad + \int_0^{\infty}\int_{S^{ij}}\sum_{k=1}^{n-i}\left.p_{(i+1)\mathrm{ad}(j,k)}\right|_{S^{ij}}\,g(\mathbf{x},t)\,\d\mathbf{x}\,\d t.\nonumber
\end{align}
The initial condition for \eqref{weak:rd_forcing} will be denoted $p^{\mathrm{I}}_{ij}$, and it is obtained from $p^{\mathrm{I}}$ by applying the decomposition described in Remark~\ref{rmk:decomp} recursively.
\end{thm}
\begin{proof}
By direct substitution into \eqref{weak:replicator_diffusion}, and after integrating by parts starting from $S^{n-1}$, proceeding downwards until the vertices and using Proposition~\ref{prop:face_proj}, one can verify that \eqref{strut_soln} is indeed a solution. To verify uniqueness, let $\tilde{p}$ be a solution to \eqref{weak:replicator_diffusion}.
Consider $\tilde{p}$ restricted to $S^{n-1}$ and let $p_{n1}$ be the classical solution guaranteed by Lemma~\ref{lem:unique}.
By considering \eqref{weak:replicator_diffusion} with test functions with compact support on $\mathop{\mathrm{int}} S^{n-1}$, we see that $\tilde{p}-p_{n1}$ vanishes. Therefore $\mathop{\mathrm{sing\ supp}}(\tilde{p}-p_{n1})\subset \partial S^{n-1}$. By Remark~\ref{rmk:decomp}, we can write $\tilde{p}=p_{n1}+q$, with $\mathop{\mathrm{sing\ supp}}(q)\subset\partial S^{n-1}$. Now $\partial S^{n-1}$ is the union of
$f_{n-1}$ copies of $S^{n-2}$. By Proposition~\ref{prop:face_proj}, $q$ must satisfy \eqref{weak:rd_forcing} in one
less dimension in each of the subsimplices. Proceeding inductively, we can now choose a subsimplex $S^{n-3}$ of $S^{n-2}$ and repeat the argument above for each simplex $S^{n-2}$ which has $S^{n-3}$ as a subsimplex.
Iterating until we arrive at the simplices of dimension zero yields the result.\ifmmode\else\unskip\quad\fi\squareforqed
\end{proof}
This theorem leads to the following result:
\begin{thm}[Final State]
\label{thm:final_state}
Let
\[
p^\infty(\mathbf{x}):=\lim_{t\to\infty}p(\mathbf{x},t)\ ,
\]
where $p$ is the solution of equation~(\ref{replicator_diffusion_eps}) subject to conservation laws~(\ref{continuous_conservation_laws}).
Then $p^\infty$ is a linear combination of point masses at the vertices of $S^{n-1}$, i.e.,
\begin{equation}\label{final_state}
p^\infty=\sum_{i=1}^n \pi_i\left[p^{\mathrm{I}}\right]\delta_{\mathbf{e}_i}\ .
\end{equation}
\end{thm}
\begin{proof}
First, we observe that Lemma~\ref{lem:ncons_pde} still holds when applied to the inhomogeneous version of \eqref{RD_chang_var}, provided
that the forcing decays for large times. The result now follows from a straightforward application of Proposition~\ref{prop:face_proj} together with Lemma~\ref{lem:ncons_pde} applied along a descending chain of simplices down to dimension 1.
Conservation of probability then yields that $p^\infty$ must be a sum of atomic measures at the vertices of $S^{n-1}$. Using the remaining conservation laws, we obtain the coefficients, and hence the result.
\ifmmode\else\unskip\quad\fi\squareforqed
\end{proof}
\subsection{Duality and the Kimura equation}
\label{ssec:duality}
The formal adjoint of equation~(\ref{replicator_diffusion_eps}) (changing the flow of time from forward to backward) provides a generalization of the celebrated
Kimura equation~\citep{Kimura}, both including more types and allowing frequency dependent fitness:
\begin{equation}\label{backward_kimura_full}
\partial_tf=\mathcal{L}^\dagger_{n-1,k}f:=\frac{\kappa}{2}\sum_{i,j=1}^{n-1}D_{ij}\partial_{ij}^2f+\sum_{i=1}^{n-1}\Omega_i\partial_if\ .
\end{equation}
In diffusion theory this equation is associated with a martingale problem for the diffusive continuous process.
In genetics, the meaning of equation~(\ref{backward_kimura_full}) is seldom made clear and depends on the boundary conditions imposed. One possible and common interpretation is as follows: given a homogeneous state $\mathbf{e}_i\in S^{n-1}$, let
$f_i(\mathbf{k},t)$ be the probability that given a population initially in a well-defined state
$\mathbf{k}\in S^{n-1}$ (i.e., $p^{\mathrm{I}}(\mathbf{x}):= p(\mathbf{x},0)=\delta_{\mathbf{k}}(\mathbf{x})$)
we find the population fixed at the homogeneous state $\mathbf{e}_i$ at time $t$ (or before),
i.e.,
$f_i(\mathbf{k},t)=\langle p(\cdot,t),\delta_{\mathbf{e}_i}\rangle$.
In this case, we need to find consistent boundary conditions. See~\citet{Maruyama,EtheridgeLNM}.
Let us study the fixation of type 1, represented by the state $\mathbf{e}_1$.
Let us now call $V_i$ the face of the
simplex with $x_i=0$ (type $i$ is absent). Then, $f_1\big|_{V_1}=0$. For $i\ne 1$,
$f_1\big|_{V_i}$ is the solution of $\partial_t f=\mathcal{L}^\dagger_{n-2,k} f$,
where type $i$ was omitted from the equation. As the faces of the simplex are
invariant under the adjoint evolution (one more fact to be attributed to the lack of mutations
in the model), this represents the same problem in one dimension less. We continue
this procedure until we find the evolution on the edge
from vertex $1$ to vertex $i\ne 1$, $L_{1i}$. In this case, we have that $f\big|_{L_{1i}}:[0,1]\to\mathbb{R}$, the
restriction of $f_1$ to this edge, with $x_1$ being the fraction of type 1 individuals, is the solution
of
\begin{equation}\label{Kimura}
\partial_t f=\frac{\kappa}{2}x_1(1-x_1)\partial^2_1f+
x_1(1-x_1)\left(\psi^{(1)}_{1i}(x_1)-\psi^{(i)}_{1i}(x_1)\right)\partial_1 f
\end{equation}
with boundary conditions given by $f(0)=0$ and $f(1)=1$, where $\psi^{(j)}_{1i}(x_1)=\psi^{(j)}(x_1\mathbf{e}_1+(1-x_1)\mathbf{e}_i)$ is the restriction of
$\psi^{(j)}$ to the edge $L_{1i}$.
The forward and backward versions of
Equation~(\ref{Kimura}) are fully studied in the references~\citep{Chalub_Souza:CMS_2009,Chalub_Souza:TPB_2009}.
For $\psi^{(1)}_{1i}-\psi^{(i)}_{1i}$ constant this is the
Kimura equation.
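As an illustration, the stationary solution of equation~(\ref{Kimura}) --- the fixation probability along the edge $L_{1i}$ --- can be evaluated by quadrature; the closed-form expression is recalled in Subsection~\ref{ssec:strat_domin}. A minimal sketch (not the authors' code), with the fitness difference along the edge passed as a function:
\begin{verbatim}
import numpy as np

def edge_fixation(x, s, kappa=1.0, grid=2001):
    # rho(x) = int_0^x G(y) dy / int_0^1 G(y) dy,
    # G(y) = exp(-(2/kappa) * int_0^y s(z) dz),  s = psi1 - psi2 on the edge
    y = np.linspace(0.0, 1.0, grid)
    dy = y[1] - y[0]
    inner = np.concatenate(([0.0], np.cumsum(s(y[:-1])) * dy))
    G = np.exp(-(2.0 / kappa) * inner)
    cum = np.concatenate(([0.0], np.cumsum(G[:-1]) * dy))
    return np.interp(x, y, cum / cum[-1])

print(edge_fixation(0.3, lambda y: 0.0 * y))        # neutral: ~ 0.3
print(edge_fixation(0.3, lambda y: 0.0 * y + 0.5))  # type 1 favoured: ~ 0.41
\end{verbatim}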
\subsection{Strategy dominance}
\label{ssec:strat_domin}
Let us assume that $\psi^{(1)}(\mathbf{x})\ge\psi^{(i)}(\mathbf{x})$ for all $\mathbf{x}\in S^{n-1}$. This happens, for example, if we identify fitness functions with pay-offs in game theory, types with strategists, and if strategist 1 plays the Nash-equilibrium strategy.
We now prove
\begin{thm}
If, for all states $\mathbf{x}\in S^{n-1}$, and all types $i=1,\dots,n$, $\psi^{(1)}(\mathbf{x})\ge\psi^{(i)}(\mathbf{x})$, then the
fixation probability of the first type is not less than the neutral fixation probability for any initial condition $p^{\mathrm{I}}$; i.e.,
\[
\pi_1[p^{\mathrm{I}}]\ge\pi_1^{\mathrm{N}}[p^{\mathrm{I}}]\ .
\]
\end{thm}
\begin{proof}
First note that it is enough to prove that $\pi_1[\delta_{\mathbf{x}}]\ge\pi_1^{\mathrm{N}}[\delta_{\mathbf{x}}]=x_1$ for all
$\mathbf{x}\in S^{n-1}$. The difference $\rho_1(\mathbf{x})-x_1$ satisfies
\[
\frac{\kappa}{2}\sum_{i,j=1}^{n-1}D_{ij}\partial^2_{ij}\left(\rho_1(\mathbf{x})-x_1\right)+\sum_{i=1}^{n-1}\Omega_i\partial_i\left(\rho_1(\mathbf{x})-x_1\right)
=-\Omega_1=-x_1\left(\psi^{(1)}(\mathbf{x})-\bar\psi(\mathbf{x})\right)\le 0\ ,
\]
with vertex conditions $\rho_1(\mathbf{e}_i)-x_1(\mathbf{e}_i)=0$ for $i=1,\dots,n$.
Now, we proceed by induction in $n$. For the case $n=2$, the proof is in~\cite[Section 4.3]{Chalub_Souza:TPB_2009}; we reproduce it
here only for completeness.
We write explicitly the equation for $\rho_1$:
\[
\frac{\kappa}{2}x(1-x)\partial_x^2\rho_1+x\left(\psi^{(1)}(x)-\bar\psi(x)\right)\partial_x\rho_1=0
\]
with $\rho_1(0)=0$ and $\rho_1(1)=1$. We simplify the equation using the fact that
$\psi^{(1)}(x)-\bar\psi(x)=(1-x)\left(\psi^{(1)}(x)-\psi^{(2)}(x)\right)$ and the solution is given by
\[
\rho_1(x)=\frac{\int_0^x\exp\left[-\frac{2}{\kappa}\int_0^{\bar x}\left(\psi^{(1)}(\bar{\bar x})-\psi^{(2)}(\bar{\bar x})\right)\d \bar{\bar x}\right]\d \bar x}
{\int_0^1\exp\left[-\frac{2}{\kappa}\int_0^{\bar x}\left(\psi^{(1)}(\bar{\bar x})-\psi^{(2)}(\bar{\bar x})\right)\d \bar{\bar x}\right]\d \bar x}\ .
\]
As $\psi^{(1)}(x)\ge\psi^{(2)}(x)$, we conclude that
\begin{align*}
&\frac{1}{x}\int_0^x\exp\left[-\frac{2}{\kappa}\int_0^{\bar x}\left(\psi^{(1)}(\bar{\bar x})-\psi^{(2)}(\bar{\bar x})\right)\d \bar{\bar x}\right]\d \bar x
\\&\qquad\qquad\ge
\int_0^1\exp\left[-\frac{2}{\kappa}\int_0^{\bar x}\left(\psi^{(1)}(\bar{\bar x})-\psi^{(2)}(\bar{\bar x})\right)\d \bar{\bar x}\right]\d \bar x\ .
\end{align*}
In particular, $\rho_1(x)\ge x$.
Now, assume that
$\rho_1(\mathbf{x})-x_1\ge 0$ for all $\mathbf{x}\in\partial S^{n-1}$. (Note that $\partial S^{n-1}$ is a union of a finite
number of $(n-2)$-dimensional simplices, where by the induction hypothesis we assume the result valid.) Finally, we use the maximum
principle to conclude that the minimum cannot be attained in the interior of
the simplex~\citep{CourantHilbert2}. Therefore $\rho_1(\mathbf{x})\ge x_1$ for all $\mathbf{x}\in S^{n-1}$.
\ifmmode\else\unskip\quad\fi\squareforqed
\end{proof}
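The dominance result can also be checked numerically on the discrete process. A minimal sketch under stated assumptions: sampling weights $x_i\Psi^{(i)}$ with $\Psi^{(i)}=1+\Delta t\,(\mathbf{M}\mathbf{x})_i$ and $\Delta t=1/N$, and an arbitrary illustrative pay-off matrix in which type 1 weakly dominates:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def wf_fixation(x0, M, N=100, trials=2000):
    dt = 1.0 / N
    wins = np.zeros(len(x0))
    for _ in range(trials):
        x = np.array(x0, dtype=float)
        while x.max() < 1.0:
            w = x * (1.0 + dt * (M @ x))   # weights x_i * Psi_i(x)
            x = rng.multinomial(N, w / w.sum()) / N
        wins[x.argmax()] += 1.0
    return wins / trials

M = np.array([[2.0, 2.0, 2.0],             # psi_1 >= psi_2, psi_3 everywhere
              [1.0, 1.0, 1.0],
              [1.0, 1.0, 1.0]])
x0 = [0.2, 0.4, 0.4]
print("pi estimate:", wf_fixation(x0, M), " neutral pi_1:", x0[0])
\end{verbatim}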
\section{The Replicator Dynamics}
\label{sec:replicator}
In Section~\ref{sec:infinite}, we proved that, when genetic drift and selection balance, there is a special timescale such that the evolution of an infinite population can be described by a parabolic partial differential equation. Nevertheless, in applications one is usually interested in large but finite populations. In this case, an exact limit is not taken, and \eqref{weak:replicator_diffusion} can be taken as an approximation of this evolution. We shall discuss this further in the conclusions, but we observe that this equation might be a good approximation even when the balance is not exact, i.e., when $\nu$ and $\mu$ are close but not equal to one. This would typically lead to an equation with $\kappa$ being either quite large or quite small. In the former case, a regular expansion in $\kappa$ shows that the evolution is governed by \eqref{weak:diffusion}. In the latter case, one expects the much simpler transport equation \eqref{convective_approximation} to be a good approximation
for the evolution. Indeed,
in this section we show that \eqref{replicator_diffusion} can be uniformly approximated by \eqref{convective_approximation}
in proper compact subsets of the simplex, and over time intervals shorter than $\kappa^{-1}$.
We start in Subsection~\ref{ssec:repODE} by showing that equation~(\ref{convective_approximation})
is formally equivalent to the replicator system. Afterwards,
in Subsection~\ref{ssec:peak}, we answer what we believe to be an important question: what exactly is the replicator equation modelling? In particular,
we will show, using a simple argument, that the replicator equation does not model the evolution of
the expected value (of a given trait) in the population, but the evolution of the most common trait
conditional on the absence of extinctions.
Finally, we show, in Subsection~\ref{ssec:local}, that the replicator ordinary differential
equation is a good approximation for the initial dynamics of the Wright-Fisher process, when
$\kappa$ is small. As in Section~\ref{sec:forward}, we shall assume that the fitness functions are smooth.
\subsection{The replicator ODE and PDE}
\label{ssec:repODE}
We shall now study in more detail the equation~(\ref{convective_approximation}), which has
a close connection with the replicator dynamics as shown below:
\begin{thm}\label{thm:replicator_eu}
Assume that $\boldsymbol{\Omega}$ is Lipschitz.
Let $\Phi_t(\mathbf{x})$ be the flow map of
\begin{equation}\label{replicator_ode}
\frac{\d\mathbf{x}}{\d t}=\boldsymbol{\Omega}(\mathbf{x}(t)),
\end{equation}
and let
\[
Q(\mathbf{x},t)=-\int_0^t(\nabla\cdot\boldsymbol{\Omega})(\Phi_{s-t}(\mathbf{x}))\d s.
\]
Let $p^I\in\mathsf{BM}^+\left(S^{n-1}\right)$ and assume that $\mathop{\mathrm{sing\ supp}}(p^I)\subset \mathop{\mathrm{int}}(S^{n-1})$ (see Remark~\ref{rmk:decomp}).
Then the solution to \eqref{convective_approximation} with initial condition $p^I$ is given by
\begin{equation}
p(\mathbf{x},t)=\mathrm{e}^{Q(\mathbf{x},t)}p^{\mathrm{I}}\left(\Phi_{-t}(\mathbf{x})\right).
\label{replicator_pde:soln}
\end{equation}
\end{thm}
\begin{proof}
We observe that, since $\boldsymbol{\Omega}$ is Lipschitz, the push-forward of $p^I$ by $\Phi_t$ is well defined~\citep{ambrosioetal2005}. Hence,
the proof follows from the method of characteristics; see~\cite{John_F_PDE,Evans}. See \cite{DipernaLions1989} for an approach that works even if $\boldsymbol{\Omega}$ fails to be Lipschitz continuous.
\ifmmode\else\unskip\quad\fi\squareforqed
\end{proof}
\begin{rmk}
If $p^{\mathrm{I}}$ gives mass to the boundary of $S^{n-1}$, we write, as in Theorem~\ref{thm:soln_struct}:
\[
p^{\mathrm{I}}=\sum_{i,j}p^{\mathrm{I}}_{i,j},\quad\text{with}\quad \mathop{\mathrm{sing\ supp}}(p^{\mathrm{I}}_{i,j})\subset S^{i,j}.
\]
Moreover, notice that $\Omega$ restricted to $S^{i,j}$ is tangent to $\partial S^{i,j}$. Hence, the restricted dynamics is always well defined, and we can write $\Phi^{i,j}_t$ for the flow map of
\[
\frac{\d\mathbf{x}^{i,j}}{\d t}=\Omega^{i,j}(\mathbf{x}^{i,j}(t))
\quad\text{and}\quad
Q^{i,j}(\mathbf{x},t)=-\int_0^t\left(\nabla^{i,j}\cdot\Omega^{i,j}\right)\left(\Phi^{i,j}_{s-t}(\mathbf{x})\right)\,\d s,
\]
where $\Omega^{i,j}$ is the restriction of $\Omega$ to $S^{i,j}$, and $\nabla^{i,j}\cdot\Omega^{i,j}$ is the divergence in $S^{i,j}$.
Then, repeated applications of Theorem~\ref{thm:replicator_eu} lead to the conclusion that the solution to \eqref{convective_approximation} is given by
\[
p(\mathbf{x},t)=\sum_{i,j}\mathrm{e}^{Q^{i,j}(\mathbf{x},t)}p^{\mathrm{I}}_{i,j}(\Phi^{i,j}_{-t}(\mathbf{x})).
\]
\end{rmk}
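Formula (\ref{replicator_pde:soln}) is straightforward to evaluate numerically. A minimal sketch for $n=2$ (illustration only), with a constant fitness difference $s$, so that $\Omega(x)=x(1-x)s$ and $\nabla\cdot\Omega=(1-2x)s$:
\begin{verbatim}
import numpy as np

s = 0.5
Omega = lambda x: x * (1.0 - x) * s
divOmega = lambda x: (1.0 - 2.0 * x) * s

def flow(x, t, steps=400):
    # RK4 integration of dx/dt = Omega(x) up to time t (t may be negative)
    h = t / steps
    for _ in range(steps):
        k1 = Omega(x); k2 = Omega(x + 0.5 * h * k1)
        k3 = Omega(x + 0.5 * h * k2); k4 = Omega(x + h * k3)
        x = x + h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return x

def p_transport(x, t, p_init, steps=400):
    # p(x,t) = exp(Q(x,t)) * p_init(Phi_{-t}(x)), with
    # Q(x,t) = -int_0^t (div Omega)(Phi_{s-t}(x)) ds
    h = t / steps
    y, Q = flow(x, -t, steps), 0.0
    for _ in range(steps):
        Q -= divOmega(y) * h     # accumulate Q along the characteristic
        y = flow(y, h, 1)
    return np.exp(Q) * p_init(flow(x, -t, steps))

p0 = lambda z: np.exp(-200.0 * (z - 0.3) ** 2)  # bump away from the boundary
print(p_transport(0.5, 2.0, p0))
\end{verbatim}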
\subsection{Peak and average dynamics}
\label{ssec:peak}
We start by showing that the long term dynamics of the average in the Wright-Fisher process, even in the thermodynamic limit, is not governed by the replicator equation. Consider, for example, a population of $n$ types, evolving according to
the replicator-diffusion equation with fitness functions given by $\psi^{(i)}:S^{n-1}\to\mathbb{R}$.
From the fact that the final state of the replicator-diffusion equation is given by equation~(\ref{final_state}),
the coefficients $\pi_i[p^{\mathrm{I}}]$, $i=1,\dots,n$ can be calculated in two ways:
\[
\int \rho_i(\mathbf{x})p^{\mathrm{I}}(\mathbf{x})\d\mathbf{x}=\int\rho_i(\mathbf{x})p^\infty(\mathbf{x})\d\mathbf{x}=\pi_i[p^{\mathrm{I}}]=\int x_ip^\infty(\mathbf{x})\d\mathbf{x}=:\langle p^\infty\rangle_i\ .
\]
Therefore the average of the probability distribution will converge to a certain point of the simplex depending on the initial condition.
This is completely different from the replicator dynamics, whose solutions may converge to a single attractor, periodic orbits,
chaotic attractors, etc.~\citep{HofbauerSigmund}.
Now, we show that the probability distribution concentrates at the ESS; this shows that the peak behaves in a
manner similar to the solutions of the replicator dynamics.
Recall~\citep{HofbauerSigmund} that an ESS that lies in the interior of $S^{n-1}$ must be a global attractor of
the replicator equation~(\ref{replicator_ode}). We then have the following result:
\begin{thm}\label{thm:delta_conv}
Assume $p^{\mathrm{I}}\in\mathsf{BM}^+\left(S^{n-1}\right)$, $\mathop{\mathrm{sing\ supp}} p^{\mathrm{I}}\subset\mathop{\mathrm{int}}S^{n-1}$ and assume that (\ref{replicator_ode})
has a unique point $\mathbf{x}^*$ such that for any initial condition $\mathbf{x}(0)\in\mathop{\mathrm{int}}S^{n-1}$, $\lim_{t\to\infty}\mathbf{x}(t)=\mathbf{x}^*$.
Then the solution of equation~(\ref{convective_approximation}) is such that
\[
\lim_{t\to\infty}p(\mathbf{x},t)=\delta_{\mathbf{x}^*}.
\]
\end{thm}
\begin{proof}
Assume, initially, that $\mathbf{x}^*\in\mathop{\mathrm{int}}S^{n-1}$.
Since $\mathbf{x}^*$ is a globally stable equilibrium for interior initial points, given any proper compact subset $K\subset S^{n-1}$ and any sufficiently small $\delta>0$, we can find $T>0$ such that, for $t>T$:
\[
\Phi_t(K)\subset B_{\delta}(\mathbf{x}^*)\subset\mathop{\mathrm{int}}S^{n-1},
\]
where $B_\delta(\mathbf{x}^*)$ is the open ball of radius $\delta$ centered at $\mathbf{x}^*$.
Let $\eta(\mathbf{x})$ be a continuous function with support contained in $K$.
Then, for $t>T$, we have that
\[
\int_{S^{n-1}}p(\mathbf{x},t)\eta(\mathbf{x})\,\d\mathbf{x} = \int_{B_\delta(\mathbf{x}^*)}p(\mathbf{x},t)\eta(\mathbf{x})\,\d\mathbf{x}.
\]
Now, let $\varepsilon>0$ be given. Since $\eta$ is continuous, possibly with a smaller $\delta>0$, we must have
\begin{equation}
\label{eq:nearid}
\eta(\mathbf{x}^*)-\varepsilon\leq\int_{B_{\delta}(\mathbf{x}^*)}p(\mathbf{x},t)\eta(\mathbf{x})\,\d\mathbf{x}\leq\eta(\mathbf{x}^*)+\varepsilon.
\end{equation}
Now take $(\delta_k,\varepsilon_k)\downarrow0$ such that (\ref{eq:nearid}) is satisfied. This yields a sequence of times $T_k$ such that $T_k\to\infty$ and
\[
\lim_{k\to\infty}\int_{S^{n-1}}p(\mathbf{x},T_k)\eta(\mathbf{x})\,\d\mathbf{x} = \eta(\mathbf{x}^*).
\]
Since $\Phi_s(K)\subset\Phi_t(K)$, for $s>t$, the claim follows.
For the case $\mathbf{x}^*\in\partial S^{n-1}$, the result follows from similar arguments, replacing $B_{\delta}(\mathbf{x}^*)$ by
$B_{\delta}(\mathbf{x}^*)\cap S^{n-1}$.
\ifmmode\else\unskip\quad\fi\squareforqed
\end{proof}
\subsection{Asymptotic approximation}
\label{ssec:local}
Let
\[
0<\kappa\ll1.
\]
If we perform a regular asymptotic expansion, i.e., if we write $p_\kappa\approx p_0+\kappa p_1+\cdots$, then we find, for times $t\ll\kappa^{-1}$, that the leading order dynamics is given by
\begin{equation}
\label{eq:loap}
\partial_{t}p_0+\nabla\cdot(p_0\boldsymbol{\Omega})=0.
\end{equation}
The next theorem shows that this is indeed the case, provided we regard $p_0$ as the leading order dynamics with respect to the regular part of the probability density.
\begin{thm}\label{thm:conv_to_replicator}
Assume that the fitnesses are $C^2(S^{n-1})$ functions, and that the initial condition $p^{\mathrm{I}}$ is also $C^2(S^{n-1})$. Let $r_\kappa$ be the regular part of the solution of (\ref{replicator_diffusion_eps}), with $\kappa\ge0$. Then $p_0$ is $C^2(S^{n-1})$, and satisfies the conservation laws~(\ref{continuous_conservation_laws}).
Moreover, if $\nabla\cdot\boldsymbol{\Omega}\geq0$, then given $\kappa$ and $K$ positive, there exists a $C$ such that, for $t\ll C\kappa^{-1}$, we have
\[
\|r_\kappa(\cdot,t)-p_0(\cdot,t)\|_\infty\leq C\kappa
\]
and
\[
\|\partial_x^2p_0(\cdot,t)\|_\infty\leq K.
\]
Thus $p_0$ is the leading order asymptotic approximation to $r_\kappa$, for $t\ll C\kappa^{-1}$.
\end{thm}
\begin{proof}
The statements about $p_0$ follow straightforwardly by obtaining the solution via the method of characteristics.
Let $w_\kappa=r_\kappa-p_0$. Then $w_\kappa$ satisfies
\begin{equation*}
\partial_t w_\kappa=\frac{\kappa}{2}\sum_{i,j=1}^{n-1}\partial^2_{ij}\left(D_{ij}w_\kappa\right)-\sum_{i=1}^{n-1}\partial_i\left(\Omega_i w_\kappa\right)
+\frac{\kappa}{2}g_0(\mathbf{x},t)
\end{equation*}
with null initial condition, where
\[
g_0(\mathbf{x},t)= \sum_{i,j=1}^{n-1}\partial_{i,j}^2\left(D_{ij}p_0\right) .
\]
Notice that, because of the assumptions on $p^{\mathrm{I}}$, $g_0$ is uniformly bounded in time.
The solution to such a problem is given by the Duhamel principle. Let $S(t,t_0)$ be the associated solution operator. We have that
\[
w_\kappa(\mathbf{x},t)=\frac{\kappa}{2}\int_0^t S(t,s)g_0(s,x)\d s.
\]
By the maximum principle applied to the semigroup $S(t_2,t_1)$, we have that $\|S(t,s)g_0(s,x)\|\leq M_s$, and by the uniform bound on $g_0$, we have that there exists a constant $M$ such that $M_s\leq M$. Thus, we find that
\[
\|S(t,s)g_0(s,x)\|_\infty\leq M.
\]
Hence
\[
|w_{\kappa}(\mathbf{x},t)|\leq \kappa t\frac{M}{2}.
\]
Therefore, taking $C=2M^{-1}$, we find, for $t\ll C\kappa^{-1}$, that:
\[
\|w_\kappa(t,\cdot)\|_\infty \ll1.
\]
\ifmmode\else\unskip\quad\fi\squareforqed
\end{proof}
\begin{rmk}
If the condition on $\nabla\cdot\boldsymbol{\Omega}$ is not satisfied, a similar proof shows that if $t\ll-\log(\kappa)$ then
the same conclusion holds. Notice also that this condition is satisfied if the replicator has a globally stable equilibrium in the interior of $S^{n-1}$.
\end{rmk}
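The estimate in Theorem~\ref{thm:conv_to_replicator} can be visualised by a crude finite-difference experiment in the two-type case. A minimal sketch (illustration only; an explicit interior scheme with no claim of optimality), comparing a small-$\kappa$ run against the $\kappa=0$ transport dynamics:
\begin{verbatim}
import numpy as np

def evolve(kappa, s=0.5, J=400, T=1.0, dt=2.0e-5):
    # explicit scheme for dp/dt = (kappa/2) dxx[a p] - dx[a s p], a = x(1-x)
    x = np.linspace(0.0, 1.0, J + 1)
    dx = x[1] - x[0]
    a = x * (1.0 - x)
    p = np.exp(-200.0 * (x - 0.3) ** 2)      # regular initial bump
    for _ in range(int(T / dt)):
        f = a * p
        p[1:-1] += dt * (0.5 * kappa * (f[2:] - 2*f[1:-1] + f[:-2]) / dx**2
                         - s * (f[2:] - f[:-2]) / (2.0 * dx))
    return p

kappa = 0.01
diff = np.abs(evolve(kappa) - evolve(0.0)).max()
print("sup |r_kappa - p_0| ~", diff, " (expected O(kappa * t))")
\end{verbatim}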
\begin{prop}\label{prop:weak_asymp}
Under the same hypotheses of Theorem~\ref{thm:conv_to_replicator}, let $U$ be an open set such that $U\subset S^{n-1}$ and $\bar{U}\cap \partial S^{n-1}=\emptyset$. Then, there exists $C>0$, such that
\[
\left|\int_{U}\left(p_\kappa(\mathbf{x},t)-p_0(\mathbf{x},t)\right)\d\mathbf{x}\right|<C\kappa t,
\]
for any $t$.
\end{prop}
\begin{proof}
By Theorem~\ref{thm:conv_to_replicator}, there exists $C'>0$ such that $\|r_\kappa(\cdot,t)-p_0(\cdot,t)\|_\infty<C'\kappa t$, which we write as:
\begin{equation*}
-C'\kappa t\leq r_\kappa(\mathbf{x},t)-p_0(\mathbf{x},t)<C'\kappa t.
\end{equation*}
Integrating in $U$ and using that the singular part $q_\kappa=p_\kappa-r_\kappa$ satisfies $\mathop{\mathrm{sing\ supp}}(q_\kappa)\cap U=\emptyset$, the result follows.
\end{proof}
Theorem~\ref{thm:delta_conv} shows that, for sufficiently large times, the support of the solution of the replicator PDE, equation~(\ref{convective_approximation}),
is concentrated in arbitrarily small neighbourhoods of $\mathbf{x}^*$. In particular, this is true for the maximum. For the replicator-diffusion equation~(\ref{replicator_diffusion_eps}) this
cannot hold for any value of $\kappa>0$ (as was proved in Theorem~\ref{thm:final_state}); however,
for strong selection, the initial dynamics given by the replicator-diffusion equation
is similar to
the one given by the replicator ODE.
This is justified by the following result:
\begin{thm}
Assume that the replicator has a unique global attractor. Then, under the same hypotheses of Theorem~\ref{thm:conv_to_replicator}, we have that given $\varepsilon>0$ and $\delta>0$ there exist a time $t^*$ and a constant $C>0$, depending only on the initial condition, such that
\[
\left|\int_{B_\varepsilon(\mathbf{x}_*)}p_{\kappa}(\mathbf{x},t)\,\d\mathbf{x}-1\right|<C\kappa t + \delta,
\]
for $t>t^*$.
\end{thm}
\begin{proof}
We have
\[
\left|\int_{B_\varepsilon(\mathbf{x}_*)}p_{\kappa}(\mathbf{x},t)\,\d\mathbf{x}-1\right|\leq \left|\int_{B_\varepsilon(\mathbf{x}_*)}\left(p_{\kappa}(\mathbf{x},t)-p_0(\mathbf{x},t)\right)\,\d\mathbf{x}\right| + \left|\int_{B_\varepsilon(\mathbf{x}_*)}p_{0}(\mathbf{x},t)\,\d\mathbf{x}-1\right|.
\]
From Theorem~\ref{thm:conv_to_replicator}, we have a constant $C>0$ such that
\[
\left|\int_{B_\varepsilon(\mathbf{x}_*)}\left(p_{\kappa}(\mathbf{x},t)-p_0(\mathbf{x},t)\right)\,\d\mathbf{x}\right|<C\kappa t.
\]
From Theorem~\ref{thm:delta_conv}, we have that there exists a time $t^*$ such that, for $t>t^*$, we have
\[
\left|\int_{B_\varepsilon(\mathbf{x}_*)}p_{0}(\mathbf{x},t)\,\d\mathbf{x}-1\right|<\delta.
\]
Combining these two calculations yields the result.
\end{proof}
\section{Numerical results}
\label{sec:numerics}
We show, in this section, numerical results for two variants of the Rock-Scissor-Paper game~\citep{HofbauerSigmund}; i.e., fitnesses are identified with the pay-offs
from game theory. In Subsection~\ref{ssec:num_f}, we study the discrete evolution numerically in time, and show that the peak of the
distribution behaves according to the replicator equation, while the average value of the same distribution converges to a point which is not the ESS.
In Subsection~\ref{ssec:num_b} we obtain explicitly the fixation probability of a given type for the symmetric Rock-Scissor-Paper game. A full animation
is available at the website indicated in the caption of figure~\ref{simulation}.
\subsection{Forward equation}
\label{ssec:num_f}
\begin{figure}
\includegraphics[width=.32\linewidth]{Fig2a}
\includegraphics[width=.32\linewidth]{Fig2b}
\includegraphics[width=.32\linewidth]{Fig2c}\\
\includegraphics[width=.32\linewidth]{Fig2d}
\includegraphics[width=.32\linewidth]{Fig2e}
\includegraphics[width=.32\linewidth]{Fig2f}\\
\includegraphics[width=.32\linewidth]{Fig2g}
\includegraphics[width=.32\linewidth]{Fig2h}
\includegraphics[width=.32\linewidth]{Fig2i}\\
\includegraphics[width=.32\linewidth]{Fig2j}
\includegraphics[width=.32\linewidth]{Fig2k}
\includegraphics[width=.32\linewidth]{Fig2l}
\caption{Solution for short times (1,3,6,10,15,21,28,35,44,54,65,77) of the Wright-Fisher evolution for a population
of 150 individuals of three given types, with fitness
given by equations~(\ref{fitness_rep}) and (\ref{pay_off_RSP}), for a
distribution initially concentrated in the interior non-stationary point
$\frac{1}{150}(70,70,10)$. The value of
the distribution $P(x,y,t)$ is in logarithmic scale. Note that the cyan spot, marking
the interior peak of the probability distribution, rotates and converges to the
ESS $\left(\frac{1}{3},\frac{1}{3},\frac{1}{3}\right)$ (along characteristics of the PDE or, equivalently, the trajectories of
the replicator dynamics). At the same time, the green spot marks
the mean value of the probability distribution and also rotates initially. After a long time,
it moves toward its final position, given by
$\mathbf{x}^\infty:=\left(F^{(1)}_{p^{\mathrm{I}}},F^{(2)}_{p^{\mathrm{I}}},1-F^{(1)}_{p^{\mathrm{I}}}-F^{(2)}_{p^{\mathrm{I}}}\right)\approx(0.331,0.227,0.442)$.
For a full animation, also for different population sizes $N$, see
{\footnotesize
\texttt{http://dl.dropbox.com/u/11325424/WFsim/RSPFinal.html}}
}\label{simulation}
\label{fig:evolutionRSP}
\end{figure}
We use evolutionary game theory~\citep{JMS,HofbauerSigmund} to define the fitness function.
More precisely, we define a pay-off matrix $\mathbf{M}=\left(M_{ij}\right)_{i,j=1,\cdots,n}$ such that $M_{ij}$ is
the gain (in fitness) of the $i$ type against the $j$ type.
The fitness of the $i$ type in a population at state $\mathbf{x}$ is
\begin{equation}\label{fitness_rep}
\Psi^{(i)}(\mathbf{x})=\sum_{j=1}^nM_{ij}x_j=\left(\mathbf{M}\mathbf{x}\right)_{i}\ .
\end{equation}
In a simulation where both effects, selection and genetic drift, are apparent, we have $\mu=\nu=1$ and $\kappa=O(1)$. The last identity implies
$\Delta t=O\left(N^{-1}\right)$.
Furthermore, from equation~(\ref{WSP}), we have $\psi^{(i)}(\mathbf{x})=\frac{1}{\Delta t}\left(\Psi^{(i)}(\mathbf{x})-1\right)$, and therefore, in order to see both effects
we need strong fitness functions and long times, i.e., $\Delta t\approx N^{-1}\ll 1$.
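A minimal sketch of the discrete evolution just described (illustrative, not the code used to produce the figure): one Wright-Fisher run with sampling weights $x_i\Psi^{(i)}(\mathbf{x})$ and $\Psi^{(i)}=(\mathbf{M}\mathbf{x})_i$, followed by a crude ensemble mean:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M = np.array([[30., 81., 29.],
              [6., 30., 104.],
              [106., 4., 30.]])          # pay-off matrix (pay_off_RSP)
N = 150

def step(counts):
    x = counts / N
    w = x * (M @ x)                      # weights x_i * Psi_i, Psi = M x
    return rng.multinomial(N, w / w.sum())

c = np.array([70, 70, 10])               # initial state of figure 2
for _ in range(77):
    c = step(c)
print("one run after 77 steps:", c / N)

finals = np.zeros(3)
for _ in range(500):                     # rough estimate of the mean path
    c = np.array([70, 70, 10])
    for _ in range(77):
        c = step(c)
    finals += c / N
print("ensemble mean:", finals / 500)
\end{verbatim}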
We consider in Figure~\ref{fig:evolutionRSP} the
evolution of a discrete population of $N=150$ individuals with the pay-off matrix given by
\begin{equation}\label{pay_off_RSP}
\mathbf{M}=\left(\begin{matrix}
30&81&29\\
6&30&104\\
106&4&30
\end{matrix}\right)\ .
\end{equation}
This is known as the generalized Rock-Scissor-Paper game, and it presents an evolutionarily stable state (ESS) at $(x^*,y^*,z^*)=\left(\frac{1}{3},\frac{1}{3},\frac{1}{3}\right)$.
Furthermore, the flow of the replicator dynamics converges in spirals to the ESS. The vertices as well as $(x^*,y^*,z^*)$ are equilibrium points
of the continuum dynamics. See~\cite{HofbauerSigmund} for the choice of the values of the matrix $\mathbf{M}$.
Note that the peak moves in inward spirals
around the central equilibrium, following the trajectories of the replicator dynamics, while all the mass diffuses to the boundary.
The green spot indicates the average value for $x$ and $y$; at first it moves in spirals close to the trajectories of the
replicator dynamics. After a time depending on the value of $N$ it starts to move in the direction of its final point $(x^\infty,y^\infty,z^\infty)
=(\pi_1[p^{\mathrm{I}}],\pi_2[p^{\mathrm{I}}],\pi_3[p^{\mathrm{I}}])$.
This point can be calculated using equation~(\ref{final_state}) and the $n=3$ independent conservation laws.
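Alternatively, for small $N$ the fixation probabilities can be computed exactly from the discrete chain by solving the absorption problem, which is the discrete counterpart of the conservation-law computation. A minimal sketch (illustration only; $N$ is reduced to keep the state space small, so the numbers only approximate the $N=150$ values):
\begin{verbatim}
import numpy as np
from scipy.stats import multinomial

M = np.array([[30., 81., 29.], [6., 30., 104.], [106., 4., 30.]])
N = 30
states = [(i, j, N - i - j) for i in range(N + 1) for j in range(N + 1 - i)]
idx = {s: k for k, s in enumerate(states)}
A = np.array(states, dtype=float)

P = np.zeros((len(states), len(states)))
for k in range(len(states)):
    x = A[k] / N
    w = x * (M @ x)
    P[k] = multinomial.pmf(A, N, w / w.sum())  # batched pmf over all states

vertices = [idx[(N, 0, 0)], idx[(0, N, 0)], idx[(0, 0, N)]]
h = np.zeros(len(states)); h[vertices[0]] = 1.0   # fixation of type 1
trans = [k for k in range(len(states)) if k not in vertices]
Q = P[np.ix_(trans, trans)]
b = P[np.ix_(trans, vertices)] @ h[vertices]
h[trans] = np.linalg.solve(np.eye(len(trans)) - Q, b)
print("pi_1 from state (14, 14, 2):", h[idx[(14, 14, 2)]])
\end{verbatim}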
\subsection{Backward equation and the decay of the interior $L^1$-norm}
\label{ssec:num_b}
\begin{figure}
\begin{center}
\includegraphics[width=.8\linewidth]{Fig3}
\end{center}
\caption{Fixation probability of the third type, in a Rock-Scissor-Paper game. This is the numerical solution
of the stationary state
of equation (\ref{backward_kimura_full}), simulated by a Wright-Fisher process with $N=150$ and
pay-off matrix given by equation~(\ref{RSP_matrix}). Note that higher values of the
fixation probability ``rotate'' around the center of the simplex (the stationary state of the
replicator dynamics).
}
\label{fig:backward}
\end{figure}
The stationary state of the backward equation (\ref{backward_kimura_full}) represents the fixation probability
of a given type. This type is specified by the associated boundary conditions. Let us consider, as an example,
$n=3$, with the evolution given by the Rock-Scissor-Paper game defined by the matrix
\begin{equation}\label{RSP_matrix}
\mathbf{M}= \left(\begin{matrix}
0&40&20\\
20&0&40\\
40&20&0
\end{matrix}
\right)\ ,
\end{equation}
and we study the fixation probability of the third type.
An exact solution is difficult to obtain, as it would be necessary to solve a hierarchy of equations, each solution providing
boundary conditions for a larger set; however, a numerical solution is straightforward to compute, as the Wright-Fisher
process is a natural discretisation of the (forward as well as the) backward equation (cf. Theorem~\ref{thm:finite_cons}). This is probably not the most computationally efficient approach, and
different processes can be compatible with the same limit equations. See figure \ref{fig:backward} for
an illustration.
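A minimal sketch of such a computation (assumptions: weak-selection weights $\Psi^{(i)}=1+\Delta t\,(\mathbf{M}\mathbf{x})_i$ with $\Delta t=1/N$; the normalisation actually used for figure~\ref{fig:backward} may differ):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
M = np.array([[0., 40., 20.], [20., 0., 40.], [40., 20., 0.]])  # (RSP_matrix)
N, dt = 150, 1.0 / 150

def fixation_of_type3(counts0, trials=400):
    hits = 0
    for _ in range(trials):
        c = np.array(counts0)
        while c.max() < N:
            x = c / N
            w = x * (1.0 + dt * (M @ x))   # weights x_i * Psi_i(x)
            c = rng.multinomial(N, w / w.sum())
        hits += int(c[2] == N)             # type 3 reached fixation
    return hits / trials

print(fixation_of_type3([50, 50, 50]))     # one point of figure 3
\end{verbatim}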
In figure~\ref{fig:l1norm}, we plot the $L^1$ norm in the interior of the simplex and of all subsimplices, showing that
the probability mass flows from the simplex $S^{n-1}$ to the faces (which are equivalent to simplices $S^{n-2}$); on the faces, the
solution behaves as the solution of the replicator-diffusion problem in one dimension less. The probability then
flows to the ``faces of the faces'', i.e., to simplices $S^{n-3}$, until it reaches the absorbing states $\mathbf{e}_i$ (simplices $S^0$),
$i=1,\dots,n$. We may think of a stochastic process reaching and sticking to the faces of the simplex until it reaches its
final spot, one of the vertices.
We further observe that the probability mass in the interior of the simplex, once normalised, is the so-called quasi-stationary distribution of the process, namely, the probability distribution given that the process has not been absorbed. See \cite{MeleardVillemonais2011} for a recent survey on the topic.
\begin{figure}
\begin{center}
\includegraphics[width=0.7\textwidth]{Fig4}
\caption{Evolution of the probability mass, for the Rock-Scissor-Paper game given by matrix (\ref{RSP_matrix}) and with initial condition concentrated
at the ESS, $p^{\mathrm{I}}=\delta_{(\frac{1}{3},\frac{1}{3})}$. The red line indicates the mass ($L^1$-norm) in the interior of the simplex; the blue line, the mass
in the interior of any of the faces; and the black line, the mass at any of the vertices.}\label{fig:l1norm}
\end{center}
\end{figure}
\section{Conclusions}
\label{sec:conclusion}
We present a derivation of continuous limits of discrete Markov chain evolutionary models, which are frequency-dependent extensions
of the classical Wright-Fisher model, through purely analytical techniques.
The derivation pays close attention to the variety of time scalings possible, as related to the selection intensity and the population size, which are measured by two parameters $\mu,\nu\geq1$. The balance of diffusion and selection ($\mu=\nu=1$ in our terminology) can be seen as a slight extension of the results in \cite[Chapter 10]{EthierKurtz}, using analytical methods instead of probabilistic arguments, and one that favours the forward Fokker-Planck equation instead of the backward one. In this sense, from a mathematical point of view, the weak formulation presented for the forward equation seems to be new, in particular for the minimal assumptions on the fitness functions.
The case $\mu>\nu=1$ yields a hyperbolic equation that is the PDE version of the replicator equation. An apparently similar result can be found in \cite[Chapter 11]{EthierKurtz}, which would correspond formally to taking $\Delta t=1$ and $N\gg 1$, without using an explicit scaling between these two variables.
With some additional regularity assumptions on the fitness functions, we can show that \eqref{weak:replicator_diffusion} is equivalent to \eqref{replicator_diffusion} together with the conservation laws \eqref{continuous_conservation_laws}. In particular, this allows us
to characterise the behaviour of $p$ on the lower dimensional subsimplices of $S^{n-1}$. This can be used to obtain equations for the probability of extinction, among other information.
The results here are also related to results
in~\cite{ChampagnatFerriereMeleard_TPB2006,ChampagnatFerriereMeleard_SM2008}, where the idea that the underlying scaling influences the macroscopic model was already present, although in a less explicit way than here.
Nevertheless, the accelerated birth-death regime in \cite{ChampagnatFerriereMeleard_SM2008} can be seen as a counterpart to our scaling of $\Delta t$ and $N$. On the other hand, the scaling for the fitnesses is taken as fixed there (corresponding to $\nu=1$ in our terminology), and this explains why they do not obtain the pure diffusive limit in the large population regime. Notice also that the large population regime taken there seems to annihilate any stochastic effects coming from births and deaths, so that the stochastic effects in this limit are due only to the mutation process.
However, as pointed out above, since we allow more flexibility in the scaling laws, we are able to highlight either of these two factors independently; more precisely, for certain choices of the scaling of the fitness functions (namely,
the exponent $\nu>1$), their influence on the dynamics goes to zero so fast that the limit model is purely diffusive. On the other hand, if we grow the population size fast enough (i.e., $\mu>1$) then
we highlight the deterministic evolution, providing a direct way to compare the replicator equation with the Wright-Fisher process (or, for that matter, also with the Moran process, but, naturally,
in a different time scale). To the best of our knowledge,
this explicit comparison is new. See also~\cite{Fournier_Meleard_AAP2004} for a similar approach.
The use of ordinary differential equations in population dynamics is widespread. However, as they are valid only for infinite populations, and
real populations are always finite, the precise justification of their use and the precise meaning of their solutions are seldom made clear.
In this paper, we showed, in a limited framework, but expanding results from previous works~\citep{Chalub_Souza:CMS_2009,Chalub_Souza:TPB_2009}, that ODEs
can be justifiably used to model the evolution of a population. However, the validity of the modelling is necessarily limited in time
(increasing with the population size), and the solution of the differential equation models the most probable state of the system
(therefore, the differential equation would give answers compatible with the maximum likelihood method, but not necessarily compatible
with other estimators).
One of the central issues of the present work is to discuss the possibility of using the diffusive approximation for large, but finite, $N$. However, a major challenge to
anyone interested in using the replicator-diffusion equation to fit experimental data is the value of $\kappa$.
From the derivation of the replicator-diffusion equation, we see that $\kappa$ is directly linked to the variance of the diffusion, while $\|\psi\|_\infty$ is directly linked to natural selection. Hence, their ratio is a dimensionless measure of the relative relevance of genetic drift with respect to natural selection. If we normalise the fitness functions such that $\|\psi\|_\infty=1$, then $\kappa$ becomes a measure of this relative relevance.
In this sense, $\kappa^{-1}$ could be an alternative definition of effective population size (see also~\cite{EtheridgeLNM} for usual definitions).
Only when the population is small, or times are long on the evolutionary scale, would we expect
order-one values of $\kappa$.
We are currently applying a
similar technique to epidemiological models; in this case it is necessary to impose boundary conditions on part of the boundary
(as a homogeneous population of infected individuals is not stationary: infected individuals become, with time, removed or
even susceptible again), while it is impossible to impose boundary conditions on another part of the boundary
(a population of susceptibles remains in this state forever). Early results were already published in~\cite{Chalub_Souza_2011}.
The same issue regarding the imposition of boundary conditions arises if we include mutations in the Moran or Wright-Fisher
models. This is work in progress.
\section*{Acknowledgements}
FACCC was partially supported by CMA/FCT/UNL, financiamento base 2011 ISFL-1-297 and projects PTDC/FIS/101248/2008, PTDC/FIS/70973/2006
from FCT/MCTES/Portugal. FACCC also acknowledges the hospitality of CRM/Barcelona where part of this work was performed and discussions with J. J. Velazquez (Madrid). MOS was partially supported by CNPq grants \#s 309616/2009-3 and 451313/2011-9, and FAPERJ grant \# 110.174/2009. We thank the careful reading and comments of three anonymous referees.
\section{Introduction}
Topological insulator (TI) – superconductor (S) hybrids are potential systems for realizing p-wave superconductivity and hosting Majorana zero-energy states \cite{Read2000,Kitaev2001,Ivanov2001,Fu2008,Nilsson2008,Tanaka2009,Stanescu2010,Potter2011}. The common singlet s-wave pairing from a nearby superconductor is predicted to induce a spinless p-wave superconducting order parameter component in a topological insulator \cite{ Fu2008, Potter2011} because of the spin-momentum locking of the surface states in a topological insulator \cite{Hsieh2009a,Chen2009,Hasan2010}. In a Josephson junction between two s-waves superconductors with a topological insulator barrier (S-TI-S), Majorana Bound States (MBS) can occur with a $\sin(\phi/2)$ current-phase relationship \cite{Fu2008}. Contacts between superconductors and 3D topological insulators were realised on exfoliated flakes and films of Bi$_{2}$Te$_{3}${}, Bi$_{2}$Se$_{3}$\xspace{}, and strained HgTe \cite{Sacepe2011,Veldhorst2012,Yang2012,Williams2012,Oostinga2013,Cho2013,Galletti2014}, and the Josephson behaviour was investigated by measuring Fraunhofer patterns in the presence of an applied magnetic field and Shapiro steps due to microwave radiation \cite{Veldhorst2012,Yang2012,Williams2012,Cho2013,Galletti2014}. Despite the presence of conductivity shunts through bulk TI, the Josephson current was found to be mainly carried by the topological surface states \cite{Veldhorst2012,Cho2013,Galletti2014}.
Peculiarities in the Fraunhofer diffraction patterns have been found for topological Josephson junctions \cite{Williams2012, Kurter2014}, including non-zero minima in the Fraunhofer patterns and periodicities which do not correspond to the junction size. In junctions with a varying width, the characteristic energy $I_{\textrm{c}} R_{\textrm{N}}$\xspace was reported to scale inversely with the junction width \cite{Williams2012}. This observation has been phenomenologically attributed to the width dependence of the Majorana modes contributing to a highly distorted current-phase relationship \cite{Williams2012}. The Majorana modes have also been held responsible for the unexpectedly small flux periodicities in the $I_c(B)$ Fraunhofer pattern of the same junctions \cite{Williams2012}.
However, only one mode out of many channels is an MBS. For all non-perpendicular trajectories a gap appears in the Andreev bound state spectra, giving trivial $2\pi$-periodic bound states \cite{Snelder2013}. For typical device sizes fabricated so far, the number of channels is estimated to be large (a width of the order of a few hundred nm and a Fermi wavelength of the order of $k_F^{-1}=1$ nm already give a few hundred modes). The Majorana signatures are, therefore, expected to be vanishingly small.
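A back-of-the-envelope version of this estimate, using the standard two-dimensional channel count $M\approx k_F W/\pi$ (numbers are indicative only):
\begin{verbatim}
import numpy as np

k_F = 1.0e9                   # 1/m, i.e. k_F^{-1} = 1 nm as quoted above
for W in (100e-9, 300e-9, 1000e-9):
    print("W = %5.0f nm -> M ~ %.0f channels" % (W * 1e9, k_F * W / np.pi))
\end{verbatim}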
To understand the $I_c(B)$ periodicity as well as the scaling of $I_{\textrm{c}} R_{\textrm{N}}$\xspace with width, we have realized S-TI-S topological Josephson junctions with varying width. We also observe a non-trivial scaling of the critical current, normal state resistance and magnetic field modulation periodicity. However, a detailed analysis shows that all scaling effects can be explained by mere geometric effects of trivial modes. The dominance of trivial Andreev modes is supported by the absence of $4\pi$ periodicity signatures in the Shapiro steps under microwave irradiation.
\section{Expected Majorana related modifications of the critical current modulation by magnetic field and microwaves}
Screening external flux from a superconducting junction results in the characteristic Fraunhofer pattern in Josephson junctions due to the DC Josephson effect. The critical current is modulated by the magnetic flux with a periodicity of the superconducting flux quantum, $\Phi_0=h/2e $, threading the junction, due to the order-parameter being continuous around a closed contour. If the current-phase relationship is changed from $\sin(\phi)$ to $\sin(\phi/2)$ in a topologically non-trivial junction the periodicity is expected to become $h/e$.
Additionally, for junctions where MBS are present it has been proposed that the minima in the Fraunhofer pattern are non-zero \cite{Potter2013}. The current at the minima is predicted to be approximately equal to the supercurrent capacity of a single channel, $I_M \approx \Delta/\Phi_0$.
Applying an AC bias on top of the DC bias creates a frequency-to-voltage conversion, the AC Josephson effect. In the voltage state of the junction, at DC voltages equal to $k \Phi_0 f = k h f / 2e $, with $k$ an integer and $f$ the frequency in Hz, there will be current plateaus with zero differential resistance at fixed finite voltages. The presence of these Shapiro steps in a superconducting junction is one of the hallmarks of the Josephson effect. In contrast to a Fraunhofer pattern, it does not depend on the geometry of the junction, but on the current-phase relationship of the junction. A sum of different current-phase relationships \cite{Kwon2003,Kwon2004,Kwon2004a,Fu2009,Ioselevich2011,Pal2014} $I = A_1 \sin{\phi/2} + A_2 \sin{\phi} + A_3 \sin{2\phi} + \ldots$ will result in current plateaus at $V_{1,l} = l h f/e$, $V_{2,m} = m hf/2e$, $V_{3,n} = n hf/4e$, etc. For a pure $\sin(\phi/2)$ relationship one expects steps only at $ k hf / e $. The actual width of the current plateaus and their modulation as a function of applied RF power depend on the ratio between the applied RF frequency and the $I_{\textrm{c}} R_{\textrm{N}}$\xspace{} product of the junctions. This can be obtained numerically by solving the Resistively Shunted Junction (RSJ) model \cite{Tinkham2004}.
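As an illustration, the following minimal Python sketch integrates the RSJ phase equation with a forward-Euler step and averages the voltage at each DC bias point. The drive amplitude, the time step and the weight \texttt{a\_half} of a hypothetical $\sin(\phi/2)$ component are illustrative choices of the sketch; the junction parameters correspond to the \SI{500}{nm} junction of table~\ref{tab:bi2te3junctioncharacteristics}.
\begin{verbatim}
import numpy as np

PHI0 = 2.067833848e-15           # flux quantum h/2e (Wb)

def dc_voltage(i_dc, i_rf, f, Ic, Rn, a_half=0.0, dt=1e-12, n=200000):
    """Time-averaged RSJ voltage at one DC bias point; a_half weights
    a sin(phi/2) term in the current-phase relation (0 = trivial)."""
    phi, v_acc = 0.0, 0.0
    for k in range(n):
        i_bias = i_dc + i_rf * np.sin(2 * np.pi * f * k * dt)
        i_s = Ic * ((1 - a_half) * np.sin(phi) + a_half * np.sin(phi / 2))
        dphi_dt = 2 * np.pi * Rn / PHI0 * (i_bias - i_s)
        phi += dphi_dt * dt                    # forward-Euler step
        v_acc += PHI0 / (2 * np.pi) * dphi_dt  # instantaneous voltage
    return v_acc / n

Ic, Rn, f = 13.5e-6, 0.84, 6e9   # 500 nm junction, 6 GHz drive
bias = np.linspace(0, 3 * Ic, 61)
volts = [dc_voltage(i, 0.5 * Ic, f, Ic, Rn) for i in bias]
# Plateaus occur at multiples of PHI0*f = 12.4 uV; for a pure
# sin(phi/2) relation (a_half = 1) only even multiples survive.
\end{verbatim}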
\section{Sample layout and fabrication}
Devices were designed with junctions of constant electrode separation and varying width. The fabrication is similar to the method used by Veldhorst \textit{et al.}\ \cite{Veldhorst2012}, but has been modified to reduce the number of fabrication steps and increase the number of usable devices available on one chip. Exfoliated flakes are transferred to a Si/SiO$_2$ substrate. E-beam lithography, with \SI{300}{nm} thick PMMA resist, is used to define junctions and contacts in two different write fields, eliminating the photo-lithography step.
In figure~\ref{fig:bi2te3junctiondesign} the contact pads, written with a coarse write field, and the structure on the Bi$_{2}$Te$_{3}${} flake, written with a smaller and more accurate write field, are visible. The smaller write field increases the achievable resolution. An overlap of the structures was used in areas where the dose or write field was changed. These overlaps will cause overexposure and are only possible where the resolution is not critical. The \SI{80}{nm} niobium superconducting film and a \SI{2.5}{nm} capping layer of palladium are sputter deposited. The flake is Ar-ion etched at \SI{50}{eV} for \SI{1}{min} prior to deposition, resulting in transparent contacts.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{design3.png}
\caption{ \textsc{Scanning electron microscope images of a typical device.} The white bar is \SI{200}{\micro m}, \SI{5}{\micro m} and \SI{100}{nm} wide in the three consecutive images respectively, and the white rectangles in the two left images mark the location of the image to the right. The first image shows the Nb contact pads (dark grey) written with a large write field. The middle image shows the flake (trapezoid with bright edges and a step edge diagonal across) with leads (dark grey) leading to the junctions. The junctions are visible as faint white breaks in the leads. Along the top-row, left to right, are a 100, 250 and \SI{500}{nm} junction. On the bottom-row are a 750, 1000 and \SI{2000}{nm} junction. The \SI{5}{\micro m} wide structure on the right hand side of the flake is overexposed. The rightmost image is a close-up of a \SI{250}{nm} by \SI{150}{nm} designed junction.}
\label{fig:bi2te3junctiondesign}
\end{figure}
The edge of the flake with the substrate provides a step edge for the Nb\xspace{} from the substrate to the flake, and it is advisable to keep the thickness of the flake comparable to or less than the thickness of the Nb\xspace{} layer. The thickness of the sputter deposited Nb\xspace{} is limited by the thickness of the e-beam resist layer. For flakes of usable lateral dimensions, the thickness is generally of the order of \SI{100}{nm}. The contacts for the voltage and current leads are split on the flake: if a weak link occurs on the lead transition from flake to substrate this will not influence the measured current-voltage ($IV$) characteristic of the junction.
Structures with a disadvantageous aspect ratio (junction width to electrode separation), such as wider junctions, are prone to overexposure. This increases the risk of the junction ends not being separated. For wider junctions a slightly larger separation has been used. Overexposure will decrease the actual separation. Actual dimensions are verified after fabrication as in figure~\ref{fig:bi2te3junctiondesign}.
The junctions are characterised in a pumped He\xspace cryostat with mu-metal screening and a superconducting Nb\xspace can surrounding the sample. The current and voltage leads are filtered with a two stage RC filter. A loop antenna for exposure to microwave radiation is pressed to the backside of the printed circuit board (PCB) holding the device. A coil perpendicular to the device surface is used to apply a perpendicular magnetic field. For different values of the applied microwave power or magnetic field current-voltage traces are recorded.
\section{Measured scaling of transport parameters}
\subsection{Junction overview}
The devices are characterised by measuring their $IV$ curves at \SI{1.6}{\kelvin} under different magnetic fields and microwave powers. The microwave frequency of \SI{6}{\giga \hertz} is chosen for maximum coupling as determined by the maximum suppression of the supercurrent at the lowest power. The main measured junction parameters are given in table \ref{tab:bi2te3junctioncharacteristics}.
\begin{table}
\begin{indented}
\item[]\begin{tabular}{@{}lll}
\br
Junction width (\si{\nano \meter}) & Critical current (\si{\micro \ampere}) & Normal state resistance (\si{\ohm})\\
\mr
100 & 0.2 & 1.5 \\
250 & 3.5 & 1.14 \\
500 & 13.5 & 0.84 \\
1000 & 16 & 0.64 \\
\br
\end{tabular}
\end{indented}
\caption{\label{tab:bi2te3junctioncharacteristics}\textsc{Junction characteristics.} The critical current $I_C$ and normal state resistance $R_N$ are given at \SI{1.6}{\kelvin}. The measured junction separation is $\sim$\SI{140}{nm}. The \SI{750}{\nano \meter} and \SI{5000}{\nano \meter} wide junctions are shorted due to e-beam overexposure, and the \SI{2000}{nm} wide junction had a non-ohmic contact, caused by a break at the edge of the Bi$_{2}$Te$_{3}${} flake.}
\end{table}
Both the magnetic and microwave field response has been studied for all junctions.
Results for the \num{250}, \num{500} and \SI{1000}{nm} wide junctions are shown in figure \ref{fig:bi2te3MagneticAndMicrowaveField}.
The supercurrent for the \SI{100}{nm} wide junction was suppressed in a magnetic and microwave field without further modulation.
In the response to the microwave field a sharp feature is visible starting at \SI{200}{\micro A} and \SI{-10}{dBm} for the \SI{1000}{nm} junction.
This is likely the result of an unidentified weak link in one of the leads.
The fainter structures in the \SI{250}{nm} junction starting at \num{43} and \SI{60}{\micro A} and \SI{-40}{dBm} are reminiscent of an echo structure described by Yang \textit{et al.}\cite{Yang2012} for Pb-Bi$_{2}$Se$_{3}$\xspace{}-Pb Josephson junctions.
Measuring the microwave response at the minima of the Fraunhofer pattern \cite{Potter2013} yielded no Shapiro features.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{overview.png}
\caption{ \textsc{Magnetic field and microwave power dependence.} The top row figures show the dynamic resistance of the \SI{250}{nm} wide junction, the middle row figures correspond to the \SI{500}{nm} wide junction and the bottom row figure to the \SI{1000}{nm} wide junction. The left column shows the reaction to an applied magnetic field, the right column the reaction to microwave power at \SI{6}{GHz}. The horizontal line at $\sim$\SI{19}{mT} is an artifact of the magnet current source. $IV$ curves for these junctions are shown in figure \ref{fig:bi2te3Flux}.}
\label{fig:bi2te3MagneticAndMicrowaveField}
\end{figure}
\subsection{Scaling of $I_\textrm{C}$ and $R_\textrm{N}$}
In general, the normal state resistance, $R_\textrm{N}$, of a lateral SNS junction \cite{Likharev1979,Golubov2004} is expected to scale inversely with junction width, whereas $I_c$ is expected to be proportional to the width, such that the $I_{\textrm{c}} R_{\textrm{N}}$\xspace product is constant \cite{Tinkham2004}. Josephson junctions on topological insulators are similar to SNS junctions with an induced proximity effect by superconducting leads into a TI surface state. For junctions on Bi$_{2}$Te$_{3}${} the transport was found to be in the clean limit, with a finite barrier at the interface between the superconductor and the surface states \cite{Veldhorst2012}. The supercurrent for ballistic SNS junctions with arbitrary length and barrier transparency is given by the expression of Ref.~\cite{Galaktionov2002}, which was found to fit the data of Veldhorst \textit{et al.} well \cite{Veldhorst2012}. The normal state resistance in Bi$_{2}$Te$_{3}${} is complicated by the diffusive bulk providing an intrinsic shunt. The leads on the Bi$_{2}$Te$_{3}${} flake leading up to the junction also contribute towards a normal state conductivity shunt without carrying supercurrent. This results in current paths not only directly between the two electrodes but also through and across the whole area of the flake to the left and right of the electrodes.
The scaling of $I_c$ and $R_N$ with junction width is shown in figure~\ref{fig:bi2te3IcandRnandIcRn}. In the junctions with an aspect ratio (width of the junction divided by electrode separation) greater than 5, the $I_{\textrm{c}} R_{\textrm{N}}$\xspace product is approximately \SI{11}{\micro V}, similar to junctions where the length was varied instead of the width \cite{Veldhorst2012,Veldhorst2012a}. Below this, the $I_{\textrm{c}} R_{\textrm{N}}$\xspace product falls sharply. To verify whether this can be due to Majorana modes we estimate the number of conducting channels in these small junctions. The number of channels in a junction is related to the width of the junction: $M = \frac{k_F \times W}{\pi}$~\footnote{The wave vector for linear dispersion is given by $k_F = \frac{E_F}{\hbar v_F} \approx$ \SI{2e9}{m^{-1}}. The Fermi energy is taken as \SI{150}{\milli eV} and the Fermi velocity is in the order of \SI{1e5}{m/s}.}. For a \SI{100}{\nano m} junction this means that there are more than 60 channels active in the junction, and a MBS will not dominate transport properties.
Rather, due to the open edges of the junctions, the normal state resistance does not simply scale inversely with the junction width, but is offset because the whole flake provides a current shunt. This is similar to an infinite resistor network \cite{Cserti2000} providing a parallel resistance to the resistance due to the separation of the two leads. Taking this into account, the resistance between the two leads takes the form $R = (\rho_{\textrm{W}} R_{\textrm{parallel}}) / ( WR_{\textrm{parallel}} + \rho_{\textrm{W}})$, where $\rho_{\textrm{W}}/W$ gives the junction resistance without a current shunt and $R_{\textrm{parallel}}$ is the resistance due to the current shunt through the flake. This equation does not disentangle the surface and bulk contributions but treats them as scaling the same. In the zero-width limit, the resistance is cut off and does not diverge to infinity. The resulting $I_{\textrm{c}} R_{\textrm{N}}$\xspace product can then be well explained by the scaling of $R_N$ (including the shunt) and the usual scaling of $I_c$ with width ($I_c$ being directly proportional to the number of channels, given by the width of the junctions with respect to the Fermi wavelength). Note that the expected scaling of $I_c$ contrasts with previous observations of inverse scaling \cite{Williams2012}.
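As a sketch of how such parameters can be extracted, the measured normal state resistances of table~\ref{tab:bi2te3junctioncharacteristics} can be fitted with the parallel-shunt expression above; the initial guess is arbitrary, and the fitted values should lie close to the $\rho_{\textrm{W}}$ and $R_{\textrm{parallel}}$ quoted in figure~\ref{fig:bi2te3IcandRnandIcRn}.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

W  = np.array([100, 250, 500, 1000]) * 1e-9  # junction widths (m)
Rn = np.array([1.5, 1.14, 0.84, 0.64])       # measured R_N (Ohm)

def shunted(W, rho_w, R_par):
    # junction resistance rho_w/W in parallel with the flake shunt R_par
    return rho_w * R_par / (W * R_par + rho_w)

(rho_w, R_par), _ = curve_fit(shunted, W, Rn, p0=(1e-6, 2.0))
print(rho_w, R_par)   # expected: ~7.9e-7 Ohm m and ~1.9 Ohm
\end{verbatim}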
\begin{figure}
\centering
\includegraphics[width=\textwidth]{IcandRnandIcRn2.png}
\caption{ \textsc{Scaling of the critical current, normal resistance and $I_{\textrm{c}} R_{\textrm{N}}$\xspace product.} In the left panel the measured critical currents (black squares) and normal state resistances (red circles) are plotted. The critical current is assumed to be linear (black dashes). The resistance is modelled as a width resistivity $\rho_w=$ \SI{7.9e-7}{\ohm m} in parallel with a constant shunt resistance $R_{\textrm{parallel}}=$ \SI{1.9}{\ohm}. The $I_{\textrm{c}} R_{\textrm{N}}$\xspace product is plotted in the right panel, with the dashed line as the product of the fits in the left panel. The fit approaches the $I_{\textrm{c}} R_{\textrm{N}}$\xspace product of \num{10} to \SI{15}{\micro V} found in junctions of varying length \cite{Veldhorst2012}.}
\label{fig:bi2te3IcandRnandIcRn}
\end{figure}
\subsection{Periodicity in $\Phi_0$}
The critical current of a Josephson junction oscillates in an applied magnetic field due to the phase difference induced across the junction. The magnetic flux in the junction area is the product of the area of weak superconductivity between the two electrodes and the flux density in this area. The area of the junction is given by $W \times \left( l + 2\lambda_L \right)$, where $W$, $l$ and $\lambda_L$ are the width, length and London penetration depth respectively. The investigated junctions are smaller than or comparable to the Josephson penetration depth, $\lambda_J = \sqrt{ \Phi_0/( 2 \pi \mu_0 d' J_C) }$, where $d'$ is the largest dimension (corrected by the London penetration depth) of the junction and $J_C$ has been estimated using the bulk mean free path of Bi$_{2}$Te$_{3}$ crystals, \SI{22}{nm} \cite{Veldhorst2012}, which allows us to ignore the field produced by the Josephson current. For the \SI{80}{nm} thick Nb film used we use the bulk London penetration depth, \SI{39}{nm} \cite{Gubin2005, Maxfield1965}.
The superconducting leads may be regarded as perfect diamagnets. This leads to flux lines being diverted around the superconducting structure. This causes flux focussing in the junctions, as more flux lines pass through the junctions due to their expulsion from the superconducting bulk. We estimate the amount of flux focussing by considering the shortest distance a flux line has to be diverted to not pass through the superconducting lead. In a long lead, half of the flux lines are diverted to one side and half to the other side. At the end of the lead the flux lines are diverted into the junction area. The flux diverted is $(W/2-\lambda_L)^2 \times B$; see also the inset of figure~\ref{fig:bi2te3Flux}. This occurs at both electrodes, and is effectively the same as increasing the junction area by $2\times(W/2-\lambda_L)^2$. Without flux focussing, the expected magnetic field periodicity is given by the dashed line in figure \ref{fig:bi2te3Flux}(a). Correcting for flux focussing and taking $\lambda_L = 39$\,nm results in the solid line, closely describing the measured periods.
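The corrected periodicity follows directly from the geometry. A minimal sketch, using the electrode separation of $\sim$\SI{140}{nm} and $\lambda_L =$ \SI{39}{nm} quoted above, compares the bare and the flux-focussing-corrected field periods:
\begin{verbatim}
import numpy as np

PHI0, l, lam = 2.067833848e-15, 140e-9, 39e-9   # Wb, m, m

def period_mT(W, focussing=True):
    """Expected Fraunhofer period (mT) of a junction of width W (m)."""
    area = W * (l + 2 * lam)              # bare junction area
    if focussing:
        area += 2 * (W / 2 - lam) ** 2    # flux diverted off both leads
    return PHI0 / area * 1e3

for W in (250e-9, 500e-9, 1000e-9):
    print(W, period_mT(W, False), period_mT(W))
\end{verbatim}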
The colour graphs in figure~\ref{fig:bi2te3MagneticAndMicrowaveField} show the modulation of the critical current with microwave power. In figure~\ref{fig:bi2te3Flux}(b) $IV$ traces for different applied powers are plotted. The steps all occur at multiples of $\Phi_0 f=$\,\SI{12.4}{\micro V}. A $4 \pi$ periodic Josephson effect will result in steps only at even multiples of $\Phi_0 f$. Shapiro steps are not geometry dependent: in combination with the previously introduced geometry corrected magnetic field periodicity this illustrates the $2 \pi$ periodic Josephson effect in these junctions.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{flux5.png}
\caption{ \textsc{Behaviour of Fraunhofer oscillation frequency and Shapiro steps.} In the left panel the modulation period of the Josephson current as a function of the external field is plotted as a function of the inverse junction width. The dashed line is the expected period for a rectangular junction. The solid line takes into account flux focussing, as presented in the inset. The flux incident on the grey areas A and B is diverted to the sides of the junction lead, while the red area C is added to the effective junction area between the two leads. The dimensions of the superconductor have been reduced by the London penetration depth, since flux can penetrate this area. The right panel shows $IV$ characteristics under \SI{6}{GHz} microwave irradiation. The line graphs are at \num{-40}, \num{-30}, \num{-20}, \num{-10} and \SI{0}{dBm} powers for the 250, 500 and \SI{1000}{nm} wide junctions, and are offset in current for clarity. All current plateaus are at multiples of $\Phi_0 f =$\,\SI{12.4}{\micro V}.}
\label{fig:bi2te3Flux}
\end{figure}
\section{Conclusion}
We investigated superconducting junctions, coupling Nb\xspace leads on the surface of a Bi$_{2}$Te$_{3}${} flake, by varying the junction width. The critical current and normal state resistance decrease and increase, respectively, with reduced junction width. However, the $I_{\textrm{c}} R_{\textrm{N}}$\xspace product is found to be geometry dependent, as the normal state resistance does not diverge for zero width. The decreasing $I_{\textrm{c}} R_{\textrm{N}}$\xspace product with reduced junction width is understood when taking into account the resistance due to the entire flake surface. The $I_{\textrm{c}} R_{\textrm{N}}$\xspace product becomes of the order of \num{10} to \SI{15}{\micro V} for wide junctions, similar to previous junctions \cite{Veldhorst2012a}. The junctions are found to be periodic with $\Phi_0$ in a magnetic field when flux focussing is taken into account. Microwave irradiation results in steps at voltages at $k\Phi_0 f$, which is to be expected for junctions with tens to hundreds of conducting channels contributing to the coupling between the superconducting leads.
Using topological insulators with reduced bulk conductivity should result in increased $I_{\textrm{c}} R_{\textrm{N}}$\xspace products and allow for electrostatic control of the Fermi energy. With similar junction geometries this will allow for reduction and control of the number of superconducting channels. This step will allow the behaviour of a possible MBS to be uncovered and separated from geometric effects which affect all conducting channels in S-TI-S junctions.
\ack
This work is supported by the Netherlands Organization for Scientific Research (NWO), by the Dutch Foundation for Fundamental Research on Matter (FOM) and by the European Research Council (ERC).
\section*{References}
\bibliographystyle{iopart-num-nourl}
\section{Introduction}
\label{sec:introduction}
The statistical properties of the cosmic microwave background (CMB)
show remarkable consistency with the paradigm of early universe inflation.
But a number of troubling anomalies persist, including the large cold spot,
the quadrupole--octupole alignment, and a hemispherical amplitude asymmetry.
If these anomalies are primordial it is not yet clear whether they can be compatible
with the simplest inflationary models
which typically predict statistical independence of each multipole
(see Ref.~\cite{Schwarz:2015cma} and references therein).
In this Letter we report results for a special set of inflationary scenarios which
can accommodate the hemispherical asymmetry.
Working with the Planck 2013 temperature data, Aiola et al. demonstrated that
the asymmetry
could be approximately fit by a position-dependent power-spectrum
at the last-scattering surface of the form~\cite{Aiola:2015rqa}
\begin{equation}
\mathcal{P}^{\text{obs}}(k)
\approx
\frac{k^3 P(k)}{2\pi^2}
\Big(
1 + 2 A(k) \hat{\vect{p}} \cdot \hat{\vect{n}} + \cdots
\Big) ,
\label{eq:Pk-modulation}
\end{equation}
where $\hat{\vect{p}}$ represents the direction of maximal asymmetry,
$\hat{\vect{n}}$ is the line-of-sight from Earth,
and $A(k)$ is an amplitude which Aiola et al. found to scale roughly like
$k^{-0.5}$. Averaged
over $\ell \sim 2\,\text{--}\,64$
it is of order $0.07$.
In this paper our primary objective is to explain how an inflationary
model can produce an asymmetry which replicates this scale dependence.
The effect is seen in multiple frequency channels
and in the older WMAP data, which makes it less likely
to be attributable to an instrumental effect or foreground.
Future improvements in observation are likely to be driven by
polarization data, which provide an independent probe of the largest-scale
modes~\cite{Zibin:2015ccn}.
\para{Inflationary explanations}
Erickcek, Carroll and Kamionkowski proposed that~\eqref{eq:Pk-modulation}
could be produced during an inflationary epoch
if the two-point function at wavenumber
$k$ is modulated by perturbations of much
larger wavelength~\cite{Erickcek:2008sm}.
This entails the presence of a bispectrum with nonempty squeezed limit,
and if the amplitude is sufficiently large it would
be the first evidence for multiple active light fields in the inflationary era.
This is an exciting possibility but there is significant concern that
a bispectrum of this type may already be ruled out by observation.
Current experiments do not measure the bispectrum on individual
configurations,
but rather weighted averages over related groups of configurations---%
and at present are most sensitive to modestly squeezed examples.
Averaged over these configurations,
Planck observations
require
the non-Gaussian component to have amplitude
$|f_{\mathrm{NL}}| / 10^5 \lesssim 0.01\%$~\cite{Ade:2013ydc,*Ade:2015ava}
compared to the leading Gaussian part.
Meanwhile, ignoring all scale dependence,
Refs.~\cite{Kanno:2013ohv,Lyth:2014mga,Namjoo:2013fka,Kobayashi:2015qma}
showed that an inflationary origin would require
\begin{equation}
\frac{|a_{20}|}{6.9\times 10^{-6}} \frac{|f_{\mathrm{NL}}|}{10} \simeq 6
\left(
\frac{A}{0.07}
\right)^2
\beta
\label{eq:a20-A-fNL-basic}
\end{equation}
where $a_{20}$ is the quadrupole of the CMB temperature anisotropy,
measured to be approximately
$|a_{20}| \approx 6.9 \times 10^{-6}$~\cite{Efstathiou:2003tv},
and $\beta$ is a model-dependent number
which would typically be rather larger than unity.
Therefore Eq.~\eqref{eq:a20-A-fNL-basic}
suggests that an inflationary scenario may require
$|f_{\mathrm{NL}}| \gtrsim 60$,
in contradiction to measurement.
If so, we would have to abandon the possibility of an inflationary
origin, at least if produced by the Erickcek--Carroll--Kamionkowski
mechanism.
To evade this Eq.~\eqref{eq:a20-A-fNL-basic}
could be weakened by tuning our position
on the long-wavelength background
to reduce $\beta$,
but clearly we should not allow ourselves to entertain fine-tunings
which are less likely than the anomaly they seek to explain.
\para{Averaged constraints}
The requirement that $A(k)$
varies with scale
gives an alternative way out which has yet to be
studied in detail.
It could happen that the
bispectrum amplitude is large on long wavelengths
but runs to small values at shorter wavelengths in such a way
that the wavelength-averaged values measured by CMB experiments
remain acceptable.
Eq.~\eqref{eq:a20-A-fNL-basic}
might then apply for a small number of wavenumber configurations
but would have no simple relation to observable quantities.
In this Letter
we provide, for the first time, an analysis of the CMB temperature bispectrum
generated by a scale- and shape-dependent primordial bispectrum
which is compatible with the modulation $A(k)$.
We do this by constructing an explicit model which
can be contrived to match all current observations,
and also serves as a useful example
showing the complications which are encountered.
Despite its contrivance, we expect the bispectrum produced by
this model to be a good proxy for the bispectrum generated in a much
larger class of successful scenarios producing scale-dependence
through a large, negative $\eta$-parameter.
If the model can be embedded within a viable early universe scenario,
we show that it can explain the asymmetry
without introducing
tension with the $f_{\mathrm{NL}}$ or low-$\ell$ amplitude constraints
(the `Grischuk--Zel'dovich' effect).
In this Letter we focus on the simplest possibility that the
non-Gaussian fluctuations of a single field
generate
the asymmetry,
although we allow a second field to generate
the Gaussian part of the curvature perturbation.
Generalizations and further details are presented
in a longer companion paper~\cite{Byrnes:2015dub}.
\section{Generating the asymmetry}
We denote the field with scale-dependent fluctuations
by $\sigma$, and take it to substantially dominate the
bispectrum for the observable curvature perturbation $\zeta$.
The $\zeta$ two-point function
$\langle \zeta(\vect{k}_1) \zeta(\vect{k}_2) \rangle
= (2\pi)^3 \delta(\vect{k}_1 + \vect{k}_2) P(k)$
can depend on $\sigma$,
or alternatively on any combination of $\sigma$ and other Gaussian fields.
The question to be resolved is how $P(k)$
responds to a long-wavelength background of $\sigma$ modes
which we write $\delta\sigma(\vect{x})$.
\para{Response function}
In Ref.~\cite{Byrnes:2015dub} we show that this
response can be computed
using the operator product expansion (`OPE'),
and expressed in terms of the ensemble-averaged
two- and three-point functions of the inflationary model.
We focus on models in which the primary effect
is due to the amplitude of the long-wavelength background
rather than its gradients. Since the perturbation is
small it is possible to write
\begin{equation}
P(k,\vect{x}) = P(k)\Big(
1
+ \delta\sigma(\vect{x}) \rho_\sigma(k)
+ \cdots
\Big) .
\label{eq:twopf-response}
\end{equation}
We call $\rho_\sigma(k)$ the `response function'.
It can be regarded as the derivative $\d\ln P(k) / \d \sigma$.
The OPE gives~\cite{Byrnes:2015dub}
\begin{equation}
\rho_\sigma(k)
\simeq
\frac{1}{P(k)}
[\Sigma^{-1}(k_L)]_{\sigma\lambda}
B^\lambda(k, k, k_L)
\quad
\text{if $k \gg k_L$}
,
\label{eq:response-function}
\end{equation}
where a sum over $\lambda$ is implied,
and
$\Sigma^{\alpha\beta}$ and $B^\alpha$
are spectral functions for certain mixed two- and
three-point correlators of $\zeta$ with the light
fields of the inflationary model (and their momenta),
which we collectively denote $\delta\phi^\alpha$,%
\footnote{In the restricted setup we are describing, where only
$\sigma$ has a non-negligible bispectrum, the sum over
$\lambda$ in Eq.~\eqref{eq:response-function} would include the
field $\sigma$ and its momentum.}
\begin{subequations}
\begin{align}
\langle \delta\phi^\alpha(\vect{k}_1) \delta\phi^\beta(\vect{k}_2) \rangle
& = (2\pi)^3 \delta(\vect{k}_1 + \vect{k}_2) \Sigma^{\alpha\beta}
\\
\langle \delta\phi^\alpha(\vect{k}_1) \zeta(\vect{k}_2)
\zeta(\vect{k}_3) \rangle
& = (2\pi)^4 \delta(\vect{k}_1 + \vect{k}_2 + \vect{k}_3) B^\alpha .
\end{align}
\end{subequations}
Eq.~\eqref{eq:response-function}
is one of our principal new results. It enables us
to extend the analysis of inflationary models
beyond those already considered in the literature
to cases with nontrivial, scale-dependent correlation
functions. The conditions under which it
applies are discussed in more detail in Ref.~\cite{Byrnes:2015dub}.
In the special case of a
slow-roll model in which a single field generates all perturbations, it can be shown that the right-hand
side of~\eqref{eq:response-function}
is related to the reduced bispectrum,
\begin{equation}
\rho_\sigma(k) = \frac{12}{5} f_{\mathrm{NL}}(k,k,k_L), \quad
\text{$k \gg k_L$} ,
\end{equation}
where $f_{\mathrm{NL}}(k_1, k_2, k_3)$ is defined by
\begin{equation}
\frac{6}{5} f_{\mathrm{NL}}(k_1, k_2, k_3)
\equiv
\frac{B(k_1, k_2, k_3)}{P(k_1) P(k_2) + \text{2 cyclic perms}} .
\label{eq:single-field-response}
\end{equation}
Notice that,
for a generic $B(k_1, k_2, k_3)$,
the reduced bispectrum defined this way
has no simple relation
to any of the amplitudes
$\fNL^\text{local}$, $\fNL^\text{equi}$, etc., measured by experiment.
Eq.~\eqref{eq:single-field-response}
reproduces earlier results
given in the literature~\cite{Lyth:2013vha,Kanno:2013ohv,Lyth:2014mga,Kobayashi:2015qma}
but does not apply for the more realistic models
considered in this Letter.
\para{Long wavelength background}
To model the long-wavelength background we take
\begin{equation}
\delta \sigma(\vect{x}) \approx E \mathcal{P}_\sigma^{1/2}(k_{\mathrm{L}})
\cos( \vect{k}_{\mathrm{L}} \cdot \vect{x} + \vartheta )
\label{eq:modulating-mode}
\end{equation}
where $E$ labels the `exceptionality' of the amplitude, with $E=1$
being typical and $E \gg 1$ being substantially larger than typical.
We take the wavenumber $\vect{k}_{\mathrm{L}}$ to be fixed.
The phase $\vartheta$ will vary between realizations,
and the Earth is located at $\vect{x}=0$.
The last-scattering surface is at comoving radius $x_{\text{ls}} \approx 14{,}000 \,\text{Mpc}$.
Evaluating~\eqref{eq:twopf-response}
and~\eqref{eq:modulating-mode} on this surface
at physical location $\vect{x} = x_{\text{ls}} \hat{\vect{n}}$,
and assuming
$\alpha \equiv x_{\text{ls}} k_{\mathrm{L}} / 2\pi < 1$ so that
the wavelength associated with $k_{\mathrm{L}}$ is somewhat larger than
$x_{\text{ls}}$, we obtain
\begin{equation}
P(k,\vect{x}) = P(k)\Big(
1
- C(k)
+ 2A(k) \frac{\vect{x} \cdot \hat{\vect{k}}_{\mathrm{L}}}{x_{\text{ls}}}
+ \cdots
\Big) .
\label{eq:twopf-modulation}
\end{equation}
The quantities $A(k)$ and $C(k)$ are determined
in terms of the response $\rho_\sigma$
and long-wavelength background by
\begin{subequations}
\begin{align}
\label{eq:A-def}
A(k) & = \pi \alpha E \mathcal{P}_\sigma^{1/2}(k_{\mathrm{L}}) \rho_\sigma(k) \sin \vartheta \\
\label{eq:C-def}
C(k) & = - A(k) \frac{\cos \vartheta}{\pi \alpha \sin \vartheta} .
\end{align}
\end{subequations}
Both $A$ and $C$ share the same scale-dependence, so it is possible
that $C(k)$ could be used to explain the lack of power on large scales~\cite{Lyth:2013vha,Lyth:2014mga}.
If so, the model could simultaneously explain \emph{two} anomalies---%
although this would entail a stringent constraint on $\alpha$ in order that $C(k)$
does not depress the power spectrum too strongly at small $\ell$.
The relative amplitude of $A(k)$ and $C(k)$ depends on the unknown phase
$\vartheta$ and our assumption of the form~\eqref{eq:modulating-mode},
but the observation that they scale the same way with $k$
constitutes a new and firm prediction for all models which explain the power asymmetry
by modulation from a single super-horizon mode.
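To make Eqs.~\eqref{eq:A-def} and~\eqref{eq:C-def} concrete, the following Python sketch evaluates $A(k)$ and $C(k)$ for an assumed power-law response $\rho_\sigma(k) \propto k^{-1/2}$, normalized so that $A = 0.07$ at an arbitrary pivot scale; the phase $\vartheta$, the pivot, and the power-law form itself are assumptions of the illustration, chosen only to reproduce the observed amplitude and scaling.
\begin{verbatim}
import numpy as np

alpha, E, theta = 0.01, 300.0, np.pi / 3   # illustrative parameters
k_pivot, A_pivot = 0.05, 0.07              # pivot (1/Mpc), amplitude

def A_of_k(k):
    # assumed response rho_sigma(k) ~ k^{-1/2}; the prefactor
    # pi*alpha*E*P_sigma^{1/2}*sin(theta) is absorbed into A_pivot
    return A_pivot * (k / k_pivot) ** -0.5

k = np.logspace(-4, -1, 50)
A = A_of_k(k)
C = -A * np.cos(theta) / (np.pi * alpha * np.sin(theta))
# A and C share the same k-dependence; for small alpha the uniform
# shift C is strongly enhanced relative to A, illustrating the
# stringent constraint on alpha discussed above.
\end{verbatim}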
\section{Building a successful model}
\para{Single-source scenarios}
In the case where one field dominates the two- and three-point
functions of $\zeta$,
the bispectrum is equal in squeezed
and equilateral configurations~\cite{Dias:2013rla,Kenton:2015lxa}.
Therefore
\begin{equation}
\rho_\sigma=\frac{12}{5}f_{\mathrm{NL}}(k,k,k_L)=\frac{12}{5}f_{\mathrm{NL}}(k,k,k) ,
\end{equation}
and the asymmetry scales in the same way as the equilateral configuration
$f_{\mathrm{NL}}(k,k,k)$.
If the scaling is not too large it can be computed using~\cite{Byrnes:2010ft}
\begin{equation}
\frac{d \ln |f_{\mathrm{NL}}|}{d\ln k} = \frac{5}{6f_{\mathrm{NL}}} \sqrt{\frac{r}{8}}\frac{M_{\mathrm{P}}^3 V''' }{3H^2} ,
\label{eq:nfnl-ss}\end{equation}
where $r \lesssim 0.1$ is the tensor-to-scalar ratio.
To achieve strong scaling we require $M_{\mathrm{P}}^3 V'''/(3H^2)\gg1$.
But within a few e-foldings this
will typically generate an unacceptably large
second slow-roll parameter $\eta_\sigma$, defined by
\begin{equation}
\eta_\sigma= \frac{M_{\mathrm{P}}^2 V''}{3H^2} .
\label{eq:eta-isomode}
\end{equation}
Therefore it will
spoil the observed near scale-invariance of the power spectrum.
As a specific example, a self-interacting curvaton model
was studied in Ref.~\cite{Byrnes:2015asa}.
This gave rise to many difficulties,
including logarithmic running of $f_{\mathrm{NL}}(k,k,k)$ with $k$---%
which is not an acceptable fit to the scale dependence of $A(k)$~\cite{Aiola:2015rqa}.
Even worse, because~\eqref{eq:nfnl-ss}
is large only when $f_{\mathrm{NL}}$ is suppressed below its natural value,
both the trispectrum amplitude $g_{\mathrm{NL}}$ and the
quadrupolar modulation of the power spectrum
were unacceptable.
In view of these difficulties we will not pursue single-source models further.
\para{Multiple-source scenarios}
In multiple-source scenarios there is more flexibility.
If different fields contribute to the power spectrum and bispectrum
it need not happen that a large $\eta_\sigma$
necessarily spoils scale-invariance.
In these scenarios
$\rho_\sigma$ no longer scales
like the reduced bispectrum, but rather its square-root
$f_{\mathrm{NL}}(k,k,k)^{1/2}$.
Therefore
\begin{equation}
\begin{split}
\frac{\d\ln A}{\d\ln k}
& \approx
\frac{1}{2} \frac{\d\ln |f_{\mathrm{NL}}(k,k,k)|}{\d\ln k}
\\
& \approx
\frac{\d\ln(\mathcal{P}_\sigma / \mathcal{P}) }{\d\ln k}
\approx 2 \eta_\sigma - (n_s-1)
\label{eq:scalings}
\end{split}
\end{equation}
where $\mathcal{P}$ is the dimensionless power spectrum,
$n_s-1\simeq-0.03$ is the observed scalar spectral index
and $\eta_\sigma$ was defined in Eq.~\eqref{eq:eta-isomode}.
If we can achieve a constant $\eta_\sigma \approx -0.25$ while
observable scales are leaving the horizon then it is possible
to produce an acceptable power-law for $A(k)$:
with $n_s - 1 \approx -0.03$, Eq.~\eqref{eq:scalings} gives
$\d\ln A / \d\ln k \approx -0.47$, close to the observed
$k^{-0.5}$ scaling.
For further details of these scaling estimates for $A(k)$
see Kenton et al.~\cite{Kenton:2015jga} or Ref.~\cite{Byrnes:2015dub}.
A simple potential with large constant $\eta_\sigma$ is
\begin{equation}
W(\phi,\sigma)
=
V(\phi)
\left(
1-\frac{1}{2} \frac{m_\sigma^2 \sigma^2}{M_{\mathrm{P}}^4}
\right) .
\end{equation}
The inflaton $\phi$ is taken to dominate the energy density and therefore
drives the inflationary phase.
Initially $\sigma$ lies near the hilltop at $\sigma=0$,
so its kinetic energy is subdominant and
$\epsilon \approx M_{\mathrm{P}}^2 V_\phi^2 / V^2$. (Here
$\epsilon = -\dot{H}/H^2$ is the conventional slow-roll parameter.)
As inflation proceeds $\sigma$ will roll down the hill
like $\sigma(N) = \sigma_\star \e{-\eta_\sigma N}$,
where `$\star$' denotes evaluation at the initial time and
$N$ measures the number of subsequent e-folds.
To keep the $\sigma$ energy density subdominant we must
prevent it rolling to large field values,
which implies that $\sigma_\star$ must be chosen
to be very close to the hilltop.
But the initial condition must also lie outside the diffusion-dominated
regime, meaning the classical rolling should be substantially larger
than quantum fluctuations in $\sigma$.
This requires
$|\d\sigma/\d N| \gg H_\star / 2\pi$.
In combination with the
requirement that $\sigma$ remain subdominant in the observed power spectrum,
we find that $\sigma_\star$ should be chosen so that
$|\sigma_\star| \gtrsim \sqrt{\epsilon_\star \mathcal{P}}M_{\mathrm{P}} / |\eta_\sigma |$.
For typical values of
$\epsilon = 10^{-2}$ and $\eta_\sigma =-0.25$,
the hilltop rolling amplifies $\sigma$ by a factor
$\e{15} \approx 3 \times 10^6$ over 60 e-folds,
so this requires $|\sigma(60)| \gtrsim 100 M_{\mathrm{P}}$,
which is much too large.
The problem can be ameliorated by reducing $\epsilon_\star$,
but then $\sigma$ contributes significantly to
$\epsilon$ during the inflationary period.
This reduces the bispectrum amplitude to a tiny value,
or causes $\sigma$ to contaminate the power spectrum and
spoil its scale invariance~\cite{Byrnes:2015dub}.
\para{A working model}
To avoid these problems, consider a potential in which the
effective mass of the $\sigma$ field makes a
rapid transition.
An example is
\begin{equation}
W=W_0\left(1+\frac12\eta_\phi\frac{\phi^2}{M_{\mathrm{P}}^2}\right)\left(1+\frac12\eta_\sigma(N)\frac{\sigma^2}{M_{\mathrm{P}}^2}\right) ,
\label{eq:tanh-model}
\end{equation}
where $\eta_\sigma(N)$ is chosen
to be $-0.25$ while observable scales exit the horizon,
later running rapidly to settle near $-0.08$.
(For a concrete realization see Ref.~\cite{Byrnes:2015dub}.)
We take the transition to occur
roughly 16 e-folds after the largest observable
scales exited the horizon.
The field $\phi$ will dominate the Gaussian part of $\zeta$
and its mass should be chosen to match the observed spectral index.
Although simple and illustrative, this model is not trivial to embed in
a fully realistic early universe scenario.
The required initial value of $\sigma$ is only
a little outside the quantum diffusion regime
which may lead to unwanted observable consequences.
Also, not all isocurvature modes decay by the end of the inflationary epoch
so~\eqref{eq:tanh-model} should be completed by a specification
of the reheating model, and it is possible this could change the
prediction for the $n$-point functions.
But, assuming these problems are not insurmountable,
we can accurately compute the bispectrum generated by~\eqref{eq:tanh-model}.
Our predictions then apply to any successful realization of this
scenario.
\para{Estimator for $\fNL^\text{local}$}
The most urgent question is whether the bispectrum
amplitude is compatible with present constraints
for $\fNL^\text{local}$, $\fNL^\text{equi}$, etc., which as explained
above are weighted averages over the bispectrum amplitude on
groups of related configurations.
At present the strongest constraints apply to $\fNL^\text{local}$
which averages over modestly squeezed configurations.
To determine the response of these estimators
we construct a Fisher estimate.
We numerically compute $\sim 5 \times 10^6$
bispectrum configurations for~\eqref{eq:tanh-model}
covering the range from $\ell \sim 1$ to $\ell \sim 7000$
and use these to predict the
observed angular temperature bispectrum.
For a choice of parameter values which generate the correct
amplitude and scaling of $A(k)$, we find that a Planck-like
experiment would measure
order-unity values,
\begin{equation}
\hatfNL^\text{local} = 0.25
,
\quad
\hatfNL^\text{equi} = 0.6
,
\quad
\hatfNL^\text{ortho} = -1.0
.
\label{eq:fNL-estimator-predictions}
\end{equation}
These estimates are our second principal result.
They are one to two orders of magnitude smaller than
previous estimates based on Eq.~\eqref{eq:a20-A-fNL-basic},
and are easily compatible with present-day constraints.
The difference comes from the strong running
of the bispectrum amplitude required for compatibility with
$A(k)$,
and also the growing number of bispectrum configurations
available at large $\ell$.
This means that the signal-to-noise tends to be dominated
by the largest-$\ell$ configurations where the amplitude
is small, depressing the final weighted average;
in fact,
we find that the reduced bispectrum amplitude near the
Planck pivot scale $\ell \sim 700$ is a fair predictor
for the averages~\eqref{eq:fNL-estimator-predictions}.
We find that
it is possible to simultaneously satisfy observational constraints on
the amplitude of low-$\ell$ multipoles of the power spectrum~\cite{Byrnes:2015dub}.
For example, choosing
$\alpha=0.01$ (which makes the wavelength of the modulating mode roughly 100 times the
distance to the last-scattering surface)
requires an exceptionality $E \approx 300$
to match the measured amplitude of $A(k)$.
A value for $E$ in this range would likely require \emph{further} new physics,
but it could perhaps be reduced to a value of order $10$ by increasing the bispectrum amplitude.
For these parameter choices the low-$\ell$ suppression $C(k)$
may be larger than the
approximate bound $C(k) \lesssim 0.14$ suggested by Contaldi et al.~\cite{Contaldi:2014zua}.
(There is some uncertainty regarding the precise numerical bound, because
the result of Contaldi et al. assumed the BICEP measurement of $r$ which is now known
to have been confused by dust.)
If necessary this would apparently have to be mitigated
by tuning our position on the long-wavelength mode.
Finally, we note that
although the precise bispectrum used in our analysis
applies to the specific model~\eqref{eq:tanh-model}, any
model which generates scale dependence through a large $\eta_\sigma$
is expected to produce a similar shape.
Therefore, despite the contrivances of our example,
we expect our conclusions to be robust and apply much more generally.
\section{Conclusions}
The CMB power asymmetry is a puzzling feature which may impact on
our understanding of the very early universe.
The most popular inflation-based explanations deploy
the Erickcek--Carroll--Kamionkowski mechanism,
in which a single super-horizon mode of exceptional amplitude
modulates the small-scale power spectrum (see Ref.~\cite{Adhikari:2015yya} for a generalization to include all superhorizon modes).
But until now, comparisons of the scenario with observation
have not accounted for the scale-dependence of the asymmetry---%
or the bispectrum which is responsible for it.
This is a necessary feature of the model.
Previous analyses based on Eq.~\eqref{eq:a20-A-fNL-basic} have suggested
the required bispectrum amplitude may be incompatible with
observation, but it has not been clear how the inclusion of
scale-dependence would modify this conclusion.
In this Letter we have presented a direct determination of
the response function which couples the asymmetry to
the ensemble-averaged bispectrum and the super-horizon mode.
We have presented an illustrative example which satisfies
all current observational constraints,
and which can be used to obtain precise predictions for the primordial
bispectrum.
Using this to predict the angular bispectrum of the CMB temperature
anisotropy we have confirmed that the bispectrum amplitude
is well within the bounds set by current Planck data.
Although this bispectrum strictly applies for the step model~\eqref{eq:tanh-model}
we believe it to be a good proxy for any inflationary explanation of the asymmetry
which uses a large $\eta$ parameter to generate the scale dependence.
Our results show that such scenarios involve much less tension with
observation than would be expected on the basis of~\eqref{eq:a20-A-fNL-basic}.
Nevertheless, this does not mean that an inflationary explanation is
automatically attractive.
To build a successful model we have been forced to make a number of arbitrary
choices, including the initial and final values of the $\sigma$ mass,
and the location and rapidity of the transition.
It is also unclear whether this inflationary model can be embedded
within a viable early universe scenario,
which should include at least
initial conditions for the inflationary era and a description of
how reheating connects it to a subsequent radiation epoch.
In our present state of knowledge it seems challenging to construct
a scenario including all these
features, and capable of explaining the hemispherical asymmetry,
which does not involve choices at least as unlikely
as the asymmetry itself.
\subsection*{Acknowledgements}
DS acknowledges support from the Science and Technology
Facilities Council [grant number ST/L000652/1].
CTB is a Royal Society University Research Fellow.
The research leading to these results has received funding from
the European Research Council under the European Union's
Seventh Framework Programme (FP/2007--2013) / ERC Grant
Agreement No. [308082]. This work was supported in part by
National Science Foundation Grant No. PHYS-1066293.
\para{Data availability statement}
Please contact the authors
to obtain the bispectrum for the step model~\eqref{eq:tanh-model},
which was used to estimate the responses~\eqref{eq:fNL-estimator-predictions}.
\section{Introduction}
Studying the dynamics of complex systems is relevant in many scientific fields, from meteorology \cite{santhanam:statistics} over geophysics \cite{marwan:cross} to economics \cite{plerou:universal} and neuroscience \cite{pereda:nonlinear, zhou:hierarchical}. In many cases, this complex dynamics is to be conceived as arising through the interaction of subsystems, and it can be observed in the form of multivariate time series where measurements in different channels are taken from the different parts of the system. The degree of interaction of two subsystems can then be quantified using bivariate measures of signal interdependence \cite{brillinger:time, priestley:nonlinear, bendat:random}. A wide variety of such measures has been proposed, from the classic linear correlation coefficient over frequency-domain variants like magnitude squared coherence \cite{kay:modern} to general entropy-based measures \cite{kraskov:estimating}. A more specific model of complex dynamics that has found a large number of applications is that of a set of self-sustained oscillators whose coupling leads to a synchronization of their rhythms \cite{pikovsky:book, boccaletti:synchronization}. Especially the discovery of the phenomenon of phase synchronization \cite{rosenblum:phase} led to the widespread use of synchronization indices in time series analysis \cite{tass:detection, lachaux:measuring, mormann:mean}.
However, by applying bivariate measures to multivariate data sets an $N$-dimensional time series is described by an $N \times N$-matrix of bivariate indices, which leads to a large amount of mostly redundant information. Especially if additional parameters come into play (nonstationarity of the dynamics, external control parameters, experimental conditions) the quantity of data can be overwhelming. Then it becomes necessary to reduce the complexity of the data set in such a way as to reveal the relevant underlying structures, that is, to use genuinely multivariate analysis methods that are able to detect patterns of multichannel interaction.
One way to do so is to trace the observed pairwise correspondences back to a smaller set of direct interactions using e.g. partial coherence \cite{granger:spectral, brillinger:time}, an approach that has recently been extended to phase synchronization \cite{schelter:partial}. Another and complementary way to achieve such a reduction is cluster analysis, that is, a separation of the parts of the system into different groups, such that signal interdependencies within each group tend to be stronger than in between groups. This description of the multivariate structure in the form of clusters can eventually be enriched by the specification of a degree of participation of an element in its cluster. The straightforward way to obtain clusters by applying a threshold to the matrix entries has often been used \cite{zhou:hierarchical, kim:systematic, rodriguez:perceptions}, but it is very susceptible to random variation of the indices. As an alternative, several attempts have recently been made to identify clusters using eigenvectors of the correlation matrix \cite{plerou:random, kim:systematic, utsugi:random}, which were motivated by the application of random matrix theory to empirical correlation matrices \cite{plerou:universal, mueller:detection}.
In the context of phase synchronization analysis, a first approach to cluster analysis was based on the derivation of a specific model of the internal structure of a synchronization cluster \cite{allefeld:approach, allefeld:about}. The resulting method made the simplifying assumption of the presence of only one cluster in the given data set, and focused on quantifying the degree of involvement of single oscillators in the global dynamics. Going beyond that, the \emph{participation index} method \cite{allefeld:eigenvalue} defined a measure of oscillator involvement based on the eigenvalues and eigenvectors of the matrix of bivariate synchronization indices, and attributed oscillators to synchronization clusters based on this measure.
But despite the apparent usefulness of eigenvalue decomposition for the purposes of group identification, beyond some phenomenological evidence no good reason has been put forward why an eigenvector of a synchronization matrix should directly indicate the system elements taking part in a cluster. Moreover, in a recent survey of the performance of synchronization cluster analysis in simulation and field data \cite{bialonski:identifying} it has been shown that there are important special cases---clusters of similar strength that are slightly synchronized to each other---where the assumed one-to-one correspondence of eigenvectors and clusters is completely lost.
In this paper, we provide a better understanding of the role of eigenvectors in synchronization cluster analysis, and we present an improved method for detecting synchronization clusters, the \emph{eigenvector space} approach. The organization of the paper is as follows: In Section~II, we briefly recall the definition of the matrix of bivariate synchronization indices $R$ as the starting point of the analysis. We motivate its transformation into a stochastic matrix $P$ describing a Markov chain and detail the properties of that process. Utilizing recent results on the coarse-graining of finite-state Markov processes \cite{froyland:statistically, gaveau:dynamical, deuflhard:robust} we derive our method of synchronization cluster analysis, and we illustrate its operation using a system of coupled Lorenz oscillators. In Section~III, we compare the performance of the eigenvector space method with that of the previous approach, the participation index method \cite{allefeld:eigenvalue}, and we investigate its behavior in the case of small sample size, which is important with regard to the application to empirical data.
\section{Method}
\subsection{Measuring synchronization}
Synchronization is a generally occurring phenomenon in the natural sciences, which is defined as the dynamical adjustment of the rhythms of different oscillators \cite{pikovsky:book}. Because an oscillatory dynamics is described by a phase variable $\phi$, a measure of synchronization strength is based on the instantaneous phases $\phi_{im}$ of oscillators $i = 1 \ldots N$, where the index $m$ enumerates the values in a sample of size $n$. The nowadays commonly used bivariate index of phase synchronization strength \cite{rodriguez:perceptions, lachaux:measuring, mormann:mean, allefeld:approach, allefeld:eigenvalue, bialonski:identifying, schelter:partial} results from the application of the first empirical moment of a circular random variable \cite{mardia:directional} to the distribution of the phase difference of the two oscillators:
\begin{equation} \label{rbar}
R_{ij} = \left | \frac{1}{n} \sum_{m = 1}^n e ^ {\mathrm{i} \, (\phi_{im} - \phi_{jm})} \right |.
\end{equation}
The measure takes on values from the interval $[0, 1]$, representing the continuum from no to perfect synchronization of oscillators $i$ and $j$; the matrix $R$ is symmetric, its diagonal being composed of $1$s. Special care has to be taken in applying this definition to empirical data, because the interpretation of $R$ as a synchronization measure in the strict sense only holds if phase values were obtained from different self-sustained oscillators.
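Eq.~(\ref{rbar}) translates directly into a few lines of code. A minimal NumPy implementation, assuming the phases are arranged as an $N \times n$ array, reads:
\begin{verbatim}
import numpy as np

def sync_matrix(phi):
    """Matrix of bivariate synchronization indices R_ij for an
    array phi of shape (N, n): N oscillators, n phase samples."""
    z = np.exp(1j * phi)                      # unit phasors
    R = np.abs(z @ z.conj().T) / phi.shape[1]
    return R   # symmetric, with ones on the diagonal
\end{verbatim}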
The determination of the phase values $\phi_{im}$ generally depends on the kind of system or data to be investigated. For the analysis of scalar real-valued time series $s_i(t)$ that are characterized by a pronounced dominant frequency, the standard approach utilizes the associated complex-valued analytic signal $z_i(t)$ \cite{gabor:theory}, within which every harmonic component of $s_i(t)$ is extended to a complex harmonic. The analytic signal is commonly defined \cite{rosenblum:phase} as
\begin{equation}
z_i(t) = s_i(t) + \mathrm{i} ~ \mathrm{H} s_i(t),
\end{equation}
where $\mathrm{H} s_i$ denotes the Hilbert transform of the signal $s_i$,
\begin{equation}
\mathrm{H} s_i(t) = {1 \over \pi} \, \textrm{\scriptsize P.V.} \!\!\! \int_{-\infty}^\infty {s_i(t') \over t - t'} \mathrm{d} t',
\end{equation}
and where P.V. denotes the Cauchy principal value of the integral. The instantaneous phase of the time series is then defined as
\begin{equation}
\phi_i(t) = \arg z_i(t).
\end{equation}
Equivalently, the analytic signal can be obtained using a filter that removes negative frequency components,
\begin{equation} \label{as}
z_i(t) = \mathcal{F}^{-1} \left ( \mathcal{F} \left [ s_i(t) \right ] ~ \left [ 1 + \mathrm{sgn}(\omega) \right ] \right ),
\end{equation}
where $\mathcal{F}$ denotes the Fourier transform into the domain of frequencies $\omega$ and $\mathrm{sgn}$ denotes the sign function \cite{carmona:practical}. This definition is more useful in practice because it can be straightforwardly applied to empirical time series, which are sampled at a finite number $n$ of discrete time points $t_m$, $s_{im} = s_i(t_m)$. If several time series (realizations of the same process) are available, the obtained phase values can be combined into a single multivariate sample of phases $\phi_{im}$, where the index $m = 1 \ldots n$ now enumerates the complete available phase data.
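In practice the analytic signal of a sampled series is computed with a discrete version of this filter. The sketch below, equivalent to standard Hilbert-transform routines such as \texttt{scipy.signal.hilbert}, zeroes the negative-frequency components, doubles the positive ones, and reads off the instantaneous phase:
\begin{verbatim}
import numpy as np

def instantaneous_phase(s):
    """Instantaneous phase of a real-valued 1-D signal via the
    discrete analytic signal (negative frequencies removed)."""
    n = len(s)
    h = np.zeros(n)
    h[0] = 1.0                          # keep DC once
    if n % 2 == 0:
        h[n // 2] = 1.0                 # keep Nyquist once
        h[1:n // 2] = 2.0               # double positive frequencies
    else:
        h[1:(n + 1) // 2] = 2.0
    z = np.fft.ifft(np.fft.fft(s) * h)  # analytic signal z = s + i Hs
    return np.angle(z)
\end{verbatim}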
\subsection{Cluster analysis via Markov coarse graining}
In the participation index method \cite{allefeld:eigenvalue}, the use of eigenvectors of $R$ for synchronization cluster analysis was motivated by the investigation of the spectral properties of correlation matrices in random matrix theory \cite{mueller:detection}. Another context where eigenvalue decomposition turns up naturally is the computation of matrix powers, which becomes as simple as possible using the spectral representation of the matrix.
Powers of $R$ have a well-defined meaning in the special case of a binary-valued matrix ($R_{ij} \in \{0,\,1\}$), as it is obtained for instance by thresholding: the matrix entries of $R^a$ count the number of possible paths from one element to another within $a$ steps, i.e., they specify the degree to which these elements are connected via indirect links of synchrony. By analogy, we interpret $(R^a)_{ij}$ also in the general case as quantifying the degree of common entanglement of two elements $i$ and $j$ within the same web of synchronization relations. In the following we will call this quantity the $a$-step synchronization strength of two oscillators because it reduces to the original bivariate synchronization strength $R_{ij}$ in the case $a = 1$.
This synchronization strength over $a$ steps is relevant for synchronization cluster analysis, because in a system where synchronization clusters are present it is possible that the degree of direct bivariate synchrony of two elements is not very strong, but they are both entangled into the same web of links of synchrony. These indirect links, which make the two elements members of the same synchronization cluster, become visible in $R^a$.
Moreover, with increasing power $a$ the patterns of synchrony within a cluster (the matrix columns) become more similar, approaching the form of one of the dominant eigenvectors. If there are different clusters in the system, a suitable $a$ can be chosen such that each cluster exhibits a different pattern, representing the web of synchronization relations it is comprised of. These patterns constitute \emph{signatures} by which elements can be attributed to clusters. The cluster signatures are related to the dominant eigenvectors of $R$; by transition to larger $a$ they become even more dominant, leading to an effective simplification of the matrix.
For the identification of the members of a cluster only the patterns of synchrony are relevant, while the absolute size of elements of different columns diverges with $a$, so that some sort of normalization is called for. Different normalization schemes might be used for this purpose. However, using the $L^1$-norm the procedure can be simplified, because for the normalized version of the synchronization matrix, given by
\begin{equation} \label{tm}
P_{ij} = \frac{R_{ij}}{\sum_{i'} R_{i'j}},
\end{equation}
it holds that powers of $P$ are automatically normalized, too. Moreover, the $L^1$-normalized matrix $P$ is a column-stochastic matrix, that is, it can be interpreted as the matrix of $i \leftarrow j$ transition probabilities describing a Markov chain, whose states correspond to the elements of the original system. Via this connection, the tools of stochastic theory and especially recent work on the coarse-graining of finite-state Markov processes \cite{froyland:statistically, gaveau:dynamical, deuflhard:robust} can be utilized for the purposes of synchronization cluster analysis.
The Markov process defined in this way possesses some specific properties \cite{meyn:markov}: It is aperiodic because of the nonzero diagonal entries of the matrix, and it is in general irreducible because the values of empirical $R_{ij}$ for $i \neq j$ will also almost never be exactly zero. For a finite-state process, these two properties amount to ergodicity, which implies that any distribution over states converges to a unique invariant distribution $p^{(0)}$, corresponding to the eigenvector of $P$ for the unique largest eigenvalue $1$. This distribution can be computed from $R$ as
\begin{equation}
p^{(0)}_i = \frac{\sum_j R_{ij}}{\sum_{i'} \sum_j R_{i'j}},
\end{equation}
where the vector components of $p^{(0)}$ are denoted by $p^{(0)}_i$. Since the matrix $R$ is symmetric, the stationary flow given by
\begin{equation}
P_{ij} ~ p^{(0)}_j = \frac{R_{ij}}{\sum_{i'} \sum_{j'} R_{i'j'}}
\end{equation}
is symmetric, i.e., the process fulfills the condition of detailed balance $P_{ij} ~ p^{(0)}_j = P_{ji} ~ p^{(0)}_i$, which makes eigenvalues and eigenvectors of $P$ real-valued \cite{froyland:statistically}.
For the Markov process, the $a$-step synchronization strength considered above translates into transitions between states over a period of $\tau$ time steps. To compute the corresponding transition matrix $P^\tau$ the eigenvalue decomposition of $P$ is used. If $\lambda_k$ with $k = 0 \ldots N - 1$ denote the eigenvalues of $P$, and the right and left eigenvectors $p_k$ and $A_k$ are scaled such that the orthonormality relation
\begin{equation}
A_k ~ p_l = \delta_{kl}
\end{equation}
is fulfilled, the spectral representation of $P$ is given by
\begin{equation}
P = \sum_k \lambda_k ~ p_k A_k
\end{equation}
and consequently
\begin{equation}
P^\tau = \sum_k \lambda_k^\tau ~ p_k A_k.
\end{equation}
We assume that eigenvalues are sorted such that $\lambda_0 = 1 > |\lambda_1| \geq |\lambda_2| \geq \ldots \geq |\lambda_{N-1}|$. The scaling ambiguity left by the orthonormality relation is resolved by choosing
\begin{equation}
p_{ik} = p^{(0)}_i ~ A_{ki}
\end{equation}
(where $p_{ik}$ and $A_{ki}$ denote the vector components of $p_k$ and $A_k$, respectively), which leads to the normalization equations
\begin{equation} \label{ne}
\sum_i \frac{p_{ik}^2}{p^{(0)}_i} = 1
\quad \text{and} \quad
\sum_i p^{(0)}_i ~ A_{ki}^2 = 1,
\end{equation}
with the special solutions $p_{i0} = p^{(0)}_i$ and $A_{0i} = 1$. Additionally, a generalized orthonormality relation
\begin{equation} \label{go}
\sum_i \frac{p_{ik} ~ p_{il}}{p^{(0)}_i} = \delta_{kl}
\end{equation}
follows for the right eigenvectors.
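Numerically, detailed balance can be exploited: with $D = \mathrm{diag}(p^{(0)})$, the matrix $D^{-1/2} P D^{1/2}$ is symmetric, so a standard symmetric eigensolver yields eigenpairs obeying the above normalization. A minimal sketch, assuming $R$ is the symmetric matrix of bivariate indices (function name illustrative):
\begin{verbatim}
import numpy as np

def spectral(R):
    # stationary distribution p0_i = sum_j R_ij / sum_i'j R_i'j
    p0 = R.sum(axis=1) / R.sum()
    P = R / R.sum(axis=0, keepdims=True)       # column-stochastic matrix
    d = np.sqrt(p0)
    # detailed balance makes D^(-1/2) P D^(1/2) symmetric, D = diag(p0)
    Psym = P * d[np.newaxis, :] / d[:, np.newaxis]
    Psym = 0.5 * (Psym + Psym.T)               # guard against rounding
    lam, U = np.linalg.eigh(Psym)              # orthonormal columns u_k
    order = np.argsort(-np.abs(lam))
    lam, U = lam[order], U[:, order]
    A = (U / d[:, np.newaxis]).T               # left eigenvectors, rows A_k
    A = A * np.where(A[:, [0]] < 0, -1.0, 1.0) # convention: A_0 = (1,...,1)
    return lam, A, p0
\end{verbatim}
Here the right eigenvectors are recovered as $p_{ik} = p^{(0)}_i A_{ki}$, consistent with the scaling chosen above.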
The convergence of every initial distribution to the stationary distribution $p^{(0)}$ corresponds to the fact that, because of non-vanishing synchronies, the whole system ultimately forms one single cluster. This happens on the timescale $\tau \rightarrow \infty$, at which all eigenvalues $\lambda_k^\tau$ go to $0$ except for the largest one, $\lambda_0^\tau = 1$. In the other extreme of a timescale $\tau = 0$, $P^\tau$ becomes the identity matrix, all of its columns are different, and the system disintegrates into as many clusters as there are elements. For the purposes of cluster analysis, intermediate timescales are of interest on which many but not all of the eigenvalues are practically zero. If we want to identify $q$ clusters, we expect to find just as many different cluster signatures, which means we have to consider $P^\tau$ at a timescale where the eigenvalues $\lambda_k^\tau$ may be significantly different from zero only for the range $k = 0 \ldots q - 1$.
This is achieved by determining $\tau$ such that $|\lambda_q|^\tau \approx 0$. Using a parameter $\zeta \ll 1$ chosen to represent the quantity that is considered to be practically zero (e.g. $\zeta = 0.01$), from $|\lambda_q|^\tau = \zeta$ we calculate the appropriate timescale for a clustering into $q$ clusters as
\begin{equation} \label{ts}
\tau(q) = \frac{\log \zeta}{\log |\lambda_q|}.
\end{equation}
The vanishing of the smaller eigenvalues at a given timescale describes the loss of internal differentiation of the clusters, the removal of the structural features encoded in the corresponding weaker eigenvectors. On the other hand, the differentiation of clusters from each other via the dominant eigenvectors becomes clearer the larger the remaining eigenvalues are, especially $\lambda_{q-1}^\tau$. This provides a criterion for selecting the number of clusters $q$: the larger $|\lambda_{q-1}|^{\tau(q)}$, the better the clustering. Equivalently, we select $q$ based on the \emph{timescale separation factor}
\begin{equation} \label{tsf}
F(q) = \frac{\tau(q-1)}{\tau(q)}= \frac{\log |\lambda_q|}{\log |\lambda_{q-1}|},
\end{equation}
which is independent of the particular choice of $\zeta$, and invariant under rescaling of the time axis. This criterion gives a ranking of the different possible choices (from 1 to $N - 1$). The fact that $\lambda_0 = 1$ and therefore $F(1) = \infty$ implies a limitation of this approach, since the choice $q = 1$ is always characterized as absolutely optimal. Therefore the first meaningful---and usually best---choice is the second entry in the $q$ ranking list.
To determine which elements belong to the same cluster, we need a measure $d$ of the dissimilarity of cluster signatures, that is, of the column vectors of $P^\tau$. Since these vectors belong to the space of right eigenvectors of $P$, the appropriate dissimilarity metric is based on the norm corresponding to the normalization equation for right eigenvectors [Eq.\,(\ref{ne}) left]:
\begin{equation}
\| p \|^2 = \sum_i \frac{p_{i}^2}{p^{(0)}_i}.
\end{equation}
The resulting column vector dissimilarity
\begin{equation}
d^2(j, j') = \sum_i \frac{1}{p^{(0)}_i} \left | \left ( P^\tau \right ) _{ij} - \left ( P^\tau \right ) _{ij'} \right | ^2
\end{equation}
has the convenient property that the dimensionality of the space within which the clustering has to be performed can be reduced, because the expression obtained by inserting the spectral representation for the matrix entries of $P^\tau$,
\begin{equation}
\left ( P^\tau \right ) _{ij} = \sum_k \lambda_k^\tau ~ p_{ik} A_{kj},
\end{equation}
simplifies to
\begin{equation}
d^2(j, j') = \sum_k |\lambda_k|^{2\tau} \left ( A_{kj} - A_{kj'} \right ) ^2
\end{equation}
using the generalized orthonormality of right eigenvectors, Eq.\,(\ref{go}). Since for appropriately chosen $\tau = \tau(q)$ contributions for larger $k$ vanish starting from $k = q$, and because $A_{0i} = 1$ for all $i$, it is sufficient to let the sum run over the range $1 \ldots q - 1$. The dissimilarity $d$ can therefore be interpreted as the Euclidean distance within a $(q - 1)$-dimensional left \emph{eigenvector space}, where each element $j$ is associated with a position vector
\begin{equation} \label{es}
\vec{o}(j) = \left ( |\lambda_k|^\tau ~ A_{kj} \right ),
\quad k = 1 \ldots q - 1.
\end{equation}
To actually perform the clustering, we can in principle use any algorithm that is designed to minimize the sum of within-cluster variances. Our implementation derives from the observation that in eigenvector space the clusters are located at the vertices of a simplex \cite{gaveau:dynamical, deuflhard:robust}. A first rough estimate of the cluster locations can therefore be obtained by searching for the extreme points of the data cloud, employing a subalgorithm described in Ref.\,\cite{deuflhard:robust}: Determine the point farthest from the center of the cloud, then the point farthest from the first one; then iteratively the point farthest from the hyperplane spanned by all the previously identified points, until $q$ points are found. Using the result of this procedure as initialization, the standard k-means algorithm \cite{macqueen:methods}, which normally tends to get stuck in local minima, converges quickly onto the correct solution in almost all cases.
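A minimal sketch of this extreme-point initialization (illustrative only; the reference implementation mentioned in the footnote below is for \textsc{Matlab}/\textsc{Octave}):
\begin{verbatim}
import numpy as np

def extreme_points(O, q):
    # O: (N, q-1) array of positions o(j); returns indices of q extreme points
    idx = [int(np.argmax(np.linalg.norm(O - O.mean(axis=0), axis=1)))]
    idx.append(int(np.argmax(np.linalg.norm(O - O[idx[0]], axis=1))))
    while len(idx) < q:
        # farthest point from the affine hull of the points found so far
        base = O[idx[0]]
        Q, _ = np.linalg.qr((O[idx[1:]] - base).T)
        resid = (O - base) - (O - base) @ Q @ Q.T
        idx.append(int(np.argmax(np.linalg.norm(resid, axis=1))))
    return idx
\end{verbatim}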
In summary, the algorithmic steps \footnote{An implementation of the algorithm for \textsc{Matlab} and \textsc{Octave} is available from the corresponding author.} of the eigenvector space method introduced in this paper are: (1)~Calculate the matrix of bivariate synchronization indices $R_{ij}$, Eq.\,(\ref{rbar}). (2)~Convert the synchronization matrix $R$ into a transition matrix $P$, Eq.\,(\ref{tm}). (3)~Compute the eigenvalues $\lambda_k$ and left eigenvectors $A_k$ of $P$. (4)~Select the number of clusters $q$, $q > 1$, with the largest timescale separation factor $F(q)$, Eq.\,(\ref{tsf}). (5)~Determine the positions $\vec{o}(j)$, Eq.\,(\ref{es}), in eigenvector space for $\tau = \tau(q)$, Eq.\,(\ref{ts}). (6)~Search for $q$ extreme points of the data cloud. (7)~Use these as initialization for k-means clustering.
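For concreteness, these steps can be strung together as follows, reusing the sketches \texttt{spectral} and \texttt{extreme\_points} above; the SciPy k-means routine accepts the simplex corners as initial centroids:
\begin{verbatim}
import numpy as np
from scipy.cluster.vq import kmeans2

def eigenvector_space_clustering(R, zeta=0.01):
    lam, A, p0 = spectral(R)                         # steps 2 and 3
    # step 4: choose q > 1 maximizing F(q) = log|lam_q| / log|lam_{q-1}|
    F = np.log(np.abs(lam[2:])) / np.log(np.abs(lam[1:-1]))
    q = int(np.argmax(F)) + 2
    tau = np.log(zeta) / np.log(np.abs(lam[q]))      # step 5: tau(q)
    O = ((np.abs(lam[1:q]) ** tau)[:, None] * A[1:q]).T  # positions o(j)
    init = O[extreme_points(O, q)]                   # step 6
    _, labels = kmeans2(O, init, minit='matrix')     # step 7
    return q, labels
\end{verbatim}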
\subsection{Illustration of the eigenvector space method}
\begin{figure}
\includegraphics{fig1}
\caption{Application of the eigenvector space method to a system of nine partially coupled Lorenz oscillators. (a)~Coupling configuration: The left group of four oscillators is driven by \#1, the right group of three driven by \#9, the remaining two are uncoupled. (b)~Eigenvalues $\lambda_k$, timescales $\tau(q)$, and timescale separation factors $F(q)$. The maximal separation factor $F(4)$ indicates the presence of four clusters. (c)~Positions attributed to oscillators in 3-dimensional eigenvector space $(o_1, o_2, o_3)$. The clustering by the k-means algorithm results in a cluster composed of oscillators \#1--4 ($\scriptstyle\square$), two single-element clusters consisting of oscillators \#5 ($\diamond$) and \#6 ($\ast$), respectively, and a cluster composed of oscillators \#7--9 ($\circ$).}
\label{ex}
\end{figure}
In order to illustrate the operation of the method, we apply it to multivariate time series data obtained from simulated nonlinear oscillators, coupled in such a way as to be able to observe synchronization clusters of different size as well as unsynchronized elements. The system consists of $N=9$ Lorenz oscillators that are coupled diffusively via their $z$-components:
\begin{eqnarray}
\dot{x}_j &=& 10\,(y_j - x_j), \nonumber \\
\dot{y}_j &=& 28\,x_j - y_j - x_j z_j,\\
\dot{z}_j &=& -\tfrac{8}{3}\,z_j + x_j y_j + \sum_i \epsilon_{ij} \, (z_i - z_j). \nonumber
\end{eqnarray}
The coupling coefficients $\epsilon_{ij}$ were chosen from $\{0, 1\}$ to implement the coupling configuration depicted in Fig.\,\ref{ex}\,(a), such that oscillators \#2--4 are unidirectionally driven by \#1, oscillators \#7 and \#8 are driven by \#9, and \#5 and \#6 are uncoupled. These differential equations were numerically integrated using a step size of $\Delta t = 0.01$, starting from randomly chosen initial conditions. After discarding an initial transient of $10^4$ data points, a further $4 \times 10^4$ samples entered data processing. Instantaneous phases $\phi_{jm}$ of oscillators $j$ at time points $t_m = (m - 1) \, \Delta t$ were determined from the $z$-components using the analytic signal approach after removal of the temporal mean, and bivariate synchronization strengths $R_{ij}$ were computed.
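A sketch of this simulation (integrator tolerances and the random seed are illustrative choices):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

N = 9
eps = np.zeros((N, N))      # eps[i, j] = 1 if oscillator j is driven by i
eps[0, 1:4] = 1             # oscillator 1 drives oscillators 2-4
eps[8, 6:8] = 1             # oscillator 9 drives oscillators 7 and 8

def rhs(t, u):
    x, y, z = u.reshape(3, N)
    dx = 10.0 * (y - x)
    dy = 28.0 * x - y - x * z
    dz = -(8.0 / 3.0) * z + x * y \
         + (eps * (z[:, None] - z[None, :])).sum(axis=0)
    return np.concatenate([dx, dy, dz])

rng = np.random.default_rng(0)
t = np.arange(0.0, 500.0, 0.01)               # step size 0.01
sol = solve_ivp(rhs, (t[0], t[-1]), rng.standard_normal(3 * N),
                t_eval=t, rtol=1e-8, atol=1e-8)
zs = sol.y.reshape(3, N, -1)[2][:, 10_000:]   # z-components, transient cut
\end{verbatim}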
The outcomes of the eigenvector space method applied to the resultant matrix of bivariate indices are presented in Fig.\,\ref{ex}. Figure~\ref{ex}\,(b) shows the spectrum of eigenvalues $\lambda_k$ of the transition matrix $P$ with the corresponding timescales $\tau(q)$ and timescale separation factors $F(q)$. A gap in the eigenvalue spectrum between indices $k = 3$ and $4$ translates into a maximum timescale separation factor for $q = 4$, which recommends a search for four clusters in the eigenvector space for timescale $\tau = 3.5$. This 3-dimensional space is depicted in Fig.\,\ref{ex}\,(c), where the expected grouping into four clusters can be clearly recognized in the arrangement of elements $j$ with positions $\vec{o}(j)$. These four clusters that correspond to the two groups of driven oscillators and the two uncoupled oscillators (each of which forms a single-element cluster) are easily identified by the k-means algorithm. The results shown here were obtained using $\zeta = 0.01$; alternative choices of $0.1$ and $0.001$ yielded the same clustering.
\section{Performance}
To assess the performance of the eigenvector space method introduced in this paper, we compare it with the previous approach to synchronization cluster analysis based on spectral decomposition. For reference, we briefly recall the important details.
The participation index method \cite{allefeld:eigenvalue} is based on the eigenvalue decomposition of the symmetric synchronization matrix $R$ itself, into eigenvalues $\eta_k$ and $L^2$-normalized eigenvectors $v_k$. Each of the eigenvectors that belong to an eigenvalue $\eta_k > 1$ is identified with a cluster, and a system element $j$ is attributed to that cluster $k$ in which it participates most strongly, as determined via the participation index
\begin{equation}
\Pi_{jk} = \eta_k \, v_{jk}^2,
\end{equation}
where $v_{jk}$ are the eigenvector components of $v_k$. The method performs quite well in many configurations, but it encounters problems when confronted with clusters of similar strength that are slightly synchronized to each other, as was demonstrated in Ref.\,\cite{bialonski:identifying} using a simulation.
\begin{figure}
\includegraphics{fig2}
\caption{Comparative performance of the participation index~(a) and the eigenvector space method~(b). The methods are tested on a system of $N=32$ elements, divided into two clusters containing $r$ and $N-r$ elements, respectively. The inter-cluster synchronization strength $\rho_\textrm{int}$ is varied from $0$ up to the value of intra-cluster synchronization $0.8$. Synchronization matrices are generated based on samples of size $n = 200$. The quantity shown is the relative frequency (over 100 trials) with which the respective algorithm failed to recover exactly the given two-cluster structure; it is visualized in gray scale, covering the range from 0 (white) to 1 (black). Comparison shows that in a large area along $r=16$ where the participation index method fails, the eigenvector space method introduced in this paper performs perfectly.}
\label{tc}
\end{figure}
Here we employ a refined version of that simulation to compare the two methods. We consider a system of $N = 32$ oscillators forming two clusters, and check whether the methods are able to detect this structure from the bivariate synchronization matrix $R$ for different degrees of inter-cluster synchrony $\rho_\textrm{int}$. The cluster sizes are controlled via a parameter $r$, such that the first cluster comprises elements $j = 1 \ldots r$, the second $(r+1) \ldots N$.
To be able to time-efficiently perform a large number of simulation runs and to have precise control over the structure of the generated synchronization matrices, we do not implement the system via a set of differential equations. Instead, our model is parametrized in terms of the population value of the bivariate synchronization index, Eq.\,(\ref{rbar}),
\begin{equation} \label{rho}
\rho_{ij} = \left | \left \langle \exp \left [ \mathrm{i} \, (\phi_{i} - \phi_{j}) \right ] \right \rangle \right |
\end{equation}
(where $\langle \cdot \rangle$ denotes the expectation value), which is the first theoretical moment of the circular phase difference distribution \cite{mardia:directional}. For $i, j$ within the same cluster $\rho_{ij}$ is fixed at a value of $\rho_1 = \rho_2 = 0.8$. For inter-cluster synchronization relations it is set to $\rho_\textrm{int}$, which is varied from $0$ up to $0.8$ such that the two-cluster structure almost vanishes.
To be able to properly account for the effect of random variations of $R_{ij}$ around $\rho_{ij}$ due to finite sample size $n$ we generated samples of phase values, using an extension of the single-cluster model introduced in Ref.\,\cite{allefeld:approach}: The common behavior of oscillators within each of the two clusters is described by cluster phases $\Phi_1$ and $\Phi_2$. The phase differences between the members of each cluster and the respective cluster phase,
\begin{equation}
\Delta \phi_{j} = \left \{
\begin{array}{ll}
\phi_j - \Phi_1 & \text{ for } j = 1 \ldots r, \\
\phi_j - \Phi_2 & \text{ for } j = (r + 1) \ldots N,
\end{array}
\right .
\end{equation}
as well as the phase difference of the two cluster phases,
\begin{equation}
\Delta \Phi = \Phi_2 - \Phi_1,
\end{equation}
are assumed to be mutually independent random variables, distributed according to wrapped normal distributions \cite{mardia:directional} with circular moments $\rho_\mathrm{1C}$, $\rho_\mathrm{2C}$, and $\rho_\mathrm{CC}$, respectively. Since the summation of independent circular random variables results in the multiplication of their first moments \cite{allefeld:approach}, the model parameters and the distribution moments are related by
\begin{eqnarray}
& \rho_1 = \rho_\mathrm{1C}^2, & \nonumber \\
& \rho_2 = \rho_\mathrm{2C}^2, & \\
& \rho_\textrm{int} = \rho_\mathrm{1C} ~ \rho_\mathrm{CC} ~ \rho_\mathrm{2C}. & \nonumber
\end{eqnarray}
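A sketch of this sampling model, using the fact that a wrapped normal distribution with first circular moment $\rho$ has underlying variance $\sigma^2 = -2 \ln \rho$ (here specialized to equal intra-cluster moments $\rho_1 = \rho_2$; function names are illustrative):
\begin{verbatim}
import numpy as np

def wrapped_normal(rho, size, rng):
    # wrapped normal with first circular moment rho = exp(-sigma^2 / 2)
    if rho <= 0.0:
        return rng.uniform(0.0, 2.0 * np.pi, size)  # uniform limit
    return rng.normal(0.0, np.sqrt(-2.0 * np.log(rho)), size)

def sample_phases(N, r, rho_intra, rho_int, n, rng):
    rho_C = np.sqrt(rho_intra)          # rho_1C = rho_2C
    rho_CC = rho_int / rho_intra        # from rho_int = rho_1C rho_CC rho_2C
    Phi1 = rng.uniform(0.0, 2.0 * np.pi, n)
    Phi2 = Phi1 + wrapped_normal(rho_CC, n, rng)
    phi = np.empty((N, n))
    for j in range(N):
        centre = Phi1 if j < r else Phi2
        phi[j] = centre + wrapped_normal(rho_C, n, rng)
    return phi % (2.0 * np.pi)
\end{verbatim}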
For the performance comparison of the two methods, $n = 200$ realizations of this model of the multivariate distribution of phases $\phi_j$ were generated for each setting of the parameters, and synchronization indices $R_{ij}$ were calculated via Eq.\,(\ref{rbar}).
The clustering results are presented in Fig.\,\ref{tc} for the participation index and the eigenvector space method (using $\zeta = 0.01$). The quantity shown is the relative frequency (over 100 instances of the matrix $R$) of the failure to identify correctly the two clusters built into the model. Figure~\ref{tc}\,(a) shows that the participation index method fails systematically within a region located symmetrically around $r = N/2 = 16$ (clusters of equal size). The region becomes wider for increasing $\rho_\textrm{int}$ but is already present for very small values of the inter-cluster synchronization strength. In contrast, the eigenvector space approach (b) is able to perfectly reconstruct the two clusters for all values of $r$ up to very strong inter-cluster synchronization. It fails to correctly recover the structure underlying the simulation data only in that region where inter-cluster synchronization indices attain values comparable to those within clusters, i.e., only where there are no longer two different clusters actually present. These results demonstrate that the eigenvector space method is a clear improvement over the previous approach.
\begin{figure}
\includegraphics{fig3.eps}
\caption{Performance of the eigenvector space method depending on the sample size $n$, investigated using the two-cluster system of Fig.\,\ref{tc} for different values of the inter-cluster synchronization strength $\rho_\textrm{int}$. The quantity shown here is the proportion of values of the parameter $r$ (controlling cluster sizes) at which the two-cluster structure failed to be recovered, visualized on a gray scale from 0 (white) to 1 (black). The plot shows that the ability of the eigenvector space method to correctly identify clusters up to high values of $\rho_\textrm{int}$ breaks down only for very small sample sizes $n$.}
\label{ss}
\end{figure}
For real-world applications, a time series analysis method has to be able to work with a limited amount of data. In the case of synchronization cluster analysis, a small sample size attenuates the observed contrast between synchronization relations of different strength, making it harder to discern clusters. Using the two-cluster model described above, in a further simulation we investigated the effect of the sample size $n$ on the performance of the eigenvector space method. Parameters $\rho_\textrm{int}$ and $r$ were varied as before, and synchronization matrices were generated for $n = 10 \ldots 200$ (in steps of 10). For each value of $\rho_\textrm{int}$ and $n$, as a test quantity the proportion of $r$-values for which the algorithm did not correctly identify the two clusters was calculated. The result shown in Fig.\,\ref{ss} demonstrates that the performance of the eigenvector space method degrades only very slowly with decreasing sample size. The method seems to be able to provide a meaningful clustering down to a data volume of about $n = 30$ independent samples, making it quite robust against small sample size.
\section{Conclusion}
We introduced a method for the identification of clusters of synchronized oscillators from multivariate time series. By translating the matrix of bivariate synchronization indices $R$ into a stochastic matrix $P$ describing a finite-state Markov process, we were able to utilize recent work on the coarse-graining of Markov chains via the eigenvalue decomposition of $P$. Our method estimates the number of clusters present in the data based on the spectrum of eigenvalues, and it represents the synchronization relations of oscillators by assigning to them positions in a low-dimensional space of eigenvectors, thereby facilitating the identification of synchronization clusters. We showed that our approach does not suffer from the systematic errors made by a previous approach to synchronization cluster analysis based on eigenvalue decomposition, the participation index method. Finally, we demonstrated that the eigenvector space method is able to correctly identify clusters even given only a small amount of data. This robustness against small sample size makes it a promising candidate for field applications, where data availability is often an issue. We conclude by remarking that although the method was described and assessed in this paper within the context of phase synchronization analysis, our approach may also give useful results when applied to other bivariate measures of signal interdependence.
\begin{acknowledgments}
The authors would like to thank Harald Atmanspacher, Peter beim Graben, Mar{\'i}a Herrojo Ruiz, Klaus Lehnertz, Christian Rummel, and Ji{\v r}{\'i} Wackermann for comments and discussions.
\end{acknowledgments}
\section*{Supplementary Information}
\setcounter{figure}{4}
\textbf{Chip fabrication and single photon characterization} - Waveguides were micromachined in a borosilicate glass substrate (Corning EAGLE2000) using a cavity-dumped Yb:KYW mode-locked oscillator, which delivers $300\,\mathrm{fs}$ pulses at a $1\,\mathrm{MHz}$ repetition rate. For the waveguide fabrication, pulses with $220\,\mathrm{nJ}$ energy were focused $170\,\mu\mathrm{m}$ below the glass surface, using a $0.6\,\mathrm{NA}$ microscope objective, while the sample was translated at a constant speed of $40\,\mathrm{mm/s}$ by high precision, three-axis air-bearing stages (Aerotech FiberGlide 3D).
Measured propagation losses are $0.5\,\mathrm{dB/cm}$, and coupling losses to single mode fiber arrays at the input facet are about $1\,\mathrm{dB}$. The birefringence of these waveguides is on the order of $B = 7 \times 10^{-5}$, as characterized in \cite{sans10prl}.
In order to evaluate the effect of this residual birefringence, we measured the single photon distributions for horizontally, vertically, diagonally and antidiagonally polarized light. The obtained probability distributions for photons injected into both modes $k_A$ and $k_B$ are reported in Fig.~\ref{fig:figure3}. It is clear from the plot that the behaviour is polarization independent, as confirmed by the similarities associated with each distribution, whose values are reported in Table~\ref{tab:similarity}, both for photons injected in mode $k_A$ and in mode $k_B$.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
Input state & Similarity $k_A$ & Similarity $k_B$\\\hline\hline
H & $0.991\pm0.002$ & $0.992\pm0.002$\\\hline
V & $0.997\pm0.002$ & $0.993\pm0.002$\\\hline
+ & $0.983\pm0.002$ & $0.991\pm0.002$\\\hline
- & $0.995\pm0.002$ & $0.992\pm0.002$\\\hline
\end{tabular}
\end{center}
\caption{Similarities between the experimental distributions and the theoretical one for horizontal (H), vertical (V), diagonal (+) and antidiagonal (-) polarized input photons in the single-particle regime.}
\label{tab:similarity}
\end{table}
\textbf{Equivalence between entangled states and boson-fermion quantum walk}
\label{sec-walk}
Let us consider a generic $T$-step single-photon quantum walk implemented as in Fig.~1b.
If the particle
is injected at mode $\ket{J}$, with input state $a^\dag_J\ket0$,
the walk performs a unitary transformation on the creation
operator, namely $a^\dag_J\rightarrow \sum_KU_{JK}a^\dag_K$.
Let us now consider two photons injected into the walk in a polarization entangled state
$\ket{\Psi^\pm_{IJ}}\equiv\frac{1}{\sqrt2}(a^\dag_Ib^\dag_J\pm a^\dag_Jb^\dag_I)\ket0$
where $a^\dag$ and $b^\dag$ are the horizontal and vertical polarization
creation operators, respectively.
The evolution of the walk becomes
\begin{equation}
\ket{\Psi^\pm_{IJ}}\rightarrow \frac{1}{\sqrt2}\sum_{K,L}
(\underbrace{U_{IK}U_{JL}\pm U_{JK}U_{IL}}_{\psi^\pm_{IJ,KL}})a^\dag_Kb^\dag_L\ket0
\end{equation}
\begin{figure}[ht]
\includegraphics[width=8.6cm]{figure5}
\caption{Measured output probability distributions and theoretical expectations for a 4-step quantum
walk of a single particle injected into mode $k_A$ (\textit{a}) and mode $k_B$ (\textit{b}).
The experimental results demonstrate that the DCs network leads to the same quantum dynamics independently
of the initial polarization state and in agreement with the theory.}
\label{fig:figure3}
\end{figure}
The probability of detecting one photon at $K$ and the other at $L$
(without measuring the polarization) is given by
\begin{equation}
p_\pm(I,J;K,L)=
\left\{
\begin{aligned}
&|\psi^\pm_{IJ,KL}|^2&&\text{for }L\neq K
\\
&\frac12|\psi^\pm_{IJ,KL}|^2&&\text{for }L=K
\end{aligned}
\right.
\end{equation}
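For illustration, these probabilities are easily tabulated from a given single-particle unitary $U$ (a minimal sketch, with function name and conventions of our choosing):
\begin{verbatim}
import numpy as np

def two_photon_probs(U, I, J, sign=+1):
    # p_pm(I, J; K, L) for K <= L; sign = +1 bosonic, -1 fermionic
    N = U.shape[0]
    p = np.zeros((N, N))
    for K in range(N):
        for L in range(K, N):
            psi = U[I, K] * U[J, L] + sign * U[J, K] * U[I, L]
            p[K, L] = abs(psi) ** 2 / (2.0 if K == L else 1.0)
    return p
\end{verbatim}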
\begin{figure*}[ht!!!!]
\includegraphics[width=18cm]{figure6}
\caption{Theoretical (top) and measured (bottom) distributions for two-photon states emerging from the integrated array of BS. Depending on the symmetry of the two-photon Bell state injected into the device bosonic (a), fermionic (b) and anionic (c) behaviors are observed.}
\label{fig:modi}
\end{figure*}
It is easy to show that $p_+(I,J;K,L)$ and $p_-(I,J;K,L)$ respectively correspond to the
probabilities of detecting at positions $K$ and $L$
two bosons or fermions injected into the modes $I$ and $J$ of the quantum walk.
In fact, if at the input state we have two identical bosons characterized by commuting creation operator
$d^\dag_I$ and $d^\dag_J$, the output state is $d^\dag_Id^\dag_J\ket0\rightarrow\frac12\sum_{K,L}
\psi^+_{IJ,KL}d^\dag_Kd^\dag_L\ket0
$. The probability of detecting one boson at $K$ and the other at $L$
is precisely $p_+(I,J;K,L)$.
If at the input state we have two identical fermions characterized
by anticommuting creation operator $c^\dag_I$ we have
$c^\dag_Ic^\dag_J\ket0\rightarrow\frac12\sum_{K,L}
\psi^-_{IJ,KL}c^\dag_Kc^\dag_L\ket0$
and the probability of detecting one fermion at $K$ and the other at $L$
is $p_-(I,J;K,L)$.
Note that the dynamics of a two-particle quantum walk
cannot be reconstructed by multiplying the output probabilities ($|U_{JK}|^2$)
of two single-particle walks.
Furthermore, it is evident from Fig.~\ref{fig:figure3} that the plotted probability distributions
refer to the position
$j$ of the walker and not to the output sites
$J$ of the BS array. Indeed, referring to Fig.~1b, for a walk with $T^*$ steps, the relation between the probabilities of
photons emerging from one of the $N=2T^*$ outputs of the BS array ($P^{BS}_J$) and the final position of the walker ($p^{walk}_j$)
is:
\begin{equation}
\begin{aligned}
\label{eq:prob}
&p_{-T^*}^{walk}=P_{1}^{BS},
\\
&p_{-T^*+2k}^{walk}=P_{2k}^{BS}+P_{2k+1}^{BS},\ k=1,\ldots,T^*-1,
\\
&p_{T^*}^{walk}=P_{2T^*}^{BS}.
\end{aligned}
\end{equation}
Then it is clear that in an array with an even
(odd) number of steps the output ports $J$ are grouped into
the even (odd) final positions $j$ of the walker.
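A minimal sketch of this bookkeeping, with $0$-based array indices (helper name is ours):
\begin{verbatim}
def walker_distribution(P_BS):
    # map the 2 T* beam-splitter output probabilities to walker positions
    T = len(P_BS) // 2
    p_walk = {-T: P_BS[0], T: P_BS[-1]}
    for k in range(1, T):
        p_walk[-T + 2 * k] = P_BS[2 * k - 1] + P_BS[2 * k]
    return p_walk
\end{verbatim}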
These relations between the output probabilities of the BS array and the sites of the QW
cause the diagonal elements of the two-particle fermionic distribution to be non-vanishing, as clearly shown in Fig. \ref{fig:modi}(b).
This can be ascribed to the relation between the outputs $J$ of the BS array and the positions $j$ of the walker: the symmetrization postulate of quantum mechanics forces the probability of two photons emerging from the same output of the BS array to vanish, i.e. $P^{BS}_{JJ}=0$; nevertheless, according to the relations (\ref{eq:prob}) generalized to the case of a two-particle walk, a probability $p_{jj}^{walk}\neq0$ can be observed in the walk distribution.
This is not surprising because two degrees of freedom are involved, the lattice site and the internal coin state: the antisymmetry for two fermions on the same lattice site $j$ is provided by the different internal coin states, one particle being in the $\ket{U}$ state and the other one in the $\ket{D}$ state, with a total wavefunction of the form:
\begin{equation}
\ket{\Psi_{jj}}=\frac{1}{\sqrt{2}}\left[\ket{j,U}\ket{j,D}-\ket{j,D}\ket{j,U}\right]
\end{equation}
which is clearly antisymmetric.
\section{Introduction}
In this paper we study the Rees algebra $\,\mathcal{R}(E)\,$ and the fiber cone $\,\mathcal{F}(E)\,$ of a finitely generated $R$-module $E$, where $R$ is either a Noetherian local ring or a standard graded ring. The Rees algebra $\,\mathcal{R}(E)\,$ is defined as the symmetric algebra $\,\mathcal{S}(E)\,$ modulo its $R$-torsion submodule. The fiber cone $\,\mathcal{F}(E)\,$ is then obtained by tensoring the Rees algebra with the residue field $k$ of $R$.
Much of the motivation to study Rees algebras and fiber cones comes from algebraic geometry. Indeed, Rees algebras arise for instance as homogeneous coordinate rings of blow-ups of schemes along one or more subschemes, or as bihomogeneous coordinate rings of graphs of rational maps between varieties in projective spaces. Correspondingly, the fiber cone is the homogeneous coordinate ring of the special fiber of the blow-up at the unique closed point, or of the image of the given rational map. In many situations, these are Rees algebras and fiber cones of modules which are not ideals, for instance when considering a sequence of successive blow-ups of a scheme along two or more disjoint subschemes, or the Gauss map on an algebraic variety.
In addition, Rees algebras and fiber cones of modules arise when studying how algebraic varieties and morphisms change as they vary in families, in connection with the theory of multiplicity and Whitney equisingularity \cite{{Teissier1},{Teissier2}}. Algebraically, this relates to the study of integral dependence of ideals and modules \cite{{NR},{ReesAmao},{Rees},{SUVmult},{KR},{KT},{FM},{UVj},{UVepsilon}}.
In this paper we have two main goals. The first is to understand the Cohen-Macaulay property of the fiber cone $\,\mathcal{F}(E)\,$ of a module $E$. In \cite{SUV2003}, Simis, Ulrich and Vasconcelos introduced the notion of \emph{generic Bourbaki ideals} to reduce the study of the Cohen-Macaulay property of Rees algebras of modules to the case of ideals, exploiting the remarkable fact that torsion-free modules of rank one are isomorphic to ideals of positive grade (see \cref{SecPrelim} for details). However, to the best of our knowledge there is no known general technique to study the Cohen-Macaulayness of fiber cones of modules that are not ideals, and this property is understood only for a few classes of modules (see for instance \cite{{Miranda},{LinP},{CLS}}).
We address this issue using generic Bourbaki ideals, and show that they make it possible to reduce the study of the Cohen-Macaulay property of fiber cones of modules to the case of ideals, at least when enough information on the Rees algebra $\mathcal{R}(E)$ of $E$ is available. More precisely, our main result, \cref{3.5FiberCone}, is a refined version of the following theorem.
\begin{thm} \label{introFiber} \hypertarget{introFiber}{}
Let $R$ be a Noetherian local ring, $E$ a finite $R$-module with a rank, and let $I$ be a generic Bourbaki ideal of $E$.
\begin{itemize}
\item[$($a$)$] If $\mathcal{F}(E)$ is Cohen-Macaulay, then $\mathcal{F}(I)$ is Cohen-Macaulay.
\item[$($b$)$] Assume that after a generic extension the Rees algebra $\,\mathcal{R}(E)$ is a deformation of $\,\mathcal{R}(I)$. If $\mathcal{F}(I)$ is Cohen-Macaulay, then $\mathcal{F}(E)$ is Cohen-Macaulay.
\end{itemize}
\end{thm}
The deformation condition in \cref{introFiber}(b) is satisfied in particular whenever the Rees algebra of $E$ is Cohen-Macaulay. In general, the Cohen-Macaulayness of $\mathcal{F}(E)$ and of $\mathcal{R}(E)$ are not related to each other. Indeed, suppose that $R$ is Cohen-Macaulay and that $E$ is isomorphic to an $R$-ideal $I$ of positive grade. Then the Rees algebra $\,\mathcal{R}(E)\,$ is isomorphic to the subalgebra
$$\,\mathcal{R}(I) = R[It]= \oplus_{j \geq 0\,} I^j t^j\,$$
of the polynomial ring $R[t]$, and is known to be Cohen-Macaulay whenever the associated graded ring $\,\mathcal{G}(I) = \displaystyle{\oplus_{j \geq 0\,} I^j /I^{j+1}\,}$ is Cohen-Macaulay and some additional numerical conditions are satisfied \cite{{HuGrI},{IkedaTrung},{JK},{SUVDegPolyRel}}. However, one can construct perfect ideals $I$ of height two over a power series ring over a field so that $\,\mathcal{F}(I)\,$ is Cohen-Macaulay while $\,\mathcal{R}(I)\,$ is not (see the introduction of \cite{CGPU}). Moreover, for some ideals $I$ defining monomial space curves one has that $\,\mathcal{R}(I)\,$ is Cohen-Macaulay while $\,\mathcal{F}(I)\,$ is not \cite{DAnna}.
Nevertheless, in some circumstances the Cohen-Macaulay property of the Rees algebra $\,\mathcal{R}(I)\,$ of an ideal $I$ implies that of the fiber cone $\,\mathcal{F}(I)$ (see for instance \cite{{CGPU},{Jonathan}}), so it makes sense to investigate similar connections in the case of Rees algebras and fiber cones of modules as well. Our \cref{GenCGPU3.4} and \cref{GenCGPU3.1} provide classes of modules whose Rees algebra and fiber cone are both Cohen-Macaulay, generalizing previous work of Corso, Ghezzi, Polini and Ulrich on the fiber cone of an ideal (see \cite[3.1 and 3.4]{CGPU}).
A class of modules with Cohen-Macaulay fiber cone but non-Cohen-Macaulay Rees algebra is instead given in \cref{GenBM}. In this case, the deformation condition in \cref{introFiber}(b) is guaranteed by \cref{NewSUV3.7}, which is a crucial technical result in this work, as it in fact extends the applicability of generic Bourbaki ideals to the study of Rees algebras which are not necessarily Cohen-Macaulay, nor even $S_2$.
The second main goal of this work is to study the defining ideal of Rees algebras of modules. Recall that for an $R$-module $\,E=Ra_1 + \ldots + Ra_n$ the \emph{defining ideal} of $\, \mathcal{R}(E)\,$ is the kernel of the natural homogeneous epimorphism
\begin{eqnarray*}
\phi \colon R[T_1, \ldots, T_n] & \longrightarrow & \mathcal{R}(E) \\
T_i & \mapsto & a_i \in [\mathcal{R}(E)]_1
\end{eqnarray*}
Determining the defining ideal is usually a difficult task, but it becomes treatable for Rees algebras of ideals or modules whose free resolutions have a rich structure. Using generic Bourbaki ideals, in \cite[4.11]{SUV2003} Simis, Ulrich and Vasconcelos determined the defining ideal of $\,\mathcal{R}(E)\,$ in the case when $E$ is a module of projective dimension one with linear presentation matrix over a polynomial ring $k[X_1, \ldots, X_d]$, where $k$ is a field. Their proof ultimately relies on the fact that a generic Bourbaki ideal $I$ of $E$ has Cohen-Macaulay Rees algebra $\,\mathcal{R}(I),\,$ which allows to deduce the shape of the defining ideal of $\,\mathcal{R}(E)\,$ from that of $\,\mathcal{R}(I)$. In fact, their proof only requires that after a generic extension $\,\mathcal{R}(E)\,$ is a deformation of $\,\mathcal{R}(I)$.
With a similar approach, the deformation condition of \cref{NewSUV3.7} allows us to describe the defining ideal of the Rees algebra of an \emph{almost linearly presented} module $E$ of projective dimension one over $k[X_1, \ldots, X_d]$ (see \cref{GenBM}). This condition means that all entries in a presentation matrix of $E$ are linear, except possibly those in one column, which are assumed to be homogeneous of degree $m \geq 1$.
Our result generalizes work of Boswell and Mukundan \cite[5.3]{BM} on the Rees algebra of almost linearly presented perfect ideals of height two. While this manuscript was being written, in his Ph.D. thesis \cite{Weaver} Matthew Weaver extended Boswell and Mukundan's techniques to linearly presented perfect ideals of height two over a hypersurface ring $\,\displaystyle{R = k[X_1, \ldots, X_d] /(f)}\,$, and used our methods to determine the defining ideal of the Rees algebra of linearly presented modules of projective dimension one. His work suggests potential applications to the case of Rees algebras and fiber cones of modules of projective dimension one over complete intersection rings, which include the module of K\"ahler differentials of such a ring $R$. This is particularly interesting from a geometrical perspective, since its fiber cone is the homogeneous coordinate ring of the tangential variety to the algebraic variety defined by the ring $R$.
We now briefly describe how this paper is structured.
In \cref{SecPrelim} we give the necessary background on Rees algebras and fiber cones of modules and set up the notation that will be used throughout the paper. In particular, we briefly review the construction and main properties of generic Bourbaki ideals from \cite{SUV2003}, as well as Boswell and Mukundan's construction of \emph{iterated Jacobian duals} \cite{BM}, which we will need later in \cref{SecDefEqs}.
\cref{SecDeform} contains our main technical result, namely the deformation condition of \cref{NewSUV3.7}, which is going to be crucial throughout the paper and in particular in the proofs of \cref{3.5FiberCone}, \cref{FiberType} and \cref{GenBM}.
In \cref{SecFiberCone} we study the Cohen-Macaulay property of fiber cones of modules via generic Bourbaki ideal. Our main results are \cref{3.5FiberCone}, which reduces the problem to the case of fiber cones of ideals, as well as \cref{GenCGPU3.4} and \cref{GenCGPU3.1}, which produce modules with Cohen-Macaulay fiber cones.
\cref{SecDefEqs} is dedicated to the study of the defining ideal of Rees algebras of modules. Besides the aforementioned \cref{GenBM} on almost linearly presented modules of projective dimension one, another key result in this section is \cref{FiberType}, which characterizes the fiber type property of a module over a standard graded $k$-algebra, where $k$ is a field.
\section{Preliminaries} \label{SecPrelim} \hypertarget{SecPrelim}{}
In this section we recall the definitions and main properties of Rees algebras and fiber cones of modules, and review the construction of generic Bourbaki ideals.
\subsection{Rees algebras and fiber cones of modules}
Unless otherwise specified, throughout this work, $R$ will be a Noetherian local ring and all modules will be assumed to have a rank. Recall that a finite $R$-module $E$ has a \emph{rank}, $\mathrm{rank}_{\,}E =e$, if $E \otimes_R \mathrm{Quot}(R) \cong (\mathrm{Quot}(R))^e$, or, equivalently, if $E_{\mathfrak{p}} \cong R_{\mathfrak{p}} ^e$ for all $\mathfrak{p} \in \mathrm{Ass}(R)$.
This is not a restrictive assumption. In fact, if $R$ is a domain all finite $R$-modules have a rank. Moreover, every module with a finite free resolution over any Noetherian ring has a rank. Most importantly, torsion-free modules of rank one are isomorphic to ideals of positive grade, which is crucial for our purposes.
In this setting, let $R^s \stackrel{\varphi}\longrightarrow R^n \twoheadrightarrow E$ be any presentation of $\,E =Ra_1+ \ldots +Ra_n$. Then, the natural homogeneous epimorphism
\begin{eqnarray*}
\phi \colon R[T_1, \ldots, T_n] & \longrightarrow & \mathcal{S}(E) \\
T_i & \mapsto & a_i \in E = [\mathcal{S}(E)]_1
\end{eqnarray*}
onto the \emph{symmetric algebra} of $E$ induces an isomorphism
$$\mathcal{S}(E) \cong R[T_1, \dots ,T_n]/ \mathcal{L},$$
where the ideal $\mathcal{L}$ is generated by linear forms $\,\ell_1, \dots, \ell_s\,$ in $\,R[T_1, \dots ,T_n]\,$ so that
$$ [T_1, \dots ,T_n] \cdot \varphi= [\ell_1, \dots, \ell_s].$$
This definition is independent of the choice of the presentation matrix $\varphi$ (see for instance \cite[Section 1.6]{BH}).
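For instance, let $\,R = k[Y_1, Y_2]\,$ and $\,E = (Y_1, Y_2)$, a standard example included here for illustration. A presentation is given by
$$ R \stackrel{\varphi}\longrightarrow R^2 \twoheadrightarrow E, \qquad \varphi = \left[ \begin{array}{c} Y_2 \\ -Y_1 \end{array} \right], $$
so that $\,[T_1, T_2] \cdot \varphi = [\,Y_2 T_1 - Y_1 T_2\,]\,$ and $\,\mathcal{S}(E) \cong R[T_1, T_2]/(Y_2 T_1 - Y_1 T_2)$. Since $Y_1, Y_2$ form a regular sequence, $E$ is of linear type, so this also computes $\,\mathcal{R}(E)$.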
The \emph{Rees algebra} $\mathcal{R}(E)$ of $E$ is the quotient of $\mathcal{S}(E)$ modulo its $R$-torsion submodule. In particular,
$$ \mathcal{R}(E) \cong R[T_1, \dots ,T_n]/ \mathcal{J} $$
for some ideal $\mathcal{J}$, called the \emph{defining ideal} of $\mathcal{R}(E)$. Notice that by construction $\,\mathcal{J} \supseteq \mathcal{L}\,$ and the module $E$ is said to be of \emph{linear type} if equality holds, since in this case $\mathcal{J}$ is generated by linear equations.
Let $k$ be the residue field of $R$. The \emph{fiber cone} (or \emph{special fiber ring}) of $E$ is defined as
$$ \mathcal{F}(E) \coloneq \mathcal{R}(E) \otimes_R k $$
(see \cite[2.3]{EHU}). It can be described as
$$ \mathcal{F}(E) \cong k[T_1, \dots ,T_n]/ \mathcal{I}$$
for some ideal $\,\mathcal{I}$ in $\,k[T_1, \dots ,T_n]$. The Krull dimension $\,\ell(E) \coloneq \mathrm{dim}_{\,}\mathcal{F}(E)\,$ is called the \emph{analytic spread} of $E$ (see \cite[2.3]{EHU}) and satisfies the inequality
$$ \,e \leq \ell(E) \leq \mathrm{dim}\,R +e-1 $$
whenever $\,\mathrm{dim}\,R>0\,$ and $\,\mathrm{rank}\,E=e$ (see \cite[2.3]{SUV2003}).
Similarly as for powers of an ideal $I$, one defines the power $E^j$ of a module $E$ as the $j$-th graded component of the Rees algebra $\mathcal{R}(E)$. A \emph{reduction} of $E$ is a submodule $\,U \subseteq E\,$ so that $\,E^{r+1} = U E^r\,$ for some integer $\,r \geq 0$. The least such $r$ is denoted by $r_U(E)$. A reduction $U$ of $E$ is a \emph{minimal reduction} if it is minimal with respect to inclusion and the \emph{reduction number} of $E$ is
$$ \, r(E) \coloneq \mathrm{min} \, \{r _U(E) \, | \, U \mathrm{\,is \; a \; minimal \; reduction \; of\, } E\}$$
(see \cite[2.3]{EHU}). Moreover, if $k$ is infinite then any minimal reduction of $E$ is generated by $\ell(E)$ elements, and any general $\ell(E)$ elements in $E$ generate a minimal reduction $U$ of $E$ with $\,r_U(E)=r(E)$.
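As a standard illustration of these notions, let $\,R = k[[x,y]]\,$ with $k$ infinite, and let $\,E = I = (x,y)^2$. Then $\,U = (x^2, y^2)\,$ is a minimal reduction of $I$: indeed $\,I^2 = UI$, since every monomial of degree four in $x, y$ lies in $\,(x^2, y^2)(x,y)^2$, so that $\,r_U(I) = 1$. Moreover, $\,\ell(I) = \mathrm{dim}_{\,} k[x^2, xy, y^2] = 2,\,$ and $U$ is generated by exactly $\ell(I)$ elements.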
\subsection{Generic Bourbaki ideals}
Generic Bourbaki ideals were introduced by Simis, Ulrich and Vasconcelos in \cite{SUV2003} as a tool to study the Cohen-Macaulay property of Rees algebras of modules. Most of our technical work in this paper will consist in providing modifications of the known theory of generic Bourbaki ideals in order to extend their applicability to new situations. For this reason, we recall their construction and main properties below, referring the reader to \cite{SUV2003} for the proofs.
\begin{notat} \label{trueNotationBourbaki} \hypertarget{trueNotationBourbaki}{}
\em{(\cite[3.3]{SUV2003}). Let $(R,\mathfrak{m})$ be a Noetherian local ring, $E$ a finite $R$-module with $\mathrm{rank}_{\,}E=e>0$. Let $U=Ra_1 + \dots + Ra_n$ be a submodule of $E$ for some $a_i \in E$, and consider a set of indeterminates
\begin{displaymath}
Z= \{ Z_{ij} \, | \, 1 \leq i \leq n, 1\leq j \leq e-1 \}.
\end{displaymath}
Denote $R'\coloneq R[Z]\,$ and $\,E' \coloneq E \otimes_R R'.\,$ For $\,1 \leq j \leq e-1$, let $ \, x_j= \sum_{i=1}^n Z_{ij} a_i \in E'\,$ and $\, F'=\sum_{j=1}^{e-1} R' x_j. \,$
Also, denote $\,\displaystyle R'' = R(Z)= R[Z]_{\mathfrak{m}\,R[Z]},$ $\,\displaystyle E''=E \otimes_R R''\,$ and $\, \displaystyle F''= F' \otimes_{R'} R''$.}
\end{notat}
In the setting of \cref{trueNotationBourbaki}, the existence of generic Bourbaki ideals is guaranteed by the following result, and exploits the fundamental fact that torsion-free modules of rank one are isomorphic to ideals of positive grade.
\begin{tdefn} \label[theorem]{truetdefBourbaki} \hypertarget{truetdefBourbaki}{}
$($\cite[3.2 and 3.3]{SUV2003}$)$. Let $R$ be a Noetherian local ring, and $E$ a finite $R$-module with $\mathrm{rank}_{\,}E=e>0$, $U \subseteq E$ a submodule. Also, assume that:
\begin{itemize}
\item[$($i$)$] $E$ is torsion-free.
\item[$($ii$)$] $\,E_{\mathfrak{p}}$ is free for all $\,\mathfrak{p} \in \mathrm{Spec}(R)$ with $\,\mathrm{depth}_{\,}R_\mathfrak{p} \leq 1$.
\item[$($iii$)$] $\,\mathrm{grade}(E/U) \geq 2$.
\end{itemize}
Then, for $R'$, $E'$ and $F'$ as in \cref{trueNotationBourbaki}, $F'$ is a free $R'$-module of rank $e-1$ and $\,E'/F'$ is isomorphic to an $R'$-ideal $J$ with $\mathrm{grade}_{\,}J >0$. Also, $E''/F''$ is isomorphic to an $R''$-ideal $I$, called a \emph{generic Bourbaki ideal} of $E$ with respect to $U$. If $U=E$, $I$ is simply called a \emph{generic Bourbaki ideal} of $E$.
\end{tdefn}
Generic Bourbaki ideals of $E$ with respect to a submodule $U$ are essentially unique. Indeed, if $K$ is another ideal constructed as in \cref{truetdefBourbaki} using variables $Y$, then the ideals generated by $I$ and $K$ in $T=R(Z,Y)$ coincide up to multiplication by a unit in $\mathrm{Quot}(T)$, and are equal whenever $I$ and $K$ have grade at least 2 (see \cite[3.4]{SUV2003}).
Notice that assumption (iii) in \cref{truetdefBourbaki} is automatically satisfied if $U$ is a minimal reduction of $E$. Moreover, if in this case $I \cong E''/F''$ is a generic Bourbaki ideal with respect to $U$, then the ideal $\,K \cong U''/F''$ is a minimal reduction of $I$.
Sometimes it is possible to relate the reduction number of $E$ and the reduction number of $I$, as described in part (d) of the following theorem, which summarizes the main properties of generic Bourbaki ideals.
\begin{thm} \label{trueMainBourbaki} \hypertarget{trueMainBourbaki}{}
$($\cite[3.5]{SUV2003}$)$.
In the setting of \cref{trueNotationBourbaki}, let $U$ be a reduction of $E$. Let $I$ be a generic Bourbaki ideal of $E$ with respect to $U$, and let $K \cong U''/F''$. Then the following statements hold.
\begin{itemize}
\item[$($a$)$] $\mathcal{R}(E)$ is Cohen-Macaulay if and only if $\,\mathcal{R}(I)\,$ is Cohen-Macaulay.
\item[$($b$)$] $E$ is of linear type and $\,\mathrm{grade} \, \mathcal{R}(E)_+ \geq e\,$ if and only if $I$ is of linear type, if and only if $J$ is of linear type.
\item[$($c$)$] If either condition (a) or (b) holds, then $\mathcal{R}(E'')/(F) \cong \mathcal{R}(I)$ and the generators $\,x_1, \ldots, x_{e-1}\,$ of $F$ form a regular sequence on $\mathcal{R}(E'')$.
\item[$($d$)$] If $\,\mathcal{R}(E'')/(F) \cong \mathcal{R}(I), \,$ then $K$ is a reduction of $I$ with $r_K(I)=r_U(E)$. In this case, if in addition the residue field of $R$ is infinite and $U=E$, then $r(E)=r(I)$.
\end{itemize}
\end{thm}
Condition (c) above says that $\mathcal{R}(E'')$ is a \emph{deformation} of $\mathcal{R}(I)$. This is in fact the key property that allows to transfer properties from $\mathcal{R}(E)$ to $\mathcal{R}(I)$ and backwards. The following result characterizes the deformation property along a Bourbaki exact sequence.
\begin{thm} \label{SUV3.11} \hypertarget{SUV3.11}{}
$($\cite[3.11]{SUV2003}$)$.
Let $R$ be a Noetherian ring, $E$ a finite $R$-module with $\mathrm{rank}_{\,}E=e>0$. Let $\,0 \to F \to E \to I \to 0\,$ be an exact sequence where $F$ is a free $R$-module with free basis $x_1, \ldots, x_{e-1}$ and $I$ is an $R$-ideal. The following are equivalent.
\begin{itemize}
\item[$($a$)$] $\mathcal{R}(E)/(F)$ is $R$-torsion free.
\item[$($b$)$] $\mathcal{R}(E)/(F) \cong \mathcal{R}(I)$.
\item[$($c$)$] $\mathcal{R}(E)/(F) \cong \mathcal{R}(I)$ and $x_1, \ldots, x_{e-1}$ form a regular sequence on $\mathcal{R}(E)$.
\end{itemize}
Moreover, if $I$ is of linear type, then so is $E$ and the equivalent conditions above hold.
\end{thm}
For our purposes, it will often be convenient to think of the rings $R'$ and $R''$ as the result of an iterative process, where at each step only $n$ variables are adjoined. This is formalized in the following notation.
\begin{notat} \label{IterativeNotation} \hypertarget{IterativeNotation}{}
\em{Let $R$ be a Noetherian ring, $E$ a finite $R$-module with positive rank, $U=Ra_1 + \dots + Ra_n$ a submodule of $E$ for some $a_i \in E$. Let $Z_1, \ldots, Z_n$ be indeterminates, $\,\widetilde{R} \coloneq R[Z_1, \ldots, Z_n]$, $\,\widetilde{E} \coloneq E \otimes_R \widetilde{R}$, $\, \widetilde{U} \coloneq U \otimes_R \widetilde{R}$, and $\,x \coloneq \sum_{i=1}^n Z_{i} a_i \in \widetilde{U}$. If $R$ is local with maximal ideal $\mathfrak{m}$, let $\,\displaystyle{S \coloneq R(Z_1, \ldots, Z_n)= \widetilde{R}_{\mathfrak{m}\widetilde{R}}}$.}
\end{notat}
In fact, the rings $R'$ and $R''$ as in \cref{trueNotationBourbaki} are respectively obtained from $R$ by iterating the construction of the rings $\widetilde{R}$ and $S$ as in \cref{IterativeNotation} $e-1$ times. Moreover, in \cref{trueMainBourbaki} the Cohen-Macaulay property is transferred from $\mathcal{R}(E)$ to $\mathcal{R}(I)$ and backwards using the following two results iteratively.
\begin{thm} \label{forward} \hypertarget{forward}{}
$($\cite[3.6 and 3.8]{SUV2003}$)$
In the setting of \cref{IterativeNotation}, assume that $\mathrm{rank}_{\,} E =e \geq 2$ and that $E/U$ is a torsion $R$-module. Let $\,\overline{E} \coloneq \widetilde{E} / \widetilde{R}x$ and $\,\overline{\mathcal{R}} \coloneq \mathcal{R}(\widetilde{E}\,)/(x)$. Then,
\begin{itemize}
\item[$($a$)$] $x$ is regular on $\mathcal{R}(\widetilde{E}\,)$.
\item[$($b$)$] The kernel of the natural epimorphism $\, \displaystyle \pi \colon \, \overline{\mathcal{R}} \twoheadrightarrow \mathcal{R}(\overline{E})\,$ is $K=H^0_{U\overline{\mathcal{R}}}(\overline{\mathcal{R}})$ and coincides with the $\widetilde{R}$-torsion submodule of $\overline{\mathcal{R}}$.
\item[$($c$)$] If $U$ is a reduction of $E$ and $\mathrm{grade}_{\,}\mathcal{R}(E)_{+} \geq 2$, then $\pi$ is an isomorphism.
\end{itemize}
\end{thm}
\begin{thm} \label{backward} \hypertarget{backward}{}
$($\cite[3.7]{SUV2003}$)$
In the setting of \cref{IterativeNotation}, assume that $R$ is local, that $\mathrm{rank}_{\,} E =e \geq 2$ and that $U$ is a reduction of $E$. Let $\,\overline{E}\,$ denote $\,(E \otimes_R S)/xS\,$ and $\,\overline{\mathcal{R}} \coloneq \mathcal{R}(E \otimes_R S)/(x)$. If $\,\mathcal{R}(\overline{E})\,$ satisfies $S_2$, then the natural epimorphism $\, \displaystyle \pi \colon \,\overline{\mathcal{R}} \twoheadrightarrow \mathcal{R}(\overline{E})\,$ is an isomorphism, and $x$ is regular on $\mathcal{R}(E \otimes_R S)$. In particular, $\,\mathcal{R}(E)\,$ satisfies $S_2$.
\end{thm}
Notice that formation of Rees algebras of finite modules commutes with flat extensions (see \cite[1.3]{EHU}). Hence, one has that $\,\mathcal{R}(\widetilde{E}) \cong \mathcal{R}(E)\otimes_R \widetilde{R},\,$ as well as $\,\mathcal{R}(E \otimes_R S) \cong \mathcal{R}(E) \otimes_R S$. Therefore, tensoring with the residue field $k$ yields isomorphisms $\,\mathcal{R}(\widetilde{E}) \otimes_R k \cong \mathcal{F}(E)\otimes_R \widetilde{R},\,$ and $\,\mathcal{F}(E \otimes_R S) \cong \mathcal{F}(E) \otimes_R S$.
\subsection{Iterated Jacobian duals} \label{SecJac} \hypertarget{SecJac}{}
When studying the defining ideal of Rees algebras, the most challenging aspect usually consists in identifying its non-linear part. In many cases of interest \cite{{Vasconcelos},{SUVjacduals},{MU},{UVeqLinPres},{Morey},{Johnson},{PU99},{BM},{KPU}}, this can be done by examining some auxiliary matrices associated with $\varphi$, namely the \emph{Jacobian dual} or the \emph{iterated Jacobian duals} of $\varphi$, introduced by Vasconcelos \cite{Vasconcelos} and by Boswell and Mukundan \cite{BM} respectively. We briefly recall these notions here, as we will use them intensively in \cref{SecDefEqs}.
Although both definitions make sense over any Noetherian ring, for our purposes we assume that $\,R=k[Y_1, \ldots, Y_d]\,$ is a standard graded polynomial ring over a field $k$. Let $S=R[T_1, \ldots, T_n]$ be bigraded, and set $\, \underline{Y}= Y_1, \ldots, Y_d$, $\,\underline{T}=T_1, \ldots, T_n$. Let
$$\,R^{s} \stackrel{\varphi}{\longrightarrow} R^n $$
be an $\,n \times s\,$ matrix whose entries are homogeneous of constant $\underline{Y}$-degrees $\,\delta_1, \ldots, \delta_s$ along each column and assume that $I_1(\varphi) \subseteq (\underline{Y})$.
\begin{tdefn} \label[definition]{JacDual} \hypertarget{JacDual}{}
\em{\cite{Vasconcelos}$\,$ With $R$, $S$ and $\varphi$ as above, let $M=\mathrm{coker}(\varphi)$ and let $\,\ell_1, \ldots, \ell_s\,$ be linear forms in the $T_i$ variables, generating the defining ideal of the symmetric algebra $\,\mathcal{S}(M)$. Then,
\begin{itemize}
\item[(a)] There exists a $d \times s$ matrix $B(\varphi)$ whose entries are linear in the $T_i$ variables and homogeneous of constant $\underline{Y}$-degrees $\,\delta_1 -1, \ldots, \delta_s -1\,$ along each column, satisfying
$$ [\ell_1, \ldots, \ell_s]=[\underline{T}] \cdot \varphi = [\underline{Y}] \cdot B(\varphi).$$
$B(\varphi)$ is called a \emph{Jacobian dual} of $\varphi$.
\item[(b)] $B(\varphi)$ is not necessarily unique, but it is if the entries of $\varphi$ are all linear. Moreover, by Cramer's rule it follows that $\,\mathcal{L} + I_d(B(\varphi)) \subseteq \mathcal{J}$.
\end{itemize}}
\end{tdefn}
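As an illustration, consider the classical example $\,R = k[Y_1, Y_2]\,$ and $\,M = (Y_1, Y_2)^2 = (Y_1^2,\, Y_1 Y_2,\, Y_2^2)$, with linear presentation matrix
$$ \varphi = \left[ \begin{array}{cc} Y_2 & 0 \\ -Y_1 & Y_2 \\ 0 & -Y_1 \end{array} \right]. $$
Then $\,[T_1, T_2, T_3] \cdot \varphi = [Y_1, Y_2] \cdot B(\varphi)\,$ with
$$ B(\varphi) = \left[ \begin{array}{cc} -T_2 & -T_3 \\ T_1 & T_2 \end{array} \right], $$
and $\,I_2(B(\varphi)) = (T_1 T_3 - T_2^2)$. In this example $\,\mathcal{L} + I_2(B(\varphi))\,$ is in fact the whole defining ideal $\mathcal{J}$ of $\,\mathcal{R}(M)$.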
For a matrix $A$, let $\,(\underline{Y} \cdot A)$ denote the ideal generated by the entries of the row vector $[\underline{Y}] \cdot A$.
\begin{tdefn} \label{IterJacDuals} \hypertarget{IterJacDuals}{} \hypertarget{IterJacDuals}{}
\em{(\cite[4.1, 4.2 and 4.5]{BM})$\,$
With $R$, $S$ and $\varphi$ as above, let $B_1(\varphi)=B(\varphi)$ for some Jacobian dual $B(\varphi)$ of $\varphi$. Assume that matrices $B_j(\varphi)$ with $d$ rows have been inductively constructed for $1 \leq j \leq i$, such that each $B_j(\varphi)$ has homogeneous entries of constant $\underline{Y}$-degrees and $\underline{T}$-degrees along each column. There exists a matrix $C_i$ whose entries in $S$ are homogeneous of constant $\underline{Y}$-degrees and $\underline{T}$-degrees in each column, such that $\,B_{i+1}(\varphi) \coloneq [B_i(\varphi) \,|\, C_i]\,$ satisfies
\begin{displaymath} \label{eqIterJac} \hypertarget{eqIterJac}{}
(\underline{Y} \cdot B_i(\varphi)) + (I_d(B_i(\varphi)) \cap (\underline{Y})) = (\underline{Y} \cdot B_i(\varphi)) + (\underline{Y} \cdot C_i).
\end{displaymath}
A matrix $\,B_i(\varphi)\,$ as above is called an \emph{$i$-th iterated Jacobian dual} of $\varphi$. Moreover, for all $i \geq 1$:
\begin{itemize}
\item[$($a$)$] The ideal $(\underline{Y} \cdot B(\varphi))+ I_d(B_i(\varphi))$ only depends on $\varphi$.
\item[$($b$)$] $\, (\underline{Y} \cdot B_i(\varphi))+ I_d(B_i(\varphi))= (\underline{Y} \cdot B(\varphi))+ I_d(B_i(\varphi)) \subseteq (\underline{Y} \cdot B(\varphi)) + I_d(B_{i+1}(\varphi))$. In particular, there exists an $N>0$ so that $\,(\underline{Y} \cdot B(\varphi))+ I_d(B_i(\varphi)) = (\underline{Y} \cdot B(\varphi)) + I_d(B_{i+1}(\varphi))\,$ for all $i \geq N$.
\item[$(c)$] $(\underline{Y} \cdot B(\varphi)) + I_d(B_i(\varphi)) \subseteq ((\underline{Y} \cdot B(\varphi)) \, \colon (\underline{Y})^i)$.
\end{itemize}}
\end{tdefn}
\section{A deformation condition for the Rees algebra of a module} \label{SecDeform} \hypertarget{SecDeform}{}
As described in \cref{SecPrelim}, transferring properties from a module $E$ to a generic Bourbaki ideal $I$ of $E$ and backwards depends on whether the Rees algebra $\mathcal{R}(E'')$ is a deformation of $\mathcal{R}(I)$. \cref{trueMainBourbaki} and \cref{SUV3.11} show that this always occurs when $I$ is of linear type, or if $\mathcal{R}(E)$ or $\mathcal{R}(I)$ are known to be Cohen-Macaulay. On the other hand, it is interesting to find alternative conditions on $E$ or $I$ that guarantee this deformation property.
Inspired by \cite[3.7]{SUV2003} (which was stated here as \cref{backward}), the following result provides a new deformation condition, which applies to ideals and modules not necessarily of linear type and whose Rees algebras are not necessarily Cohen-Macaulay (see for instance \cref{3.5FiberCone} and \cref{GenBM}).
\begin{thm}\label{NewSUV3.7}
Let $R$ be a Noetherian local ring, $E$ a finite $R$-module with $\mathrm{rank}_{\,} E =e \geq 2$ and let $\,U=Ra_1 + \dots + Ra_n\,$ be a reduction of $E$. Let $S$ and $x$ be as in \cref{IterativeNotation} and denote $\,\overline{E} \coloneq (E \otimes_R S) / Sx$.
Assume that $\,\mathrm{depth}_{\,} \mathcal{R}(\overline{E}_{\mathfrak{q}}) \geq 2\,$ for all $\,\mathfrak{q} \in \mathrm{Spec}(S)\,$ such that $\,\overline{E}_{\mathfrak{q}}\,$ is not of linear type. Then, the natural epimorphism
$$\, \pi \colon \, \mathcal{R}(E \otimes_R S)/(x) \twoheadrightarrow \mathcal{R}(\overline{E})\,$$
is an isomorphism, and $x$ is regular on $\mathcal{R}(E \otimes_R S)$.
\end{thm}
\emph{Proof}. We modify the proof of \cite[3.7]{SUV2003}. Since $x$ is regular on $\mathcal{R}(E \otimes_R S)$ by \cref{forward}(a), we only need to show that $K= \mathrm{ker}(\pi)$ is zero. In fact, we only need to prove this locally at primes $\mathfrak{q} \in \mathrm{Spec}(S)$ such that $\overline{E}_{\mathfrak{q}}$ is not of linear type. Indeed, if $\overline{E}_{\mathfrak{q}}$ is of linear type, then $\,\mathcal{R}(\overline{E}_{\mathfrak{q}}) \cong \mathcal{S}(\overline{E}_{\mathfrak{q}})\,$ is isomorphic to $\,\mathcal{S}((E \otimes_R S)_{\mathfrak{q}}) / (x)\,$ by construction, whence $K_{\mathfrak{q}}=0$.
Let $\overline{\mathcal{R}}$ denote $\mathcal{R}(E \otimes_R S)/(x)$ and let $\,M=(\mathfrak{m}, \mathcal{R}(E \otimes_R S)_{+})\,$ be the unique homogeneous maximal ideal of $\mathcal{R}(E \otimes_R S)$. Notice that $K \subseteq H^0_M (\overline{\mathcal{R}})$. In fact, after localizing $S$ if needed, we may assume that $K$ vanishes locally on the punctured spectrum of $S$. Hence, $K$ is annihilated by a power of $\mathfrak{m}$. Also, by \cref{forward} it follows that $K$ is annihilated by a power of $\,U \overline{\mathcal{R}}, \,$ and hence by a power of $\,E \overline{\mathcal{R}} = (\overline{\mathcal{R}})_{+}, \,$ since $E$ is integral over $U$.
Thus, for all $\mathfrak{q} \in \mathrm{Spec}(S)\,$ $\,K_{\mathfrak{q}} \subseteq H^0_{M_{\mathfrak{q}}} (\overline{\mathcal{R}}_{\mathfrak{q}})$ and it suffices to show that $H^0_{M_{\mathfrak{q}}}(\overline{\mathcal{R}}_{\mathfrak{q}})=0\,$ whenever $\,\overline{E}_{\mathfrak{q}}$ is not of linear type. Consider the long exact sequence of local cohomology induced by the exact sequence
$$ \,0 \to K_\mathfrak{q} \to \overline{\mathcal{R}}_{\mathfrak{q}} \to \mathcal{R}(\overline{E}_{\mathfrak{q}}) \to 0\, . $$
Since by assumption $\,\mathrm{depth}_{\,} \mathcal{R}(\overline{E}_{\mathfrak{q}}) \geq 2$, it follows that $\, H^i_{M_{\mathfrak{q}}}(\overline{\mathcal{R}}_{\mathfrak{q}}) \cong H^i_{M_{\mathfrak{q}}}(K_{\mathfrak{q}}) \,$ for $i=0,1$. In particular, since $\,K_\mathfrak{q} \subseteq H^0_{M_{\mathfrak{q}}}(\overline{\mathcal{R}}_{\mathfrak{q}}), \,$ it follows that $\,0=H^1_{M_{\mathfrak{q}}}(K_{\mathfrak{q}}) \cong H^1_{M_{\mathfrak{q}}}(\overline{\mathcal{R}}_{\mathfrak{q}})$. Therefore, the exact sequence
$$ 0 \to \mathcal{R}((E \otimes_R S)_\mathfrak{q})(-1) \stackrel{x}{\longrightarrow} \mathcal{R}((E \otimes_R S)_\mathfrak{q}) \longrightarrow \overline{\mathcal{R}}_\mathfrak{q} \to 0 $$
induces the exact sequence
$$ 0 \to H^0_{M_{\mathfrak{q}}}(\overline{\mathcal{R}}_\mathfrak{q}) \longrightarrow H^1_{M_{\mathfrak{q}}}(\mathcal{R}((E \otimes_R S)_\mathfrak{q}))(-1) \stackrel{x}{\longrightarrow} H^1_{M_{\mathfrak{q}}}(\mathcal{R}((E \otimes_R S)_{\mathfrak{q}})) \to 0\, .$$
Now, similarly as in \cite[3.7]{SUV2003}, one can show that $\,H^1_{M_{\mathfrak{q}}}(\mathcal{R}((E \otimes_R S)_\mathfrak{q}))$ is finitely generated, as a consequence of the graded version of the Local Duality Theorem. Therefore, by the graded version of Nakayama's Lemma, it follows that $\,H^1_{M_{\mathfrak{q}}}(\mathcal{R}((E \otimes_R S)_\mathfrak{q}))=0,\,$ whence also $\,H^0_{M_{\mathfrak{q}}}(\overline{\mathcal{R}}_\mathfrak{q})=0$. $\,\blacksquare$
\section{Cohen-Macaulay property of fiber cones of modules} \label{SecFiberCone} \hypertarget{SecFiberCone}{}
In this section we examine the Cohen-Macaulay property of fiber cones of modules. We first show that the construction of generic Bourbaki ideals allows one to reduce the problem to the case of ideals, as long as the passage to a generic Bourbaki ideal induces a deformation between the Rees algebras (see \cref{3.5FiberCone}). We then provide sufficient conditions for the fiber cone of a module to be Cohen-Macaulay, generalizing known results of Corso, Ghezzi, Polini and Ulrich for the fiber cone of ideals \cite[3.1 and 3.4]{CGPU}.
Let $I$ be a generic Bourbaki ideal of $E$. The proof of \cite[3.5]{SUV2003} (which was stated here as \cref{trueMainBourbaki}) suggests that, in order to transfer the Cohen-Macaulay property from $\, \mathcal{F}(E)\,$ to $\,\mathcal{F}(I)\,$ and backwards, the natural map
$$ \pi \colon \mathcal{F}(E'') \cong \mathcal{R}(E'')\otimes_R k \twoheadrightarrow \mathcal{R}(I)\otimes_R k \cong \mathcal{F}(I) $$
needs to be an isomorphism. Hence, it suffices to provide conditions on the module $E$ or on the ideal $I$ so that this isomorphism is guaranteed. Our first goal in this direction is to prove that an analogous statement as that of \cref{forward} holds for fiber cones. This is done through the next two propositions.
\begin{prop} \label{3.6FiberCone} \hypertarget{3.6FiberCone}{}
Let $(R, \mathfrak{m}, k)$ be a Noetherian local ring, $E$ a finite $R$-module with $\mathrm{rank}_{\,} E =e \geq 2$, and let $U$ be a submodule of $E$ such that $E/U$ is torsion. In the setting of \cref{IterativeNotation}, let $L$ be the kernel of the natural epimorphism
$$ \pi \colon \, (\mathcal{F}(E) \otimes_R \widetilde{R})/(x) \twoheadrightarrow \mathcal{R}(\widetilde{E} / \widetilde{R}x) \otimes_R k.$$
Then,
\begin{itemize}
\item[$($a$)$] $L\subseteq H^0_U((\mathcal{F}(E) \otimes_R \widetilde{R})/(x))$.
\item[$($b$)$] If in addition $U$ is a reduction of $E$ and $\mathrm{depth}_{\,}\mathcal{F}(E)>0$, then $x$ is regular on $\,\mathcal{F}(E) \otimes_R \widetilde{R}$.
\end{itemize}
\end{prop}
\emph{Proof}. Let $\,\overline{\mathcal{R}}$ denote $\,\mathcal{R}(\widetilde{E}) / (x)$. By \cref{forward}(b), there is an exact sequence
$$0 \to K \stackrel{\iota}{\longrightarrow} \overline{\mathcal{R}} \stackrel{\pi}\longrightarrow \mathcal{R}(\widetilde{E} / \widetilde{R}x) \to 0 $$
where $K= H^0_{U\overline{\mathcal{R}}}(\overline{\mathcal{R}})$. Tensoring with the residue field $k$, it then follows that
$$ L= (\iota \otimes k) (H^0_{U\overline{\mathcal{R}}}(\overline{\mathcal{R}}) \otimes_R k) \subseteq H^0_U((\mathcal{F}(E) \otimes_R \widetilde{R})/(x)) .$$
This proves (a). Part (b) follows from \cite{Hochster}, after noticing that $\mathrm{depth}_{\,}\mathcal{F}(E) = \mathrm{grade}_{\,}U\mathcal{F}(E)$. $\blacksquare$ \\
\begin{prop} \label{3.8FiberCone} \hypertarget{3.8FiberCone}{}
Let $(R, \mathfrak{m}, k)$ be a Noetherian local ring, $E$ a finite $R$-module with $\mathrm{rank}_{\,} E =e \geq 2$, and let $U$ be a reduction of $E$. With the notation of \cref{3.6FiberCone}, assume that $\mathrm{depth}_{\,}\mathcal{F}(E) \geq 2$. Then,
\begin{displaymath}
\pi \colon \, (\mathcal{F}(E) \otimes_R \widetilde{R})/(x) \twoheadrightarrow \mathcal{R}(\widetilde{E} / \widetilde{R}x) \otimes_R k
\end{displaymath}
is an isomorphism, and $x$ is regular on $\,\mathcal{F}(E) \otimes_R\widetilde{R}$.
\end{prop}
\emph{Proof}. Let $\overline{\mathcal{R}}$ denote $\,\mathcal{R}(E\otimes_R \widetilde{R})/(x)$. By \cref{3.6FiberCone} it follows that $\,L= \mathrm{ker}(\pi) = (\iota \otimes k) (H^0_{U\overline{\mathcal{R}}}(\overline{\mathcal{R}}) \otimes_R k) \subseteq H^0_U((\mathcal{F}(E) \otimes_R \widetilde{R})/(x))\,$ and that $x$ is regular on $\,\mathcal{F}(E) \otimes_R\widetilde{R}$.
In particular, there is an exact sequence
$$ 0 \to (\mathcal{F}(E) \otimes_R \widetilde{R})(-1) \stackrel{x}\longrightarrow \mathcal{F}(E) \otimes_R \widetilde{R} \longrightarrow (\mathcal{F}(E) \otimes_R \widetilde{R})/(x) \to 0.$$
Now, notice that $\,H^1_U(\mathcal{F}(E) \otimes_R \widetilde{R})=0\,$ since
$$\mathrm{grade}_{\,}U \mathcal{F}(E) \otimes_R \widetilde{R}=\mathrm{grade}_{\,}E \mathcal{F}(E) \otimes_R \widetilde{R} \geq \mathrm{depth}_{\,}\mathcal{F}(E) \geq 2. $$
Hence, the long exact sequence of local cohomology implies that $$\,H^0_U((\mathcal{F}(E) \otimes_R \widetilde{R})/(x))=0.$$
Thus, $L=0$ and $\pi$ is an isomorphism. $\blacksquare$ \\
By applying \cref{3.8FiberCone} repeatedly, we obtain the following useful corollary.
\begin{cor} \label{Fiberforward} \hypertarget{Fiberforward}{}
Let $R$ be a Noetherian local ring, and let $E$ be a finite $R$-module with $\mathrm{rank}_{\,} E =e$. In the setting of \cref{trueNotationBourbaki}, let $I$ be a generic Bourbaki ideal of $E$ with respect to a reduction $U$ of $E$. Assume that $\,\mathrm{depth}_{\,}\mathcal{F}(E) \geq e$. Then, the natural epimorphism
$$ \pi \colon \,\mathcal{F}(E'')/(F'') \twoheadrightarrow \mathcal{F}(I)\,$$
is an isomorphism and $\,F''\mathcal{F}(E'')\,$ is generated by a regular sequence of linear forms.
\end{cor}
We now proceed to set up the technical framework in order for the Cohen-Macaulay property to be transferred from $\,\mathcal{F}(I)\,$ back to $\,\mathcal{F}(E)$. The key result is \cref{3.7FiberCone}, whose proof relies on the next two lemmas. \cref{filter-regular} states that $x$ is a \emph{filter-regular element} on $\,\mathcal{F}(E) \otimes_R \widetilde{R}\,$ with respect to the ideal $\,E(\mathcal{F}(E) \otimes_R \widetilde{R})$ (see for instance \cite[p. 13]{RV}).
\begin{lemma} \label{filter-regular} \hypertarget{filter-regular}{}
Let $R$ be a Noetherian local ring, $E$ a finite $R$-module with $\mathrm{rank}_{\,} E =e \geq 2$, and let $U$ be a reduction of $E$. Then, in the setting of \cref{IterativeNotation},
$$ \mathrm{Supp}_{\mathcal{F}(E) \otimes_R \widetilde{R}\,}(0 \colon_{\!\mathcal{F}(E) \otimes_R \widetilde{R}\,} x) \subseteq V(E(\mathcal{F}(E) \otimes_R \widetilde{R})).$$
\end{lemma}
\emph{Proof}. Let $\, \mathfrak{q} \in \mathrm{Spec}(\mathcal{F}(E) \otimes_R \widetilde{R}) \setminus V(E(\mathcal{F}(E) \otimes_R \widetilde{R}))\,$ and let $\,\mathfrak{p}=\mathfrak{q} \cap \mathcal{F}(E)$. Since $\,U=R_{\,} a_1 + \ldots +R_{\,} a_n$ is a reduction of $E$, it follows that $\,\mathfrak{q} \nsupseteq U(\mathcal{F}(E) \otimes_R \widetilde{R})$. Hence, $\,\mathfrak{p} \nsupseteq U\mathcal{F}(E),\,$ which means that at least one of the $a_i$ is a unit in $\,\mathcal{F}(E)_{\mathfrak{p}}$. Therefore, $\,x=\sum_{i=1}^n Z_{i} a_i\,$ is a nonzerodivisor in $\,\mathcal{F}(E)_{\mathfrak{p}}[Z_1, \ldots, Z_n]$, hence also in its further localization $\,(\mathcal{F}(E)[Z_1, \ldots, Z_n])_{\mathfrak{q}} = (\mathcal{F}(E) \otimes_R \widetilde{R})_{\mathfrak{q}}$. Thus, $\,(0 \colon_{\!\mathcal{F}(E) \otimes_R \widetilde{R}\,} x)_{\mathfrak{q}} =0$. $\blacksquare$\\
\begin{lemma} \label{homework} \hypertarget{homework}{}
Let $R$ be a positively graded Noetherian ring with $R_0$ local, and let $x$ be a homogeneous non-unit element of $R$. Let $M$ be a finite graded $R$-module, and assume that $\mathrm{dim}_{\,}(0 \colon_{\!\!M \,}x) < \mathrm{depth}_{\,}(M/xM)$. Then, $x$ is a nonzerodivisor on $M$.
\end{lemma}
\emph{Proof}. It suffices to show that $\,H^0_{(x)}(M)=0$,
which would follow from Nakayama's Lemma once we prove that $\,H^0_{(x)}(M) / x H^0_{(x)}(M)=0$. For this, it suffices to show that $\, \mathrm{Ass}(H^0_{(x)}(M) / x H^0_{(x)}(M)) = \emptyset$.
Consider the short exact sequences
\begin{equation} \label{sescolon}
0 \to 0 \colon_{\!\!M\,} x \to M \to M/(0 \colon_{\!\!M\,} x) \to 0
\end{equation}
and
\begin{equation} \label{sesxM}
0 \to M/(0 \colon_{\!\!M\,} x) \stackrel{x}{\longrightarrow} M \to M/xM \to 0.
\end{equation}
Since $\,(0 \colon_{\!\!M\,} x) = H^0_{(x)}(0 \colon_{\!\!M\,} x),\,$ it follows that $\,H^1_{(x)}(0 \colon_{\!\!M\,} x)=0$. Hence, the long exact sequence of local cohomology induced by~(\ref{sescolon}) implies that $\, H^0_{(x)}(M)\,$ surjects onto $\,H^0_{(x)}(M/(0 \colon_{\!\!M\,} x))$. Therefore, the long exact sequence of local cohomology induced by~(\ref{sesxM})
$$ 0 \to H^0_{(x)}(M/(0 \colon_{\!\!M\,} x)) \stackrel{x}{\longrightarrow} H^0_{(x)}(M) \to H^0_{(x)}(M/xM) $$
in turn induces an exact sequence
$$H^0_{(x)}(M) \stackrel{x}{\longrightarrow} H^0_{(x)}(M) \to H^0_{(x)}(M/xM) \subseteq M/xM.$$
In particular, $\, H^0_{(x)}(M) / x H^0_{(x)}(M) \,$ embeds into $\,M/xM$.
Also, notice that $(0 \colon_{\!\!M\,} x)=0$ if and only if $\,H^0_{(x)}(M)=0$, hence if and only if $\,H^0_{(x)}(M) / x H^0_{(x)}(M)=0\,$ by Nakayama's Lemma. Therefore, $\, \mathrm{Supp}(0 \colon_{\!\!M\,} x)$ $= \mathrm{Supp}(H^0_{(x)}(M) / x H^0_{(x)}(M))$. Hence, if there exists some $\, \mathfrak{p} \in \mathrm{Ass}(H^0_{(x)}(M) / x H^0_{(x)}(M))$, then $\,\mathrm{dim}(R/\mathfrak{p}) \leq \mathrm{dim}(0 \colon_{\!\!M\,} x)$. On the other hand, since $\, \mathrm{Ass}(H^0_{(x)}(M) / x H^0_{(x)}(M)) \subseteq \mathrm{Ass}(M/xM), \,$ we also have that $\, \mathrm{dim}(R/\mathfrak{p}) \geq \mathrm{depth}(M/xM)$. But then
$$\,\mathrm{dim}(0 \colon_{\!\!M\,} x) = \mathrm{dim}(H^0_{(x)}(M) / x H^0_{(x)}(M)) \geq \mathrm{depth}(M/xM), \,$$
which contradicts the assumption. So, it must be that $\, \mathrm{Ass}(H^0_{(x)}(M) / x H^0_{(x)}(M)) = \emptyset$, as we wanted to prove. $\blacksquare$ \\
\begin{thm} \label{3.7FiberCone} \hypertarget{3.7FiberCone}{}
In the setting of \cref{IterativeNotation}, assume that $R$ is local and that $\mathrm{rank}_{\,} E =e \geq 2$. Let $\,U$ be a reduction of $E$, and denote $\,\overline{E} \coloneq (E \otimes_R S)/Sx$. Assume that one of the following two conditions holds:
\begin{itemize}
\item[$($i$)$] $\,\mathcal{R}(\overline{E})\,$ satisfies $S_2$, or
\item[$($ii$)$] $\mathrm{depth}_{\,} \mathcal{R}(\overline{E}_{\mathfrak{q}}) \geq 2$ for all $\mathfrak{q} \in \mathrm{Spec}(S)$ such that $\,\overline{E}_{\mathfrak{q}}$ is not of linear type.
\end{itemize}
Then, the natural epimorphism
$$ \pi \colon \,\mathcal{F}(E \otimes_R S)/(x) \twoheadrightarrow \mathcal{F}(\overline{E})\,$$
is an isomorphism. Moreover, $x$ is regular on $\,\mathcal{F}(E \otimes_R S)\,$ if $\,\mathrm{depth}_{\,} \mathcal{F}(\overline{E})>0$.
\end{thm}
\emph{Proof}. Assumption (i) and \cref{backward} together imply that the natural epimorphism
$$\pi \colon \,\mathcal{R}(E\otimes_R S)/(x) \twoheadrightarrow \mathcal{R}(\overline{E}) $$
is an isomorphism. The same conclusion holds if assumption (ii) is satisfied, thanks to \cref{NewSUV3.7}. Hence, $\, \displaystyle \pi \colon \,\mathcal{F}(E \otimes_R S)/(x) \twoheadrightarrow \mathcal{F}(\overline{E})\,$ is an isomorphism as well. In particular, if in addition $\,\mathrm{depth}_{\,} \mathcal{F}(\overline{E})>0\,$ then $\,\mathrm{depth}_{\,}(\mathcal{F}(E \otimes_R S)/(x)) >0$. Moreover, by \cref{filter-regular} we know that $\,(0 \colon_{\mathcal{F}(E\otimes_R S)}\, x)$ is an Artinian $\mathcal{F}(E\otimes_R S)$-module. Hence, $x$ is regular thanks to \cref{homework}. $\blacksquare$ \\
By applying \cref{3.7FiberCone} repeatedly, we obtain the following corollary.
\begin{cor} \label{Fiberbackward} \hypertarget{Fiberbackward}{}
Let $R$ be a Noetherian local ring, and let $E$ be a finite $R$-module with $\mathrm{rank}_{\,} E =e$. In the setting of \cref{trueNotationBourbaki}, let $I$ be a generic Bourbaki ideal of $E$ with respect to a reduction $U$ of $E$.
\begin{itemize}
\item[$($a$)$] Assume that either $\, \mathcal{R}(I)$ is $S_2$, or $\,\mathrm{depth}_{\,} \mathcal{R}(I_{\mathfrak{q}}) \geq 2\,$ for all $\mathfrak{q} \in \mathrm{Spec}(R'')$ so that $I_{\mathfrak{q}}$ is not of linear type. Then, the natural epimorphism
$$ \pi \colon \,\mathcal{F}(E'')/(F'') \twoheadrightarrow \mathcal{F}(I)\,$$
is an isomorphism.
\item[$($b$)$] If in addition $\mathcal{F}(I)$ is Cohen-Macaulay, then $\,F''\mathcal{F}(E'')\,$ is generated by a regular sequence of linear forms.
\end{itemize}
\end{cor}
\emph{Proof}. Notice that the assumptions in (a) imply that the assumptions (i) or (ii) in \cref{3.7FiberCone} are satisfied at each iteration, thanks to \cref{backward} or \cref{NewSUV3.7} respectively. Hence, $\,\mathcal{R}(E'')/(F'') \cong \mathcal{R}(I),\,$ and $F''\mathcal{R}(E'')$ is generated by a regular sequence on $\mathcal{R}(E'')$. Then, by iteration of \cref{3.7FiberCone}, it follows that $\,\mathcal{F}(E'')/(F'') \cong \mathcal{F}(I)$. Now, if furthermore $\,\mathcal{F}(I)$ is Cohen-Macaulay, then the proof of \cref{3.7FiberCone} implies that $\,F''\mathcal{F}(E'')$ is also generated by a regular sequence on $\mathcal{F}(E'')$. That the generators of $\,F''\mathcal{F}(E'')$ are linear forms in $\mathcal{F}(E'')$ is clear by construction. $\blacksquare$ \\
We are now ready to state and prove our main result.
\begin{thm} \label{3.5FiberCone} \hypertarget{3.5FiberCone}{}
Let $R$ be a Noetherian local ring, $E$ a finite $R$-module with $\mathrm{rank}_{\,} E =e$, $U$ a reduction of $E$. Let $I$ be a generic Bourbaki ideal of $E$ with respect to $U$.
\begin{itemize}
\item[$($a$)$] If $\mathcal{F}(E)$ is Cohen-Macaulay, then $\mathcal{F}(I)$ is Cohen-Macaulay.
\item[$($b$)$] Assume that either $\, \mathcal{R}(I)$ is $S_2$, or $\,\mathrm{depth}_{\,} \mathcal{R}(I_{\mathfrak{q}}) \geq 2\,$ for all $\mathfrak{q} \in \mathrm{Spec}(R'')$ so that $I_{\mathfrak{q}}$ is not of linear type. If $\mathcal{F}(I)$ is Cohen-Macaulay, then $\mathcal{F}(E)$ is Cohen-Macaulay.
\end{itemize}
\end{thm}
\emph{Proof}. We may assume that $e\geq 2$. Suppose that $\,\mathcal{F}(E)\,$ is Cohen-Macaulay. Then $\,\mathrm{depth}_{\,} \mathcal{F}(E) =\ell(E) \geq e$. Hence, by \cref{Fiberforward} it follows that $\,\mathrm{depth}_{\,} \mathcal{F}(I) =\ell(E) -e+1$. The latter equals $\ell(I)$ by \cite[3.10]{SUV2003}, hence $\,\mathcal{F}(I)\,$ is Cohen-Macaulay.
Conversely, if the assumptions in (b) hold, by \cref{Fiberforward} it follows that $\,\mathrm{depth}_{\,} \mathcal{F}(E) = \ell(I)+e-1 =\ell(E)$. Therefore, $\,\mathcal{F}(E)\,$ is Cohen-Macaulay. $\blacksquare$ \\
From the proofs of \cref{Fiberforward} and \cref{3.5FiberCone} it follows that $\,\mathcal{F}(E)\,$ is Cohen-Macaulay whenever $\,\mathcal{F}(I)\,$ is Cohen-Macaulay and $\,\mathcal{R}(E'')\,$ is a deformation of $\,\mathcal{R}(I)$. In particular, finding conditions other than those in \cref{backward} or \cref{NewSUV3.7} to guarantee this deformation property for the Rees algebras would provide alternative versions of \cref{3.5FiberCone}. In fact, in order to transfer the Cohen-Macaulay property from $\,\mathcal{F}(E)\,$ to $\,\mathcal{F}(I)\,$ and backwards one would only need $\,\mathcal{F}(E'')\,$ to be a deformation of $\,\mathcal{F}(I)$. Finding conditions for this to occur without any prior knowledge of the Rees algebra would potentially allow one to use generic Bourbaki ideals also in the case when the Cohen-Macaulayness of $\,\mathcal{F}(I)\,$ is possibly unrelated to that of $\,\mathcal{R}(I)\,$ (see for instance \cite{{Shah},{D'CruzRV},{CGPU},{JV1},{JV2},{CPV},{Viet08},{Jonathan}}).
\subsection{Modules with Cohen-Macaulay fiber cone} \label{SecCMFiberE} \hypertarget{SecCMFiberE}{}
\cref{3.5FiberCone} above implies that the fiber cone $\,\mathcal{F}(E)\,$ of $E$ is Cohen-Macaulay whenever both the Rees algebras $\,\mathcal{R}(I)\,$ and the fiber cone $\,\mathcal{F}(I)\,$ of a generic Bourbaki ideal $I$ of $E$ are Cohen-Macaulay. The goal of this section is to provide specific classes of modules with this property. A class of modules with Cohen-Macaulay fiber cone and non-Cohen-Macaulay Rees algebra will be provided later in \cref{GenBM}.
Our first result regards modules of projective dimension one, and extends a known result proved by Corso, Ghezzi, Polini and Ulrich for perfect ideals of height two (see \cite[3.4]{CGPU}). Recall that a module $E$ of rank $e$ satisfies \emph{condition $G_s$} if $\,\mu(E_{\mathfrak{p}}) \leq \mathrm{dim}R_\mathfrak{p} -e +1\,$ for every $\mathfrak{p} \in \mathrm{Spec}(R)$ with $1\leq \mathrm{dim}_{\,}R_{\mathfrak{p}} \leq s-1$. Moreover, by \cite[3.2]{SUV2003} if $E$ satisfies $G_s$ then so does a generic Bourbaki ideal $I$ of $E$.
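For orientation, note that in the case $e=1$ of an ideal $I$ of positive grade, this specializes to the usual condition $G_s$ for ideals,
$$ \mu(I_{\mathfrak{p}}) \,\leq\, \mathrm{dim}_{\,}R_{\mathfrak{p}} \quad \mathrm{for \ every} \; \mathfrak{p} \in \mathrm{Spec}(R) \; \mathrm{with} \; 1 \leq \mathrm{dim}_{\,}R_{\mathfrak{p}} \leq s-1. $$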
\begin{thm} \label{GenCGPU3.4} \hypertarget{GenCGPU3.4}{}
Let $R$ be a local Cohen-Macaulay ring, and let $E$ be a finite, torsion-free $R$-module with $\, \mathrm{projdim}(E)=1$ and $\ell(E)=\ell$. Assume that $E$ satisfies $G_{\ell-e+1}$. If $\,\mathcal{R}(E)$ is Cohen-Macaulay, then $\,\mathcal{F}(E)$ is Cohen-Macaulay.
\end{thm}
\emph{Proof}. Since $E$ is a torsion-free module of projective dimension one which satisfies $G_{\ell-e+1}$, then $E$ admits a generic Bourbaki ideal $I$, which is perfect of height 2 (see for instance the proof of \cite[4.7]{SUV2003}). If $e=1$, then the conclusion follows from \cite[3.4]{CGPU}. Otherwise, notice that $\, \mathcal{R}(I)\,$ is Cohen-Macaulay by \cref{trueMainBourbaki}, whence $\, \mathcal{F}(I)\,$ is Cohen-Macaulay by \cite[3.4]{CGPU}. Hence, $\, \mathcal{F}(E)$ is Cohen-Macaulay by \cref{3.5FiberCone}. $\blacksquare$
\begin{cor}
Let $R$ be a local Cohen-Macaulay ring, and let $E$ be a finite, torsion-free $R$-module with $\, \mathrm{projdim}\,E=1$ and $\ell(E)=\ell$. Let $n=\mu(E)$ and let
\begin{displaymath}
0 \to R^{\,n-e} \stackrel{\varphi}{\longrightarrow} R^n \to E \to 0
\end{displaymath}
be a minimal free resolution of $E$. Assume that $E$ satisfies $G_{\ell-e+1}$ and that one of the following equivalent conditions holds.
\begin{itemize}
\item[$($i$)$] $\, r(E) \leq \ell-e$.
\item[$($ii$)$] $\, r(E_{\mathfrak{p}}) \leq \ell-e\,$ for every prime $\,\mathfrak{p}$ with $\, \mathrm{dim}_{\,}R_{\mathfrak{p}}= \ell(E_{\mathfrak{p}})-e+1 = \ell-e+1$.
\item[$($iii$)$] After elementary row operations, $\,I_{n-\ell}(\varphi)$ is generated by the maximal minors of the last $n-\ell$ rows of $\varphi$.
\end{itemize}
Then, $\, \mathcal{F}(E)$ is Cohen-Macaulay.
\end{cor}
\emph{Proof}. By \cite[4.7]{SUV2003}, each of the conditions (i)-(iii) is equivalent to $\, \mathcal{R}(E)$ being Cohen-Macaulay. Hence, the conclusion follows from \cref{GenCGPU3.4}. $\blacksquare$ \\
The following result was proved for fiber cones of ideals in \cite[3.1]{CGPU}.
\begin{thm} \label{GenCGPU3.1} \hypertarget{GenCGPU3.1}{}
Let $R$ be a Cohen-Macaulay local ring with infinite residue field. Let $E$ be a finite, torsion-free $R$-module with $\,\mathrm{rank}_{\,}E =e>0,\,$ $\,\ell(E)=\ell\,$ and $r(E)=r$. Assume that $E$ satisfies $G_{\ell-e+1}$ and $\ell-e+1 \geq 2$. Let $I$ be a generic Bourbaki ideal of $E$, and $\,g=\mathrm{ht}(I)$. Suppose that one of the following conditions holds.
\begin{itemize}
\item[$($i$)$] If $\mu(E) \geq \ell+2$, then $\,\mathcal{F}(E)$ has at most two homogeneous generating relations in degrees $\,\leq \mathrm{max} \{\, r, \ell-e-g+1\,\}$.
\item[$($ii$)$] If $\mu(E) = \ell+1$, then $\,\mathcal{F}(E)$ has at most two homogeneous generating relations in degrees $\, \leq \ell-e-g+1$.
\end{itemize}
If $\, \mathcal{R}(E)\,$ is Cohen-Macaulay, then $\, \mathcal{F}(E)\,$ is Cohen-Macaulay.
\end{thm}
\emph{Proof}. By \cref{truetdefBourbaki}, $E$ admits a generic Bourbaki ideal $I$ with $\,\mathrm{ht}_{\,}I = g \geq 1, \,$ $\,\mu(I)=\mu(E)-e+1\,$ and $\,r(I) \leq r, \,$ which satisfies $G_{\,\ell-e+1}$, i.e. $G_{\,\ell(I)}$ (see \cite[3.10]{SUV2003}). If $e=1$ the conclusion follows from \cite[3.1]{CGPU}, since $\,\mathcal{G}(I)\,$ is Cohen-Macaulay whenever $\,\mathcal{R}(I)\,$ is by \cite[Proposition 1.1]{HuGrI}. So we may assume that $e \geq 2$. We show that both $\,\mathcal{R}(I)\,$ and $\,\mathcal{F}(I)\,$ are Cohen-Macaulay, whence $\,\mathcal{F}(E)\,$ is Cohen-Macaulay by \cref{3.5FiberCone}.
Since by assumption $\,\mathcal{R}(E)\,$ is Cohen-Macaulay, then $\,\mathcal{R}(I)\,$ is Cohen-Macaulay by \cref{trueMainBourbaki}. Hence also the associated graded ring $\,\mathcal{G}(I)\,$ is Cohen-Macaulay by \cite[Proposition 1.1]{HuGrI}. Also, by \cref{Fiberbackward} there is a homogeneous isomorphism $\, \mathcal{F}(E'')/(F'') \cong \mathcal{F}(I)$. Therefore, if condition (i) holds, then whenever $\mu(I)\geq \ell+2-e+1 =\ell(I)+2\,$ it follows that $\, \mathcal{F}(I)$ has at most two homogeneous generating relations in degrees at most $\, \mathrm{max} \{\, r, \ell-e-g+1\,\}, \,$ hence in degrees at most $\, \mathrm{max} \{\, r(I), \ell(I)-g\,\}$. Similarly, in the situation of assumption (ii), whenever $\,\mu(I) = \ell(I)+1\,$ it follows that $\,\mathcal{F}(I)$ has at most two homogeneous generating relations in degrees at most $\, \ell-e-g+1=\ell(I)-g$. Hence, $\,\mathcal{F}(I)\,$ is Cohen-Macaulay by \cite[3.1]{CGPU}. $\blacksquare$ \\
In particular, with some additional assumptions on the module $E$, we can give more explicit sufficient conditions for $\, \mathcal{F}(E)\,$ to be Cohen-Macaulay. The next two corollaries exploit results on the Cohen-Macaulay property of Rees rings from \cite{myCMRees} and recover \cite[2.9]{CGPU} in the case when $E$ is an ideal of grade at least two.
Recall that a module $E$ is called \emph{orientable} if $E$ has rank $\,e >0\,$ and $\, (\bigwedge^{e} E)^{\ast \ast} \cong R, \,$ where $(-)^{\ast}$ denotes the functor $\mathrm{Hom}_R (-, R)$.
\begin{cor}
Let $R$ be a local Gorenstein ring of dimension $d$ with infinite residue field. Let $E$ be a finite, torsion-free, orientable $R$-module, with $\mathrm{rank}E=e>0$ and $\ell(E)=\ell$. Let $g$ be the height of a generic Bourbaki ideal of $E$, and assume that the following conditions hold.
\begin{itemize}
\item[$($a$)$] $E$ satisfies $G_{\,\ell-e+1}$.
\item[$($b$)$] $r(E) \leq k$ for some integer $1 \leq k \leq \ell-e$.
\item[$($c$)$] $\mathrm{depth}_{\,}E^j \geq \begin{cases}
d-g-j+2 & \mathrm{for} \; \; 1 \leq j \leq \ell-e-k-g+1 \\
d-\ell+e+k-j & \mathrm{for} \; \; \ell-e-k-g+2 \leq j \leq k
\end{cases}$
\item[$($d$)$] If $g = 2$, $\,\mathrm{Ext}_{R_{\mathfrak{p}}}^{\,j+1}(E_{\mathfrak{p}}^j , R_{\mathfrak{p}}) =0\,$ for $\,\ell-e-k \leq j \leq \ell-e-3\,$ and for all $\,\mathfrak{p} \in \mathrm{Spec}(R)$ with $\, \mathrm{dim} R_{\mathfrak{p}} = \ell-e\, $ such that $E_{\mathfrak{p}}$ is not free.
\end{itemize}
Assume furthermore that one of the following two conditions holds.
\begin{itemize}
\item[$($i$)$] If $\mu(E) \geq \ell+2$, then $\,\mathcal{F}(E)$ has at most two homogeneous generating relations in degrees $\,\leq \mathrm{max} \{\, r, \ell-e-g+1\,\}$.
\item[$($ii$)$] If $\mu(E) = \ell+1$, then $\,\mathcal{F}(E)$ has at most two homogeneous generating relations in degrees $\, \leq \ell-e-g+1 \,$.
\end{itemize}
Then, $\,\mathcal{F}(E)\,$ is Cohen-Macaulay.
\end{cor}
\emph{Proof}. Assumptions (a)-(d) together imply that $\,\mathcal{R}(E)$ is Cohen-Macaulay, thanks to \cite[4.3]{myCMRees}. Hence, if either condition (i) or (ii) hold it follows that $\,\mathcal{F}(E)\,$ is Cohen-Macaulay by \cref{GenCGPU3.1}. $\blacksquare$ \\
Recall that an $R$-module $E$ is called an \emph{ideal module} if $E \neq 0$ is finitely generated, torsion-free, and such that $E^{**}$ is free, where $(-)^{\ast}$ denotes the functor $\mathrm{Hom}_R(-, R)$. Equivalently, $E$ is an ideal module if and only if $E$ embeds into a finite free module $G$ with $\,\mathrm{grade}(G/E) \geq 2$ (see \cite[5.1]{SUV2003}).
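A simple family of examples (not drawn from \cite{SUV2003}, but immediate from the criterion just quoted): if $I_1, \ldots, I_e$ are ideals of grade at least two, then $E = I_1 \oplus \cdots \oplus I_e$ is an ideal module, since $E$ embeds into $G=R^{\,e}$ with
$$ G/E \,\cong\, \bigoplus_{j=1}^{e} R/I_j, \qquad \mathrm{grade}_{\,}(G/E) \geq 2. $$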
\begin{cor}
Let $R$ be a local Cohen-Macaulay ring of dimension $d$, and let $E$ be an ideal module with $\mathrm{rank}_{\,}E=e$ and $\ell(E)=\ell$. Let $g$ denote the height of a generic Bourbaki ideal of $E$, and assume that the following conditions hold.
\begin{itemize}
\item[$($a$)$] $r(E) \leq k$, where $k$ is an integer such that $1 \leq k \leq \ell -e$.
\item[$($b$)$] $E$ is free locally in codimension $\,\ell-e- \mathrm{min} \{2, k\},\,$ and satisfies $G_{\,\ell-e+1}$.
\item[$($c$)$]$\mathrm{depth}(E^j) \geq d-\ell+e+k-j\,$ for $1 \leq j \leq k$.
\end{itemize}
Assume furthermore that one of the following two conditions holds.
\begin{itemize}
\item[$($i$)$] If $\mu(E) \geq \ell+2$, then $\,\mathcal{F}(E)$ has at most two homogeneous generating relations in degrees $\,\leq \mathrm{max} \{\, r, \ell-e-g+1\,\}$.
\item[$($ii$)$] If $\mu(E) = \ell+1$, then $\,\mathcal{F}(E)$ has at most two homogeneous generating relations in degrees $\, \leq \ell-e-g+1 \,$.
\end{itemize}
Then, $\,\mathcal{F}(E)\,$ is Cohen-Macaulay.
\end{cor}
\emph{Proof}. From assumptions (a)-(c) it follows that $\,\mathcal{R}(E)$ is Cohen-Macaulay, thanks to \cite[4.10]{myCMRees}. Hence, if either condition (i) or (ii) hold, then $\,\mathcal{F}(E)\,$ is Cohen-Macaulay by \cref{GenCGPU3.1}. $\blacksquare$
\section{Defining ideal of Rees algebras}
\label{SecDefEqs} \hypertarget{SecDefEqs}{}
In this section we use generic Bourbaki ideals to understand the defining ideal of Rees algebras of modules. The idea is not completely novel, as it appears in the proof of \cite[4.11]{SUV2003}, where the authors determine the defining ideal for the Rees algebra of a module $E$ of projective dimension one having a linear presentation. In their case, the Rees algebra of a generic Bourbaki ideal $I$ of $E$ is Cohen-Macaulay, however the latter condition will not be guaranteed nor required in the situations considered in this section.
The first key observation is that it is always possible to relate a presentation matrix of $E$ to a presentation matrix of a generic Bourbaki ideal $I$ of $E$.
\begin{rmk} \label{BourbakiPres} \hypertarget{BourbakiPres}{}
\em{(See also \cite[p.617]{SUV2003}). Let $(R, \mathfrak{m}, k)$ be a Noetherian local ring and let $E$ be a finite $R$-module with a minimal presentation $\, \displaystyle R^{s} \stackrel{\varphi}{\longrightarrow} R^n \to E \to 0\,$.
\begin{itemize}
\item[(a)] With $Z$ and $x_j$ as in \cref{trueNotationBourbaki}, by possibly multiplying $\varphi$ from the left by an invertible matrix with coefficients in $k(Z)$, we may assume that $\varphi\,$ presents $E''$ with respect to a minimal generating set of the form $\{x_1, \ldots, x_{e-1}, a_e, \ldots, a_n\}$. Then, $\, \displaystyle \varphi = \left[ \begin{array}{c}
A \\
\hline
\psi \\
\end{array} \right]$,
where $A$ and $\psi$ are submatrices of size $(e-1) \times s$ and $(n-e+1) \times s$ respectively. By construction, $\psi$ is a presentation of $I$, and is minimal since $\mu(I)=\mu(E)-e+1=n-e+1$.
\item[(b)] Assume that $\,R=S_{\mathfrak{m}}$, where $S$ is a standard graded algebra over a field and $\mathfrak{m}$ is its unique homogeneous maximal ideal. If the entries of $\varphi$ are homogeneous polynomials of constant degrees $\delta_1, \ldots, \delta_s$ along each column, then the entries of $\psi$ are homogeneous polynomials of constant degrees $\delta_1, \ldots, \delta_s$ along each column.
\end{itemize}}
\end{rmk}
Let $R$ be a standard graded algebra over a field $k$ and let $E$ be a finite $R$-module. Then, the fiber cone $\mathcal{F}(E)$ of $E$ has a particularly useful description as a subring of a polynomial ring over $k$, which is summarized in the following remark.
\begin{rmk} \label{bigrading} \hypertarget{bigrading}{}
\em{Let $R$ be a standard graded algebra over a field $k$ with homogeneous maximal ideal $\mathfrak{m}$. Let $\,E=Ra_1 + \ldots +Ra_n\,$ be a finite $R$-module
minimally generated by elements of the same degree. On the polynomial ring $\,S = R[T_1, \ldots, T_n]$ define a bigrading by setting $\, \mathrm{deg}\,a = (i,0)\,$ for $a \in R_i$ and $\,\mathrm{deg}\,T_i = (0,1)$. Then, the Rees algebra $\,\displaystyle{\mathcal{R}(E) \cong S/ \mathcal{J}} \,$ has a natural bigraded structure induced by the bigrading on $S$ and is generated over $R$ in degree $(0,1)$. Moreover, $\,\displaystyle{\mathfrak{m}\, \mathcal{R}(E) \cong [\mathcal{R}(E)]_{(>0, -)}}\,$. Hence, the fiber cone $\mathcal{F}(E)$ satisfies
$$ \mathcal{F}(E) \,\cong \,\mathcal{R}(E) / \mathfrak{m}\mathcal{R}(E) \,\cong \,[\mathcal{R}(E)]_{(0,-)} \subseteq k[T_1, \ldots, T_n].$$
As a consequence, the homogeneous epimorphism
$$ k[T_1, \ldots, T_n] = S \otimes_R k \twoheadrightarrow \mathcal{R}(E) \otimes_R k = \mathcal{F}(E)$$
has kernel $\, \mathcal{I} \cong \mathcal{J}_{(0,-)}$.}
\end{rmk}
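For instance (a degenerate but instructive check, which we include for illustration), let $E = \mathfrak{m} = (Y_1, \ldots, Y_d)$ be the homogeneous maximal ideal of $R=k[Y_1, \ldots, Y_d]$. Here $\mathcal{J}$ is generated by the Koszul relations $\,Y_iT_j - Y_jT_i$, which are concentrated in bidegree $(1,1)$, since $\mathfrak{m}$ is of linear type. Hence $\,\mathcal{I}= \mathcal{J}_{(0,-)}=0\,$ and
$$ \mathcal{F}(\mathfrak{m}) \,\cong\, k[T_1, \ldots, T_d]. $$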
Notice also that for an $R$-module $E$ as in \cref{bigrading} the defining ideal $\mathcal{L}$ of the symmetric algebra $\mathcal{S}(E)$ satisfies $\, \displaystyle{\mathcal{L} = \mathcal{J}_{(-,1)} }$. In particular, if $\mathcal{I}R[T_1, \ldots, T_n]$ denotes the extension of $\mathcal{I}$ to the ring $R[T_1, \ldots, T_n]$, then
$$\,\mathcal{J} \supseteq \mathcal{L} + \mathcal{I}R[T_1, \ldots, T_n].$$
$E$ is said to be of \emph{fiber type} if the latter inclusion is an equality, or equivalently, if $\mathcal{J}$ is generated in bidegrees $(-,1)$ and $(0,-)$. The following theorem characterizes the fiber type property of modules.
\begin{thm} \label{FiberType} \hypertarget{FiberType}{}
Let $R=S_{\mathfrak{m}}$ where $S$ is a standard graded algebra over a field with unique homogeneous maximal ideal $\mathfrak{m}$. Let $E$ be a finite $R$-module with $\mathrm{rank}\,E=e\geq 2$, minimally generated by elements $\,a_1, \ldots, a_n\,$ that are images in $R$ of homogeneous elements of the same degree in $S$. Assume that $E$ is torsion-free and that $E_{\mathfrak{p}}$ is free for all $\mathfrak{p} \in \mathrm{Spec}\,R$ with $\mathrm{depth}\,R_{\mathfrak{p}} \leq 1$, and let $I$ be a generic Bourbaki ideal of $E$ constructed with respect to the generators $a_1, \ldots, a_n$.
Assume that one of the following conditions holds.
\begin{itemize}
\item[$($i$)$] $\mathcal{R}(I)$ satisfies $S_2$; or
\item[$($ii$)$] $\mathrm{depth}_{\,} \mathcal{R}(I_{\mathfrak{q}}) \geq 2\,$ for all $\mathfrak{q} \in \mathrm{Spec}(R'')$ so that $I_{\mathfrak{q}}$ is not of linear type.
\end{itemize}
Then, $E$ is of fiber type if and only if $I$ is of fiber type.
\end{thm}
\emph{Proof}. Let $ \,R[T_1, \ldots, T_n] \twoheadrightarrow \mathcal{R}(E) \,$ be the natural epimorphism mapping $T_i$ to $a_i$ for all $i$. As in \cref{trueNotationBourbaki}, for $\, 1 \leq j \leq e-1\,$ let $\, \displaystyle{x_j = \sum_{i=1}^{n} Z_{ij}a_i}$. For every $j\,$ let $\, \displaystyle{X_j = \sum_{i=1}^{n} Z_{ij}T_i \in R''[T_1, \ldots, T_n] \,}$ and notice that $X_j$ is mapped to $x_j$ via the natural epimorphism $\, \displaystyle{R''[T_1, \ldots, T_n] \twoheadrightarrow \mathcal{R}(E'')}$. Let $\varphi$ be a minimal presentation of $E$ with respect to the generators $\,a_1, \ldots, a_n,\,$ let $\psi$ be a minimal presentation of $I$ constructed as in \cref{BourbakiPres}, and use these presentations to construct the symmetric algebras $\mathcal{S}(E)$ and $\mathcal{S}(I)$ respectively. Let $\mathcal{L}_E$, $\mathcal{J}_E$ and $\mathcal{I}_E$ denote the defining ideals of $\mathcal{S}(E)$, $\mathcal{R}(E)$ and $\mathcal{F}(E)$ respectively. Similarly, let $\mathcal{L}_I$, $\mathcal{J}_I$ and $\mathcal{I}_I$ denote the defining ideals of $\mathcal{S}(I)$, $\mathcal{R}(I)$ and $\mathcal{F}(I)$ respectively.
By construction it then follows that $\,\displaystyle{\mathcal{L}_I = \mathcal{L}_E R'' + (X_1, \ldots, X_{e-1})},\,$ as well as $\,\displaystyle{\mathcal{J}_I = \mathcal{J}_E R'' + (X_1, \ldots, X_{e-1})}$. Since $S$ is standard graded, this implies that
$$\mathcal{I}_I = [\mathcal{J}_I]_{(0,-)}= [\mathcal{J}_E]_{(0,-)} R'' + (X_1, \ldots, X_{e-1}) = \mathcal{I}_E R''+ (X_1, \ldots, X_{e-1}). $$
Hence, $I$ is of fiber type if and only if
$$\mathcal{J}_I = \mathcal{L}_I + [\mathcal{J}_I]_{(0,-)} R'' = \mathcal{L}_E + [\mathcal{J}_E]_{(0,-)} R'' + (X_1, \ldots, X_{e-1}).$$
Now, if either assumption (i) or (ii) is satisfied, by \cref{backward} or \cref{NewSUV3.7} it follows that $\,X_1, \ldots, X_{e-1}\,$ form a regular sequence modulo $\mathcal{J}_E R''$. Therefore,
\begin{eqnarray*}
\mathcal{J}_E R'' & = & \Big( \mathcal{L}_E R'' + [\mathcal{J}_E]_{(0,-)} R''+ (X_1, \ldots, X_{e-1}) \Big) \cap \mathcal{J}_E R'' \\
& = & \mathcal{L}_E R'' + [\mathcal{J}_E]_{(0,-)} R''+ (X_1, \ldots, X_{e-1}) \,\mathcal{J}_E R''
\end{eqnarray*}
By the graded version of Nakayama's Lemma, this means that
$$ \mathcal{J}_E R'' = \mathcal{L}_E R'' + [\mathcal{J}_E]_{(0,-)} R'', $$
which can occur if and only if in $\,R[T_1, \ldots, T_n]\,$ one has
$\,\displaystyle{\mathcal{J}_E = \mathcal{L}_E + [\mathcal{J}_E]_{(0,-)}}, \,$
i.e. if and only if $E$ is of fiber type. $\blacksquare$
\subsection{Almost linearly presented modules of projective dimension one}
In this section we describe the Rees algebra and fiber cone of almost linearly presented modules of projective dimension one. Throughout we will consider the situation of \cref{SetDefEqs} below.
\begin{set} \label{SetDefEqs} \hypertarget{SetDefEqs}{}
\em{Let $R=k[Y_1, \ldots, Y_d]$ be a polynomial ring over a field $k$, where $d \geq 2$. Let $E$ be a finite $R$-module, minimally generated by homogeneous elements of the same degree. Assume also that $E$ has projective dimension one and satisfies $G_d$. Then, $E$ has positive rank $e$ and admits a minimal free resolution of the form $\,\displaystyle{0 \to R^{\,n-e} \stackrel{\varphi}{\longrightarrow} R^n \to E \to 0},\,$ where $n= \mu(E)$. Assume that $\varphi$ is \emph{almost linear}, i.e. its entries are linear, except possibly for those in the last column, which are homogeneous of degree $m \geq 1$.}
\end{set}
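To fix ideas, we record an explicit instance of \cref{SetDefEqs} (the example below is ours). Let $d=2$, $m \geq 1$, and
$$ \varphi = \left[ \begin{array}{cc} Y_2 & Y_1^m \\ -Y_1 & Y_2^m \\ 0 & -Y_1^m \end{array} \right]. $$
By the Hilbert-Burch theorem, $\varphi$ is a minimal almost linear presentation of the perfect height two ideal $\,E = I_2(\varphi) = (Y_1^{m+1},\, Y_1^m Y_2,\, Y_2^{m+1})$, which is generated in degree $m+1$, has $\mu(E)=3=d+1$, and satisfies $G_2$ vacuously, as $V(E)$ consists of the homogeneous maximal ideal alone. For $m=1$ one recovers $E=(Y_1, Y_2)^2$.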
In the situation of \cref{SetDefEqs}, after localizing at the unique homogeneous maximal ideal, by \cref{truetdefBourbaki} $\,E$ admits a generic Bourbaki ideal $I$, which is perfect of grade 2. Let $\psi$ be a minimal presentation of $I$ obtained from $\varphi$ as in \cref{BourbakiPres}. By construction, $\psi$ is also almost linear. In particular, the defining ideal of $\,\mathcal{R}(I)\,$ is described by the following theorem of Boswell and Mukundan \cite[5.3 and 5.6]{BM}.
\begin{thm} \label{BM} \hypertarget{BM}{}
Let $R=k[Y_1, \ldots,Y_d]$ be a standard graded polynomial ring over a field $k$. Let $I$ be a perfect ideal of height 2 admitting an almost linear presentation $\psi$. Assume that $I$ satisfies $G_d$ and that $\mu(I)=d+1$. Then, the defining ideal of the Rees algebra $\mathcal{R}(I)$ is
$$\mathcal{J}= (\underline{Y} \cdot B(\psi)) + I_d(B_m(\psi)) = (\underline{Y} \cdot B(\psi)) \colon (Y_1, \ldots, Y_d)^m,$$
where $m$ is the degree of the non-linear column of $\psi$ and $B_m(\psi)$ is the $m$-th iterated Jacobian dual of $\psi$ as in Definition~\ref{IterJacDuals}. Moreover:
\begin{itemize}
\item[$($i$)$] $\mathcal{R}(I)$ is almost Cohen-Macaulay, i.e. $\mathrm{depth}_{\,}\mathcal{R}(I) \geq d-1$, and it is Cohen-Macaulay if and only if $m=1$.
\item[$($ii$)$] $\,\mathcal{F}(I)$ is Cohen-Macaulay.
\end{itemize}
\end{thm}
We now generalize \cref{BM} to almost linearly presented modules of projective dimension one.
\begin{thm} \label{GenBM}
Under the assumptions of \cref{SetDefEqs}, set $\underline{Y}= [Y_1, \ldots, Y_d]$ and assume that $\,n=d+e$. Then, the defining ideal of $\, \mathcal{R}(E)$ is
$$ \mathcal{J}= ((\underline{Y} \cdot B(\varphi))\, \colon (\underline{Y})^m) = (\underline{Y} \cdot B(\varphi)) + I_d(B_m(\varphi)),$$
where $m$ is the degree of the non-linear column of $\varphi$ and $B_m(\varphi)$ denotes an $m$-th iterated Jacobian dual as in Definition~\ref{IterJacDuals}. Moreover:
\begin{itemize}
\item[$($i$)$] $\mathcal{R}(E)$ is almost Cohen-Macaulay, and it is Cohen-Macaulay if and only if $m=1$.
\item[$($ii$)$] $\,\mathcal{F}(E)$ is Cohen-Macaulay.
\end{itemize}
\end{thm}
\emph{Proof}. We modify the proof of \cite[4.11]{SUV2003}. Let $a_1, \ldots, a_n$ be a minimal generating set for $E$ corresponding to the presentation $\,\varphi$, and let $ \,R[T_1, \ldots, T_n] \twoheadrightarrow \mathcal{R}(E) \,$ be the natural epimorphism, mapping $T_i$ to $a_i$ for all $i$. Localizing at the unique homogeneous maximal ideal, we may assume that $R$ is local and that $E$ admits a generic Bourbaki ideal $I$, which is perfect of grade 2 and such that $\, \mu(I)=n-e+1=d+1.\,$ If $e=1$, then $E \cong I$ and the statement follows from \cref{BM}.
So, assume that $e \geq 2$. With $x_j$ as in \cref{trueNotationBourbaki}, for $1 \leq j \leq e-1\,$ set $\,X_j= \sum_{i=1}^{n} Z_{ij} T_i, \,$ and note that $X_j$ is mapped to $x_j$ under the epimorphism $\,R''[T_1, \ldots, T_n] \twoheadrightarrow \mathcal{R}(E''). \,$ Set $\,\underline{T}= [T_1, \ldots, T_n]$. As in \cref{BourbakiPres}, we can construct a minimal almost linear presentation $\psi$ of $I$, such that
$$ [\underline{Y}] \cdot B(\varphi) \equiv [\underline{T}] \cdot \left[ \begin{array}{c}
0 \\
\hline
\psi \\
\end{array} \right]\, \; \mathrm{modulo} \, (X_1, \ldots, X_{e-1}). $$
Let $B(\psi)$ be a Jacobian dual of $\psi$ defined by $\, \displaystyle [\underline{T}] \cdot \left[ \begin{array}{c}
0 \\
\hline
\psi \\
\end{array} \right] = [\underline{Y}] \cdot B(\psi)$.
Then, by \cref{BM}, the defining ideal of $\,\mathcal{R}(I)$ is
\begin{equation} \label{eqJI} \hypertarget{eqJI}{}
\mathcal{J}_I = (\underline{Y} \cdot B(\psi))+ I_d(B_m(\psi)) = (\underline{Y} \cdot B(\psi)) \, \colon (\underline{Y})^m,
\end{equation}
where $m$ is the degree of the non-linear column of $\varphi$. Moreover, $\,\mathcal{R}(I)$ is almost Cohen-Macaulay, and Cohen-Macaulay if and only if the entries of $\psi$ are all linear, while $\,\mathcal{F}(I)$ is Cohen-Macaulay. In particular, by \cref{trueMainBourbaki} it follows that $\mathcal{R}(E)$ is Cohen-Macaulay if and only if $\varphi$ is linear.
To prove the remaining statements, notice that $E''_{\mathfrak{q}}\,$ is of linear type for all primes $\mathfrak{q}$ in the punctured spectrum of $R''$ (this is because $E''$ has projective dimension one and satisfies $G_d$, by \cite[Propositions 3 and 4]{Avramov}). Hence, also $I_{\mathfrak{q}}\,$ is of linear type for the same primes $\mathfrak{q}$. Moreover, the discussion above shows that $\, \displaystyle{ \mathrm{depth}_{\,}\mathcal{R}(I) \geq \mathrm{dim}_{\,}\mathcal{R}(I) -1=d \geq 2}$. Hence, inducting on $e\,$ and using \cref{NewSUV3.7}$\,$ in the case when $e=2$, we obtain that $\, \displaystyle{\mathcal{R}(I) \cong \mathcal{R}(E'')/(F'')} \,$ and $\,x_1, \ldots, x_{e-1}\,$ form a regular sequence on $\,\mathcal{R}(E'')$. Thus, $X_1, \ldots, X_{e-1}$ form a regular sequence modulo $\mathcal{J} R''$. This shows that $\,\mathcal{R}(E'')$ is almost Cohen-Macaulay, whence $\mathcal{R}(E)$ is almost Cohen-Macaulay. It also implies that
$$ \mathcal{J}_I= \mathcal{J} R'' + (X_1, \ldots, X_{e-1}) $$
Hence, from (\ref{eqJI}) and \cref{IterJacDualsPass} below it follows that
$$ \mathcal{J}R'' + (X_1, \ldots, X_{e-1})= (\underline{Y} \cdot B(\varphi)) + I_d(B_m(\varphi)) +(X_1, \ldots, X_{e-1}). $$
On the other hand, since $E$ is of linear type locally on the punctured spectrum of $R$, it follows that
$$ \mathcal{J} \supseteq (\underline{Y} \cdot B(\varphi)) \, \colon (\underline{Y})^m \supseteq (\underline{Y} \cdot B(\varphi)) + I_d(B_m(\varphi)), $$
where the last inclusion follows from Theorem~\ref{IterJacDuals}(c). Therefore, since $X_1, \ldots, X_{e-1}$ form a regular sequence modulo $\mathcal{J}R'',\,$ in $R''[T_1, \ldots, T_n]$ we have:
\begin{eqnarray*}
\mathcal{J} \!\!\! & = & \!((\underline{Y} \cdot B(\varphi)) + I_d(B_m(\varphi)) +(X_1, \ldots, X_{e-1})) \cap \mathcal{J} \\
\, & = & ((\underline{Y} \cdot B(\varphi)) + I_d(B_m(\varphi))) + (X_1, \ldots, X_{e-1})\, \mathcal{J} .
\end{eqnarray*}
By the graded version of Nakayama's Lemma, this means that
$$\mathcal{J} = (\underline{Y} \cdot B(\varphi)) + I_d(B_m(\varphi))= (\underline{Y} \cdot B(\varphi)) \, \colon (\underline{Y})^m, $$
as claimed. Finally, since $\mathcal{F}(I)$ is Cohen-Macaulay and $\, \mathrm{depth}_{\,}\mathcal{R}(I) \geq 2$, from \cref{3.5FiberCone}(b) it follows that $\,\mathcal{F}(E)\,$ is Cohen-Macaulay. $\blacksquare$\\
\begin{lemma} \label{IterJacDualsPass} \hypertarget{IterJacDualsPass}{}
Let $R=k[Y_1, \ldots, Y_d]_{(Y_1, \ldots, Y_d)}$, and denote $\underline{Y}= [Y_1, \ldots, Y_d]$. Let $\varphi$, $\psi$, $B(\psi)$, and $X_1, \ldots, X_{e-1}\,$ be as in the proof of \cref{GenBM}. Then, for all $i$ and for any Jacobian dual $\,B(\varphi)$ of $\varphi$, in $\,R''[T_1, \ldots, T_n]$ we have
$$(\underline{Y} \cdot B(\varphi)) + I_d(B_i(\varphi)) +(X_1, \ldots, X_{e-1})= (\underline{Y} \cdot B(\psi))+ I_d(B_i(\psi)).$$
\end{lemma}
\emph{Proof}. Choose $B(\psi)$ such that $[\underline{Y}] \cdot B(\psi)= [\underline{T}] \cdot \left[ \begin{array}{c}
0 \\
\hline
\psi \\
\end{array} \right], \,$ as in the proof of \cref{GenBM}. Then, in $R''[T_1, \ldots, T_n]$ we have
$$ [\underline{Y}] \cdot B(\varphi) \equiv [\underline{Y}] \cdot B(\psi) \: \mathrm{modulo} \,(X_1, \ldots, X_{e-1}). $$
So, the statement is proved for $i=1$. Now, let $i+1 \geq 2$ and assume that the statement holds for $B_i(\varphi)$. Let $C_i$ be a matrix as in Definition~\ref{IterJacDuals}. Since
$$(\underline{Y} \cdot B_i(\varphi)) + (I_d(B_i(\varphi)) \cap (\underline{Y})) = (\underline{Y} \cdot B_i(\varphi)) + (\underline{Y} \cdot C_i)$$
and the matrices $B_i(\varphi)$ are bigraded, reducing modulo $(X_1, \ldots, X_{e-1})$ gives, in $R''[T_1, \ldots, T_n]$,
$$(\underline{Y} \cdot B_i(\psi)) + (I_d(B_i(\psi)) \cap (\underline{Y})) = (\underline{Y} \cdot B_i(\psi)) + (\underline{Y} \cdot \overline{C_i}),$$
where $\overline{C_i}$ denotes the image of $C_i$ modulo $(X_1, \ldots, X_{e-1})$. Now, let $\,B_{i+1}(\psi) = [B_i(\psi) \, | \, \overline{C_i}]$. Then, in $R''[T_1, \ldots, T_n]\,$ we have that
$$(\underline{Y} \cdot B(\varphi)) + I_d(B_{i+1}(\varphi)) +(X_1, \ldots, X_{e-1})= (\underline{Y} \cdot B(\psi))+ I_d(B_{i+1}(\psi)),$$
as we aimed to show. $\, \blacksquare$\\
We remark that the equality $\, \mathcal{J}= (\underline{Y} \cdot B(\varphi)) \colon (Y_1, \ldots, Y_d)^m \, $ for the defining ideal of the Rees algebra of a module $E$ as in \cref{GenBM} could be obtained without using generic Bourbaki ideals, by modifying the proof of \cite[6.1(a)]{KPU}. In fact, even though \cite[6.1(a)]{KPU} is stated for perfect ideals of height two, its proof only uses the structure of the presentation matrix, and one would only need to adjust the ranks to prove the statement for modules of projective dimension one. More generally, up to this minor adjustment in the proof, \cite[6.1(a)]{KPU} shows that if $E$ is a finite module over $\,k[Y_1, \ldots, Y_d]\,$ minimally generated by homogeneous elements of the same degree, then the defining ideal of $\mathcal{R}(E)$ is $\, \mathcal{J}= (\underline{Y} \cdot B(\varphi)) \, \colon (\underline{Y})^N, \, $ where $\, N= 1+ \sum_{i=1}^d (\epsilon_i -1) \,$ and the $\epsilon_i$ are the degrees of the columns of $\varphi$.
Similarly, a good portion of the proof of \cref{BM} could be adjusted to the case of modules of projective dimension one by modifying the ranks of the presentation. However, we would not be able to generalize the whole statement of \cref{BM} using this method. Indeed, the proof of the equality $\, \mathcal{J}= (\underline{Y} \cdot B(\varphi)) + I_d(B_i(\varphi))\,$ in the case of perfect ideals of height two crucially makes use of the ideal structure of the cokernel of the presentation matrix (see the proof of \cite[5.3]{BM}).
\section*{Acknowledgements}
Most of the work presented in this manuscript was part of the author's Ph.D. thesis. The author wishes to thank her advisor Bernd Ulrich for his insightful comments on some of the results here presented and for assigning \cref{homework} as a homework problem in one of his courses. Also, part of the content of \cref{SecFiberCone} was motivated by a question of Jonathan Monta\~no, whom we thank very much for fruitful conversations on the topic.
Emission line images of star forming regions often reveal spectacular collimated,
supersonic jets that emerge along the rotation axes of protostellar accretion disks
\citep[see][for reviews]{rb01,ray06}.
The jets break up into knots which form multiple bow shocks as faster
material overtakes slower material \citep[e.g.][]{hartigan01}.
Although measurements are scarce, magnetic fields ahead of bow shocks are weak
when they are detected; hence, the dynamics of the bow shocks are controlled by velocity perturbations
rather than by any magnetic instabilities. In these systems the magnetic field
affects the flow mainly by reducing the compression in the dense postshock regions by adding
magnetic pressure support \citep{morse92,morse93}.
However, close to the star there is evidence that magnetic fields may dominate
the dynamics of jets. Strong observational correlations
exist between accretion and outflow signatures \citep{cabrit90,heg},
and most mechanisms for accelerating jets from disks
involve magnetic fields \citep{ouyed97a,casse00}.
Recent evidence for rotation in jets \citep{bacc02,coffey04} suggests
that fields play an important role in jet dynamics, at least
in the region where the disk accelerates the flow.
There has been considerable work done on the propagation of radiative jets
with strong ($\beta$ $\lesssim$ 1) magnetic fields \citep{cerq98,frank98,frank99,frank00,
gardiner00,gf00,or00,stone00,cerq99,cerq01a,cerq01b,colle06}.
These studies have tended to explore how magnetic fields influence the
large scale structure of jets, with the hope that
the shape of jets may constrain the strength of the
magnetic fields. These papers explored different field geometries, including
ones connected to magneto-centrifugal launch models. Early studies
focused on the development of nose-cones, which form when toroidal magnetic
field is trapped due to pinch forces at the head of the flow.
The role of toroidal fields acting as shock absorbers within
internal working surfaces has also been explored by a number of authors.
More recent studies have focused on the H$\alpha$ emission
properties of MHD jets.
These papers did not, however, address the principal question of the
current work, which is to link together measurements of the field strengths
at different locations in real YSO jets and to infer the
global run of the magnetic field and density with distance from the source. While
earlier studies \citep{gf00,or00,cerq01a} did explicitly identify the crucial connection
between internal working surfaces and magnetic field geometry when the
initial field is helical, the effect this would have on the dependence of
B($\rho$) and hence B(r) in a velocity-variable flow
was not considered, nor was the possibility of a `magnetic
zone' close to the source where $v_{shock}\ \sim\ V_A$. The realization that
such a region may have dynamically differentiable properties from the
super-fast zones downstream is, to the best of our knowledge, new to this
paper. Thus, the work we present here represents the first attempt to
consider how the sparse magnetic fields measurements available in real YSO
jets can be used to infer large scale field patterns in these objects.
In what follows we show that magnetically dominated outflows
close to the disk are consistent with observations of
hydrodynamically dominated jets at larger distances,
provided the jets vary strongly enough in velocity to generate
strong compressions and rarefactions.
We begin by summarizing typical parameters of stellar jets, and then consider
what these numbers imply for the MHD behavior of a jet as a function of its distance
from the source for both the steady-state and time variable cases.
\section{Observed Parameters of Stellar Jets}
\subsection{Velocity Perturbations}
Stellar jets become visible as material passes through shock waves
and radiates emission lines as it cools. Flow velocities, determined
from Doppler motions and proper motions, are typically $\sim$ 300
km$\,$s$^{-1}$. The emission lines are characteristic of much
lower shock velocities, $\sim$ 30 km$\,$s$^{-1}$ in most cases,
leading to the idea that small velocity perturbations on the order
of 10\%\ of the flow speed (with occasional larger amplitudes as high
as 50\%)
continually heat the jet \citep{rb01}.
For jets like HH~111 which lie in the plane of the sky
we can observe how the velocity varies at each point along the
flow in real time by measuring proper motions of the emission.
Thanks to the excellent spatial resolution of the Hubble Space Telescope,
errors in these proper motions measurements are now only $\sim$ 5 km$\,$s$^{-1}$,
which is low enough to discern real differences in the velocity of
material in the jet. As predicted from
emission line studies, the observed differences between adjacent knots
of emission are typically 30 $-$ 40 km$\,$s$^{-1}$ \citep{hartigan01}.
\subsection{Density}
Opening angles of stellar jets are fairly constant along the
flow, ranging between a few degrees to $\gtrsim$ 20 degrees
\citep[e.g.][]{rb01,cdr04}. Hence, to a good approximation we can
take the flow to be conical. Once the jet has entered a strong
working surface it splatters to the sides, making its width
appear larger, so the most reliable measures of jet widths
are those close to the source. Other effects,
such as precession of the jet and inhomogeneous ambient media
also influence jet widths at large distances.
In the absence of these effects, stellar jets can stay collimated for large
distances because they are cool $-$ the sound speeds of $\lesssim$ 10 km$\,$s$^{-1}$
are small compared with the flow speeds of several hundred km$\,$s$^{-1}$.
A well-known example of a conical flow is HH~34, which has a bright
jet that has a nearly constant opening angle until it reaches
a strong working surface \citep[cf. Figure 6 of ][]{reip02}.
If we extend the opening angle defined by the sides of the jet
close to the source to large enough distances to meet the large bow shock HH~34S,
we find that the size of the jet at that distance
is close to that inferred for the Mach disk of
that working surface \citep{morse92}, as expected for a conical flow.
If jets emerge from a point then
the density should be proportional to r$^{-2}$ except perhaps
within a few AU of the source where the wind is accelerated.
New observations of jet widths range from a few AU at the source,
to as high as 15~AU for bright jets like HH~30. For a finite source
region of radius h, the density n $\sim$ $(r+r_0)^{-2}$ for a conical
flow, where $r_0$ = h/$\theta$, and $\theta$ is the half opening angle of the
jet. For h = 5~AU and $\theta$ = 5 degrees, $r_0$ = 57~AU.
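As a quick check of the numbers (the arithmetic here is ours), 5 degrees is 0.087 radians, so
$$ r_0 \,=\, \frac{h}{\theta} \,=\, \frac{5\ {\rm AU}}{0.087} \,\simeq\, 57\ {\rm AU}, $$
and at $r = r_0$ the density lies a factor of 4 below the pointlike
$r^{-2}$ extrapolation.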
For the purposes of constructing a set of fiducial values for jets,
we adopt a density of $10^4$ cm$^{-3}$ at 1000~AU, and assume the width of
the jet at the base to be 10 AU, with an opening half-angle of 5 degrees.
These parameters produce a mass loss rate of
$5\times 10^{-8}$ M$_\odot$yr$^{-1}$ for a flow velocity of 300 km$\,$s$^{-1}$.
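The quoted mass loss rate follows directly from these fiducial values; the
arithmetic below is ours and assumes a mean mass per particle of about one
hydrogen mass. At 1000~AU the jet radius is
R $\simeq$ 5~AU + (1000~AU)$\,$tan$\,5^{\circ}$ $\simeq$ 90~AU $\simeq$ 1.3$\times 10^{15}$~cm, so
$$ \dot{M} \,=\, \pi R^2 n m_{\rm H} v \,\simeq\, \pi\, (1.3\times 10^{15}\ {\rm cm})^2\, (10^4\ {\rm cm^{-3}})\, (1.67\times 10^{-24}\ {\rm g})\, (3\times 10^{7}\ {\rm cm\,s^{-1}}) \,\simeq\, 3\times 10^{18}\ {\rm g\,s^{-1}}, $$
or $\sim$ 4$\times 10^{-8}$ M$_\odot\,$yr$^{-1}$, consistent with the value quoted above.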
With these values we can calculate densities as a function of distance
(the third column of Table~1). The fiducial values in the Table are
only a rough guide to the densities observed in a typical jet.
In addition to intrinsic variations between objects and density variations
lateral to the jet, beyond $\sim$ 1000~AU the observed densities
increase substantially over the volume-averaged densities in the Table owing to compression
in the cooling zones of the postshock gas. Densities are correspondingly lower
in the rarefaction regions between the shocks.
The density dependence in Table 1 for a conical flow appears about right from the
data. New observations of the electron densities and ionization fractions
at distances of $\sim$ 30~AU of the jet in HN~Tau indicate a total density
between $\sim$ 2$\times 10^6$ cm$^{-3}$ and $10^7$ cm$^{-3}$ \citep{hep04},
while the average density in jets such as HH~47, HH~111,
and HH~34 at $\sim$ $10^4$~AU are
$10^3$ -- $10^4$ cm$^{-3}$ \citep[Table 5 of][]{hmr94}.
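As a rough cross-check of this scaling (our arithmetic, and it compares
different objects), extrapolating the HN~Tau value with
$n \sim (r+r_0)^{-2}$ and $r_0$ = 57~AU gives
$$ n(10^4\ {\rm AU}) \,\sim\, 10^7\ {\rm cm^{-3}} \left( \frac{30+57}{10^4+57} \right)^{2} \,\simeq\, 7\times 10^2\ {\rm cm^{-3}}, $$
within an order of magnitude of the averages measured in HH~47, HH~111 and HH~34.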
\subsection{Magnetic Field}
Because most stellar jets radiate only nebular emission lines, which are unpolarized and
do not show any Zeeman splitting, measurements of magnetic fields in jets
are not possible except for a few special cases. The only measurement
of a field in a collimated flow close to the star appears to be that of \citet{ray97}, who
found strong circular polarization in radio continuum observations of T~Tau~S.
The left-handed and right-handed circularly polarized light appear offset
from one another some 10~AU on either side of the star, and the degree of
polarization suggests a field of several Gauss. \citet{ray97} argue that the fields
are too large to be attached to the star, and must come from compressed gas behind
a shock in an outflow.
However, \citet{loinard05} interpret the extended continuum emission from this
object in terms of reconnection events at the star-disk interface.
If the emission does arise in a jet, then
even taking into account compression, the fields must
be at least hundreds of mG in front of the shock to produce the observations.
One other technique has been successful in measuring magnetic fields in jets,
albeit at larger distances.
As gas cools by radiating behind a shock, the density, and hence the
component of the magnetic field parallel to the plane of the shock
(which is tied to the density by flux-freezing) increases
to maintain the postshock region in approximate pressure equilibrium. As a result,
the ratio of the magnetic pressure to the thermal pressure scales as T$^{-2}$
\citep{hartigan03}, so at some point in the cooling zone the magnetic pressure
must become comparable to the thermal pressure
even if the field was very weak in the preshock
gas. The difference between the electron densities
inferred from emission line ratios such as [S~II] 6716/6731 for a nonmagnetic
and weakly-magnetized shock can be as large as two orders of magnitude. Hence,
one can easily measure the component of the magnetic field in the plane of the
shock by simply observing the [S~II] line ratio, provided the preshock density
and the shock velocity are known from other data.
The total luminosity in an emission line constrains the preshock density well,
so the problem comes down to estimating the shock velocity. For most jets this
is a difficult task from line ratios alone because spectra from shocks with
large fields and high shock velocities resemble those from small fields and
low shock velocities \citep{hmr94}. The easiest way to break this
degeneracy is if the shock is shaped like a bow and the velocity is large enough
that there is [O~III] emission at the apex. Emission lines of [O~III] are
relatively independent of the field, and occur only when the shock velocity
exceeds about 90 km$\,$s$^{-1}$. Hence, by observing how far [O~III] extends
away from the apex of the bow, and observing the shape of the bow, one
can infer the shock velocity. Combining the shock velocity, the preshock
density and the observed density in the cooling zone gives the magnetic field.
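For concreteness, this bookkeeping can be sketched in a few lines of
Python. The sketch assumes the standard flux-freezing estimate that the
compression $C$ in a strongly radiative, magnetized shock saturates when
the amplified transverse field balances the ram pressure, so that
$C \simeq \sqrt{2}\,v_s/v_A$ with $v_A$ the preshock Alfven speed; the
input numbers are illustrative, not fits to a particular object.
\begin{verbatim}
import numpy as np

m_H = 1.6726e-24   # proton mass in g; a pure hydrogen gas is assumed

def preshock_field(n_pre, v_shock_kms, n_cool):
    """Preshock B (Gauss) from the preshock density n_pre (cm^-3),
    the shock velocity (km/s), and the observed cooling-zone
    density n_cool (cm^-3), using C = sqrt(2) v_s / v_A."""
    C = n_cool / n_pre                  # measured compression factor
    v_A = np.sqrt(2.0) * v_shock_kms * 1.0e5 / C
    return v_A * np.sqrt(4.0 * np.pi * n_pre * m_H)

# Illustrative HH 34S-like inputs: n_pre = 65 cm^-3, v_s ~ 100 km/s,
# cooling-zone density of a few times 10^3 cm^-3:
print(preshock_field(65.0, 100.0, 3.4e3))  # ~1e-5 G, i.e. ~10 microGauss
\end{verbatim}
With HH~34S-like inputs the sketch returns a field of order 10$\mu$G,
of the same order as the measurements discussed next.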
Unfortunately, only a few bow shocks have high enough velocities to emit
[O~III], so only HH~34S and HH~111V have measured fields. The two cases yield
remarkably similar results. In HH~34S, located
$5.1\times 10^4$~AU from the source, the preshock
gas has a density of 65 cm$^{-3}$ and a magnetic field of 10$\mu$G \citep{morse92},
while HH~111V is $6.4\times 10^4$~AU from the star and has a preshock density of
200 cm$^{-3}$ and a magnetic field of 30$\mu$G \citep{morse93}.
The ratio B/n is essentially the same for both HH~34S and HH~111V -- we
take 15$\mu$G at a density of 100 cm$^{-3}$ as a typical value.
To fill in the field strengths throughout the table requires a relationship
between B and n, which we now explore.
\section{The Scaling Law B $\sim$ n$^p$}
There are two analytical scaling laws between the magnetic field and the
density that might apply to stellar jets. If jets are driven by some sort
of disk wind, then at distances beyond the Alfven radius
\citep[typically a few AU,][]{anderson05}, the field will be mostly toroidal,
and should decline as r$^{-1}$ along the axis of the jet, where r is the
distance from a point in the jet to the source.
This radial dependence can be visualized
by taking a narrow slice of thickness dz perpendicular to the axis of the jet.
As the slice moves down the jet, its thickness remains constant because the
jet velocity is constant at large distances from the disk, and the diameter
of the slice increases linearly with the distance from the source as the
flow moves. Hence the cross sectional area of the slice increases linearly
with distance. The toroidal field strength, proportional to the number of
field lines per unit area in the slice, must therefore scale as r$^{-1}$.
A similar argument shows that the radial B scales as r$^{-2}$ for a conical
flow, which is why the toroidal field dominates in the jet outside of the
region near the disk. For a conical
flow, the density drops as r$^{-2}$, so B $\sim$ n$^{0.5}$ for a steady flow.
In contrast, if shocks and rarefactions dominate the dynamics, then
the field is tied to the density, so B $\sim$ n.
To determine which of these dependencies dominates we
simulated an expanding magnetized flow that produces shock waves from
velocity variability. Our simulations are carried out in 2.5D using
the AstroBEAR adaptive mesh refinement (AMR) code. AMR allows high
resolution to be achieved only in those regions which require it due
to the presence of steep gradients in critical quantities such as gas
density. AstroBEAR has been well-tested on a variety of problems in 1,
2, 2.5D \citep{var06} and 3D \citep{lebedev04}. Here we use the MHD
version of the code in cylindrical symmetry (R,z) with {\bf B} =
B$_\phi${\bf e}$_\phi$, hence maintenance of $\nabla \cdot {\bf B} = 0$
is automatically achieved. We initialize our jet with magnetic
field and gas pressure profiles ($B_\phi(R), P(R)$) which maintain
cylindrical force equilibrium \citep{frank98}.
The spatial scale of the grid is arbitrary, but for plotting purposes we
take it to be 10~AU so that the extent of the simulation resembles that of
a typical stellar jet. Choosing a scale of 1~AU would match the
dimensions at the base of the flow. The time steps are set to 0.5 of the
Courant-Friedrichs-Lewy limit, which is the smallest travel time for
information across a cell in the simulation. For a 200 km$\,$s$^{-1}$ jet
and a 10~AU cell size this time interval is $\Delta$t = 0.12 years.
The input jet velocity is a series
of steps, whose velocity in km$\,$s$^{-1}$ is given by V = 200(1+fr), where f
is the maximum amplitude of the velocity perturbation,
and r is a random number between $-$1 and 1. We
ran simulations with f = 0.5, 0.25, and 0.10.
We verified that a constant velocity jet gave a constant Alfven velocity and
n $\sim$ $(r+r_0)^{-2}$ as predicted by analytical theory.
The opening half angle of the jet was 5 degrees;
a numerical run with a wider opening half angle of 15 degrees produced the same
qualitative behavior as the more collimated models.
The first ten cells, taken to be the smallest AMR grid size, are
kept at a fixed velocity V for the entire length of the pulse, and these
ten cells are overwritten with a new random velocity after a pulse
time of $\sim$ 7.2 years (60$\Delta$t) for a grid size of 10~AU and a
velocity of 200 km$\,$s$^{-1}$. Densities, velocities, and magnetic field strengths
are mapped to a uniform spatial grid and
printed out whenever the input velocity changes.
Cooling is taken into account in an approximate manner by using a polytropic
equation of state with index $\gamma$ = 1.1.
The density of the ambient medium is 1000 cm$^{-3}$ and the
initial density of the jet is held constant at 7500 cm$^{-3}$.
We fixed the initial magnetic field to give
a constant initial Alfven speed of 35 km$\,$s$^{-1}$.
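The inflow prescription is simple enough to restate explicitly; the
following Python sketch reproduces the timestep and the random pulse
train described above (it illustrates the boundary condition only and
is not an excerpt from AstroBEAR).
\begin{verbatim}
import numpy as np

AU, YR = 1.496e13, 3.156e7        # cm, s
cell = 10.0 * AU                  # grid scale adopted for plotting
v_jet = 200.0e5                   # mean jet speed in cm/s

# 0.5 of the Courant-Friedrichs-Lewy limit for a 200 km/s flow:
dt = 0.5 * cell / v_jet
print(dt / YR)                    # ~0.12 yr, as quoted above

def pulse_train(n_pulses, f, rng):
    """Inflow velocities V = 200(1 + f r) km/s with r uniform in
    [-1,1]; each value is held for one pulse time (60 dt)."""
    r = rng.uniform(-1.0, 1.0, n_pulses)
    return 200.0 * (1.0 + f * r)

rng = np.random.default_rng(0)
print(pulse_train(5, 0.5, rng))
\end{verbatim}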
Figs.~1 $-$ 4 show the results obtained for the f = 0.5 case.
Similar plots were made for a single, nonmagnetic velocity perturbation in 1-D
by \citet{hr93}. Positive velocity perturbations
form compression waves that steepen to form forward and reverse shocks (a bow
shock and Mach disk in 2D), while negative velocity perturbations produce
rarefactions as fast material runs ahead of slower material.
The top panel in Fig.~1 shows the density along the axis of the jet once
the leading bow shock has progressed off the grid.
The strongest rarefactions, marked as open
squares, closely follow an r$^{-2}$ law. Essentially once these
strong rarefactions form in the flow, the gas there expands freely until it
is overrun by a shock wave. Because each of the input velocity perturbations
begins by forcing a velocity into the first 10 AMR zones
(a region $\sim$ 100 AU from the source depending on the size of the AMR zone),
rarefactions caused by drops in the random velocity
originate from log(r) $\sim$ 2 (Fig.~1). Hence, the open squares
lie close to a line that goes through the steady-state solution at this point.
The bottom plot shows that shock waves and rarefactions dominate the flow
dynamics. By the end of the simulation, the $\sim$ 35 perturbations have
interacted with one another, colliding and merging to
create only seven clear rarefactions and a
similar number of shocks. The jet evolves quite differently than
it would in steady state (V$_A$ = constant). While the gas initially
follows a B $\sim$ n$^p$ law with p = 0.5, as soon as
shocks and rarefactions begin to form, the
value of p becomes closer to unity, with p $\sim$ 0.85 a reasonable match
to the entire simulation.
The important point is that
once shock waves and rarefactions form, they will increase the value
of p above that expected for a steady state flow. This increase means that the
magnetic signal speed (a term that refers to fast magnetosonic waves,
slow magnetosonic waves, or Alfven waves, all of which have
similar velocities because the sound speed is low, $\sim$ 10 km$\,$s$^{-1}$)
drops overall at larger distances, especially within the
rarefaction waves. Hence, small velocity perturbations that form only magnetic
waves close to the star will generate shocks if they overrun rarefacted gas at
large distances from the star. Essentially velocity perturbations redistribute
the magnetic flux and thereby facilitate shock formation over much of the jet.
Using the numerical values from section 2.3, we can fill in the
fourth column in Table~1 using B/(15~$\mu$G) = (n/100 cm$^{-3}$)$^{0.85}$.
The fifth column of the Table gives the Alfven
speed in the preshock gas assuming full ionization, which is also appropriate for
dynamics of partially ionized gas as discussed below.
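The arithmetic behind these two columns is summarized in the sketch
below (Python); a pure hydrogen gas is assumed for the mass density,
which is our reading of the full-ionization convention above.
\begin{verbatim}
import numpy as np

m_H = 1.6726e-24                       # proton mass in g

def B_of_n(n):
    """Scaling law anchored at 15 microGauss for n = 100 cm^-3."""
    return 15.0e-6 * (n / 100.0) ** 0.85     # Gauss

def v_alfven(n):
    """Alfven speed in km/s computed with the total density."""
    return B_of_n(n) / np.sqrt(4.0 * np.pi * n * m_H) / 1.0e5

for n in [1e2, 1e4, 1e6]:              # representative densities
    print(n, B_of_n(n), v_alfven(n))
# Since B ~ n^0.85, the Alfven speed rises with density as n^0.35:
# roughly 3 km/s at 100 cm^-3, 16 km/s at 10^4, 80 km/s at 10^6.
\end{verbatim}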
\section{Discussion}
\subsection{Evolution of a Typical Velocity Perturbation in an MHD Jet}
Following how individual velocity perturbations evolve with time illustrates many
of the dynamical processes that govern these flows. Fig.~2 shows a typical
sequence of such perturbations, labeled A, B, C, D, and E, with initial velocities
of 192, 230, 172, 223, and 295 km$\,$s$^{-1}$, respectively.
In the left panel, which shows the simulation after 11 velocity pulses,
a compression zone (marked as a solid vertical line) forms as B overtakes A,
and both the density and Alfven velocity V$_A$ increase at this interface.
Other compression zones grow from the interfaces of E/D and D/C.
The rarefaction (dashed line) between B and C creates a characteristic `ramp' profile
in velocity, and at the center of this feature lies a broad, deep density
trough an order of magnitude lower than the surrounding flow. The Alfven
speed in this trough has already dropped to nearly 10 km$\,$s$^{-1}$.
For comparison, the steady state solution has V$_A$ = 35 km$\,$s$^{-1}$
everywhere, with a density that declines from the input value of 7500 cm$^{-3}$.
The right panel shows the simulation several hundred time steps later,
after 12 velocity pulses have passed through the input nozzle of the jet.
Pulses A, B, and C have all evolved into something other than a step function,
and little remains of pulse D, which will soon form the site of a merger
between the denser knots at the D/E and C/D interfaces. The compression wave
between A and B (1125 AU at left, and 1475 AU at right) has an interesting
kink in its velocity profile. The two steep sides of this kink would become
forward and reverse shocks if it were not for the fact that the
Alfven speed there remains high enough, $\sim$ 35 km$\,$s$^{-1}$, to
inhibit the formation of a shock.
The left panel of Fig.~3 shows the same region of the jet several pulse times
later. The only remaining pulse in this section of the jet is E, which has formed both a forward
(bow) shock and a reverse (Mach disk) shock. The Alfven speeds at $\sim$ 2300 AU
ahead of the forward shock and at $\sim$ 2100 AU behind
the reverse shock are both only 10 $-$ 20 km$\,$s$^{-1}$, so this gas
shocks easily. Both the forward and reverse shocks have magnetosonic
Mach numbers of 2 $-$ 3. The working surface between these shocks has a density
of $\sim$ $3\times 10^4$ cm$^{-3}$, a factor of 4 increase over the
initial jet density at the source and about two orders of magnitude higher
than the surrounding gas. The Alfven speed there is 120 km$\,$s$^{-1}$,
having reached a maximum of 140 km$\,$s$^{-1}$ when the shock
first formed. Pulses A through C have merged to create a zone of nearly
constant velocity from 2400 $-$ 3400~AU. The density in this region is
far from constant, however, with the density in the feature at 2900 AU
a factor of 500 higher than its surroundings. This type of feature can
cause problems in estimating mass loss rates, because it is a dense
blob with substantial mass that is no longer being heated by shocks, and
may therefore not appear in emission line images.
The right panel of Fig.~3 shows the working surface of knot E after 3
more pulse times. The velocity perturbation E has weakened to $\sim$ 30 km$\,$s$^{-1}$
but still forms a pair of shocks because the surrounding gas has an Alfven speed of
only 10 km$\,$s$^{-1}$. The magnetic pressure in the working surface is
high enough to cause the region to expand, which lowers the density and
the Alfven speed. In the right panel the working surface is now 200 AU
wide and the Alfven speed has dropped to about 70 km$\,$s$^{-1}$.
A new shock is just forming at 3900 AU as all the material on the
left side of the plot with V $>$ 200
km$\,$s$^{-1}$ overtakes slower, but relatively dense gas from 3900 AU to
4400 AU.
The continuous creation and merging of shocks, rarefactions, and compression
waves leads to some interesting and unexpected results. Because dense knots can
have significant magnetic pressure support, when they collide they can `bounce',
as has been seen before in simulations of colliding magnetized clouds
\citep{miniati99}. Splashback from such a collision
is evident later in the
simulation, where the velocity at one point drops to 70 km$\,$s$^{-1}$,
lower than any of the input velocities, which all lie between 100 km$\,$s$^{-1}$
and 300 km$\,$s$^{-1}$.
Magnetically, the overall effect is to concentrate the field
into a few dense areas, which then subsequently expand \citep[see also][]{gf00}.
Fig.~4 shows the Alfven speed at the end of the simulation, by which time the leading bow
shock has propagated off the right end of the grid. Though there are a few
areas that have large Alfven speeds, most of the gas in the jet has a
significantly lower V$_A$ than the steady-state solution does (solid line).
The graph shows that, on average, magnetic fields tend to be
more important dynamically close to the star.
Lower-amplitude simulations (Fig.~5) show similar qualitative behavior both in
the formation and propagation of shocks and rarefactions, and in
the dependence of B on n. As expected, fewer shocks and rarefactions
form in the low-amplitude simulations and the results are closer to the
steady-state solution (p = 0.5). In all cases, areas of high Alfven velocity concentrate
into a few shocked regions where the density is high, and most locations along
the jet have lower Alfven speeds than those of the steady state case.
\subsection{The Hydrodynamic and Magnetic Zones}
As noted in section 3 and in Figs.~4 and 5, because
B $\sim$ n$^p$ along the jet with $p>0.5$, the Alfven
speed V$_A$ increases as the density rises. When n $\gtrsim$ $10^5$ cm$^{-3}$, a typical
velocity perturbation of 40 km$\,$s$^{-1}$ will produce a magnetosonic wave rather
than a shock. This variation of the average magnetic signal speed with density, and therefore with
distance, implies that jets can behave hydrodynamically
at large distances, and magnetically close to the star.
Far from the star, the densities are low and the
dynamics are dominated by multiple bow shocks and rarefactions that
form as faster material overtakes slower material. The
magnetic field reduces the compression in the cooling
zones behind the shocks and cushions any collisions between knots,
but is otherwise unimportant in the dynamics.
The fiducial values in Table~1 show that this hydrodynamic
zone typically extends from infinity to within about 300~AU
of the star ($\sim$ 1$^{\prime\prime}$ for a typical source),
so most emission line images of jets show gas in this zone.
Alternatively, when the magnetic signal speed is greater than a typical
velocity perturbation, the magnetic field inhibits the formation of
a shock unless the perturbation is abnormally large.
Figs. 4 and 5 show that the boundary between the magnetic and hydrodynamic
zones is somewhat ill-defined:
magnetic forces dominate wherever the field is
high enough, as occurs in a few places in the simulations at
large distances, for example, in the aftermath of the collision
of two dense knots. However, statistically we expect magnetic fields
to prevent typical velocity perturbations from forming shocks
inside of $\sim$ 300~AU.
A potential complication with the above picture is that fields may dampen
small velocity perturbations in the magnetic zone before the
perturbations ever reach the hydrodynamic zone where they are able to create shocks.
How such perturbations behave depends to a large degree on how disk winds initially
generate velocity perturbations in response to variable disk accretion rates.
If the mass loss is highly clumpy, then plasmoids of dense magnetized gas may simply
decouple from one another at the outset, produce rarefactions, and thereby
reduce the magnetic signal speed enough to allow the first shocks to form.
In addition, the geometry of the field will
not remain toroidal if the flow becomes turbulent owing
to fragmentation, precession, or interactions between clumps.
When both toroidal and poloidal fields are present, velocity variability
concentrates the toroidal fields into the dense shocked regions and the poloidal
field into the rarefactions \citep{gf00}. The magnetic signal speed in poloidally-dominated
regions drops as the jet expands, facilitating the formation of shocks in these
regions.
It might be possible to confirm the existence of stronger fields in knots
close to the source with existing instrumentation. As described in section 2.3,
by combining proper motion observations with emission line studies
one can infer magnetic fields provided the velocity perturbations have
large enough amplitudes.
\subsection{Connection to the Disk}
At distances closer to the disk than 10~AU, a conical flow with a finite
width (n $\sim$ $(r+r_0)^{-2}$)
is not likely to model the jet well. For a disk wind, the field
lines should curve inward until they intersect the disk at $\lesssim$ 1~AU,
while the field changes from being toroidal to mostly poloidal.
We can use the scaling law between magnetic field strength and density derived above
to see if the field strengths are roughly consistent with an MHD launching scenario.
With B $\sim$ n$^{0.85}$, the Alfven velocity equals the jet speed,
$\sim$ 300 km$\,$s$^{-1}$, when n $\sim$ $4\times 10^7$cm$^{-3}$ and
B $\sim$ 0.9~G. A moderately strong shock could then increase the field strength
to a few Gauss, as observed by
\citet{ray97}. Taking the density proportional to r$^{-2}$ within 10~AU
gives r = 2.5~AU when v = 300 km$\,$s$^{-1}$,
the correct order of magnitude for the Alfven radius of an MHD disk wind.
The footpoint of the field line in the disk would be $\sim$ 0.4~AU for a
central star of one solar mass.
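These numbers are straightforward to reproduce; because the Alfven
speed scales as n$^{0.35}$ under the adopted law, the density at which
it reaches the jet speed follows in closed form (Python sketch, again
assuming a pure hydrogen gas):
\begin{verbatim}
import numpy as np

m_H = 1.6726e-24                       # proton mass in g

def v_alfven(n):                       # km/s, with B = 15 uG (n/100)^0.85
    B = 15.0e-6 * (n / 100.0) ** 0.85
    return B / np.sqrt(4.0 * np.pi * n * m_H) / 1.0e5

# v_A ~ n^0.35, so v_A = 300 km/s is reached at:
n_eq = 100.0 * (300.0 / v_alfven(100.0)) ** (1.0 / 0.35)
print(n_eq)                                  # ~4e7 cm^-3
print(15.0e-6 * (n_eq / 100.0) ** 0.85)      # ~0.9 G
\end{verbatim}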
The observed correlation of accretion and outflow signatures, together
with the existence of a few very strong bow shocks in some jets, suggests
that sudden increases in the mass accretion rate through the disk produce
episodes of high mass loss that form knots in jets as the material
moves away from the star. Young stars occasionally exhibit large accretion
events known as FU~Ori and EX~Ori outbursts \citep{hartmann04,briceno04}, which
may produce such knots.
However, because knots typically take tens of years to move far enough away
from the star to be spatially resolved, it has been difficult to tie an
accretion event to a specific knot in a jet. In the case of a newly-ejected
knot from the T Tauri star CW~Tau, there does not appear to have been
an accretion event at the time of ejection, though the photometric
records are incomplete \citep{hep04}.
Because magnetic fields must dominate jets close to the disk, it is
possible that the origin of jet knots is purely magnetic.
Models of time-dependent MHD jets have
produced knots that are purely magnetic in nature, and do not require
accretion events \citep{ouyed97b}. For this
scenario to work the mechanism of creating the knots must also
impart velocity differences on the order of 10\%\ of the flow velocity
in order to be consistent with observations of velocity variability at
large distances from the star. It may also be necessary to
decouple the field from the gas via ambipolar diffusion in order to
reduce the Alfven speed enough to allow these velocity
perturbations to initiate shocks and rarefactions. However, ambipolar
diffusion timescales appear to be too long to operate efficiently in jet beams
\citep{frank99}. One way
to distinguish between accretion-driven knots and pure
magnetic knots is to systematically monitor the brightness of T~Tauri
stars with bright forbidden lines over several decades to see whether
or not accretion events are associated with knot ejections.
\subsection{Effects of Partial Preionization}
The ionization fraction of a gas affects how it responds to magnetic disturbances.
Cooling zones of jets are mostly neutral --
the observed ionization fractions of
bright, dense jets range from $\sim$ 3\%\ to 7\%\ \citep{hmr94,podio06},
and rise to $\sim$ 20\%\ for some objects \citep{be99}.
The ionization fraction is higher close to the star in some jets, $\sim$
20\%\ if the emission comes from a shocked zone, and
as much as 50\%\ for a knot of uniform density \citep{hep04}, while in
HH~30 the ionization fraction rises from a low value of $\lesssim$ 10\%\
to about 35\%\ before declining again at larger distances \citep{hm07}.
The Alfven speed in a partially ionized gas like a stellar jet
is inversely proportional to the square root of the ion density,
not of the total density. If the Alfven speed exceeds the shock velocity,
then ions accelerated ahead of the shock collide with neutrals and form a
warm precursor there. If the precursor is strong enough
it can smooth out the discontinuity of the flow variables at the shock front
into a continuous rise of density and temperature known as a C-shock
\citep{draine80,draine83}. Precursors have been studied when the gas is molecular
\citep{flower03,ciolek04}, but we have not found any calculations of
the effects precursors have on emission lines from shocks when the
preshock gas is atomic and mostly neutral.
Dynamically the main issue is whether or not the magnetic signal speed in the preshock
gas is large enough to inhibit the formation of a shock. Because ions
couple to the neutrals in the precursor region via strong charge exchange
reactions, any magnetic waves in this region should be quickly mass-loaded with neutrals.
Hence, the relevant velocity for affecting the dynamics is the Alfven velocity
calculated from the total density, and not the density of the ionized
portion of the flow. Another way to look at the problem is to consider the
compression behind a magnetized shock, taking a large enough grid size so
the precursor region is unresolved spatially. By conserving mass, momentum,
and energy across the shock one finds that the compression in a magnetized
shock varies with the fast magnetosonic Mach number in almost the same
way as the compression in a nonmagnetized shock varies with the Mach number
\citep[Figure 1 of][]{hartigan03}. Hence,
the effective signal speed that determines the compression is
calculated using the total density, and not the density of the ionized component.
For this reason we use
the total density to calculate the Alfven speed in the fifth column of
Table~1.
\section{Summary}
We have used observations of magnetic fields and densities in stellar
jets at large distances from the star to infer densities and field
strengths at all distances under the assumptions of a constant opening
angle for the flow and flux-freezing of the field. Numerical simulations
of variable MHD jets show that shocks and rarefactions dominate the
relation between the density n and the magnetic field B, which takes the
approximate form B $\sim$ n$^p$ with $0.5 < p < 1$. Because p $>$ 0.5, the
Alfven velocity increases at higher densities, which occur on average closer to
the star. This picture of a magnetically dominated jet close to the star that gives
way to a weakly-magnetized flow at larger distances is consistent with
existing observations of stellar jets that span three orders of magnitude
in distance. Velocity perturbations effectively sweep up the magnetic flux
into dense clumps, and the magnetic signal speed drops markedly in the
rarefaction zones between the clumps, which allows shock waves to form easily
there. For this reason, magnetic fields will have only modest dynamical effects on
the visible bow shocks in jets, even if fields are dynamically important
in a magnetic zone near the star.
\acknowledgements{This research was supported in part by a NASA grant from
the Origins of Solar Systems Program to Rice University. We thank Sean Matt
and Curt Michel for useful discussions on the nature of magnetic flows.
}
\clearpage
\section{Introduction}
Understanding the dynamics of small-size tracer particles or of a
passive field transported by an incompressible turbulent flow plays an
important role in the description of several natural and industrial
phenomena. For instance it is well known that turbulence has the
property of inducing efficient mixing over the whole range of length
and time scales spanned by the turbulent cascade of kinetic energy
(see e.g.~\cite{d05}). Describing quantitatively such a mixing has
consequences in the design of engines, in the prediction of pollutant
dispersion or in the development of climate models accounting for
transport of salinity and temperature by large-scale ocean streams.
However, in some settings, the suspended particles have a finite size
and a mass density very different from that of the fluid. Thus they
can hardly be modeled by tracers because they have inertia. In order
to fully describe the dynamics of such inertial particles, one has to
consider many forces that are exerted by the fluid even in the simple
approximation where the particle is a hard sphere much smaller than
the smallest active scale of the fluid flow \cite{mr83}. Nevertheless
the dynamics drastically simplifies in the asymptotics of particles
much heavier than the carrier fluid. In that case, and when buoyancy
is neglected, they interact with the flow only through a viscous drag,
so that their trajectories are solutions to the Newton equation\,:
\begin{equation}
\frac{d^2\bm X}{dt^2} = -\frac{1}{\tau} \left[ \frac{d\bm X}{dt} -
\bm u(\bm X,t)\right]\,,
\end{equation}
where $\bm u$ denotes the underlying fluid velocity field and $\tau$
is the response time of the particles. Even if the carrier fluid is
incompressible, the dynamics of such heavy particles lags behind that
of the fluid and is not volume-preserving. At large times particles
concentrate on singular sets evolving with the fluid motion, leading
to the appearance of strong spatial inhomogeneities dubbed
preferential concentrations. At the experimental level such
inhomogeneities have been known for a long time (see~\cite{ef94} for a
review). At present the statistical description of particle
concentration is a largely open question with many applications. We
mention the formation of rain droplets in warm clouds~\cite{ffs02},
the coexistence of plankton species~\cite{lp01}, the dispersion in the
atmosphere of spores, dust, pollen, and chemicals~\cite{s86}, and the
formation of planets by accretion of dust in gaseous circumstellar
disks~\cite{pl01}.
The dynamics of inertial particles in turbulent flows involves a
competition between two effects: on the one hand particles have a
dissipative dynamics, leading to the convergence of their trajectories
onto a dynamical attractor~\cite{b05}, and on the other hand, the
particles are ejected from the coherent vortical structures of the
flow by centrifugal inertial forces~\cite{m87}. The simultaneous
presence of these two mechanisms has so far led to the failure of all
attempts made to obtain analytically the dynamical properties or the
mass distribution of inertial particles. In order to circumvent such
difficulties a simple idea is to tackle independently the two aspects
by studying toy models, either for the fluid velocity field, or for
the particle dynamics that are hopefully relevant in some asymptotics
(small or large response times, large observation scales,
etc.). Recently an important effort has been made in the understanding
of the dynamics of particles suspended in flows that are
$\delta$-correlated in time, as in the case of the well-known
Kraichnan model for passive tracers~\cite{k68}. Such settings, which
describe well the limit of large response time of the particles,
allow one to obtain closed equations for density correlations by
Markov techniques. The $\delta$-correlation in time, of course, rules
out the presence of any persistent structure in the flow; hence any
observed concentrations can only stem from the dissipative
dynamics. Most studies in such simplified flows dealt with the
separation between two particles~\cite{p02,mw04,dmow05,
detal06,bch06}.
Recent numerical studies in fully developed turbulent
flows~\cite{betal06} showed that the spatial distribution of particles
at lengthscales within the inertial range is strongly influenced by
the presence of voids at all active scales spanned by the turbulent
cascade of kinetic energy. The presence of these voids has a
noticeable statistical signature on the probability density function
(PDF) of the coarse-grained mass of particles which displays an
algebraic tail at small values. To understand at least from a
qualitative and phenomenological viewpoint such phenomena, it is
clearly important to consider flows with persistent vortical
structures which are ejecting heavy particles. For this purpose, we
introduce in this paper a toy model where the vorticity field of the
carrier flow is assumed piecewise constant in both time and space and
takes either a finite fixed value $\omega$ or vanishes. In addition to
this crude simplification of the spatial structure of the fluid
velocity field we assume that the particle mass dynamics obeys the
following rule: during each time step there is a mass transfer between
the cells having vorticity $\omega$ toward the neighboring cells where
the vorticity vanishes. The amount of mass escaping to neighbors is at
most a fixed fraction $\gamma$ of the mass initially contained in the
ejecting cell. We show that such a simplified dynamics reproduces many
aspects of the mass distribution of heavy particles in incompressible
flow. In particular, we show that the PDF of the mass of inertial
particles has an algebraic tail at small values and decreases as
$\exp(-A\,m\,\log m)$ when $m$ is large. Analytical predictions are
confirmed by numerical experiments in one and two dimensions.
In section~\ref{sec:ejection} we give some heuristic motivations for
considering such a model and a qualitative relation between the
ejection rate $\gamma$ and the response time $\tau$ of the heavy
particles. Section \ref{sec:model} consists of a precise definition
of the model in one dimension and in its extension to higher
dimensions. Section \ref{sec:pdfm} is devoted to the study in the
statistical steady state of the PDF of the mass in a single cell. In
section~\ref{sec:pdfcoarse} we study the mass distribution averaged
over several cells to gain some insight on the scaling properties in
the mass distribution induced by the model.
Section~\ref{sec:conclusion} encompasses concluding remarks and
discussions on the extensions and improvements of the model that are
required to get a more quantitative insight on the preferential
concentration of heavy particles in turbulent flows.
\section{Ejection of heavy particles from eddies}
\label{sec:ejection}
The goal of this section is to give some phenomenological arguments
explaining why the model which is shortly described above, might be of
relevance to the dynamics of heavy particles suspended in
incompressible flows. In particular we explain why a fraction of the
mass of particles exits a rotating region and give a qualitative
relation between the ejection rate $\gamma$ and the response time
$\tau$ entering the dynamics of heavy particles. For this we focus on
the two-dimensional case and consider a cell of size $\ell$ where the
fluid vorticity $\omega$ is constant and the fluid velocity vanishes
at the center of the cell. This amounts to considering that the fluid
velocity is linear in the cell with a profile given to leading order
by the strain matrix. Having a constant vorticity in a
two-dimensional incompressible flow means that we focus on cases where
the two eigenvalues of the strain matrix are purely imaginary complex
conjugate. The particle dynamics reduces to the second-order
two-dimensional linear system
\begin{equation}
\frac{d^2\bm X}{dt^2} = -\frac{1}{\tau}\,\frac{d\bm X}{dt} +
\frac{\omega}{\tau} \left[\!\!\!\begin{array}{rl} 0 & 1 \\ -1 &
0
\end{array} \right]\, \bm X \,.
\end{equation}
It is easily checked that the four eigenvalues of the evolution matrix
are the following complex-conjugate pairs
\begin{equation}
\lambda_{\pm,\pm} = \frac{-1 \pm \sqrt{1 \pm 4 i \tau\omega}}{2\tau}\,.
\end{equation}
Only $\lambda_{+,-}$ and $\lambda_{+,+}$ have a positive real part
which is equal to
\begin{equation}
\mu = \frac{-1 +\frac{1}{2}
\sqrt{2\sqrt{1+16\tau^2\omega^2}+2}}{2\tau}\,.
\end{equation}
This means that the distance of the particles to the center of the
cell increases exponentially fast in time with a rate $\mu$. If we now
consider that the particles are initially uniformly distributed inside
the cell, we obtain that the mass of particles remaining in it
decreases exponentially fast in time, proportionally to
$e^{-2\mu t}$. Namely the mass of particles which are still in the cell at
time $T$ is
\begin{equation}
m(T) = m(0)\,(1-\gamma) = m(0)\,
\exp\!\!\left[-\frac{T}{\tau}\!\left(-1 +\frac{1}{2}
\sqrt{2\sqrt{1+16\tau^2\omega^2}+2} \right)\!\right]\!\!.
\end{equation}
The rate $\gamma$ at which particles are expelled from the cell
depends upon the response time $\tau$ of the particles and upon two
characteristic times associated to the fluid velocity. The first is
the time length $T$ of the ejection process which is given by the
typical life time of the structure with vorticity $\omega$. The
second time scale is the turnover time $\omega^{-1}$, which measures the
strength of the eddy. There are hence two dimensionless parameters
entering the ejection rate $\gamma$: the Stokes number ${\mbox{\it St}} =
\tau\omega$ giving a measure of inertia and the Kubo number ${\mbox{\it Ku}} =
T\omega$ which is the ratio between the correlation time of structures
and their eddy turnover time. One hence obtains the following estimate
of the ejection rate
\begin{equation}
\gamma = 1 - \exp\!\!\left[-\frac{{\mbox{\it Ku}}}{{\mbox{\it St}}}\,\left(-1 +\frac{1}{2}
\sqrt{2\sqrt{1+16{\mbox{\it St}}^2}+2} \right)\!\right]\!\!.
\label{eq:estimgamma}
\end{equation}
The graph of the fraction of particles ejected from the cell as a
function of the Stokes number is represented in
figure~\ref{fig:gamma_fn_St} for three different values of the Kubo
number. The function goes to zero as ${\mbox{\it Ku}}\,{\mbox{\it St}}$ in the limit ${\mbox{\it St}}\to0$
and as ${\mbox{\it Ku}}\,{\mbox{\it St}}^{-1/2}$ in the limit ${\mbox{\it St}}\to\infty$. It reaches a
maximum which is an indication of a maximal segregation of the
particles, for ${\mbox{\it St}}\approx 1.03$ independently of the value of ${\mbox{\it Ku}}$.
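A short numerical sketch (Python; variable names are ours) evaluates
(\ref{eq:estimgamma}), locates its maximum, and cross-checks the rate
$\mu$ against the eigenvalues of the full $4\times4$ evolution matrix:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def gamma(St, Ku):
    """Ejection rate: gamma = 1 - exp[-(Ku/St) g(St)]."""
    g = -1.0 + 0.5 * np.sqrt(2.0 * np.sqrt(1.0 + 16.0 * St**2) + 2.0)
    return 1.0 - np.exp(-(Ku / St) * g)

# Ku only rescales the exponent, so the argmax over St is Ku-independent.
res = minimize_scalar(lambda s: -gamma(s, 1.0), bounds=(0.01, 10.0),
                      method="bounded")
print(res.x)                         # ~1.03

# Eigenvalue cross-check of mu for one choice of tau, omega:
tau, omega = 0.3, 1.0
A = np.array([[0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, omega / tau, -1 / tau, 0],
              [-omega / tau, 0, 0, -1 / tau]])
mu_num = np.linalg.eigvals(A).real.max()
mu_ana = (-1 + 0.5 * np.sqrt(2 * np.sqrt(1 + 16 * (tau * omega)**2) + 2)) \
         / (2 * tau)
print(mu_num, mu_ana)                # the two values agree
\end{verbatim}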
\begin{figure}[ht]
\centerline{\includegraphics[width=0.666\textwidth]{gamma_fn_St.eps}}
\vspace{-10pt}
\caption{Fraction of the mass of initially uniformly distributed
particles that is ejected from an eddy of arbitrary size $\ell$ as a
function of the Stokes number ${\mbox{\it St}}=\tau\omega$. The various curves
refer to different values of the Kubo number ${\mbox{\it Ku}}$ as labeled. }
\label{fig:gamma_fn_St}
\end{figure}
In three dimensions, one can extend the previous approach to obtain an
ejection rate for cells with a uniform rotation, i.e.\ a constant
vorticity $\bm\omega$. There are however two main difficulties. The
first is that in three dimensions the eigenvalues of the strain matrix
in rotating regions are no longer purely imaginary but have a real
part given by the opposite of the rate in the stretching
direction. Such a vortex stretching has to be considered to match
observation in real flows. The second difficulty stems from the fact
that the vorticity is now a vector and has a direction, so that
ejection from the cell can occur only in the directions
perpendicular to the direction of $\bm\omega$. These two difficulties
imply that the spectrum of possible ejection rates is much broader
than in the two-dimensional case. However the rough qualitative
picture is not changed.
\section{A simple mass transport model}
\label{sec:model}
We here describe the model in detail in one dimension and mention
at the end of the section how to generalize it to two and higher
dimensions. Let us consider a discrete partition of an interval in $N$
small cells. Each of these cells is associated with a mass which is a
continuous variable. We denote by $m_j(n)$ the mass in the $j$-th cell
at time $t=n$. At each integer time we choose randomly $N$ independent
variables: $\Omega_j=1$ with probability $p$ and $\Omega_j=0$ with
probability $1-p$. The evolution of mass between times $n$ and $n+1$
is given by:
\begin{equation}
m_j(n+1) = \left\{ \begin{array}{ll} m_j(n) -
\frac{\gamma}{2}\,\left[2-\Omega_{j-1}-\Omega_{j+1}\right]\,m_j(n) &
\mbox{if } \Omega_j = 1\,, \\ m_j(n) +
\frac{\gamma}{2}\,\left[\Omega_{j-1}\,m_{j-1}(n)+\Omega_{j+1}\,
m_{j+1}(n)\right] & \mbox{if } \Omega_j = 0\,. \end{array} \right.
\label{eq:massdynamics}
\end{equation}
In other terms, when $\Omega_j = 1$, the $j$-th cell loses mass if
$\Omega_{j-1}=0$ or $\Omega_{j+1} = 0$, and when $\Omega_j=0$, it
gains mass if $\Omega_{j-1}=1$ or $\Omega_{j+1} = 1$. The flux of mass
between the $j$-th and the $(j+1)$-th cell is proportional to $\Omega_j -
\Omega_{j+1}$ (see figure~\ref{fig:sketch}).
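For concreteness, one update step of (\ref{eq:massdynamics}) with
periodic boundaries can be written in a few lines (Python sketch; the
function and variable names are ours):
\begin{verbatim}
import numpy as np

def step(m, p, gamma, rng):
    """One time step: each cell with Omega=1 sends a fraction gamma/2
    of its mass to each nearest neighbour with Omega=0."""
    Om = (rng.random(m.size) < p).astype(float)
    Om_r, m_r = np.roll(Om, -1), np.roll(m, -1)   # values in cell j+1
    # flux mu_j between cells j and j+1 (positive means rightward)
    mu = 0.5 * gamma * (Om * (1 - Om_r) * m - Om_r * (1 - Om) * m_r)
    return m + np.roll(mu, 1) - mu                # m_j + mu_{j-1} - mu_j

rng = np.random.default_rng(1)
m = np.ones(2**16)
for _ in range(10**4):
    m = step(m, p=0.5, gamma=0.5, rng=rng)
print(m.mean())    # total mass is conserved: the mean stays equal to 1
\end{verbatim}
Each flux enters the update once with each sign, which makes the
conservation of the total mass explicit.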
\begin{figure}[ht]
\centerline{\includegraphics[width=0.55\textwidth]{sketch.eps}}
\vspace{-10pt}
\caption{Sketch of the dynamics in the one-dimensional case: the
fluxes of mass are represented as arrows. A cross means no flux. }
\label{fig:sketch}
\end{figure}
In particular, if $\Omega_j = \Omega_{j+1}$, no mass is transferred
between cells. When the system is supplemented by periodic boundary
conditions between the cells $N$ and 1, it is clear that the total
mass is conserved. Hereafter we assume that the mass is initially
$m_j=1$ in all cells, so that the total mass is $\sum_j m_j =
N$. Spatial homogeneity of the random process $\Omega_j$ implies that
$\langle m_j \rangle = 1$ for all later times, where the angular
brackets denote average with respect to the realizations of the
$\Omega_j$'s.
A noticeable advantage of such a model for mass transportation is that
the mass field $\bm m = (m_1, \dots, m_N)$ defines a Markov
process. Its probability distribution $p_N(\bm m, n+1)$ at time $n+1$,
which is the joint PDF of the masses in all cells, is related to that
at time $n$ by a Markov equation, which under its general form can be
written as
\begin{eqnarray}
p_N(\bm m, n+1) &=& \int d^N\! m^{\prime}\, p_N(\bm m^{\prime}, n)\,
P[\bm m^\prime \to \bm m] \nonumber \\ &=& \int d^N\! m^{\prime}\,
p_N(\bm m^{\prime}, n) \int d^N \!\Omega\,\, p(\bm\Omega) \, P[\bm
m^\prime \to \bm m | \bm\Omega],
\label{eq:markovN}
\end{eqnarray}
where $P[\bm m^\prime \to \bm m | \bm\Omega]$ denotes the transition
probability from the field $\bm m^\prime$ to the field $\bm m$
conditioned on the realization of $\bm\Omega =
(\Omega_1,\dots,\Omega_N)$. In our case it takes the form
\begin{equation}
P[\bm m^\prime \to \bm m | \bm\Omega] = \prod_{j=1}^{N}
\delta[m_j-(m_j^\prime +\mu_{j-1}(n) - \mu_j(n))]\,.
\end{equation}
The variable $\mu_j$ denotes here the flux of mass between the $j$-th
and the $(j+1)$-th cell. It is a function of $\Omega_j$,
$\Omega_{j+1}$, and of the mass contained in the two cells. It can be
written as
\begin{equation}
\mu_j(n) = \frac{\gamma}{2}\!\left[ \Omega_j(n)(1-\Omega_{j+1}(n))
m^\prime_j(n) - \Omega_{j+1}(n)(1-\Omega_j(n))
m^\prime_{j+1}(n)\right]\!.
\end{equation}
In the particular case we are considering, the joint probability of
the $\Omega_j$'s factorizes and we have
\begin{equation}
p(\Omega_j) = p\,\delta(\Omega_j-1) + (1-p)\,\delta(\Omega_j)\,,
\end{equation}
so that the Markov equation (\ref{eq:markovN}) can be written in a
rather explicit and simple manner.
The extension of the model to two dimensions is straightforward. The
mass transfer out from a rotating cell can occur to one, two, three or
four of its direct nearest neighbors (see figure~\ref{fig:sketch2d}
left). One can similarly derive a Markov equation which is similar to
(\ref{eq:markovN}) for the joint PDF $p_{N,N}(\mathbb{M},n)$ at time
$n$ of the mass configuration $\mathbb{M} = \{ m_{i,j}\}_{1\le i,j \le
N}$. The transition probability reads in that case
\begin{equation}
P[\mathbb{M}^\prime \to \mathbb{M} | \bm\Omega] = \prod_{i=1}^{N}
\prod_{j=1}^{N} \delta[m_{i,j}-(m_{i,j}^\prime +\mu^{(1)}_{i-1,j} -
\mu^{(1)}_{i,j}+\mu^{(2)}_{i,j-1} - \mu^{(2)}_{i,j})]\,.
\end{equation}
where the fluxes now take the form
\begin{equation}
\mu^{(1)}_{i,j} = \frac{\gamma}{4}\left[ \Omega_{i,j}
(1-\Omega_{i+1,j}) \,m^\prime_{i,j}- \Omega_{i+1,j}
(1-\Omega_{i,j}) \,m^\prime_{i+1,j}\right]
\end{equation}
and $\mu^{(2)}_{i,j}$ defined by inverting $i$ and $j$ in the
definition of $\mu^{(1)}_{i,j}$.
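The two-dimensional update is a direct transcription of these fluxes
(again a Python sketch with periodic boundaries):
\begin{verbatim}
import numpy as np

def step2d(m, p, gamma, rng):
    """One step of the 2D model: a cell with Omega=1 sends gamma/4 of
    its mass to each of its neighbours with Omega=0."""
    Om = (rng.random(m.shape) < p).astype(float)
    out = m.copy()
    for axis in (0, 1):
        Om_n, m_n = np.roll(Om, -1, axis), np.roll(m, -1, axis)
        mu = 0.25 * gamma * (Om * (1 - Om_n) * m - Om_n * (1 - Om) * m_n)
        out += np.roll(mu, 1, axis) - mu
    return out
\end{verbatim}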
After a large number of time steps, a statistically steady state is
reached. The stationary distribution is obtained assuming that
$p_{N,N}(\mathbb{M},n) = p_{N,N}(\mathbb{M})$ is independent of $n$ in
the Markov equation (\ref{eq:markovN}). In this stationary state the
mass fluctuates around its mean value $1$ corresponding to a uniform
distribution; strong deviations at small masses can be qualitatively
observed (see figure~\ref{fig:sketch2d} right).
\begin{figure}[ht]
\centerline{\strut\qquad
\includegraphics[height=0.3\textwidth]{sketchflow2d.eps} \ \
\includegraphics[height=0.3\textwidth]{snap2d.eps}}
\vspace{-10pt}
\caption{Left: sketch of the ejection model in two dimensions; the
rotating cells are designated by small eddies; the flux of mass
(blue arrows) is from the rotating cells to those without any
eddy. Right: snapshot of the distribution of mass in the
statistically steady regime for a domain of $50^2$ cells with
$p=0.75$; white squares are almost empty cells and the darker
regions correspond to cells where the mass is concentrated.}
\label{fig:sketch2d}
\end{figure}
The model can be easily generalized to arbitrary space
dimension. However, as we have seen in the previous section, besides its
interest from a purely theoretical point of view, the straightforward
extension to the three-dimensional case might not be relevant to
describe concentrations of inertial particles in turbulent flows.
\section{Distribution of mass}
\label{sec:pdfm}
Let us consider first the one-dimensional case in the statistically
stationary regime. After integrating (\ref{eq:markovN}), one can
express the single-point mass PDF $p_1$ in terms of the three-point
mass distribution $p_3$ at time $n$
\begin{eqnarray} p_1(m_j) &=& \int\!\!
dm^\prime_{j-1}\,dm^\prime_{j}\,dm^\prime_{j+1}\,
p_3(m^\prime_{j-1},m^\prime_{j},m^\prime_{j+1})\int\!\!
d\Omega_{j-1}\,d\Omega_{j}\,d\Omega_{j+1}\times \nonumber\\ &&
\strut\quad p(\Omega_{j-1})\,p(\Omega_{j})\,p(\Omega_{j+1})\,
\delta[m_j-(m^\prime_j + \mu_{j-1} - \mu_{j})]\,.
\label{eq:dynamics}
\end{eqnarray}
We then enumerate all possible fluxes, together with their
probabilities, by considering all possible configurations of the spin
vorticity triplet $(\Omega_{j-1}, \Omega_{j}, \Omega_{j+1})$. The
results are summarized in table~\ref{tab:threecells}. This allows us to
rewrite the one-point PDF as
\begin{eqnarray}
p_1(m) &=& \left[ p^3 + (1-p)^3 \right ]\, p_1(m) +
\frac{2p^2\,(1-p)}{1-\gamma/2}\,
p_1\!\left(\frac{m}{1-\gamma/2}\right) + \nonumber \\ &+&
\frac{p\,(1-p)^2}{1-\gamma}\, p_1\!\left(\frac{m}{1-\gamma}\right)
\nonumber + 2p\,(1-p)^2 \!\int_0^{2m/\gamma}\!\!\!\!\!\!\!\!\!\!
dm^\prime\, p_2\!\left(m^\prime, m-\frac{\gamma}{2} m^\prime\right) +
\nonumber\\ &+& p^2(1-p) \!\int_0^{2m/\gamma}\!\!\!\!\!\!\!\!\!\!
dm^\prime
\!\!\!\int_0^{2m/\gamma-m^\prime}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
dm^{\prime\prime}\,p_3\!\left(m^\prime,
m-\frac{\gamma}{2}(m^\prime+m^{\prime\prime}),
m^{\prime\prime}\right). \label{eq:p1form1}
\end{eqnarray}
The first term on the right-hand side comes from realizations with no
flux. The second term is ejection to one neighbor and the third to two
neighboring cells. The fourth term involving an average over the
two-cell distribution is related to events when mass is transferred
from a single neighbor to the considered cell. Finally, the last term
accounts for transfers from the two direct neighbors. Note that, in
order to write (\ref{eq:p1form1}), we made use of the fact that
$p_2(x,y)=p_2(y,x)$.
\begin{table}[ht]
\caption{\label{tab:threecells}Enumeration of all possible
configurations of the spin vorticity $\Omega$ in three neighboring
cells, together with their probabilities and the associated mass
fluxes.}
\begin{indented}
\item[]
\begin{tabular}{@{}cccccc}
\br $\Omega_{j-1}$ & $\Omega_{j}$ & $\Omega_{j+1}$ & Prob &
$\mu_{j-1}$ & $\mu_{j}$ \\ \mr 0&0&0 & $(1-p)^3$ & 0 & 0 \\
0&0&1 & $p\,(1-p)^2$ & 0 & $-\gamma m^\prime_{j+1}/2$ \\ 0&1&0 &
$p\,(1-p)^2$ & $-\gamma m^\prime_{j}/2$ & $\gamma
m^\prime_{j}/2$ \\ 0&1&1 & $p^2\,(1-p)$ & $-\gamma
m^\prime_{j}/2$ & 0 \\ 1&0&0 & $p\,(1-p)^2$ & $\gamma
m^\prime_{j-1}/2$ & 0\\ 1&0&1 & $p^2\,(1-p)$ & $\gamma
m^\prime_{j-1}/2$ & $-\gamma m^\prime_{j+1}/2$\\ 1&1&0 &
$p^2\,(1-p)$ & 0 & $\gamma m^\prime_{j}/2$\\ 1&1&1 & $p^3$ & 0 &
0 \\ \br
\end{tabular}
\end{indented}
\end{table}
Numerical simulations of this one-dimensional mass transport model are
useful to obtain qualitative information on $p_1$. Figure~\ref{fig:pdfm}
represents the functional form of $p_1$ in the stationary regime for
various values of the ejection rate $\gamma$ and for $p=1/2$. The
curves are surprisingly similar to measurements of the spatial
distribution of heavy particles suspended in homogeneous isotropic
turbulent flows \cite{ef94,betal06}. This gives strong evidence that,
on a qualitative level, the model we consider reproduces rather well
the main mechanisms of preferential concentration. More specifically,
a first observation is that in both settings the probability density
functions display an algebraic behavior for small masses. This implies
that the ejection from cells with vorticity one has a strong
statistical signature. A second observation is that at large masses,
the PDF decays faster than exponentially, as also observed in
realistic flows. As we will now see these two tails can be understood
analytically for the model under consideration.
\begin{figure}[t]
\centerline{\includegraphics[width=0.666\textwidth]{pdfm_p0.5.eps}}
\vspace{-10pt}
\caption{Log-log plot of the one-point PDF of the mass in one
dimension for $p=1/2$ and different values of the parameter
$\gamma$ as labeled. The integration was done on a domain of
$2^{16} = 65536$ cells and time averages were performed during
$10^6$ time steps after a statistical steady state is reached.}
\label{fig:pdfm}
\end{figure}
We here first present an argument explaining why an algebraic tail is
present at small masses. For this we exhibit a lower bound on the
probability $P^<(m)$ that the mass in the given cell is less than
$m$. Namely, we have
\begin{equation}
P^<(m) = {\rm Prob}(m_j(n)<m) \ge {\rm Prob}\,(\mathcal{A})\,,
\end{equation}
where $\mathcal{A}$ is a set of space-time realizations of $\Omega$
such that the mass in the $j$-th cell at time $n$ is smaller than
$m$. For instance we can choose the set of realizations which are
ejecting mass in the most efficient way: during a time $N$ before $n$,
the $j$-th cell has spin vorticity 1 and its two neighbors have 0. The
mass at time $n$ is related to the mass at time $n-N$ by
\begin{equation}
m_j(n) = (1-\gamma)^N m_j(n-N)\,,\ \mbox{ that is } \ N =
\frac{\log [m_j(n)/m_j(n-N)]}{\log (1-\gamma)}\,.
\label{eq:Nfnm}
\end{equation}
The probability of such a realization is clearly
$p^N\,(1-p)^{2N}$. Replacing $N$ by the expression obtained in
(\ref{eq:Nfnm}), we see that
\begin{equation}
{\rm Prob}\,(\mathcal{A}) =
\left[\frac{m_j(n)}{m_j(n-N)}\right]^\beta \mbox{ with } \beta =
\frac{\log [p(1-p)^2]}{\log (1-\gamma)}.
\label{eq:pA}
\end{equation}
After averaging with respect to the initial mass $m_j(n-N)$, one
finally obtains
\begin{equation}
P^<(m) \ge A\, m^\beta.
\label{eq:lowbound}
\end{equation}
Hence the cumulative probability of mass cannot have a tail faster
than a power law at small arguments. It is thus reasonable to make
the ansatz that $p_1(m)$ has an algebraic tail as $m\to0$, i.e.\ that
$p_1(m) \simeq C m^\alpha$. To determine how the exponent $\alpha$
behaves as a function of the parameters $\gamma$ and $p$, this ansatz
is injected in the stationary version of the Markov equation
(\ref{eq:p1form1}). One expects that the small-mass behavior involves
only the terms due to ejection from a cell, namely the first three
terms on the r.h.s.\ of (\ref{eq:p1form1}), and that the terms
involving averages of the two-point and three-point PDFs give only
sub-dominant contributions. This leads to
\begin{eqnarray}
Cm^\alpha &\approx& C\left[ p^3 + (1-p)^3 \right ] m^\alpha +
C\,\frac{2p^2(1-p)}{1-\gamma/2}\left[\frac{m}{1-\gamma/2}\right]^\alpha
+ \nonumber\\ && + C\,\frac{p(1-p)^2}{1-\gamma}
\left[\frac{m}{1-\gamma}\right]^\alpha\,.
\label{eq:p1form2}
\end{eqnarray}
Equating the various constants we finally obtain that the exponent
$\alpha$ satisfies
\begin{equation}
\frac{2p}{(1-\gamma/2)^{\alpha+1}} +
\frac{(1-p)}{(1-\gamma)^{\alpha+1}} = 3\,. \label{eq:alphafnT1D}
\end{equation}
Note that the actual exponent $\alpha$ given by this relation is
different from the lower-bound $\beta+1$ obtained above in
(\ref{eq:pA}) and (\ref{eq:lowbound}). However it is easily checked
that $\alpha$ approaches the lower bound when $p\to 0$. As seen from
figure~\ref{fig:alpha_fn_T}, formula (\ref{eq:alphafnT1D}) is in good
agreement with numerics.
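In practice (\ref{eq:alphafnT1D}) is solved numerically; a minimal
sketch (Python) using standard root bracketing is:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def alpha(p, gamma):
    """Root of 2p/(1-gamma/2)^(a+1) + (1-p)/(1-gamma)^(a+1) = 3.
    At a = -1 the left-hand side equals 1+p < 3 and it increases
    without bound with a, so the root is unique."""
    f = lambda a: (2 * p / (1 - gamma / 2) ** (a + 1)
                   + (1 - p) / (1 - gamma) ** (a + 1) - 3)
    return brentq(f, -1.0, 100.0)

print(alpha(0.5, 2.0 / 3.0))  # = 0: the exponent vanishes at gamma = 2/3
print(alpha(0.5, 0.5))        # positive exponent for gamma < 2/3
\end{verbatim}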
\begin{figure}[h]
\centerline{\includegraphics[width=0.666\textwidth]{alpha_fn_gamma}}
\vspace{-10pt}
\caption{Scaling exponent $\alpha$ as a function of the ejection
rate $\gamma$ for three different values of $p$ as labeled. The
solid lines represent the prediction given by
(\ref{eq:alphafnT1D}); the error bars are estimated from the maximal
deviation of the logarithmic derivative from the estimated
value. Inset: difference between the numerical estimation and the
value predicted by theory.}
\label{fig:alpha_fn_T}
\end{figure}
Note that the large error bars obtained for $p$ small and $\gamma$
large are due to the presence of logarithmic oscillations in the left
tail of the PDF of mass. This log periodicity is slightly visible for
$\gamma=0.9$ in figure~\ref{fig:pdfm}. It occurs when the spreading of
the distribution close to the mean value $m=1$ is much smaller than
the rate at which mass is ejected. This results in the presence of
bumps in the PDF at values of $m$ which are powers of
$(1-\gamma)$. Notice that for all values of $p$, one has $\alpha\leq0$
when $\gamma\ge 2/3$. However, according to the estimate
(\ref{eq:estimgamma}), values of the ejection rate larger than $2/3$
can be attained only for large enough Kubo numbers. This is consistent
with the fact that power-law tails with a negative exponent were not
observed in the direct numerical simulations of turbulent fluid
flows~\cite{betal06} where ${\mbox{\it Ku}}\approx 1$.
It is much less easy to get from numerics the behavior of the right
tail of the mass PDF $p_1(m)$. As seen from figure~\ref{fig:pdfm},
no events were recorded where the mass is larger than roughly ten
times its average. We now present an argument suggesting that
the tail is faster than exponential, and more particularly that $\log
p_1(m) \propto - m\,\log m$ when $m\gg1$. We first observe that in
order to have a large mass in a given cell, one needs to transfer to
it the mass coming from a large number $M$ of neighboring
cells. Estimating the probability of having a large mass is equivalent
to understanding the probability of such a transfer. For moving mass
from the $j$-th cell to the $(j-1)$-th cell, the best configuration is
clearly $(\Omega_{j-1},\Omega_{j},\Omega_{j+1})=(0, 1, 1)$. After $N$
time steps with this configuration, the fraction of mass transferred is
$1-(1-\gamma/2)^{N}$. This process is then repeated for moving mass
to the second neighbor, and so on. After order $M$ iterations, the
mass in the $M$-th neighbor is
\begin{equation}
m = \frac{1-\left[1-(1-\gamma/2)^{N}\right]^M}{(1-\gamma/2)^{N}}\,.
\label{eq:transfer}
\end{equation}
This means that
\begin{equation}
M = M(m,N) = \frac{\log\left[1-m(1-\gamma/2)^{N}\right]}
{\log\left[1-(1-\gamma/2)^{N}\right]}\,,
\label{eq:transfer2}
\end{equation}
with the condition that $N > -(\log m)/[\log(1-\gamma/2)]$. The
probability of this whole process of mass transfer is
\begin{equation}
\mathcal{P} = \left[p^2(1-p)\right]^{N\,M} =
\exp\left[\log(p^2(1-p))\,N\,M(m,N) \right]\,.
\label{eq:probtransfer}
\end{equation}
All the processes of this type contribute to the right tail
of the mass PDF. The dominant behavior is given by choosing
$N=N^{\star}$ such that $N^{\star}\,M(m,N^{\star})$ is minimal. Such a
minimum cannot be written explicitly. One however notices that, on
the one hand, if $N$ is much larger than its lower bound (i.e.\ $N \gg
-(\log m)/[\log(1-\gamma/2)]$), then $N\,M(m,N) \gg -m(\log
m)/[\log(1-\gamma/2)]$. On the other hand when $N$ is chosen of the
order of $\log m$, then $N\,M(m,N)\propto m\,\log m$. This suggests
that the minimum is attained for $N^{\star} \propto \log m$. Finally,
such estimates lead to predict that the right tail of the mass
probability density function behaves as
\begin{equation}
p_1(m) \propto \exp\left[ -C \,m\,\log m \right]\,,
\label{eq:right-tail}
\end{equation}
where $C$ is a positive constant that depends upon the parameters $p$
and $\gamma$. As seen in figure~\ref{fig:right_tail}, such a behavior
is confirmed by numerical experiments.
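The optimum can also be located by direct enumeration; the sketch
below (Python) scans $N$ at fixed $m$ and shows that both
$N^{\star}/\log m$ and the minimal cost divided by $m\log m$ stay
roughly constant, as the argument suggests.
\begin{verbatim}
import numpy as np

def cost(m, N, gamma=0.5):
    """N * M(m,N): the exponent governing the probability of
    transferring a mass m over M cells in blocks of N steps."""
    q = (1.0 - gamma / 2.0) ** N
    if m * q >= 1.0:            # N must exceed -log(m)/log(1-gamma/2)
        return np.inf
    return N * np.log(1.0 - m * q) / np.log(1.0 - q)

for m in [10.0, 100.0, 1000.0]:
    Ns = np.arange(1, 400)
    c = np.array([cost(m, N) for N in Ns])
    Nstar = Ns[np.argmin(c)]
    print(m, Nstar, Nstar / np.log(m), c.min() / (m * np.log(m)))
\end{verbatim}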
\begin{figure}[ht]
\centerline{\includegraphics[width=0.666\textwidth]{right_tail.eps}}
\vspace{-10pt}
\caption{Lin-log plot of the one-point PDF of the mass $m$ in one
dimension represented as a function of $m\log m$ for $p=1/2$ and
various values of the parameter $\gamma$ as labeled; the different
colors and symbols are the same as those used in
figure~\ref{fig:pdfm}. Inset: behavior of the constant $C$
appearing in (\ref{eq:right-tail}) as a function of the ejection
rate $\gamma$ for three different values of the fraction of space
$p$ occupied by eddies (blue crosses: $p=0.1$, black times:
$p=0.5$, red circles: $p=0.9$).}
\label{fig:right_tail}
\end{figure}
The estimations of the left and right tails of the distribution of
mass in a given cell can be extended to the two-dimensional case. The
results do not qualitatively change. The exponent $\alpha$ of the
algebraic behavior at small masses is given as a solution of
\begin{eqnarray}
\frac{4 p^3}{(1-\gamma/4)^{\alpha+1}} &+&\frac{6
p^2(1-p)}{(1-\gamma/2)^{\alpha+1}} +\frac{4
p(1-p)^2}{(1-3\gamma/4)^{\alpha+1}} + \nonumber \\ &+&
\frac{(1-p)^3}{(1-\gamma)^{\alpha+1}} =
5(1-p+p^2)\,.\label{eq:alphafnT2D}
\end{eqnarray}
By arguments which are similar to the one-dimensional case and which
are not detailed here, one obtains also that $\log p_1(m) \propto
-m\,\log m$. Numerical experiments in two dimensions confirm these
behaviors of the mass probability distribution. As seen from
figure~\ref{fig:pdfm2d} an algebraic behavior of the left tail of the
PDF of $m$ is observed and the value of the exponent is in good
agreement with (\ref{eq:alphafnT2D}).
\begin{figure}[ht]
\centerline{\includegraphics[width=0.666\textwidth]{pdfm2d.eps}}
\vspace{-10pt}
\caption{Log-log plot of the one-point PDF of the mass in two
dimensions for $p=1/2$ and various values of the parameter
$\gamma$; the symbols and color refer to the same values of
$\gamma$ as in figure~\ref{fig:pdfm}. The integration was done on
a domain of $1024^2$ cells and time averages were performed during
$3\times 10^4$ time steps after a statistical steady state is
reached. Inset: exponent $\alpha$ of the algebraic left tail as a
function of the ejection rate $\gamma$ for three different values
of $p$ (blue crosses: $p=0.1$, black times: $p=0.5$, red circles:
$p=0.9$). The solid lines show the analytic values obtained from
(\ref{eq:alphafnT2D}).}
\label{fig:pdfm2d}
\end{figure}
\section{Coarse-grained mass distribution}
\label{sec:pdfcoarse}
We investigate in this section the probability distribution of the
mass coarse-grained on a scale $L$ much larger than the box size
$\ell$, which is defined as
\begin{equation}
\bar{m}_L = \frac{\ell}{L} \sum_{j=-K}^{K} m_j \ \mbox{ where } K =
L/2\ell.
\label{defmbar}
\end{equation}
As seen from the numerical results presented on
figure~\ref{fig:pdfmbar}, the functional form of the PDF
$p_L(\bar{m})$ is qualitatively similar to that of the mass in a
single cell. In particular for various values of $L$ it also displays
an algebraic tail at small arguments with an exponent which depends
both on $L$ and on the parameters of the model. We here present some
heuristic arguments for the behavior of the exponent.
For this, we consider the cumulative probability $P^{<}_L(\bar{m})$ to
have $\bar{m}_L$ smaller than $\bar{m}$. We first observe that in
order to have $\bar{m}_L$ small, the mass has to be transferred from
the bulk of the coarse-grained cell to its boundaries. Assume we start
with a mass order unity in each of the $2K+1$ sub-cells. The best
realization to transfer mass is to start with ejecting an order-unity
fraction of the mass contained in the central cell with index $j=0$ to
its two neighbors. For this the three central cells must have
vorticities $(\Omega_{-1},\Omega_0,\Omega_1) = (0, 1, 0)$,
respectively, during $N$ time steps. After that the second step
consists in transferring the mass toward the next neighbors; the best
realization is then to have during $N$ time steps
$(\Omega_{-2},\Omega_{-1}, \Omega_0, \Omega_1,\Omega_2) = (0, 1, 1, 1,
0)$. The transfer toward neighbors is repeated recursively. At the
$j$-th step, the best transfer is given by choosing
$(\Omega_{-j-1},\Omega_{-j}, \Omega_{-j+1}) = (0,1,1)$ and
$(\Omega_{j-1},\Omega_{j}, \Omega_{j+1}) = (1,1,0)$ during a time
$N$. One can easily check that for large $N$, after repeating this
procedure $K$ times, the mass which remains in the $2K+1$ cells
forming the coarse-grained cell is $\bar{m}_L \simeq
(1-\gamma/2)^{N}$. The total probability of this process is
$\left[p^4(1-p)^2\right]^{KN}$, which leads to the following estimate of the cumulative
probability of $\bar{m}_L$ as
\begin{equation}
P^{<}_L(\bar{m}) \propto \bar{m}^{\alpha_L}\,, \ \ \mbox{ with } \
\alpha_L \approx \frac{1}{2}\,\frac{L}{\log(1-\gamma/2)}
\log\left[p^4(1-p)^2\right]\,.
\label{eq:pdfcoarse}
\end{equation}
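Both logarithms in (\ref{eq:pdfcoarse}) are negative, so that $\alpha_L>0$ and grows linearly with $L$; for numerical comparisons the estimate is immediate to evaluate (illustrative helper):
\begin{verbatim}
import math

def alpha_coarse(L, gamma, p):
    # eq. (eq:pdfcoarse); both logs are negative, so alpha_L > 0
    return 0.5 * (L / math.log(1.0 - gamma / 2.0)) \
               * math.log(p**4 * (1.0 - p)**2)
\end{verbatim}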
This approach guarantees that the probability density function
$p_L(\bar{m}) = dP^<_L/d\bar{m}$ of the coarse-grained mass $\bar{m}$
behaves as a power law at small values. Note that only the
contribution from realizations with an optimal mass transfer is here
evaluated and the actual value of the exponent should take into
account realizations of the vorticity which may lead to a lesser mass
transfer. However we expect the estimation given by
(\ref{eq:pdfcoarse}) to hold for $L$ sufficiently large, because the
contribution from realizations with a sub-dominant mass transfer becomes
negligible in this limit.
As to the right tail of $p_L(\bar{m})$, one expects a behavior similar
to that obtained in the case of the one-cell mass distribution, namely
$\log p_L(\bar{m}) \propto -\bar{m}\,\log \bar{m}$ for
$\bar{m}\gg1$. Indeed, the probability of having a large mass in a
coarse-grained cell should clearly be of the same order as the
probability of having a large mass in a single cell. This, together
with the estimates (\ref{eq:pdfcoarse}) for the exponent of the left
tail, gives a motivation for looking, at least in some asymptotics,
for possible rescaling behaviors of $p_L(\bar{m})$ as a function of
$L$ and of the ejection rate $\gamma$. For instance, one can ask
whether the limits $L\to\infty$ and $\gamma\to0$ are equivalent. The
estimation (\ref{eq:pdfcoarse}) suggests that the exponent $\alpha_L$
depends only on the ratio $\kappa = L/\log(1-\gamma/2)$. Note that the
limit of small $\gamma$ should mimic that of small response time of
the heavy particles. Rescaling of the distribution of the
coarse-grained mass was observed in direct numerical simulations of
heavy particles in turbulent homogeneous isotropic
flows~\cite{betal06}.
\begin{figure}[ht]
\centerline{\includegraphics[width=0.666\textwidth]{collapsepdfm.eps}}
\vspace{-10pt}
\caption{Log-log plot of the PDF of the coarse-grained mass
$\bar{m}_L$ in one dimension for $p=2/3$ and various values of
$\gamma$ and $L$ associated with three different ratios $\kappa =
L/\log(1-\gamma/2)$ as labeled.}
\label{fig:pdfmbar}
\end{figure}
Such a rescaling is confirmed by numerical
simulations. Figure~\ref{fig:pdfmbar} represents the PDF of the
coarse-grained mass $\bar{m}_L$ for various values of $L$ and $\gamma$
chosen such that the ratio $\kappa$ is 8, 10 or 14. While the
left tail and the core of the distribution clearly collapse, the
rescaling is much less evident for the right tail. Obtaining
better evidence would require much longer statistics in order to
resolve the distribution at large masses.
\section{Conclusions}
\label{sec:conclusion}
We introduced here a simple model for the dynamics of heavy inertial
particles in turbulent flow which solely accounts for their ejection
from rotating structures of the fluid velocity field. We have shown
that this model is able to reproduce qualitatively most features of
the particle mass distribution which are observed in real turbulent
flows. Namely, the probability density of the mass in cells is shown
to behave as a power-law at small arguments and to decrease faster
than exponentially at large values. Moreover, we studied how this
distribution depends on the parameters of the model, namely the
ejection rate of particles from eddies and the fraction of space
occupied by them. This dependence again qualitatively reproduces
observations from numerical simulations in homogeneous turbulent
flows. Finally, we have seen that coarse-graining masses on scales
larger than the cell size is asymptotically equivalent to decreasing the
ejection rate related to particle inertia. This gives evidence that
there exists a scaling in the limit of large observation scale and
small response time of the particles, even if the flow has no scale
invariance.
There are several extensions that need to be investigated in order to
gain more quantitative information from such models on
the distribution of particles in real flows. The most significant
improvement is to give a spatial structure to the fluid velocity. This
can be done by introducing a spatial correlation between the
vorticities of cells. Preliminary investigations suggest that such a
modified model could be approached by taking its continuum
limit. Another effect that may be worth taking into consideration is
random sweeping of structures by the fluid flow. We assumed that the
eddies are frozen (and occupy the same cell) during their whole
lifetime. The model could be extended by adding to the dynamics random
hops between cells of these structures. Another extension could
consist in investigating in a systematic manner three-dimensional
versions of the model. As stated above, many statistical quantities
may depend on how ejection from rotating regions is implemented in
three dimensions. Finally, it is worth mentioning again that the main
advantage of such models is to give a heuristic understanding of the
relations between the properties of the fluid velocity field and the
mass distribution of particles. This first step is necessary in the
development of a phenomenological framework for describing the spatial
distribution of heavy particles in turbulent flows. This would allow
for using Kolmogorov 1941 dimensional arguments to understand how the
particle dynamical properties depend on scale. Moreover, such a
framework could be used to obtain refined predictions accounting for
the effect of the fluid flow intermittency and to describe the
dependence upon the Reynolds number of the spatial distribution of
particles.
\section*{Acknowledgments}
This work benefited from useful discussions with M.~Cencini and
S.~Musacchio who are warmly acknowledged.
\section*{References}
\bibliographystyle{unsrt}
|
1,116,691,499,780 | arxiv | \section{Introduction}
Computer generation of stories and other kinds of creative writing is a challenging endeavor. It entangles two difficult tasks: the generation of fluent natural language and the generation of a coherent storyline. In recent years, neural language models have made tremendous progress with respect to fluency~\cite{bahdanau2014neural,vaswani2017attention,bengio2003lm,devlin2019bert}, but coherency is still a major challenge~\cite{see:2019:storyteller}. The generation of coherent stories has recently been addressed with additional conditioning: \citet{fan2018hierarchical} suggest conditioning on a story prompt, \citet{clark2018creative} propose collaboration between a generative model and a human writer, and \citet{guan2019commonsense} suggest attending to a commonsense graph relevant to the story plot. Conditioning based on a generated story plan \citep{martin2018event,fan2019strategies,yao2019plan}, a sequence of images \citep{chandu2019storyboarding} or character roles \citep{liu2020character} has also been considered.
Our work is orthogonal to these efforts. Rather than considering additional conditioning, we propose a model which takes as input several sentences of context and selects the best next sentence within a large set of fluent candidate sentences.
We leverage pre-trained BERT embeddings \citep{devlin2019bert} to build this sentence-level language model.
Given the embeddings of the previous sentences of the story, our model learns to predict a likely embedding of the next sentence.
This task isolates the modeling of long-range dependencies from the prediction of individual words, which has several advantages.
First, since our model only needs to determine how well each candidate sentence would fit as a coherent continuation to the story, it does not spend capacity and time on learning fluency.
Second, our model does not manipulate individual words but full sentences, which allows us to consider tens of thousands of candidate sentences at a time.
This contrasts with prior work \citep{logeswaran2018an} where the need to learn token-level representations limited the number of candidate next sentences that could be considered to a few hundred.
Third, we can rely on compact model architectures that train quickly because we take advantage of strong semantic representation from a pre-trained bidirectional language model, BERT, as our sentence embeddings.
Of course, these benefits also imply that our sentence representation is limited to the information extracted by the pre-trained model.
Nevertheless, we show that our model achieves state-of-the-art accuracy among unsupervised approaches on the Story Cloze task: predicting which of two sentences coherently ends a short story.
Our work also opens up the possibility of ranking thousands of candidate sentences from a large literature repository.
On the ROC Stories dataset, we observe that training with a large number of candidates is key for selecting the most coherent ending among a large set of candidates at test time.
We also show preliminary results on the efficacy of our method for ranking candidate next sentence on the Toronto Book Corpus~\citep{kiros2015skip}, a much larger book dataset.
We envision that our methods for scoring many candidate next sentences by their coherence with the context might be useful to downstream generation tasks where it is possible to generate many fluent continuations of a text, but it remains an unsolved problem how to refine and choose the best of them. To encourage this exploration, we release our code and models\footnote{Code for ROC Stories experiments can be found at {\small \url{https://github.com/google-research/google-research/tree/master/better_storylines}}.}.
\section{Proposed Method}
We propose a sentence-level language model: our model estimates $P(s_{t+1} | s_{1:t})$, the probability distribution for sentence $s_{t+1}$ given the $t$ previous sentences, $s_1, \ldots s_t$. Since it is intractable to marginalize over all possible candidate next sentences, we consider a finite but large set of $N$ valid, fluent sentences. Without loss of generality, we can consider $s_{t+1} \in \{1, \ldots, N\}$ as an integer index into that set of possible next sentences. This strategy resembles negative sampling in word2vec~\cite{mikolov:word2vec}.
Our model represents sentences with pre-computed vector embeddings. Specifically, sentences are represented by the mean of the 768-dimensional contextual word embeddings of the second-to-last layer of BERT~\citep{devlin2019bert}. This representation has been shown to encode more transferable features compared to other layers~\citep{liu2019linguistic}. Alternative sentence representations were considered, including embeddings from the universal sentence encoder \citep{cer2018universal} and a weighted mean of the BERT embeddings using inverse document frequency weighting \citep{zhang2019bertscore}. None of these alternatives improved our results, however.
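To make this representation concrete, the sketch below shows how such embeddings can be computed with the HuggingFace \texttt{transformers} library; it is an illustrative reimplementation and may differ in minor details from the exact pipeline used for our experiments:
\begin{verbatim}
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased",
                                  output_hidden_states=True)
model.eval()

@torch.no_grad()
def embed_sentence(sentence):
    """768-d embedding: token mean of the second-to-last layer."""
    inputs = tokenizer(sentence, return_tensors="pt")
    hidden_states = model(**inputs).hidden_states
    return hidden_states[-2].mean(dim=1).squeeze(0)
\end{verbatim}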
Motivated by simplicity, we consider a classical multi-layer perceptron (MLP) $f_\theta$ which takes as input the context sentence embeddings concatenated into a single vector. At the output layer, we perform a softmax operation. If we represent candidate sentences $\{1, \ldots, N\}$ by the embeddings $\{e_i\}^N_{i=1}$, our model estimates the probability that $i$ is the next sentence by the softmax
\begin{align*}
\log P(s_{t+1} = i|s_{1:t}) = e_i^\top h - \log Z(h)
\end{align*}
where $h = f_\theta(s_{1:t})$ is the output of the MLP given context $s_{1:t}$, and $Z(h) = \sum_{j=1}^{N} \exp e_j^\top h$ is the partition function. At train time, the candidate set $\{1, \ldots, N\}$ consists of the correct next sentence along with $N-1$ distractor sentences. The distractors can either be static (the same set used throughout training) or dynamic (picked at random from a larger set for each train batch). In this case, the ``vocabulary'' of next values to choose from changes with each train step, similar to negative sampling~\cite{mikolov:word2vec}.
At test time, novel sentences can be embedded with BERT and scored by our model.
Like a classical language model, we optimize for the likelihood of the true next sentence's embedding.
However, when training we found that the sentences from the context ($s_1,\ldots,s_t$) often ended up being given very high scores by our model. Inspired by work in sentence reordering \citep{lapata2003ordering,logeswaran2018an}, we incorporated an auxiliary loss, which we refer to as \textbf{CSLoss}, that only includes the context sentences $s_{1:t}$ in the distractor set.
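Concretely, the objective is an ordinary cross-entropy over the candidate set, with the true next sentence at a known index; for the CSLoss variant the distractor embeddings are simply replaced by those of the context sentences. A schematic PyTorch version (variable names are ours):
\begin{verbatim}
import torch
import torch.nn.functional as F

def next_sentence_loss(h, e_true, e_distractors):
    """Cross-entropy over {true next sentence} + distractors.

    h             : (d,)     model output f_theta(s_{1:t})
    e_true        : (d,)     embedding of the correct next sentence
    e_distractors : (N-1, d) sampled distractor embeddings
                    (context-sentence embeddings for the CSLoss)
    """
    candidates = torch.cat([e_true.unsqueeze(0), e_distractors])
    logits = candidates @ h                    # scores e_i^T h
    target = torch.zeros(1, dtype=torch.long)  # index 0 = true
    return F.cross_entropy(logits.unsqueeze(0), target)
\end{verbatim}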
Lastly, we consider a residual variant of the MLP (referred to as \textbf{resMLP}) with skip connection between layers, as described in \citet{he2016deep}.
The residual model trains faster and sometimes achieves higher accuracy than the non-residual model.
Though we experimented with recurrent \citep{sundermeyer2012lstm} and self-attention \citep{vaswani2017attention} models, we did not observe improvements, perhaps because the input to our model is already the high-dimensional output of a large masked language model.
We leave deeper architecture exploration, which will be especially critical as context length is extended, to future work.
\section{Experimental Setup}
We first describe our experiments on the ROC Stories dataset of short 5-sentence stories before showing our setup on the larger Toronto Book Corpus.
\begin{table*}[h]
\centering
\small
\begin{tabular}{l l||r|r|r|r}
& & Valid 2016 & Test 2016 & Valid 2018 & Test 2018\\
\hline
Our model & MLP & 69.7 & 68.8 & 70.1 & 69.0 \\
& + CSLoss & \textbf{73.5} & \textbf{73.0} & \textbf{73.1} & \textbf{72.1} \\
\hline
Alternatives & \citet{peng2017joint} & -- & 62.3 & -- & --\\
& \citet{schenk2017resource} & 62.9 & 63.2 & -- & --\\
\hline
Lang. Models & \citet{schwartz2017story} & -- & 67.7 & -- & --\\
& GPT-2 \citep{radford2019lm} & 54.5 & 55.4 & 53.8 & --\\
& GPT-2 + finetuning & 59.0 & 59.9 & 59.0 & --\\
\end{tabular}%
\caption{Accuracies (\%) for the Story Cloze binary classification task. \citet{schwartz2017story} is a semi-supervised technique. GPT-2 refers to predicting the more likely ending according to the 355M parameter model, and GPT-2 finetuning was done on the ROC Stories train set.}
\label{tab:roc_task_eval}%
\end{table*}%
\subsection{ROC Stories}
\minisection{Dataset} Our experiments use the ROC Stories dataset, which consists of stories focusing on common sense \cite{mostafazadeh2016rocstories}.
The training set has 98k stories, with five sentences each. The validation and test sets each contain 1.8k stories consisting of four sentences followed by two alternative endings: one ending is coherent with the context; the other is not.
The dataset was introduced for the Story Cloze task, inspired by ~\citet{taylor1953cloze}, where the goal is to select the coherent ending.
While the dataset and task were introduced as a way to probe for coherence and commonsense in models trained only on the unlabeled portion, most research
derived from this dataset focuses on a supervised setting, using the validation set as a smaller, labeled training set \citep{chaturvedi2017story,sun2019reading,cui2019story,li2019story,zhou2019story}.
Our work is faithful to the original task objective. We train solely on the training set, i.e. the model never sees incoherent endings at training time.
\minisection{Model} We consider two models, an MLP and a residual MLP. They take as input the previous sentences represented as the concatenation of their embeddings.
Alternative context aggregation strategies were considered with recurrent \citep{sundermeyer2012lstm} and attention \citep{vaswani2017attention} architectures, without strong empirical advantages.
The model maps its input to a vector which is compared to a set of candidate sentence embeddings via dot product. The embedding of the true next sentence should receive the highest score. For each example, we consider all other fifth sentences in the training set (96k in total) as the candidate set.
The input of our model is 3,072 dimensional, i.e. 4 context sentences represented by 768 dimensional BERT embeddings. After an architecture search, our best MLP has 3 layers of 1,024 units, and our best resMLP has a single residual layer with hidden size of 1,024. Both contain just over 6M trainable parameters.
Both apply dropout with a rate of 0.5 after each ReLU, and layer normalization is performed on the concatenated context sentence embedding passed in as input to the network and on the final predicted embedding for the next sentence.
For the Story Cloze task, the two architectures achieve similar validation accuracy, but when considering more than two distractors, the resMLP significantly outperforms the standard MLP.
The resMLP also converges quicker than the MLP.
Training to convergence takes under 2 hours for each model on a Tesla V100.
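A schematic PyTorch rendering of the residual variant with the sizes quoted above is given below; the exact placement of normalization and dropout in our implementation may differ slightly:
\begin{verbatim}
import torch
import torch.nn as nn

class ResMLP(nn.Module):
    """4 x 768 context -> predicted 768-d next-sentence embedding."""
    def __init__(self, n_ctx=4, d_emb=768, d_hid=1024, p_drop=0.5):
        super().__init__()
        self.in_norm = nn.LayerNorm(n_ctx * d_emb)
        self.proj = nn.Linear(n_ctx * d_emb, d_hid)
        self.block = nn.Sequential(          # single residual layer
            nn.Linear(d_hid, d_hid), nn.ReLU(), nn.Dropout(p_drop))
        self.out = nn.Linear(d_hid, d_emb)
        self.out_norm = nn.LayerNorm(d_emb)

    def forward(self, ctx):                  # ctx: (B, n_ctx * d_emb)
        x = torch.relu(self.proj(self.in_norm(ctx)))
        x = x + self.block(x)                # skip connection
        return self.out_norm(self.out(x))    # predicted embedding h
\end{verbatim}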
\subsection{Toronto Book Corpus}
\minisection{Dataset} ROC Stories contains only self-contained five-sentence stories, focusing on everyday life scenarios. They contain no dialog and very little flowery, expository language. Ideally our method would also be successful at scoring potential continuations to more naturally-written stories. To this end, we test out our approach on excerpts from the Toronto Book Corpus \citep{kiros2015skip}, a dataset of self-published novels.
The dataset contains over 7,000 unique books totalling over 45 million sentences. Since these stories are much longer than the ROC Stories ones and many of the sentences are uninformative (nearly 5\% of sentences are 3 words or shorter, and 14\% are 5 words or shorter), we double the context length to 8 sentences.
\minisection{Model} In addition to experimenting with a similar residual MLP architecture to the one used on ROC Stories, we also ran experiments with a Transformer model \citep{vaswani2017attention}. The residual MLP architecture contains 2 residual layers with hidden size of 1024 (11M params total).
The transformer has 4 self-attention layers with hidden size of 768, filter size of 2048 and 8 attention heads (22M params total).
While the residual MLP is trained to predict the 9th sentence given the previous 8 sentences, the Transformer is trained to predict each next sentence given the previous sentences in a sequence of length 10 sentences.
However, we only evaluate the Transformer on the task of predicting the 9th sentence so that evaluation results are directly comparable to the residual MLP.
For each batch during training, 2k distractors are randomly selected from the train set.
Like with ROC Stories, we experiment with an auxiliary loss where just sentences from the context were used as distractors.
Table~\ref{tab:tbc_task_eval} reports the results.
\begin{table}[]
\centering
\small
\begin{tabular}{l||rr}
& P@10 & MRR \\
\hline
MLP & 6.2 & 0.052 \\
+CSLoss & 3.4 & 0.029 \\
\hline
ResMLP & \textbf{10.3} & \textbf{0.087} \\
+CSLoss & 6.2 & 0.051 \\
\hline
Random & 0.01 & 2e-5 \\
\end{tabular}%
\caption{Precision@10 and mean-reciprocal rank on the 2018 valid set when considering all 5th sentences in the train and valid sets (98k total) as candidate endings.}
\label{tab:full_task_eval}%
\end{table}%
\section{Results}
We evaluate on the Story Cloze task, a binary classification task, as well as on the task of ranking a large set of possible next sentences.
\subsection{Story Cloze Task}
Table \ref{tab:roc_task_eval} shows that our method outperforms unsupervised alternatives. The introduction of the CSLoss which considers only context sentences as candidates improves accuracy compared to only using a loss over all possible fifth sentences.
For comparison, we include the accuracies of the best unsupervised methods in the literature. \citet{schenk2017resource} construct negative examples for their binary classification task by pairing contexts with random fifth sentences selected from the training set. \citet{peng2017joint} train a language model to predict a representation of the semantic frame, entities, and sentiment of the fifth sentence given the representations of the previous sentences, then take the more likely fifth sentence. We achieve higher accuracy without relying on a task-specific architecture.
Table \ref{tab:roc_task_eval} also shows that picking the ending that is more likely according to a word-level language model, in our case GPT-2's 355M parameter model, does not yield very high accuracies, even when the language model is finetuned on ROC Stories text~\citep{radford2019lm}.
Lastly, we also include the accuracy reported by \citet{schwartz2017story}, where a logistic classifier is trained to combine multiple language model scores.
It is worth noting that state-of-the-art on the Story Cloze task is over 90\% accuracy \citep{li2019story,cui2019story} for semi-supervised settings. The methods achieving this level of performance are not comparable to our {\it unsupervised} approach as they require training on the {\it labeled} validation set. The language model approach from \citet{schwartz2017story} also falls into this category.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{rocstories_num_distractors_v3}
\caption{The impact of the number of negative sentences used during training on the rank of the true ending out of 98k distractors. Results are with the resMLP on the 2018 valid set.}
\label{fig:num_distractors}
\end{figure}
\subsection{Ranking Many Sentences on ROC Stories}
For generation and suggestion scenarios, it is useful to be able to surface the best next sentence out of hundreds or thousands of candidates. In Table \ref{tab:full_task_eval}, we show the performance of our method on the 2018 validation set when all 98,161 fifth sentences in the training set plus all 1,571 correct 5th sentences in the 2018 validation are considered as candidate endings.
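Concretely, for each story the model scores every candidate ending and we record the rank of the true one; the metrics reported in Table \ref{tab:full_task_eval} then follow as (schematic):
\begin{verbatim}
import numpy as np

def ranking_metrics(scores, true_idx):
    """P@10 indicator and reciprocal rank for one story.

    scores: (n_candidates,) dot-product scores for all endings.
    """
    rank = 1 + int((scores > scores[true_idx]).sum())
    return {"hit_at_10": rank <= 10, "reciprocal_rank": 1.0 / rank}
\end{verbatim}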
Top-10 accuracy is highest, at 10.3\%, when training a residual MLP without CSLoss.
Interestingly, strong performance on the Story Cloze task does not necessarily translate to strong performance on the large-scale ranking task.
The CSLoss improves performance on the Story Cloze task but hurts it for large-scale ranking.
In Figure \ref{fig:num_distractors}, we show how large-scale ranking performance improves as the size of the train-time distractor set is increased.
However, on the Story Cloze task, the number of training distractors has no significant impact on performance.
Even when only a single distractor is randomly chosen at each step of training, our method achieves over 70\% 2016 test accuracy.
It seems that training for the goal of detecting the true next sentence out of a very diverse candidate set is useful at test time only when the set of distractors at test time is similarly large and diverse.
The many-distractors training regime might be less useful for the Story Cloze task since the two candidate endings are designed to be quite topically similar to each other.
Some qualitative examples are shown in Table \ref{tab:qualEval}.
The failure examples showcase a side-effect of relying on pre-trained sentence embeddings: if common names like ``Becky" or ``Laura" or sports such as ``fishing" and ``golf" are close to each other in embedding space, our model will fail to distinguish between them.
\noindent
\begin{table}[]
\centering
\small
\begin{tabular}{l|rrr}
& 10k & 100k & same book \\
\hline
resMLP & 22.5\% & 7.4\% & 7.8\% \\
+CSLoss & 11.5\% & 2.5\% & 5.3\% \\
\hline
Transformer & 15.2\% & 4.0\% & 4.8\% \\
+CSLoss & 4.8\% & 0.8\% & 2.0\% \\
\end{tabular}%
\caption{Precision@10 on Toronto Book Corpus for retrieving the correct next sentence (given the 8 previous sentences) when considering 10k or 100k distractor sentences, or all of the sentences from the same book as distractors.}
\label{tab:tbc_task_eval}%
\end{table}%
\begin{table}[t]
\centering
\tiny
\begin{tabular}{p{31em}}
\Xhline{1.5pt}
\textbf{Context:} My family got up one morning while on vacation. We loaded our boat onto a trailer and drove to the beach. After loading up from the dock, we took off on our boat. After only a few minutes on the sea, dolphins began to swim by us. \\
\hline
\textbf{GT:} (22.89) We played with them for a while and then returned to the dock. \\
\textbf{Rank:} 9\\
\hline
\multicolumn{1}{l}{\textbf{Top scored:}} \\
(25.06) We were definitely lucky to see them and it made the trip more fun! \\
(24.31) They loved everything about that trip and vowed to do it again! \\
(23.76) We were sad to come home but excited to plan our next vacation. \\
(23.72) It was one of our best vacations ever! \\
\Xhline{1.5pt}
\textbf{Context:} Ellen wanted to be smart. She started reading the dictionary. She learned two hundred new words the first day. Ellen felt smart and educated. \\
\hline
\textbf{GT:} (30.23) She couldn't wait to use the new words. \\
\textbf{Rank:} 1\\
\hline
\multicolumn{1}{l}{\textbf{Top scored:}} \\
(30.23) She couldn't wait to use the new words. \\
(29.78) She felt like a new woman when she was done! \\
(29.01) She decided to go back to speaking like her normal self! \\
(28.95) She felt like a new girl! \\
\Xhline{1.5pt}
\textbf{Context:} It was a very cold night. Becky was shivering from the cold air. She needed to cover up before she caught a cold. She wrapped up in her favorite blanket. \\
\hline
\textbf{GT:} (18.72) Becky finally got warm. \\
\textbf{Rank:} 3,028\\
\hline
\multicolumn{1}{l}{\textbf{Top scores:}} \\
(39.09) Laura ended up shivering, wrapped in a blanket for hours. \\
(36.71) After being cold all day, the warmth felt so good. \\
(33.77) Sam was able to bundle up and stay cozy all winter. \\
(33.38) The breeze felt good on her wet shirt. \\
\Xhline{1.5pt}
\textbf{Context:} Benjamin enjoyed going fishing with his grandfather as a kid. They would pick a new location to go to every summer. Benjamin liked seeing who would catch the biggest fish. Even after his grandfather passed he continued the tradition. \\
\hline
\textbf{GT:} (26.65) He now takes his own grandchildren to create memories for themselves. \\
\textbf{Rank:} 2,281\\
\hline
\multicolumn{1}{l}{\textbf{Top ranked:}} \\
(34.71) Greg grew to love golfing and is now his favorite thing to do. \\
(33.82) It was a tradition Tim continues with his own family. \\
(33.63) Alex learned to be grateful of his family's unique tradition. \\
(33.40) Tom was sad that he would have to let his son down. \\
\Xhline{1.5pt}
\end{tabular}%
\caption{Top-scoring sentences (using resMLP without CSLoss) among 98k possible endings when using prompts from the validation set. Two success and two failures cases are shown.}
\label{tab:qualEval}%
\end{table}%
\subsection{Ranking Many Sentences on Toronto Book Corpus}
When evaluating with 100k distractors, about as many as our ROC Stories large-scale ranking task, P@10 is at best 7.4\%, compared with 10.3\% for ROC Stories (Tables~\ref{tab:tbc_task_eval} and~\ref{tab:full_task_eval}).
We suspect that this task would benefit from longer contexts and better selection of distractors.
In particular, a qualitative evaluation of the data highlighted the presence of a large quantity of short, generic sentences among the high-ranking candidates (e.g. ``he said.'' and ``Yes.'').
We see reducing the density of such sentences at training time as a potential for improvement.
In addition, further investigation is necessary into why the Transformer did not work as well as the residual MLP.
The use of variable sequence length architectures like the Transformer will become more critical as the input sequence length is increased beyond what an MLP can easily handle.
\section{Conclusions}
This work introduces a sentence-level language model which takes a sequence of sentences as context and predicts a distribution over a finite set of candidate next sentences. It takes advantage of pre-trained BERT embeddings to avoid having to learn token-level fluency, allowing the model to focus solely on the coherence of the sentence sequences. Our results on the Story Cloze task highlight the advantage of this strategy over word-level language models. At train time, our model considers much larger amounts of text per update than typical token-level language models. We show that this strategy allows our model to surface appropriate endings to short stories out of a large set of candidates.
As future work, we plan to further evaluate the impact of different sequential architectures, longer contexts, alternative sentence embeddings, and cleverer selection of distractors.
Inspired by deliberation networks and automatic post editing methods \citep{xia2017deliberation,freitag2019ape}, we ultimately want to apply our model to two-step generation, first selecting a sentence from a large set before refining it to fit the context.
\section*{Acknowledgements}
This research is based upon work supported in part by U.S. DARPA KAIROS Program No. FA8750-19-2-1004. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
|
1,116,691,499,781 | arxiv | \section{Introduction}
\label{intro}
The study of the kaon-nucleon interaction has triggered several experiments and theoretical calculations in the last two decades. From an experimental point of view, kaon production has been investigated at intermediate energies ($\mathrm{E_{kin}= 1-4\,GeV}$) for heavy-ion collisions and elementary reactions. Normally, the measured kinematic variables can be compared to transport models to infer information about the kaon-nucleus interaction. In this context, the $\Lambda (1405)$ resonance plays an important role. Indeed, this baryon is theoretically described as a molecular state composed of either a $\mathrm{\bar{K}-p}$ or a $\pi-\Sigma$ combination. Moreover, one expects that the production process and also the properties of the $\Lambda (1405)$ might depend upon the entrance reaction channel. If we consider that the $\Lambda(1405)$ is partially composed of a $\mathrm{K^--p}$ bound state, then by adding an additional proton we might obtain a $\mathrm{ppK^-}$ cluster \cite{Aka02}. This hypothesis also relies upon the fact that the kaon-nucleon interaction is thought to be strongly attractive \cite{Fuchs}. One could thus imagine that the $\Lambda(1405)$, produced together with an additional proton, might stick to it and form a $\mathrm{ppK^-}$. Experimentally, we have addressed this issue by studying on the one hand the reaction $\mathrm{p+p\rightarrow \Lambda(1405) +K^+ +p}$ and on the other hand $\mathrm{p+p\rightarrow ppK^- +K^+\rightarrow p+\Lambda + K^+}$. In this work, we discuss the status of the analysis of the $\mathrm{p}\,\mathrm{K}^+ \Lambda$ final state.
Our recent results about the $\Lambda(1405)$ production \cite{L1405Hades} show that the position of the maximum of the spectral function is found to be below $1390\,\mathrm{ MeV/c^2}$, suggesting a shift of the $\Lambda(1405)$ towards smaller masses with respect to the nominal value reported in the PDG.
The analysis presented in \cite{L1405Hades} does not include the contribution of interferences between the $\Lambda(1405)$ and the I=0 phase space background, which could account for the shift and also modify the obtained differential cross-sections.
Nevertheless, by neglecting interferences the angular distribution in the center of mass system (CMS) extracted for the $\Lambda(1405)$ indicates a rather isotropic production of the resonance, which is in agreement with the hypothesis of a rather large momentum exchange and a rather central p+p collision linked to this final state \cite{Has13}.
According to the theoretical predictions by \cite{Aka02}, the formation of the most fundamental of the kaonic bound states ($\mathrm{ppK^-}$) can happen in p+p collisions through the $\Lambda(1405)$ doorway. The underlying idea is that, the $\Lambda(1405)$ being already a $\mathrm{K^-p}$ bound state, if this resonance is produced together with another proton and the relative momentum between the two particles is relatively small, the highly attractive $K^-$-nucleon interaction might lead to the capture of a second proton by the $\Lambda(1405)$ and hence to the formation of a $\mathrm{ppK^-}$ molecule.
This scenario is predicted to be favored for p+p collisions at kinetic energies between $3-4\,\mathrm{GeV}$, where a large momentum transfer from the projectile to the target characterizes the dynamics and creates the optimal conditions for the formation of the kaonic cluster \cite{Aka02}.
From a theoretical point of view, the situation is rather controversial \cite{ppKTheo}. As summarized in \cite{Gal10}, different theoretical approaches predict the existence of a bound state like a $\mathrm{ppK^-}$, but the ranges of the predicted binding energies and widths are rather broad, varying from $16-95\,\mathrm{MeV/c^2}$ and $34-110\,\mathrm{MeV/c^2}$, respectively.
From an experimental point of view, signatures connected to the $\mathrm{ppK^-}$ have been collected by \cite{DIS1,Fin05}.
The result by the FINUDA collaboration \cite{Fin05} refers to measurements of stopped kaons on several solid targets and reports a $\mathrm{ppK^-}$ state with a binding energy of $115^{+6+3}_{-5-4}\,\mathrm{MeV}$ and a width of $67^{+14+2}_{-11-3}\,\mathrm{MeV}$, while the DISTO collaboration measured p+p reactions at $2.85\,\mathrm{GeV}$ kinetic energy and found
evidence for an exotic state with a binding energy of about $100\,\mathrm{MeV}$ and a width of $118 \pm 8\,\mathrm{MeV}$.
Following the same assumptions discussed in \cite{DIS1}, we have carried out an analysis of the final state:
\begin{equation}
\label{pkl}
p+p \rightarrow p+ K^+ +\Lambda \rightarrow p+K^+ + p + \pi^-
\end{equation}
to investigate the possibility of having an intermediate state $\mathrm{p+p \rightarrow ppK^- + K^+ }$ and the successive decay $\mathrm{ppK^-\rightarrow p+ \Lambda }$.
\section{Events Selection and Analysis}
The experiment was performed with the
{\bf H}igh {\bf A}cceptance {\bf D}i-{\bf E}lectron {\bf S}pectrometer (HADES) \cite{hades_nim}
at the heavy-ion synchrotron SIS18 at GSI Helmholtzzentrum f\"ur Schwerionenforschung in Darmstadt, Germany.
A proton beam of $\sim 10^7$ particles/s with $3.5\, \mathrm{GeV}$ kinetic energy was incident on a liquid hydrogen target of $50\, \mathrm{ mm}$ thickness corresponding to $0.7\,\%$ interaction length.
The data readout was started by a first-level trigger (LVL1) requiring a charged-particle multiplicity, $\mathrm{MUL}\,>3$, in the META system.
A total of $1.14\times10^9$ events was recorded under these experimental conditions.
The first analysis step consists of selecting events containing four charged particles ($p$, $p$, $\pi^-$, $K^+$).
Particle identification is performed employing the energy loss ($dE/dx$) of protons and pions in the MDCs. The selection of the $\Lambda$ hyperon is carried out by exploiting the invariant mass of the $\mathrm{p- \pi^-}$ pairs and the cuts described in \cite{Sig1385}. A kinematic refit of the events containing a $\Lambda$ candidate, a proton and a third positive particle is first carried out, employing energy and momentum conservation and requiring the $\Lambda$ nominal mass for the selected $\mathrm{p}-\pi^-$ combination as constraints.
\begin{figure}[htb]
\label{kaon}
\begin{minipage}[t]{70mm}
\begin{picture}(200,160)(0,0)
\put(0,-2.6){ \includegraphics[width=0.88\textwidth,height=.7\textwidth]
{KaonMass}}
\end{picture}
\end{minipage}
\hspace{\fill}
\begin{minipage}[t]{70mm}
\begin{picture}(200,160)(0,0)
\put(0,-2.6){ \includegraphics[width=.88\textwidth,height=.7\textwidth]
{DalitzPlot}}
\end{picture}
\end{minipage}
\caption{(Left) Color online. Reconstructed mass of the kaon candidates via the measurement of $\beta$ versus momentum. The full circles represent the experimental data, the red dashed line the contribution by the $K^+$ and the blue solid line the contribution from the protons. The gray solid line shows the global fit to the experimental data (see text for details). (Right) Color online. Correlation plot of the $\mathrm{K^+-\Lambda}$ ($\mathrm{M(K^+-\Lambda)}$) invariant mass as a function of the $\mathrm{p-\Lambda}$ invariant mass of the experimental data for the exclusive reaction $p+p\rightarrow p+ K^+ +\Lambda$. }
\end{figure}
The kinematic refit allows us to select events corresponding to the $\mathrm{p+K^+ +\Lambda}$ final state. A total of about $11\,000$ events is extracted and the mass of the third positive particle is shown in Fig.~1 (left panel). The full circles represent the experimental data corresponding to the selected $\mathrm{p+K^++\Lambda}$ events after the kinematic refit, the red dashed and the blue solid line correspond to full-scale simulations and represent the response to the kaon and proton signal respectively. The simulations are not absolutely normalized; the scaling factor is chosen so as to reproduce the experimental distribution.
One can see that the exclusive analysis allows a good $\mathrm{K^+}$ identification with a rather low contamination by protons, which translates into a signal to background ratio of about $15$. Within a $3\,\sigma$ cut around the nominal $\mathrm{K^+}$ mass, a background contribution of about $2\,\%$ has been estimated.
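The reconstructed mass shown in Fig.~1 (left) follows from the relativistic relation $m = p\sqrt{1/\beta^2-1}$ (with $c=1$); schematically:
\begin{verbatim}
import numpy as np

def mass_from_beta_p(p, beta):
    """Mass (GeV/c^2) from momentum p (GeV/c) and beta = v/c."""
    return p * np.sqrt(1.0 / beta**2 - 1.0)
\end{verbatim}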
Fig.~1 (right panel) shows a correlation plot for the selected reaction $\mathrm{p+p \rightarrow p+\Lambda +K^+}$ where the $\mathrm{K^+-\Lambda}$ ($\mathrm{M(K^+-\Lambda)}$) invariant mass is shown as a function of the $\mathrm{p-\Lambda}$ invariant mass ($\mathrm{M(p-\Lambda)}$) within the HADES acceptance and before the efficiency corrections. This distribution gives an impression of the phase space coverage which is accessible for this final state using the HADES spectrometer.
The analysis method discussed in \cite{DIS1} relies upon the deviation-plot technique. The experimental $\mathrm{pK^+\Lambda}$ Dalitz plot is divided by the Dalitz plot obtained by simulating the production of the $\mathrm{pK^+\Lambda}$ final state by pure phase space emission. The projection of the so-obtained ratio along the $\mathrm{M(p-\Lambda)^2}$ axis shows a large bump, and this bump is interpreted in \cite{DIS1} as evidence of an exotic state. By fitting the deviation plots obtained for $\mathrm{M(p-\Lambda)}$ and the $\mathrm{K^+}$ missing mass ($\mathrm{MM(K^+)}$) with a Gaussian superimposed on a linear background, a structure with the mass $\mathrm{M_X=\,2.265 \pm 0.002\,GeV/c^2}$ and a width $\mathrm{\Gamma_X =\, (0.118\pm\, 0.008)\, GeV/c^2}$ has been identified and associated with a bound state of two protons and a $\mathrm{K^-}$.
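For reference, the deviation-plot procedure amounts to a bin-by-bin division of histograms followed by a Gaussian-plus-linear fit; a schematic version (the binning and start values below are purely illustrative) reads:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def fit_deviation(mass, n_exp, n_phsp):
    """Fit the data/phase-space ratio with Gaussian + linear bg."""
    ratio = n_exp / np.clip(n_phsp, 1.0, None)  # guard empty bins
    def model(x, A, mu, sigma, a, b):
        return A * np.exp(-0.5 * ((x - mu) / sigma)**2) + a * x + b
    p0 = (ratio.max(), mass[np.argmax(ratio)], 0.05, 0.0, 1.0)
    (A, mu, sigma, a, b), _ = curve_fit(model, mass, ratio, p0=p0)
    return mu, 2.355 * abs(sigma)   # position and FWHM of the bump
\end{verbatim}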
\begin{figure}[h]
\centering
\includegraphics*[width=13.2cm]{IM}
\caption{Color online. The full circles in black show the experimental distribution for the invariant mass of the particle pairs: $\mathrm{\Lambda K}$ (a), $\mathrm{pK}$ (b) and $\mathrm{p\Lambda}$ (c). The full circles in blue show the same distributions obtained from the phase-space simulation of the $\mathrm{pK\Lambda}$ final state.}
\label{Sigma}
\end{figure}
It is clear that such a method does not take into account the role played by resonances like $\mathrm{N^*}$, the interferences among the different intermediate states and their contribution to the experimental spectrum.
As a first step, we would like to address the comparison of the experimental data to the $\mathrm{pK^+\Lambda}$ phase space simulation.
We have carried out full-scale simulations of the $\mathrm{pK^+\Lambda}$ final state by pure phase space emission within the HADES acceptance and we have compared these simulations to the experimental data within the acceptance. Fig.~\ref{Sigma} shows the three invariant mass spectra of the $\mathrm{pK^+\Lambda}$ final state. The full circles in black show the experimental distributions for the invariant mass of the particle pairs: $\mathrm{M(\Lambda K^+)}$ (a), $\mathrm{M(pK^+)}$ (b) and $\mathrm{M(p\Lambda)}$ (c). The full circles in blue show the same distributions obtained from the phase-space simulation of the $\mathrm{pK^+\Lambda}$ final state. One can see that the invariant mass distributions differ markedly, especially the $\mathrm{\Lambda K}$ and $\mathrm{pK}$ invariant mass distributions. If we compare the phase space simulations and the experimental data on the basis of the angular distributions in the CMS, Gottfried-Jackson and helicity reference frames, defined analogously to \cite{Sig1385}, the disagreement is visible as well.
\begin{figure}[h]
\centering
\includegraphics*[width=14.2cm]{Angles}
\caption{Angular distributions for the production in CMS of $\Lambda$, $p$ and $K^+$ (top row: $a:\,\Theta^{\Lambda}_{CMS},\,b:\Theta^{p}_{CMS}, \,c:\,\Theta^{K^+}_{CMS}$), Gottfried-Jackson (middle row $d:\,\Theta^{K-B/T}_{p-K},\,e:\,\Theta^{K--B/T}_{\Lambda-K},\,f:\,\Theta^{p-B/T}_{p-\Lambda}$) and helicity angles (bottom row: $g:\,\Theta^{K-p}_{p-\Lambda},\, h:\,\Theta^{K-\Lambda}_{p-K},\,i:\,\Theta^{p-\Lambda}_{K-\Lambda}$) angle frames. The full circles in black show the experimental data and those in blue the same distributions obtained from the phase-space simulation of the $\mathrm{pK\Lambda}$ final state.}
\label{angles}
\end{figure}
Fig.~\ref{angles} shows the angular distributions for the experimental data and the phase space simulations within the HADES acceptance for all the combinations in the CMS, Gottfried-Jackson and helicity reference frames. The fact that the phase space simulations do not show isotropic and symmetric distributions is partially due to the geometrical acceptance of the spectrometer for the studied reaction, but these effects are under control in the simulation package.
The same disagreement is found if the momentum distributions of the single particles are compared.
These comparisons show that the deviation between the phase space distribution and the experimental $\mathrm{pK^+\Lambda}$ final states cannot be explained by the incoherent sum of the phase space distribution with a single additional resonant state in the $\mathrm{p-\Lambda}$ channel. For this reason, a deviation plot would be very difficult to interpret.
\section{Contribution by the N$^*$ Resonances}
As suggested by the experimental invariant mass distribution of the $\mathrm{K^+-\Lambda}$ pairs and as visible in Fig.~\ref{Sigma}, the contribution by intermediate $\mathrm{N^*}$ resonances decaying into $\mathrm{K-\Lambda}$ pairs should be considered. The left panel of Fig.~\ref{Sigma} shows two broad peaks around $1700$ and $1900\,\mathrm{MeV/c^2}$ and suggests the presence of at least two N$^*$, but due to the acceptance effects this hypothesis needs to be verified via full-scale simulations.
\begin{figure}[h]
\centering
\includegraphics*[width=12.2cm]{NCocktail}
\caption{Color online. K$^+$ Missing mass (a), $p-\Lambda$ invariant mass (b), $\Lambda-K$ invariant mass (c) and $\Lambda$ missing mass (d) distributions. The black dots show the experimental data, the cyan and the magenta histograms show the contributions by the $N^*(1900)$ and $N^*(1720)$ resonances obtained from full scale simulations, the violet histogram shows the total sum of the simulations.}
\label{NCock}
\end{figure}
As a first attempt, simulations have been carried out including the incoherent sum of four $\mathrm{N^*}$ resonances with masses of $1650,\,1720,\,1900$ and $2190\,\mathrm{MeV/c^2}$ together with the phase-space production of the $\mathrm{pK^+\Lambda}$ state. The parameters of the resonances used in the simulations are summarized in Table 1. The choice of these resonances is rather arbitrary and constrained by the fact that a mere incoherent simulation model will not allow one to distinguish the contributions by the $\mathrm{N^*(1710)}$ and $\mathrm{N^*(1720)}$ or other resonance pairs lying at higher masses with a mass difference lower than $\mathrm{20\,MeV/c^2}$, since all these states are rather broad.
\begin{table}[t]
\begin{center}
\begin{tabular}{|c||c|c|c|c|}
\hline
$N^*$ Mass [$\mathrm{MeV/c^2}$] & 1650& 1720& 1900 &2190 \\
\hline
$N^*$ Width [$\mathrm{MeV/c^2}$] & 165 & 200 & 180 & 500\\
\hline
PDG Evidence &***&**& &* \\
\hline
\end{tabular}
\end{center}
\label{t1}
\caption{Masses and widths of the $N^*$ resonances employed in the simulations. The values are taken from the PDG \cite{PDG11}.}
\end{table}
The strength of the different contributions has been varied so as to reproduce the experimental data as well as possible. Fig.~\ref{NCock} shows the final result after the optimization of the simulation cocktail: the $\mathrm{K^+}$ missing mass (a), $\mathrm{p-\Lambda}$ invariant mass (b), $\mathrm{\Lambda-K}$ invariant mass (c) and $\Lambda$ missing mass (d) distributions are displayed. The black dots represent the experimental points within the HADES acceptance, the cyan and magenta histograms the contributions from the $\mathrm{N^*(1900)}$ and $\mathrm{N^*(1720)}$ resonances respectively, while the violet histogram corresponds to the total simulated distribution. The contributions from the other two $\mathrm{N^*}$ resonances included in the full-scale simulations are set to 0 by the minimization procedure, and the contribution by the pure phase space production amounts to only $1.5\%$ of the total yield and is not clearly visible in Fig.~\ref{NCock}. The contributions by the $\mathrm{N^*(1720)}$ and $\mathrm{N^*(1900)}$ resonances amount to $41.5\%$ and $57\%$ respectively, and a total $\chi^{2}$ value of $3.2$ is obtained by the comparison of the simulated distributions to the experimental data for the kinematic variables shown in Fig.~\ref{NCock}.
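One simple way to perform such an optimization is a non-negative template fit of the simulated histograms to the data; the sketch below illustrates the idea for a single kinematic variable, whereas our actual procedure compared several distributions simultaneously:
\begin{verbatim}
import numpy as np
from scipy.optimize import nnls

def fit_cocktail(data, templates):
    """Non-negative least-squares fit of simulation templates.

    data      : (n_bins,) experimental histogram
    templates : (n_sources, n_bins) simulated histograms
    Returns the fractional contribution of each source.
    """
    weights, _ = nnls(templates.T, data)   # weights >= 0
    yields = weights * templates.sum(axis=1)
    return yields / yields.sum()
\end{verbatim}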
The $\mathrm{M(\Lambda-K^+)}$ distribution shows a much improved agreement between the simulations and the experimental data, if compared to the distributions discussed in Fig.~\ref{Sigma}, and the two structures can mainly be associated to the contribution of the $\mathrm{N^*(1720)}$ and $\mathrm{N^*(1900)}$ resonances. The $\mathrm{\Lambda}$ missing mass distribution shows a similar qualitative agreement between the simulation and the experimental data, in particular the presence of the $\mathrm{N^*(1900)}$ resonance seems mandatory to describe the low missing mass region.
On the other hand, the incoherent simulation employed here, which does not even contain the proper angular distributions of the different final states, does not aim at a quantitative determination of the different $\mathrm{N^*}$ contributions. A more complete analysis in this direction is currently being carried out.
When looking at the $\mathrm{p-\Lambda}$ invariant mass (Fig.~\ref{NCock} (b)), the simulated distribution is shifted to the right-hand side of the mass range, probably due to the fact that the dynamics of the reaction are not completely described by the simulations.
Indeed, one has to point out that the experimental angular distributions in the CMS, Gottfried-Jackson and helicity reference frames cannot be described by the new simulations including the $\mathrm{N^*}$ resonances. This implies that interferences among the different intermediate states might play an important role and should be accounted for, and it also reflects the fact that the simulations have so far not been weighted with the correct production and decay angular distributions.
\begin{figure}[h]
\centering
\includegraphics*[width=13.2cm]{DevPlotN}
\caption{Deviation plot for the $p-\Lambda$ (a) and $K-\Lambda$ (b) invariant mass distribution.}
\label{DevN}
\end{figure}
If we want to compare the experimental correlation plot shown in Fig.~1 (right panel) to the new simulations obtained by incoherently adding the phase space production of the $pK^+\Lambda$ final state to the $\mathrm{N^*}$ contributions, we can build a deviation plot by dividing the experimental data by the simulation. The projections of this ratio on the $\mathrm{p-\Lambda}$ and $\mathrm{\Lambda-K^+}$ invariant mass axes are shown in Fig.~\ref{DevN}, (a) and (b) respectively. As one can see, the distribution for the $\mathrm{\Lambda-K^+}$ ratio is rather flat, while for the $\mathrm{p-\Lambda}$ invariant mass ratio the shift of the simulated distribution to the right-hand side of the spectrum with respect to the experimental distribution, as visible in Fig.~\ref{NCock} (b), generates a broad bump in the deviation plot. Hence this bump cannot be directly attributed to a resonance, since the shift of the two spectra, which is also visible in the $\mathrm{K^+}$ missing mass distribution (Fig.~\ref{NCock} (a)), can be due to the fact that the simulation model that has been used for the comparison does not include basic features of the $\mathrm{pK^+\Lambda}$ production.
It has to be pointed out that several attempts have been made to model non-isotropic angular distributions for the $\mathrm{N^*}$ resonances following the same line of reasoning as in the analysis in \cite{COSY_10}, but no solution was found which enabled us to reproduce the experimental data. New studies employing partial wave analysis have been started and look very promising. A detailed modeling of the experimental data is also necessary to extract a valid acceptance correction, since the geometrical acceptance of the HADES spectrometer is not $100\%$.
The DISTO results assign the signature to the exotic state after a cut on the polar angle of the final state proton ($\mathrm{|{\cos \theta_{CMS}}| \geq 0.6}$) in order to suppress the phase space
production contribution. This cut would not affect at all the HADES data, since small polar angles for final state protons are not accessible for this colliding system due to the limited
geometrical acceptance of the HADES spectrometer in the forward direction. Moreover, our results stay the same even if a further cut on the
$K^+$ emission angle ($\mathrm{-0.2 < \cos \theta_{K^+} < 0.4} $), as employed in the DISTO analysis to improve the S/B ratio, is applied.
\section{Summary}
We have shown the analysis of the reaction $\mathrm{p+p\rightarrow p+\Lambda +K^+}$ for an incoming beam with a kinetic energy of $3.5\,\mathrm{GeV}$ measured with the HADES spectrometer. A high-purity sample of about $11\,000$ exclusive $\mathrm{pK^+\Lambda}$ events has been extracted, and the invariant mass correlation plot and the corresponding one-dimensional projections have been
compared to full-scale simulations with a pure phase-space event generator.
The comparison shows that the phase-space simulations cannot describe the experimental missing mass, invariant mass and angular distributions. The disagreement cannot be overcome by adding the contribution of one resonance in the $\mathrm{p-\Lambda}$ decay channel with a mass around $2300\,\mathrm{MeV/c^2}$. The $K^+-\Lambda$ invariant mass shows a clear contribution by at least two $\mathrm{N^*}$ resonances to the analyzed final state. A dedicated full-scale simulation, including, in addition to the $\mathrm{pK^+\Lambda}$ phase-space distribution, the contributions from $\mathrm{N^*(1720)}$ and $\mathrm{N^*(1900)}$, achieves a better description of the experimental data, but still fails to describe the angular distributions. The deviation plot in the $\mathrm{p-\Lambda}$ invariant mass distribution shows a wide bump around $2400\,\mathrm{MeV/c^2}$ that seems to originate from a shift in the kinematics of the simulation with respect to the experimental data. This observation jeopardizes the solidity of the deviation method exploited to extract the DISTO $\mathrm{ppK^-}$ signal.
Currently the partial wave analysis method is being investigated to include the interferences among the $\mathrm{N^*}$ resonances and all the other intermediate states contributing to the $pK^+\Lambda$ final state.
The HADES collaboration gratefully acknowledges the support by the grants LIP Coimbra, Coimbra (Portugal) PTDC/FIS/113339/2009, SIP JUC Cracow, Cracow (Poland): N N202 286038 28-JAN-2010 NN202198639 01-OCT-2010, FZ Dresden-Rossendorf (FZD), Dresden (Germany) BMBF 06DR9059D, TU M\"unchen, Garching (Germany) MLL M\"unchen: DFG EClust 153, VH-NG-330 BMBF 06MT9156 TP5 GSI TMKrue 1012 NPI AS CR, Rez, Rez (Czech Republic) MSMT LC07050 GAASCR IAA100480803, USC - S. de Compostela, Santiago de Compostela (Spain) CPAN:CSD2007-00042, Goethe-University, Frankfurt (Germany): HA216/EMMI HIC for FAIR (LOEWE) BMBF:06FY9100I GSI F\&E.
|
1,116,691,499,782 | arxiv | \section{Notations and definitions}
\begin{definition}
We use notations inspired by the paper \cite{and}.
\begin{enumerate}
\item Let $\mathcal{X}$ and $\mathcal{A}$ be two disjoint alphabets
for distinguishing the $\lambda$-variables and $\mu$-variables
respectively. We code deductions by using a set of terms
$\mathcal{T}$ which extends the $\lambda$-terms and is given by the
following grammars:
\begin{center}
$\mathcal{T} \; := \; \mathcal{X} \; | \; \lambda\mathcal{X}.\mathcal{T} \; | \; (\mathcal{T}\;\;\mathcal{E}) \; | \; \< \mathcal{T},\mathcal{T} \> \; | \; \omega_1 \mathcal{T} \; | \; \omega_2 \mathcal{T} \; | \; \mu\mathcal{A}.\mathcal{T} \; | \; (\mathcal{A}\;\;\mathcal{T})$ \\[0.5ex]
$\mathcal{E} \; := \; \mathcal{T} \; | \; \pi_1 \; | \; \pi_2 \; | \; [\mathcal{X}.\mathcal{T},\mathcal{X}.\mathcal{T}]$
\end{center}
An element of the set $\mathcal{E}$ is said to be an
$\mathcal{E}$-term.
\item The meaning of the new constructors is given by the typing rules
below where $\Gamma$ (resp. $\Delta$) is a context, i.e. a set of declarations of the form
$x : A$ (resp. $a : A$) where $x$ is a $\lambda$-variable (resp. $a$
is a $\mu$-variable) and $A$ is a formula.
\begin{center}
$\displaystyle\frac{}{\Gamma, x:A\,\, \vdash x:A\,\, ; \, \Delta}{ax}$
\end{center}
\begin{center}
$\displaystyle\frac{\Gamma, x:A \vdash t:B;\Delta}{\Gamma \vdash \lambda x.t:A \to
B;\Delta}{\to_i}
\quad\quad\quad
\displaystyle\frac{\Gamma \vdash u:A \to B;\Delta \quad \Gamma \vdash
v:A;\Delta}{\Gamma\vdash (u\; \; v):B;\Delta}{\to_e}$
\end{center}
\begin{center}
$\displaystyle\frac{\Gamma \vdash u:A;\Delta \quad \Gamma \vdash v:B ; \Delta}{\Gamma
\vdash \<u,v\>:A \wedge B ; \Delta}{\wedge_i}$
\end{center}
\begin{center}
$\displaystyle\frac{\Gamma \vdash t:A \wedge B ; \Delta}{\Gamma \vdash (t\;\;\pi_1):A ;
\Delta}{\wedge^1_e}
\quad
\displaystyle\frac{\Gamma \vdash t:A\wedge B ; \Delta}{\Gamma \vdash (t\;\;\pi_2):B ;
\Delta}{\wedge^2_e}$
\end{center}
\begin{center}
$\displaystyle\frac{\Gamma\vdash t:A;\Delta}{\Gamma\vdash \omega_1
t:A \vee B ;\Delta}{\vee^1_i}
\quad
\displaystyle\frac{\Gamma \vdash t:B; \Delta}{\Gamma\vdash \omega_2 t:A\vee B
;\Delta}{\vee^2_i}$
\end{center}
\begin{center}
$\displaystyle\frac{\Gamma \vdash t:A\vee B ;\Delta\quad\Gamma, x:A \vdash u:C ;
\Delta\quad\Gamma, y:B \vdash v:C ; \Delta}{\Gamma \vdash (t\;\;[x.u,
y.v]):C ; \Delta}{\vee_e}$
\end{center}
\begin{center}
$\displaystyle\frac{\Gamma\vdash t:A ;\Delta, a:A}{\Gamma \vdash (a\;\;t):\bot ;\Delta,
a:A}{abs_i}
\quad
\displaystyle\frac{\Gamma\vdash t:\bot; \Delta, a:A}{\Gamma \vdash\mu a.t:A;
\Delta}{abs_e}$
\end{center}
\item The cut-elimination procedure corresponds to the reduction rules
given below; they are those we need in order to obtain the subformula
property (a small executable sketch of these rules is given right after this definition).
\begin{itemize}
\item $(\lambda x.u \;\; v) \triangleright u[x:=v]$
\item $(\<t_1,t_2\>\;\;\pi_i) \triangleright t_i$
\item $(\omega_i t\;\;[x_1.u_1,x_2.u_2]) \triangleright u_i[x_i:=t]$
\item $((t\;\;[x_1.u_1,x_2.u_2])\;\;\varepsilon) \triangleright
(t\;\;[x_1.(u_1\; \varepsilon),x_2.(u_2\;\varepsilon)])$
\item $(\mu a.t\;\; \varepsilon) \triangleright \mu
a.t[a:=^*\varepsilon]$.
\end{itemize}
where $ t[a:=^*\varepsilon]$ is obtained from $t$ by replacing
inductively each subterm in the form $(a \; v)$ by $(a \; (v \;
\varepsilon))$.
\item Let $t$ and $t'$ be $\mathcal{E}$-terms. The notation $t
\triangleright t'$ means that $t$ reduces to $t'$ by using one step
of the reduction rules given above. Similarly, $t \triangleright^* t'$
means that $t$ reduces to $t'$ by using some steps of the reduction
rules given above.
\end{enumerate}
\end{definition}
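The reduction rules above can be animated by a small term-rewriting program. In the sketch below, terms are encoded as nested tuples and all variable names are assumed to be globally distinct, so that substitution needs no $\alpha$-renaming; only three of the five rules are implemented.
\begin{verbatim}
def subst(t, x, v):
    """t[x := v], assuming globally distinct bound variables."""
    if t == ('var', x):
        return v
    if isinstance(t, tuple):
        return tuple(subst(s, x, v) if isinstance(s, tuple) else s
                     for s in t)
    return t

def mu_subst(t, a, eps):
    """t[a :=* eps]: inductively turn (a v) into (a (v eps))."""
    if isinstance(t, tuple) and t[:2] == ('named', a):
        return ('named', a, ('app', mu_subst(t[2], a, eps), eps))
    if isinstance(t, tuple):
        return tuple(mu_subst(s, a, eps) if isinstance(s, tuple) else s
                     for s in t)
    return t

def step(t):
    """One head step: beta, projection and mu rules only."""
    if isinstance(t, tuple) and t[0] == 'app':
        f, e = t[1], t[2]
        if f[0] == 'lam':                 # (lambda x.u v) > u[x:=v]
            return subst(f[2], f[1], e)
        if f[0] == 'pair' and e[0] == 'proj':  # (<t1,t2> pi_i) > t_i
            return f[e[1]]
        if f[0] == 'mu':                  # (mu a.t eps) > mu a.t[a:=*eps]
            return ('mu', f[1], mu_subst(f[2], f[1], e))
    return t

# example: (lambda x.x  y) reduces to y
assert step(('app', ('lam', 'x', ('var', 'x')), ('var', 'y'))) \
       == ('var', 'y')
\end{verbatim}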
The following result is straightforward.
\begin{theorem}(Subject reduction)
If $\Gamma \vdash t : A ; \Delta$ and $t \triangleright^* t'$, then $\Gamma
\vdash t' : A ; \Delta$.
\end{theorem}
We have also the following properties (see \cite{and}, \cite{dav2}, \cite{deG2}, \cite{Nour7} and \cite{nour}).
\begin{theorem}(Confluence) If $t\triangleright^* t_1$ and $t\triangleright^* t_2$,
then there exists $t_3$ such that $t_1\triangleright^* t_3$ and $t_2\triangleright^* t_3$.
\end{theorem}
\begin{theorem}(Strong normalization) If $\Gamma \vdash t : A ; \Delta$,
then $t$ is strongly normalizable.
\end{theorem}
\section {The semantics}
\begin{definition}
\begin{enumerate}
\item We denote by $\mathcal{E}^{<\omega}$ the set of finite sequences
of $\mathcal{E}$-terms. The empty sequence is denoted by $\emptyset$.
\item We denote by $\bar{w}$ the sequence $w_1 w_2...w_n$. If $\bar{w}=w_1 w_2...w_n$, then $(t \;\bar{w})$ is $t$ if $n=0$ and $((t \; w_1)\;w_2...w_n)$ if $n\neq 0$. The term $t[a:=^*\bar{w}]$ is the term obtained from $t$ by replacing
inductively each subterm in the form $(a \; v)$ by $(a \; (v
\;\bar{w}))$.
\item A set of terms $S$ is said to be $\mu$-saturated iff:
\begin{itemize}
\item For all terms $u$ and $v$, if $ u\in S$
and $ v \triangleright^* u$, then $ v\in S$.
\item For each $a \in \mathcal{A}$ and for each $t \in S$, $\mu a.t \in
S$ and $(a\; t) \in S$.
\end{itemize}
\item Given two sets of terms $K$, $L$ and a $\mu$-saturated set $S$,
we define new sets of terms:
\begin{itemize}
\item $K \to L =\{ t$ / $(t\; u) \in L,$ for each $ u \in K\}$.
\item $K \wedge L = \{t$ / $(t\;\pi_1) \in K$ and $(t\;\pi_2) \in L
\}$.
\item $K \vee L = \{t$ / for each $u,v$: if (for each $r \in K$,$s
\in L$: $u[x:=r]\in S$ and $ v[y:=s]\in S)$, then $(t\;[x.u,y.v])\in
S\}$.
\end{itemize}
\item Let $S$ be a $\mu$-saturated set and $\{ R_i\}_{ i\in I}$ subsets of terms such that
$R_i = X_i \to S$ for certain $X_i \subseteq
\mathcal{E}^{<\omega}$. A model $\mathcal{M}$ $ = \langle S; \{R_i\}_{
i\in I}\rangle$ is the smallest set of subsets of terms containing $S$
and $R_i$ and closed under constructors $\to$, $\wedge$ and $\vee$.
\end{enumerate}
\end{definition}
\begin{lemma} Let $\mathcal{M} =\langle S; \{R_i\}_{ i\in I}\rangle$
be a model and $G \in \mathcal{M}$.
There exists a set $X\subseteq \mathcal{E}^{<\omega} $ such that $G= X \to S$.
\end{lemma}
\begin{proof} By induction on $G$.
\begin{itemize}
\item $G=S$: Take $X=\{\emptyset\}$, it is clear that
$S=\{\emptyset\} \to S$.
\item $G=G_1 \to G_2$: We have $G_2=X_2 \to
S$ for a certain set $X_2$. Take $X=\{u\;
\bar{v}$ / $u \in G_1, \bar{v}\in X_2\}$. We can easily check that $G = X \to S $.
\item $G=G_1 \wedge G_2$: Similar to the previous case.
\item $G=G_1 \vee G_2$: Take $X=\{[x.u,y.v]$ / for each $r\in G_1$ and $s\in
G_2\;,\; u[x:=r] \in S$ and $ v[y:=s] \in S \}$. By definition $G = X
\to S$.
\end{itemize}
\end{proof}
\begin{definition} Let $\mathcal{M} = \langle
S; \{R_i\}_{ i\in I}\rangle$ be a model and $G \in \mathcal{M}$, we
define the set $G^\perp = \cup \{X$ / $G = X \to S \}$.
\end{definition}
\begin{lemma}
Let $\mathcal{M} = \langle S; \{R_i\}_{ i\in I}\rangle$ be a model and
$G \in \mathcal{M}$.
We have $G= G^\perp \to S$ ($G^\perp$ is the greatest $X$ such that $G
= X \to S$).
\end{lemma}
\begin{proof}
This comes from the fact that if $G=X_j \to S$ for every $j \in J$,
then $G=\left(\cup_{j \in J} X_j\right) \to S$: indeed, $t\in \left(\cup_{j \in J} X_j\right) \to S$ iff $(t\;u)\in S$ for each $u$ in every $X_j$, iff $t\in X_j \to S=G$ for every $j \in J$.
\end{proof}
\begin{definition}
\begin{enumerate}
\item Let $\mathcal{M} = \langle S; \{R_i\}_{ i\in I}\rangle$ be a model. An
$\mathcal{M}$-interpretation $I$ is a mapping from
the set of propositional variables to $\mathcal{M}$, which we extend
to any type as follows:
\begin{itemize}
\item $I(\perp)=S$
\item $I(A \to B)= I(A) \to I(B)$.
\item $I(A\wedge B)= I(A) \wedge I(B)$.
\item $I(A\vee B)= I(A) \vee I(B)$.
\end{itemize}
The set $\vert A \vert_{\mathcal{M}} =\cap \{ I(A)$ / $I$ an
$\mathcal{M}$-interpretation$\}$ is the interpretation of $A$ in
$\mathcal{M}$.
\item The set $\vert A \vert = \cap \{\vert A \vert_{\mathcal{M}}$ /
$\mathcal{M}$ a model$\}$ is the interpretation of $A$.
\end{enumerate}
\end{definition}
\begin{lemma}(Adequacy lemma) \label{adq} Let $\mathcal{M} =\langle
S; \{R_i\}_{ i\in I} \rangle$ be a model, $I$ an
$\mathcal{M}$-interpretation, $\Gamma =\{x_i : A_i\}_{1\le i\le n}$, $\Delta =\{a_j : B_j\}_{1\le j\le m}$, $ u_i
\in I(A_i)$, $\bar{v_j} \in I(B_j)^\perp$.
If $\Gamma \vdash t:A ; \Delta$, then
$t[x_1:=u_1,...,x_n:=u_n,a_1:=^*\bar{v_1},...,a_m:=^*\bar{v_m}]\in I(A)$.
\end{lemma}
\begin{proof}
Let us denote by $s'$ the term
$s[x_1:=u_1,...,x_n:=u_n,a_1:=^*\bar{v_1},...,a_m:=^*\bar{v_m}]$.
The proof is by induction on the derivation, we consider the last
rule:
\begin{enumerate}
\item ax, $\to_e$ and $\wedge_e$: Easy.
\item $\to_i$: In this case $t=\lambda x.u$ and $A= B\to C$ such that
$\Gamma, x:B\vdash u: C\,\,\, ; \, \Delta$. By induction hypothesis,
$u'[x:=v] \in I(C)= I(C)^\perp \to S$ for each $v\in I(B)$, then
$(u'[x:=v]\; \bar{w}) \in S$ for each $ \bar{w} \in I(C)^\perp
$, hence $((\lambda x.u'\;v)\;\bar{w})\in S$ because $((\lambda x.u'\;
v)\;\bar{w}) \triangleright^* (u'[x:=v]\;\bar{w})$. Therefore $t'=\lambda x.u' \in I(B) \to
I(C) = I(A)$.
\item $\wedge_i$ and $\vee_i^j$: A similar proof.
\item $\vee_e$: In this case $t=(t_1\;[x.u,y.v])$ with $(\Gamma \vdash
t_1:B\vee C ; \Delta)$, $(\Gamma, x:B\vdash u: A ; \Delta)$ and
$(\Gamma, y:C\vdash v: A ; \Delta)$. Let $r \in I(B)$ and $s \in
I(C)$, by induction hypothesis, $t'_1 \in I(B) \vee I(C)$, $u'[x:=r]
\in I(A) $ and $v'[y:=s] \in I(A)$. Let $\bar{w} \in I(A)^\perp$, then
$(u'[x:=r]\; \bar{w}) \in S $ and $(v'[y:=s]\; \bar{w})
\in S$, hence
$(t'_1\;[x.(u'\; \bar{w}),y.(v'\; \bar{w})]) \in S$, since
$((t'_1\;[x.u',y.v'])\;\bar{w}) \triangleright^* (t'_1\;[x.(u'\;
\bar{w}),y.(v'\; \bar{w})])$, then $((t'_1\;[x.u',y.v'])\;\bar{w})\in
S$. Therefore $t'=(t'_1\;[x.u',y.v'])\in I(A)$.
\item $abs_e$: In this case $t=\mu a.t_1 $
and $\Gamma \vdash t_1 : \perp\,\,\, ; \, \Delta' , a:A$. Let
$\bar{v} \in I(A)^\perp$. It suffices to prove that $(\mu a.t'_1
\;\bar{v})
\in S$. By induction hypothesis, $t'_1[a:=^*\bar{v}] \in I(\perp)=S$, then $\mu
a.t'_1[a:=^*\bar{v}]\in S$ and $(\mu a.t'_1
\; \bar{v})\in S$.
\item $abs_i$: In this case $t= (a_j \;u)$ and $\Gamma \vdash u : B_j ; \Delta', a_j:B_j$. We have to prove that $t' \in S$.
By induction hypothesis $u' \in I(B_j)$, then $(u'\;\bar{v_j}) \in S$,
hence $t'=(a_j\;(u'\; \bar{v_j})) \in S$.
\end{enumerate}
\end{proof}
\begin{theorem}(Correctness theorem)\label{closed}
If $\vdash t:A$, then $t \in\vert A \vert$.
\end{theorem}
\begin{proof}
Immediately from the previous lemma.
\end{proof}
\section{The operational behaviors of some typed terms}
The following results are some applications of the correctness theorem.
\begin{definition}
Let $t$ be a term. We denote $M_t$ the smallest set containing $t$
such that: if $u \in M_t$ and $a \in {\cal A}$, then $\mu a.u \in M_t$
and $(a \; u) \in M_t$. Each element of $M_t$ is denoted $\underline{\mu}.t$.
For example, the term $\mu a.\mu b.(a \; (b \; (\mu c. (a \; \mu d.t))))$ is
denoted by $\underline{\mu}.t$.
\end{definition}
In the remainder of the paper, the letter $P$ denotes a propositional
variable which represents an arbitrary type.
\subsection{Terms of type $\perp \to P $ ``Ex falso sequitur quodlibet''}
\begin{example}
Let ${\cal T} =\lambda z.\mu a.z$. We have ${\cal T} : \perp \to P$ and for
every term $t$ and $\bar{u} \in \mathcal{E}^{<\omega}$, $(({\cal T}\; t) \;
\bar{u}) \triangleright^* \mu a.t$.
\end{example}
\begin{remark} The term $({\cal T} \;t)$ models an instruction like
${\tt exit}(t)$ (${\tt exit}$ is to be understood as in the C
programming language). In the reduction of a term, if the sub-term
$({\cal T} \;t)$ appears in head position (the term has the form
$(({\cal T}\; t) \; \bar{u})$), then after some reductions, we obtain
$t$ as result.
\end{remark}
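For instance, when $\bar u$ consists of a single term $u_1$, the reduction runs as follows (the bound $\mu$-variable $a$ can be chosen fresh for $t$, so that $t[a:=^*u_1]=t$):
\begin{center}
$(({\cal T}\; t)\; u_1) \triangleright ((\mu a.t)\; u_1) \triangleright \mu a.t[a:=^*u_1] = \mu a.t$.
\end{center}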
The general operational behavior of terms of type $\perp \to P$ is
given in the following theorem:
\begin{theorem}
Let $T$ be a closed term of type $\perp \to P$, then for every term
$t$ and $\bar{u} \in \mathcal{E}^{<\omega}$, $((T\; t) \;
\bar{u}) \triangleright^* \underline {\mu}. t$.
\end{theorem}
\begin{proof}
Let $t$ be a term and $\bar{u}\in \mathcal{E}^{<\omega}$. Take
$S=\{v$ / $v\triangleright^* \underline{\mu}. t\}$ and $R=\{\bar{u}\}\to S$. It is clear
that $S$ is a $\mu$-saturated set and $t \in S$. Let $\mathcal{M}=\langle
S;R \rangle$ and $I$ an $\mathcal{M}$-interpretation such that
$I(P)=R$. By Theorem
\ref{closed}, we have $T \in S \to (\{\bar{u}\}\to S)$, then $((T\;t)\;
\bar{u}) \in S$ and $((T\; t) \;
\bar{u}) \triangleright^* \underline {\mu}. t$.
\end{proof}
\subsection{Terms of type $(\neg P \to P) \to P $ ``Peirce's law''}
\begin{example} Let ${\cal C}_1 =\lambda z.\mu a.(a \, (z\,\, \lambda y.(a\, y)))$ and
${\cal C}_2 =\lambda z.\mu
a.(a \,
(z\,\,\lambda x.(a\, (z\,\, \lambda y. (a\,x)))))$.
We have $\vdash {\cal C}_i : (\neg P \to P) \to P$ for $i \in \{1,2\}$.
Let $u,v_1,v_2$ be terms and $\bar{t} \in
\mathcal{E}^{<\omega}$, we have :
$(({\cal C}_1\; u)\; \bar{t}) \triangleright^* \mu a.(a\;((u\,\te_1) \, \bar{t}))$ and $(\te_1
\, v_1) \triangleright^* (a \, (v_1\; \bar{t}))$,
and
$(({\cal C}_2 \;u)\; \bar{t}) \triangleright^* \mu a.(a \, ((u \, \te_1) \, \bar{t}))$,
$(\te_1\; v_1) \triangleright^* (a \,((u \, \te_2)\,\bar{t}))$ and
$(\te_2\; v_2) \triangleright^* (a \, (v_1 \, \bar{t}))$.
\end{example}
\begin{remark}
The term ${\cal C}_1$ models the ${\tt Call/cc}$ instruction of
the Scheme functional programming language.
\end{remark}
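To see where $\te_1$ comes from in the example above, take $\bar t$ to be a single term $w$; a short computation (using the rule for $\mu$ and the definition of $[a:=^*w]$) gives
\begin{center}
$(({\cal C}_1\; u)\; w) \triangleright (\mu a.(a\, (u\,\, \lambda y.(a\, y)))\; w) \triangleright \mu a.(a\, ((u\,\, \lambda y.(a\,(y\; w)))\; w))$,
\end{center}
so that $\te_1=\lambda y.(a\,(y\; w))$, and indeed $(\te_1\; v_1) \triangleright (a\,(v_1\; w))$.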
The following theorem describes the general operational behavior of
terms with type $(\neg P \to P) \to P$.
\begin{theorem}
Let $T$ be a closed term of type $(\neg P \to P) \to P$, then for
every term $u$ and $\bar{t}
\in\mathcal{E}^{<\omega}$, there exist $m \in \mathbb{N}$ and terms
$\te_1,...,\te_m$ such that for every terms $v_1,...,v_m$, we
have:
$((T\; u)\; \bar{t}) \triangleright^* \underline{\mu}. ((u\,\te_1)\, \bar{t})$
$(\te_i\; v_i)\triangleright^* \underline{\mu}. ((u \, \te_{i+1})\, \bar{t})$ for every $1
\le i \le m-1 $
$(\te_m\; v_m) \triangleright^* \underline{\mu}. (v_{i_0}\, \bar{t})$ for a certain $1\le i_0 \le m$
\end{theorem}
\begin{proof}
Let $u$ be a $\lambda$-variable and $\bar{t} \in \mathcal{E}^{<\omega}$.
Take $S=\{t$ / $\exists m\ge 0, \exists \te_1,...,\te_m$ : $t \triangleright^*
\underline{\mu}. ((u \;\te_1)\,\bar{t})$, $(\te_i\; v_i) \triangleright^* \underline{\mu}.
((u \,\te_{i+1})\,\bar{t})$ for every $1\le i \le m-1$ and $(\te_m\; v_m) \triangleright^*
\underline{\mu}. (v_{i_0} \bar{t})$ for a certain $1\le i_0 \le m \}$ and $R= \{\bar{t}\} \to S$. It
is clear that $S$ is a $\mu$-saturated set. Let $\mathcal{M} =
\langle S;R \rangle$ and an $\mathcal{M}$-interpretation $I$ such that
$I(P)= R$. By Theorem \ref{closed}, $T \in [(R \to S) \to R] \to
(\{\bar{t}\}
\to S)$. It suffices to check that $u \in (R \to S) \to R$. For
this, we take $\te \in (R \to S)$ and we prove that $(u\; \te) \in R$,
i.e. $((u\; \te)\;\bar{t}) \in S$. But by the definition of $S$, it
suffices to have $(\te\; v_i) \in S$, which is true since the terms
$v_i \in R$, because $(v_i\;\bar{t})\in S$.
\end{proof}
\subsection{Terms of type $\neg P \vee P $ ``Tertium non datur''}
\begin{example}
Let ${\cal W} =\mu b.(b\, \omega_1 \mu a.(b\, \omega_2 \lambda y.(a\,y)))$. We
have $\vdash {\cal W} : \neg P \vee P$.
Let $x_1, x_2$ be $\lambda$-variables, $u_1, u_2,v$ terms and $\bar{t}
\in \mathcal{E}^{<\omega}$. We have:
$({\cal W} \,[x_1.u_1,x_2.u_2]) \triangleright^* \mu b.(b \; u_1[x_1:=\te_1^1])$,
$(\te_1^1 \,\bar{t}) \triangleright^* \mu a.(b\; u_2[x_2:=\te_2^2])$ and
$(\te_2^2 \,v) \triangleright^* (a\,(v \,\bar{t}))$,
where $\te_1^1=\mu a.(b\; (\omega_2 \lambda y.(a\;y) \;[x_1.u_1,x_2.u_2]))$
and $\te_2^2=\lambda y.(a\;(y\;\bar{t}))$.
\end{example}
\begin{remark}
The term ${\cal W}$ models the ${\tt try...with...}$ instruction of
the Caml programming language.
\end{remark}
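As a rough OCaml illustration of this analogy (the function names and types below are our own, and OCaml exceptions are only an informal analogue of the control behaviour of ${\cal W}$, not a translation of it):
\begin{verbatim}
exception Throw of int

(* u1 receives a "negation of P": a function that never returns
   normally; calling it transfers control to the handler, where
   u2 (the "P" branch) resumes the computation. *)
let w_example (u1 : (int -> 'a) -> int) (u2 : int -> int) : int =
  try u1 (fun v -> raise (Throw v)) with Throw v -> u2 v
\end{verbatim}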
The following theorem gives the behavior of all terms with type $\neg
P \vee P$.
\begin{theorem}
Let $T$ be a closed term of type $\neg P \vee P$. Then for all $\lambda$-variables $x_1, x_2$, terms
$u_1 ,u_2$ and every sequence $(\bar{t_n})_{n \ge 1}$ of elements of
$\mathcal{E}^{<\omega}$, there exist $m \in \mathbb{N}$ and terms
$\te_1^i,...,\te_m^i$, $1\le i \le 2$, such that for all terms
$v_1,...,v_m$, we have:
$(T\,[x_1.u_1,x_2.u_2]) \triangleright^* \underline{\mu}. u_i[x_i:=\te_1^i]$
$(\te_j^1 \, \bar{t_j}) \triangleright^* \underline{\mu}. u_i[x_i:=\te_{j+1}^i]$ for all $1\le j
\le m-1 $
$(\te_j^2\; v_j) \triangleright^* \underline{\mu}. u_i[x_i:=\te_{j+1}^i]$ for all $1\le
j \le m-1 $
$(\te_m^1\; \bar{t_m}) \triangleright^* \underline{\mu}.(v_p \;\bar{t_q})$ for a certain $1\le p \le m
$ and a certain $1\le q \le m$
$(\te_m^2\; v_m) \triangleright^* \underline{\mu}.(v_p \;\bar{t_q})$ for a certain $1\le p \le m$
and a certain $1\le q \le m$
\end{theorem}
\begin{proof} Let $u_1$, $u_2$ be terms and $(\bar{t_n})_{n \ge 1
}$ a sequence of $\mathcal{E}^{<\omega}$. Take $S=\{t$ /
$\exists m\ge 0, \exists \te_1^i,...,\te_m^i$ $ 1\le i \le 2$ : $t \triangleright^*
\underline{\mu}. u_i[x_i:=\te_1^i],$ $(\te_j^1\; \bar{t_j}) \triangleright^*
\underline{\mu}. u_i[x_i:=\te_{j+1}^i]$ for all $1\le j
\le m-1$,
$(\te_j^2\; v_j) \triangleright^* \underline{\mu}. u_i[x_i:=\te_{j+1}^i]$ for all $1\le j
\le m-1$, $(\te_m^1 \;\bar{t_m}) \triangleright^* \underline{\mu}.(v_p\; \bar{t_q})$ for
certain $(1\le p \le m$ and $1\le q \le m)$ and $(\te_m^2 \;v_m) \triangleright^*
\underline{\mu}.( v_p\; \bar{t_q})$ for certain $(1\le p \le m$ and $1\le
q \le m) \}$. Set $R=\{\bar{t_1},...,\bar{t_n}\} \to S $. By
definition $S$ is a $\mu$-saturated set. Let $\mathcal{M}=\langle
S;R\rangle$ and an $\mathcal{M}$-interpretation $I$ such that $I(P)=
R$. By Theorem \ref{closed}, $T \in [R \to S] \vee R$. Let $\te
\in R$, then, for all $i$, $(\te\; \bar{t_i}) \in S$. Let $\te' \in R
\to S$, hence $(\te'\; v_i) \in S$ since $v_i \in R$ (because $(v_i\;
\bar{t_i}) \in S$), therefore $(T\; [x_1.u_1,x_2.u_2]) \in S$.
\end{proof}
\section{Introduction}
In this paper we address the nature of the spectral measure for generalized Anderson type models with single site potentials of higher rank, or with constant randomness over several neighbouring sites. The basic setup of the problem is the following. We have a self adjoint operator $A$ on a separable Hilbert space $\mathscr{H}$, and rank $N$ projections $\{P_n\}_{n\in\mathcal{N}}$, where $\mathcal{N}$ is a countable or finite set. Given an absolutely continuous measure $\mu$ on $\mathbb R$, we define the set of operators
\begin{equation}\label{OpEq1}
H^\omega=A+\sum_{n\in\mathcal{N}}\omega_n P_n
\end{equation}
for $\{\omega_n\}_{n\in\mathcal{N}}\in\mathbb{R}^\mathcal{N}$ distributed independently and identically following the distribution $\mu$. This defines a map from the measure space $(\Omega,\mathscr{B},\mathbb{P})$ (the product measure space $(\mathbb{R}^\mathcal{N},\mathscr{B}(\mathbb{R}^\mathcal{N}),\otimes\mu)$) to the set of essentially self adjoint linear operators on $\mathscr{H}$. We are interested in the spectral measures of this family of operators.
In the case of the Anderson tight-binding model, we have the Hilbert space $l^2(\mathbb{Z}^d)$ on which we have the operator $\Delta$ defined by
$$(\Delta u)(n)=\sum_{|n-m|=1}u(m)\qquad\forall u\in l^2(\mathbb Z^d),n\in\mathbb Z^d$$
and the collection of rank one projection $\{|\delta_n\rangle\langle\delta_n|\}_{n\in\mathbb{Z}^d}$. Prior works \cite{JL1,JL2,BS2,BS1} proved simplicity of spectrum for such models using the property that $\{|\delta_n\rangle\langle\delta_n|\}_{n\in\mathbb Z^d}$ are rank one.
Similar to the tight-binding model, we have the random Schr\"{o}dinger operators, defined by
$$(H^\omega f)(x)=-\sum_{i=1}^d\frac{\partial^2 f}{\partial x^2_i}(x)+\sum_{p\in\mathbb Z^d}\omega_pG(x-p)f(x)\qquad\forall x\in\mathbb R^d,f\in C_c^\infty(\mathbb R^d)$$
where $G$ is a compactly supported function on $[0,1]^d$. Simplicity of the singular spectrum for this model is still an open problem.
These models are also considered on graphs (for example Bethe lattices and one dimensional strips). In the case of the Bethe lattice and Bethe strips, absolutely continuous spectrum was shown to exist \cite{FHS,K1,KS}. All these models show localization at high disorder \cite{AM} and so also have pure point spectrum.
The multiplicity of these spectra is not well understood for projection valued perturbations. Known results study rank one perturbations \cite{JL1,JL2} and cyclicity \cite{BS1,BS2}. In work by Naboko, Nichols and Stolz \cite{NNS}, such a problem is handled for the pure point part of the spectrum. Sadel and Schulz-Baldes \cite{SSH} studied quasi-one-dimensional stochastic Dirac operators, and provided conditions for singular and absolutely continuous spectrum in terms of the size of the fibres. They also proved that the multiplicity of the ac spectrum is $2$. In this paper we prove results similar to those in \cite{JL1} and \cite{JL2}.
Before stating the main results, we introduce some notation. For $n\in\mathcal{N}$ and $\omega\in\Omega$, define $\mathscr{H}_{n,\omega}$ as the cyclic subspace generated by $H^\omega$ (defined by \eqref{OpEq1}) and the vector space $P_n\mathscr{H}$ (this vector space is isomorphic to $\mathbb C^N$), and set $Q_n^\omega:\mathscr{H}\rightarrow\mathscr{H}_{n,\omega}$ as the canonical projection. Let $E_{H^\omega}$ be the spectral projection measure for the operator $H^\omega$; set $\Sigma^\omega_n(\cdot)=P_nE_{H^\omega}(\cdot)P_n$
(which is now a matrix valued measure) and set $\sigma_n^\omega(\cdot)=tr(\Sigma_n^\omega(\cdot))$ as the trace measure (these are finite measures). Let $P^\omega_{ac}$ be the orthogonal projection onto the absolutely continuous spectral subspace $\mathscr{H}_{ac}(H^\omega)$. For $n,m\in\mathcal{N}$, define
\begin{equation}\label{CorEq1}
E_{n,m}=\{\omega\in\Omega|\ Q_n^\omega P_m\text{ has the same rank as }P_m\}
\end{equation}
We will be working with the following set
$$\mathscr{M}=\{n\in\mathcal{N}|\ \sigma^\omega_n\text{ is not equivalent to Lebesgue measure for a.e }\omega\}$$
The reason for confining oneself to this set is a theorem of F. and M. Riesz \cite{RR}, which implies that \emph{any complex measure whose Borel transform is zero in $\mathbb C^{+}$ has to be absolutely continuous with respect to Lebesgue measure}. One can in fact prove that the total variation measure needs to be equivalent to Lebesgue measure; see \cite[Theorem 2.2]{JL2} for a proof. So, confining ourselves to $\mathscr{M}$, we get that the Borel transform of a non-zero measure $\sigma^\omega_n$, $n\in\mathscr{M}$, can never be identically zero in $\mathbb C^{+}$, and so one can use results about boundary values of analytic functions like \cite{BR2,BR1}. We state the main theorem:
\begin{thm}\label{MainThm}
For any $N\in\mathbb N$, let $\{P_n\}_{n\in\mathcal{N}}$ be a collection of rank $N$ projections such that $\sum_{n\in\mathcal{N}}P_n=I$, and let $\mu$ be an absolutely continuous measure on $\mathbb R$. Let $\{H^\omega\}_{\omega\in\Omega}$ be the family of operators defined as in \eqref{OpEq1}. Then
\begin{enumerate}
\item For $n,m\in\mathscr{M}$ we have $\mathbb{P}(E_{n,m})\in\{0,1\}$.
\item Let $n,m\in\mathscr{M}$ such that $\mathbb{P}(E_{n,m}\cap E_{m,n})=1$. For a.e $\omega\in\Omega$, the restrictions $P^\omega_{ac}H^\omega|_{\mathscr{H}_{n,\omega}}$ and $P^\omega_{ac}H^\omega|_{\mathscr{H}_{m,\omega}}$ are unitarily equivalent.
\item Let $n,m\in\mathscr{M}$ such that $\mathbb{P}(E_{n,m}\cap E_{m,n})=1$, for a.e $\omega\in\Omega$ the measures $\sigma^\omega_n$ and $\sigma^\omega_m$ are equivalent as Borel measures.
\end{enumerate}
\end{thm}
\begin{rem} Two examples for which the condition $\mathbb{P}(E_{n,m})=1$ can be verified are:
\begin{enumerate}
\item Consider $l^2(\mathbb Z)$ with the operator $H^\omega=\Delta+\sum_{n\in\mathbb Z}\omega_n P_n$, where $P_n=\sum_{k=0}^{N-1}|\delta_{Nn+k}\rangle\langle\delta_{Nn+k}|$; we have $\mathbb{P}(E_{n,m})=1$ for each $n,m\in\mathbb Z$. This is because for $n_1,n_2\in\mathbb Z$ we have $\dprod{\delta_{n_1}}{(H^\omega)^{|n_1-n_2|}\delta_{n_2}}=1$, and hence all cyclic subspaces intersect each other non-trivially. If we consider $l^2(\mathbb Z^d)$ with some enumeration of its basis $\{\delta_{n_k}\}_{k\in\mathbb Z}$ and define $P_n$ as before, we can again prove $\mathbb{P}(E_{n,m})=1$.
\item Consider the Hilbert space $\oplus_{i=1}^N l^2(\mathbb Z)$, with $\Delta$ acting as the adjacency operator on each summand separately. Let $\pi_i:\mathbb Z\rightarrow\mathbb Z$ be surjective maps for $i=1,\cdots,N$, and define $P_n=\sum_{i=1}^N |\delta^i_{\pi_i(n)}\rangle\langle\delta^i_{\pi_i(n)}|$, where $\{\delta_n^i\}_{n\in\mathbb Z}$ is the canonical basis of the $i$-th copy of $l^2(\mathbb Z)$. Then in this case also $\mathbb{P}(E_{n,m})=1$.
\end{enumerate}
In case the measure $\mu$ has compact support on $\mathbb R$ and $A$ is bounded, none of the $\sigma^\omega_n$ can have full support on $\mathbb{R}$, and so $\mathscr{M}=\mathcal{N}$, as in the rank $1$ case of Jak\v{s}i\'{c} and Last \cite{JL2}.
\end{rem}
The approach to gain information about the spectral measure is by using the matrix valued function:
$$P_n(H^\omega-z)^{-1}P_n:P_n\mathscr{H}\rightarrow P_n\mathscr{H}$$
for $z\in\mathbb C^{+}$. Since we will be working with $n\in\mathscr{M}$, it is enough to look at the above matrices. These are termed matrix valued Herglotz functions or Birman-Schwinger operators. The Birman-Schwinger principle was developed for compact perturbations in \cite{BMS,SJ}, and some notable applications can be found in \cite{CS,KM,BS3}.
We will work with the above as matrix valued Herglotz functions, whose properties can be found in \cite{GT1}. By combining Theorem \ref{FgetThm1} (see \cite[Theorem 5.4]{GT1} for a proof) and Theorem \ref{SpecThm}, we obtain conditions in terms of $\lim_{\epsilon\downarrow0}P_n(H^\omega-E-\iota\epsilon)^{-1}P_n$.
The second and third parts of Theorem \ref{MainThm} are consequences of perturbation by two projections, and the first part follows from the Kolmogorov $0$-$1$ law. Lemma \ref{InvLem2} is the primary step for the first part of the main theorem. It tells us that the event $E_{n,m}$ (\emph{$Q^\omega_nP_m$ has the same rank as $P_m$}) is independent of any other perturbation, whence the Kolmogorov $0$-$1$ law applies. For the second part, whenever the condition is satisfied, we have to show that for $E$ in a
set of full measure, the density of the measure has the same rank for both indices; this is done in Corollary \ref{AbsEqu1}. For the last part, the second part of Theorem \ref{MainThm} helps by asserting that the absolutely continuous parts are equivalent, and for the singular part we only need to consider the lowest (Hausdorff) dimensional part. This is the case because we are using Poltoratskii's theorem \cite{POL1}, and the lowest dimensional part of the spectrum contributes the maximum rate of growth to the
Herglotz function as its argument approaches the boundary of $\mathbb C^{+}$. Corollary \ref{SingLem3} gives the equivalence for the lowest dimensional parts of the measure.
Before attempting to handle the problem, it is important to note that the \emph{set of perturbations where the procedure may not be applicable is a measure zero set}. Lemma \ref{MbleVar1} gives such a statement, and also tells us that for almost every perturbation the measure of the singular part (w.r.t. Lebesgue measure) is zero.
\section{Preliminaries}
The following lemma is a result concerning the zero sets of polynomials. This lemma helps in the proof of our main theorem by ensuring that for almost every perturbation the set where the singular part lives has measure zero.
\begin{lemma}\label{MbleVar1}
For a $\sigma$-finite positive measure space $(X,\mathscr{B},m)$, and a collection of measurable functions $a_i:X\rightarrow\mathbb C$, define the function $f(\lambda,x)=1+\sum_{n=1}^N \lambda^n a_n(x)$. The set defined by
\begin{equation}\label{solset1}
\Lambda_f=\{\lambda\in\mathbb C|m\{x\in X| f(\lambda,x)=0\}>0\}
\end{equation}
is countable.
\end{lemma}
\begin{proof}
The proof is by induction on degree of $f$ (as a polynomial of $\lambda$). We will use the notation:
\begin{equation}\label{MV1Eq1}
S_\lambda=\{x\in X| f(\lambda,x)=0\}
\end{equation}
By definition the sets $S_\lambda$ are measurable.
The base case of the induction is $N=1$, so $f(\lambda,x)=1+\lambda a_1(x)$. Clearly for $\lambda_1\neq \lambda_2\in\mathbb C$ we have $S_{\lambda_1}\cap S_{\lambda_2}=\phi$. Indeed, if $x\in S_{\lambda_1}\cap S_{\lambda_2}$ then
\begin{align*}
&1+\lambda_1 a_1(x)=0\ \&\ 1+\lambda_2 a_1(x)=0\\
\Rightarrow\qquad &\frac{1}{\lambda_1}=-a_1(x)=\frac{1}{\lambda_2}\\
\Rightarrow\qquad &\lambda_1=\lambda_2
\end{align*}
but we assumed $\lambda_1\neq\lambda_2$. Since $(X,m)$ is $\sigma$-finite, we have a countable collection $\{X_i\}_{i\in\mathbb N}$ such that $\cup_i X_i=X$ and for each $i$ we have $m(X_i)<\infty$. Now for each $\lambda\in\mathbb C$ and $n\in\mathbb N$ define $S_{\lambda,n}=S_\lambda\cap X_n$, so we have $\cup_n S_{\lambda,n}=S_\lambda$, and $\cup_{\lambda\in\Lambda_f}S_{\lambda,n}\subset X_n$. We have
\begin{align*}
\sum_{\lambda\in\Lambda_f}m(S_{\lambda,n})=m(\cup_{\lambda\in\Lambda_f}S_{\lambda,n})\leq m(X_n)<\infty,
\end{align*}
so only for countably many $\lambda\in\Lambda_f$ do we have $m(S_{\lambda,n})\neq 0$. Set $\Lambda_n=\{\lambda\in\Lambda_f| m(S_{\lambda,n})>0\}$; we have $\Lambda_f=\cup_{n\in\mathbb N}\Lambda_n$, and since a countable union of countable sets is countable, we get that $\Lambda_f$ is countable. This completes the base case.
Now assume the induction hypothesis, i.e. that for measurable functions $a_i:X\rightarrow\mathbb C$ and $f(\lambda,x)=1+\sum_{n=1}^{N}\lambda^n a_n(x)$, the set $\Lambda_f$ is countable.
We have to show that for $f(\lambda,x)=1+\sum_{n=1}^{N+1}\lambda^n a_n(x)$ the set $\Lambda_f$ is countable. First we define the relation $\sim$ on elements of $\Lambda_f$: for $\mu,\nu\in\Lambda_f$ we define $\mu\sim\nu$ if there exists $\{\lambda_i\}_{i=1}^k$ such that $\lambda_1=\mu$, $\lambda_k=\nu$ and $m(S_{\lambda_i}\cap S_{\lambda_{i+1}})>0$ for $i=1,\cdots,k-1$. For $\mu\in\Lambda_f$ we have $\mu\sim\mu$ because $m(S_\mu)>0$, hence $\sim$ is reflexive. If $\mu\sim\nu$ for $\mu,\nu\in\Lambda_f$, then we have a sequence $\{\lambda_i\}_{i=1}^k$ such that $\lambda_1=\mu$, $\lambda_k=\nu$ and $m(S_{\lambda_i}\cap S_{\lambda_{i+1}})>0$; choosing $\tilde{\lambda}_i=\lambda_{k-i+1}$ we get $\nu\sim \mu$, and so $\sim$ is symmetric. If $\mu\sim\nu$ and $\nu\sim\eta$, then we have sequences $\{\alpha_i\}_{i=1}^p$ and $\{\beta_i\}_{i=1}^q$ such that $\alpha_1=\mu$, $\alpha_p=\beta_1=\nu$ and $\beta_q=\eta$; so defining the sequence $\{\lambda_i\}_{i=1}^{p+q}$ by $\lambda_i=\alpha_i$ for $i\leq p$
and $\lambda_{i}=\beta_{i-p}$ for $i>p$, we get $\mu\sim\eta$, giving transitivity of $\sim$.
So $\sim$ is an equivalence relation on $\Lambda_f$, and we can break the set $\Lambda_f$ into equivalence classes indexed by $\tilde{\Lambda}=\Lambda_f/\sim$, where we view $[\lambda]\in\tilde{\Lambda}$ as $[\lambda]=\{\mu\in\Lambda_f|\mu\sim\lambda\}$ and define $S_{[\lambda]}=\cup_{\mu\in[\lambda]}S_\mu$.
First we will show that for any $[\lambda]\in\tilde{\Lambda}$ the set $[\lambda]$ is countable. Let $\lambda\in\Lambda_f$, so that $m(S_\lambda)\neq0$. We restrict to the subspace $S_\lambda$; on this space $f(\nu,x)$ can be written as $f(\nu,x)=\frac{1}{\lambda}(\lambda-\nu)\left(1+\sum_{n=1}^N \tilde{a}_n(x)\nu^n\right)$ (since $\lambda$ is a solution). So we have the new function $\tilde{f}(\nu,x)=1+\sum_{n=1}^N \tilde{a}_n(x)\nu^n$, and by the induction hypothesis the set $\Lambda_{\tilde{f}}$ is countable. For any $\nu\in\Lambda_f$, $m(S_\lambda\cap S_\nu)\neq 0$ implies $\nu\in\Lambda_{\tilde{f}}$, so for fixed $\lambda\in\Lambda_f$ the set of $\nu\in\Lambda_f$ such that $m(S_\lambda\cap S_\nu)\neq 0$ is countable.
Next choose $\lambda\in\Lambda_f$, and set $A_0=\{\lambda\}$, and define
$$A_i=\cup_{\beta \in A_{i-1}}\{\nu\in\Lambda_f| m(S_\nu\cap S_\beta)\neq 0\}\qquad\forall i\in\mathbb N$$
By the previous step, each $A_{i}$ is countable, so $\cup_{i=0}^\infty A_{i}$ is countable. By the definition of $\sim$ we have $[\lambda]=\cup_{i=0}^\infty A_{i}$.
Now we will prove that $\tilde{\Lambda}$ is countable. By definition $m(S_{[\lambda]})>0$ for $[\lambda]\in\tilde{\Lambda}$, and for $[\lambda]\neq [\mu]\in\tilde{\Lambda}$ we have $m(S_{[\lambda]}\cap S_{[\mu]})=0$. For $n\in\mathbb N$ define $S_{[\lambda],n}=S_{[\lambda]}\cap X_n$; then we have
\begin{align*}
\sum_{[\lambda]\in\tilde{\Lambda}}m(S_{[\lambda],n})=m(\cup_{[\lambda]\in\tilde{\Lambda}}S_{[\lambda],n})\leq m(X_n)<\infty
\end{align*}
From the last step, only countably many $[\lambda]$ can have $m(S_{[\lambda],n})> 0$. Call $\tilde{\Lambda}_n=\{[\lambda]\in\tilde{\Lambda}| m(S_{[\lambda],n})>0\}$ (which is countable); for any $[\lambda]\in\tilde{\Lambda}$ we have
$$0<m(S_{[\lambda]})\leq \sum_{n\in\mathbb N}m(S_{[\lambda],n})$$
So for any $[\lambda]\in \tilde{\Lambda}$ there is some $n\in\mathbb N$ with $m(S_{[\lambda],n})>0$; hence $\tilde{\Lambda}=\cup_{n\in\mathbb N}\tilde{\Lambda}_n$, giving us that $\tilde{\Lambda}$ is countable.
Since $\Lambda_f=\cup_{[\lambda]\in\tilde{\Lambda}}[\lambda]$, where $\tilde{\Lambda}$ and each class $[\lambda]$ are countable, we get the countability of $\Lambda_f$.
\end{proof}
\begin{rem}
It should be clear that the above result holds for a function of the type $f(\lambda,x)=\sum_{n=0}^N a_n(x)\lambda^n$ on the set $\{x\in X|a_0(x)\neq 0\}$. One should note that the result cannot be extended to the whole of $X$.
We can view $f(\lambda,x)=\lambda^N\left(\sum_{n=0}^N a_{N-n}(x)\left(\frac{1}{\lambda}\right)^n\right)$, and so the result also holds on the set $\{x\in X|a_N(x)\neq 0\}$.
\end{rem}
\begin{cor}\label{MbleVar2}
For a $\sigma$-finite positive measure space $(X,\mathscr{B},m)$ and a collection of functions $a_i:X\rightarrow\mathbb C$, $b_i:X\rightarrow\mathbb C$, define the function $f(\lambda,x)=\frac{1+\sum_{i=1}^N a_i(x)\lambda^i}{1+\sum_{i=1}^N b_i(x)\lambda^i}$, then the set
\begin{equation}\label{solset2}
\Lambda_f=\{\lambda\in\mathbb C|m\{x\in X| f(\lambda,x)=0\}\neq0\}
\end{equation}
is countable.
\end{cor}
\begin{proof}
Set $g(\lambda,x)=1+\sum_{n=1}^N a_n(x)\lambda^n$; then $\{(x,\lambda)\in X\times\mathbb C| f(\lambda,x)=0\}\subseteq \{(x,\lambda)\in X\times\mathbb C| g(\lambda,x)=0\}$. So by Lemma \ref{MbleVar1} we get the desired result.
\end{proof}
We will need the following spectral averaging result (see \cite[Corollary 4.2]{CH} for a proof):
\begin{lemma}\label{SpecAvaLem}
Let $E_\lambda(\cdot)$ be the spectral family for the operator $A_\lambda=A+\lambda P$, where $A$ is self adjoint operator, and $P$ is a rank $N$ projection. Then for $M\subset\mathbb R$ such that $|M|=0$ (Lebesgue measure), we have $PE_\lambda(M)P=0$ for Lebesgue almost all $\lambda$.
\end{lemma}
This lemma guarantees that we can omit any Lebesgue measure zero set from the analysis that follows. The following lemma from \cite[Proposition 2.1]{JL1} will be used extensively, as it guarantees the existence of the limits almost surely. We denote by $\mathscr{H}_\phi$ the cyclic subspace generated by $A$ and $\phi\in\mathscr{H}$.
\begin{lemma}\label{NonZeroTms1}
Let $A$ be a self adjoint operator on a separable Hilbert space $\mathscr{H}$ with $\phi,\psi\in\mathscr{H}$ such that $\mathscr{H}_\phi\not\perp\mathscr{H}_\psi$. Then for a.e $E\in\mathbb R$ (Lebesgue) the limit
$$\lim_{\epsilon\downarrow 0}\dprod{\phi}{(A-E-\iota\epsilon)^{-1}\psi}=\dprod{\phi}{(A-E-\iota 0)^{-1}\psi}$$
exists and is non-zero.
\end{lemma}
We note that the limit always exists for a.e $E$, and it is non-zero if and only if $\mathscr{H}_\phi\not\perp\mathscr{H}_\psi$.
We will need Poltoratskii's theorem \cite{POL1}.
\begin{thm}\label{PolThm}
For any complex valued Borel measure $\mu$ on $\mathbb R$ and for $f\in L^1(\mathbb R,d\mu)$, with Borel transform $F_\mu(z)=\int\frac{d\mu(x)}{x-z}$, we have
$$\lim_{\epsilon\rightarrow 0}\frac{F_{f\mu}(E+\iota\epsilon)}{F_{\mu}(E+\iota\epsilon)}=f(E)$$
for a.e $E$ with respect to the singular part of $\mu$.
\end{thm}
A proof can be found in \cite{JL3}. This theorem will be used in the proof of the equivalence of the measures for the singular part, in Lemma \ref{SingPtSpec1} and Corollary \ref{SingLem3}.
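As a simple illustration, take $\mu=\delta_{E_0}$, a unit point mass at $E_0$: then $F_\mu(z)=(E_0-z)^{-1}$ and $F_{f\mu}(z)=f(E_0)(E_0-z)^{-1}$, so the ratio equals $f(E_0)$ for every $z$; here the singular part of $\mu$ is $\mu$ itself, in accordance with the theorem.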
\section{Perturbation by finite Rank Projection}
In this section we will be working with $(A,\mathscr{H},\{P_i\}_{i=1}^3)$, where $A$ is a self adjoint operator on the Hilbert space $\mathscr{H}$, and $\{P_i\}_{i=1}^3$ are three rank $N$ projections. We will work with the case that the measures $tr(P_iE_A(\cdot)P_i)$ are not equivalent to Lebesgue measure (hence, using the Riesz theorem \cite{RR}, the Borel transforms of these measures are non-zero on the upper half plane). Define $A_\mu=A+\mu P_1$, $G_{ij}(z)=P_i(A-z)^{-1}P_j$ and $G^\mu_{ij}(z)=P_i(A_\mu-z)^{-1}P_j$ for $i,j=1,2,3$ and $z\in\mathbb C^{+}$; we will use the notation
$$g(E+\iota0):=\lim_{\epsilon\downarrow0}g(E+\iota\epsilon)$$
for $E\in\mathbb R$ (whenever the limit exists). Using the relation $A^{-1}-B^{-1}=B^{-1}(B-A)A^{-1}=A^{-1}(B-A)B^{-1}$, we have
\begin{align}
&G^\mu_{11}(z)=G_{11}(z)(I+\mu G_{11}(z))^{-1}\label{MHEq1}\\
&(I+\mu G_{11}(z))(I-\mu G^\mu_{11}(z))=I\label{MHEq3}\\
&G^\mu_{ij}(z)=G_{ij}(z)-\mu G_{i1}(z)(I+\mu G_{11}(z))^{-1}G_{1j}(z)\qquad(i,j)\neq(1,1)\label{MHEq2}
\end{align}
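For instance, \eqref{MHEq1} follows by sandwiching the second resolvent identity $(A_\mu-z)^{-1}=(A-z)^{-1}-\mu (A-z)^{-1}P_1(A_\mu-z)^{-1}$ between two copies of $P_1$ and using $P_1=P_1^2$:
$$G^\mu_{11}(z)=G_{11}(z)-\mu G_{11}(z)G^\mu_{11}(z),$$
that is, $(I+\mu G_{11}(z))G^\mu_{11}(z)=G_{11}(z)$, which gives \eqref{MHEq1} since $G_{11}(z)$ commutes with $(I+\mu G_{11}(z))^{-1}$.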
For any $E\in\mathbb R$ such that $G_{11}(E+\iota0)$ exists and is finite, and any $f:(0,\infty)\rightarrow\mathbb C$ such that $\lim_{\epsilon\rightarrow 0}f(\epsilon)=0$, consider $\lim_{\epsilon\downarrow 0} f(\epsilon) G_{11}^\mu(E+\iota\epsilon)$; using equation \eqref{MHEq3},
\begin{align*}
&\lim_{\epsilon\downarrow 0} f(\epsilon)(I-\mu G^\mu_{11}(E+\iota\epsilon))(I+\mu G_{11}(E+\iota\epsilon))-f(\epsilon) I=0\\
&(I+\mu G_{11}(E+\iota 0))\left(\lim_{\epsilon\downarrow 0}f(\epsilon) G^\mu_{11}(E+\iota\epsilon)\right)=0
\end{align*}
So we get
\begin{equation}\label{RkNuEq1}
range\left(\lim_{\epsilon\downarrow 0}f(\epsilon) G^\mu_{11}(E+\iota\epsilon)\right)\subseteq ker(I+\mu G_{11}(E+\iota 0))\subseteq ker(\Im G_{11}(E+\iota 0))
\end{equation}
Since $\Im G_{11}(E+\iota 0)\geq 0$, it decomposes the space $P_1\mathscr{H}=ker(\Im G_{11}(E+\iota 0))\oplus ker(\Im G_{11}(E+\iota 0))^\bot$ with $range(\Im G_{11}(E+\iota 0))=ker(\Im G_{11}(E+\iota 0))^\bot$, so on $ker(\Im G_{11}(E+\iota 0))^\bot$ we have $\Im G_{11}(E+\iota 0)>0$. This fact will be used in identifying appropriate subspaces.
We will need some preliminary results before we attempt to prove our main results. The following lemma relates the invertibility of the matrices $G^\mu_{12}(z)$ to the ranks of $Q_1P_2$ and $P_2$.
\begin{lemma}\label{InvLem1}
Let $A$ be a self-adjoint operator on a Hilbert space $\mathscr{H}$ and let $P_1$ and $P_2$ be two projections of rank $N$. Let $\mathscr{H}_i$ denote the cyclic subspace generated by $A$ and $P_i\mathscr{H}$, and let $Q_i:\mathscr{H}\rightarrow\mathscr{H}_i$ be the canonical projection onto that subspace, for $i=1,2$. If $Q_1P_2$ has the same rank as $P_2$, then $P_1(A-z)^{-1}P_2$ is invertible for a.e $z\in\mathbb C^{+}$.
\end{lemma}
\begin{proof}
Let $\phi\in P_2\mathscr{H}\setminus\{0\}$. Since $Q_1P_2$ has the same rank as $P_2$, we have $0\neq Q_1\phi\in\mathscr{H}_1$ (if it were zero, then $ker(Q_1)\cap P_2\mathscr{H}\neq\{0\}$ and so $rank(Q_1 P_2)<rank(P_2)$), so there are $\psi\in P_1\mathscr{H}$ and $f\in L^2(\mathbb R,d\mu_\psi)$ such that $Q_1\phi=f(A)\psi$. So
\begin{align*}
0\neq\dprod{Q_1\phi}{Q_1\phi}=\dprod{\psi}{f^\ast(A)Q_1\phi}=\dprod{\psi}{f^\ast(A)\phi}=\int \bar{f}(x)d\mu_{\psi,\phi}(x)
\end{align*}
since $Q_1$ commutes with any function of $A$. So the measure $\mu_{\psi,\phi}$ is non-zero; hence the Borel transform
$$\int\frac{d\mu_{\psi,\phi}(x)}{x-z}=\dprod{\psi}{(A-z)^{-1}\phi},$$
is non-zero for a.e $z\in\mathbb C^{+}$.
So for each vector $\phi\in P_2\mathscr{H}$ there exists a $\psi\in P_1\mathscr{H}$ such that $\dprod{\psi}{(A-z)^{-1}\phi}$ is non-zero; in other words, $P_1(A-z)^{-1}P_2$ is an injection, and since $P_1(A-z)^{-1}P_2$ is an $N\times N$ matrix we get invertibility.
\end{proof}
\begin{rem}
The lemma above also ensures that for almost all $E$ the matrix valued function $P_1(A-E-\iota 0)^{-1}P_2$ is invertible.
Conversely, the invertibility of $P_1(A-z)^{-1}P_2$ for some $z\in\mathbb C^{+}$ gives us that $Q_1P_2$ has the same rank as $P_2$. So by looking at $\det(G_{mn}(z))$ we can obtain a statement about the non-orthogonality of the subspaces $\{\mathscr{H}_i\}_{i=1,2}$.
\end{rem}
Choose a basis of $P_i\mathscr{H}$, then $G_{ij}(z)$ is a matrix in that basis. We can write
\begin{equation}\label{WorkSet1}
\hspace{-0.3cm}S=\{E\in\mathbb R|\ \text{the entries of $G_{ij}(E+\iota 0)$ exist and are finite } \forall i,j=1,2,3\}
\end{equation}
Then by lemma \ref{NonZeroTms1} we know that $S$ has full measure. Define
\begin{equation}\label{WorkSet2}
S_{ij}=\{E\in S| G_{ij}(E+\iota 0) \text{ is invertible}\}\qquad\forall i,j=1,2,3
\end{equation}
By Lemma \ref{InvLem1}, $S_{ij}$ has full measure whenever $Q_iP_j$ has the same rank as $P_j$.
\begin{rem}\label{InvRem1}
On the set $S$, the limit $G_{11}(E+\iota 0)$ exists, and since $\det(I+\mu G_{11}(E+\iota 0))=1+\sum_{i=1}^N a_i(E)\mu^i$, by Lemma \ref{MbleVar1}, for almost all $\mu$ the matrix $I+\mu G_{11}(E+\iota 0)$ is invertible on a set of full measure.
\end{rem}
\begin{lemma}\label{InvLem2}
Let $A$ be self adjoint operator on Hilbert space $\mathscr{H}$ and $\{P_i\}_{i=1}^3$ be rank $N$ projections. Define $A_\mu=A+\mu P_1$, $G_{ij}(z)=P_i(A-z)^{-1}P_j$ and $G^\mu_{ij}(z)=P_i(A_\mu-z)^{-1}P_j$. If $G_{23}(E+\iota 0)$ is invertible for a.e $E$, then $G^\mu_{23}(E+\iota 0)$ is also invertible for a.e $(E,\mu)$.
\end{lemma}
\begin{proof}
From equations \eqref{MHEq1} and \eqref{MHEq2} and remark \ref{InvRem1} we get
$$G^\mu_{23}(E+\iota 0)=G_{23}(E+\iota 0)-\mu G_{21}(E+\iota 0)(I+\mu G_{11}(E+\iota 0))^{-1}G_{13}(E+\iota 0)$$
since we are only interested in invertibility, looking at the determinant is enough. And so
$$\det(G^\mu_{23}(E+\iota 0))=\frac{\det(G_{23}(E+\iota 0))+\sum a_n(E)\mu^n}{\det(I+\mu G_{11}(E+\iota 0))}$$
Again by Corollary \ref{MbleVar2} we get that for almost all $\mu$ the matrix $G^\mu_{23}(E+\iota 0)$ is invertible on a set of full measure.
\end{proof}
The next lemma provides the relation between the absolutely continuous components of the measures.
\begin{lemma}\label{AbsPtSpec1}
On a Hilbert space $\mathscr{H}$ we have two rank $N$ projections $P_1,P_2$ and a self adjoint operator $A$. Set $A_\mu=A+\mu P_1$, $G_{ij}(z)=P_i(A-z)^{-1}P_j$ and $G_{ij}^\mu(z)=P_i(A_\mu-z)^{-1}P_j$; set $S$ and $S_{12}$ as in \eqref{WorkSet1} and \eqref{WorkSet2}. Define
$$V_{E,i}^\mu=ker(\Im G_{ii}^\mu(E+\iota 0))^{\bot}$$
for each $E\in S\cap\{x\in\mathbb R|\ \lim_{\epsilon\downarrow0} G_{11}^\mu(x+\iota\epsilon)\ \text{exists and finite}\}$. Assume $S_{12}$ has full measure. Then for a.e $\mu$
$$(G_{12}(E+\iota 0))^{-1}:V_{E,1}^\mu\rightarrow V_{E,2}^\mu$$
is injective, and
$$(I+\mu G_{11}(E+\iota 0)):V_{E,1}^0\rightarrow V_{E,1}^\mu$$
is isomorphism.
\end{lemma}
\begin{proof}
From the equation \eqref{MHEq2} and \eqref{MHEq3} we get
$$G^\mu_{22}(z)=G_{22}(z)-\mu G_{21}(z)G_{12}(z)+\mu^2 G_{21}(z)G_{11}^\mu(z)G_{12}(z)$$
For $E\in S\cap \{x\in\mathbb R|\ \lim_{\epsilon\downarrow0} G_{11}^\mu(x+\iota\epsilon)\ \text{exists and finite}\}$, let $v\in V_{E,1}^\mu$, and set $\phi=(G_{12}(E+\iota 0))^{-1}v$; observe (every quantity on the RHS below exists and is finite, so the limit can be taken)
\begin{align*}
\lim_{\epsilon\downarrow 0}\dprod{\phi}{(\Im G_{22}^\mu(E+\iota\epsilon))\phi}&=\lim_{\epsilon\downarrow 0} \left[ \dprod{\phi}{(\Im G_{22}(E+\iota\epsilon))\phi}-\mu\dprod{\phi}{\Im( G_{21}(E+\iota\epsilon)G_{12}(E+\iota \epsilon))\phi}\right.\\
&\qquad+\left.\mu^2\dprod{\phi}{(\Im G_{21}(E+\iota\epsilon)G^\mu_{11}(E+\iota \epsilon)G_{12}(E+\iota \epsilon))\phi}\right]
\end{align*}
Since $\Im G_{22}^\mu(E+\iota 0)$ is a positive matrix, looking at $\dprod{\phi}{(\Im G_{22}^\mu(E+\iota 0))\phi}$ is enough.
Suppose $\dprod{\phi}{(\Im G_{22}(E+\iota0))\phi}=0$; this implies $(\Im G_{22}(E+\iota0))\phi=0$, so using \eqref{eq4} we have $G_{12}(E+\iota 0)\phi=G_{21}^\ast(E+\iota 0)\phi$, and so
\begin{align*}
\lim_{\epsilon\downarrow 0}\dprod{\phi}{(\Im G_{22}^\mu(E+\iota\epsilon))\phi}&=\mu^2\dprod{G_{12}(E+\iota 0)\phi}{(\Im G^\mu_{11}(E+\iota 0))G_{12}(E+\iota 0)\phi}\\
&\qquad-\mu\dprod{\phi}{\Im( G_{21}(E+\iota0)G_{12}(E+\iota 0))\phi}\\
&=\mu^2\dprod{v}{(\Im G_{11}^\mu(E+\iota 0))v}
\end{align*}
So $\phi\in V^\mu_{E,2}$ and hence $G_{12}(E+\iota 0)^{-1}$ gives the injection.
For the other assertion, let $v\in V_{E,1}^0$ observe
$$\dprod{v}{(I+\mu G_{11}(E+\iota 0))v}=\norm{v}_2^2+\mu(\dprod{v}{\Re G_{11}(E+\iota 0)v}+\iota \dprod{v}{\Im G_{11}(E+\iota 0)v})$$
since $\dprod{v}{\Im G_{11}(E+\iota 0)v}\neq 0$, the above expression cannot be zero for any $\mu\in \mathbb R$. So on $V_{E,1}^0$ the operator $(I+\mu G_{11}(E+\iota 0))$ is invertible. Set $\phi=(I+\mu G_{11}(E+\iota 0))v$; observe
\begin{align*}
\hspace{-1cm}\lim_{\epsilon\rightarrow 0}\dprod{\phi}{(\Im G_{11}^\mu(E+\iota\epsilon))\phi}&=\lim_{\epsilon\rightarrow 0}\dprod{\phi}{\Im (G_{11}(E+\iota\epsilon)(I+\mu G_{11}(E+\iota\epsilon))^{-1})\phi}\\
\hspace{-1cm}&=\dprod{(I+\mu G_{11}(E+\iota0))^{-1}\phi}{(\Im G_{11}(E+\iota 0))(I+\mu G_{11}(E+\iota0))^{-1}\phi}\\
\hspace{-1cm}&=\dprod{v}{(\Im G_{11}(E+\iota 0))v}\neq0
\end{align*}
This gives the isomorphism $(I+\mu G_{11}(E+\iota 0)):V_{E,1}^0\rightarrow V_{E,1}^\mu$.
\end{proof}
This only gives an injection between the absolutely continuous spectral subspaces; one cannot expect more in this setting. By a second perturbation we obtain an isomorphism, which is attained in the next corollary.
\begin{cor}\label{AbsEqu1}
Let $A$ be a self adjoint operator on a Hilbert space $\mathscr{H}$, and let $P_1,P_2$ be two rank $N$ projections. Set $A_{\mu_1,\mu_2}=A+\mu_1 P_1+\mu_2 P_2$ and $G_{ij}(z)=P_i(A-z)^{-1}P_j$, $G^{\mu_1,\mu_2}_{ij}(z)=P_i(A_{\mu_1,\mu_2}-z)^{-1}P_j$ for $i,j=1,2$, and define the vector space
$$V_{E,i}^{\mu_1,\mu_2}=ker(\Im G_{ii}^{\mu_1,\mu_2}(E+\iota 0))^\bot$$
for each $E\in S\cap \{x\in\mathbb R|\ \lim_{\epsilon\downarrow0} G_{ii}^{\mu_1,\mu_2}(x+\iota\epsilon)\ \text{exists and finite for }i=1,2\}$. Assume $S_{12}$, $S_{21}$ have full measure. Then for a.e $\mu_1,\mu_2$ the two vector spaces $V_{E,1}^{\mu_1,\mu_2}$ and $V_{E,2}^{\mu_1,\mu_2}$ are isomorphic.
\end{cor}
\begin{proof}
This is just an application of Lemma \ref{AbsPtSpec1}. For $E$ in a set of full measure we have
\begin{align*}
V_{E,2}^{\mu_1,\mu_2}\hookrightarrow V_{E,1}^{\mu_1,\mu_2}
\end{align*}
where the map is $(G_{21}^{\mu_1,0}(E+\iota 0))^{-1}$. Lemma \ref{InvLem2} tells us that $G_{21}^{\mu_1,0}(E+\iota 0)$ is also invertible for almost all $\mu_1$. Now we can do the same thing the other way around:
\begin{align*}
V_{E,1}^{\mu_1,\mu_2}\hookrightarrow V_{E,2}^{\mu_1,\mu_2}
\end{align*}
Since we are working in finite dimensional spaces ($V_{E,i}^{\mu_1,\mu_2}$ are finite dimensional), injections in both directions tell us that they are isomorphic.
\end{proof}
The next lemma is similar to Lemma \ref{AbsPtSpec1}, but for the singular part. The conclusion concerns subspaces where the growth of the Herglotz function is maximal, or equivalently where its associated measure has the lowest (Hausdorff) dimension. We will use the fact that the matrix valued measure $\Sigma_n(\cdot)=P_n E_A(\cdot) P_n$ is absolutely continuous with respect to the trace measure $\sigma_n(\cdot)=tr(\Sigma_n(\cdot))$, and so $\lim_{\epsilon\downarrow 0} \frac{1}{\sigma_n(E+\iota\epsilon)}\Sigma_n(E+\iota\epsilon)=M(E)$ is $L^1$ w.r.t. the singular part of $\sigma_n$ (here $\sigma_n(z),\Sigma_n(z)$ denote the corresponding Borel transforms).
\begin{lemma}\label{SingPtSpec1}
On a Hilbert space $\mathscr{H}$ we have two rank $N$ projections $P_1,P_2$ and a self adjoint operator $A$. Set $A_\mu=A+\mu P_1$, $G_{ij}(z)=P_i(A-z)^{-1}P_j$ and $G_{ij}^\mu(z)=P_i(A_\mu-z)^{-1}P_j$. Set $f_E(\epsilon)=tr(G_{11}^\mu(E+\iota\epsilon))^{-1}$ and let $E\in\mathbb R$ be such that $f_E(\epsilon)\xrightarrow{\epsilon\downarrow 0}0$; define
$$\tilde{V}_{E,i}^\mu=ker\left(\lim_{\epsilon\downarrow 0}f_E(\epsilon)G_{ii}^\mu(E+\iota\epsilon)\right)^\bot$$
Assume $S_{12}$, defined as in \eqref{WorkSet2}, has full measure. Then for $E\in S$ (defined as in \eqref{WorkSet1}) such that $f_E(\epsilon)\xrightarrow{\epsilon\downarrow 0}0$, the map
$$(G_{12}(E+\iota 0))^{-1}: \tilde{V}^\mu_{E,1}\rightarrow \tilde{V}^\mu_{E,2}$$
is injective. So the singular part of the measure $\sigma_2^\mu$ (where $\sigma^\mu_i(\cdot)=tr\left(P_i E_{A_\mu}(\cdot)P_i\right)$) is absolutely continuous with respect to the singular part of $\sigma_1^\mu$.
\end{lemma}
\begin{proof}
Taking $i=j=2$ in \eqref{MHEq2}, we have
$$G_{22}^\mu(z)=G_{22}(z)-\mu G_{21}(z)G_{12}(z)+\mu^2 G_{21}(z)G_{11}^\mu(z)G_{12}(z)$$
Since we are working with $E\in S$, the limits $G_{ij}(E+\iota 0)$ exist for $i,j=1,2$. For $\phi,\psi\in P_2\mathscr{H}$ we have
\begin{align*}
\dprod{\psi}{G_{22}^\mu(E+\iota\epsilon)\phi}&=\dprod{\psi}{G_{22}(E+\iota\epsilon)\phi}-\mu \dprod{\psi}{G_{21}(E+\iota\epsilon)G_{12}(E+\iota\epsilon)\phi}\\
&\qquad+\mu^2 \dprod{\psi}{G_{21}(E+\iota\epsilon)G_{11}^\mu(E+\iota\epsilon)G_{12}(E+\iota\epsilon)\phi}\\
\lim_{\epsilon\downarrow0}f_E(\epsilon)\dprod{\psi}{G_{22}^\mu(E+\iota\epsilon)\phi}&=\mu^2\lim_{\epsilon\downarrow 0}f_E(\epsilon)\dprod{\psi}{G_{21}(E+\iota\epsilon)G_{11}^\mu(E+\iota\epsilon)G_{12}(E+\iota\epsilon)\phi}\\
&=\mu^2 \dprod{\psi}{G_{21}(E+\iota0)\left(\lim_{\epsilon\downarrow0}f_E(\epsilon) G_{11}^\mu(E+\iota\epsilon)\right)G_{12}(E+\iota 0)\phi}
\end{align*}
And now using \eqref{RkNuEq1} and \eqref{eq4} we have
\begin{align*}
& \dprod{\psi}{G_{21}(E+\iota0)\left(\lim_{\epsilon\downarrow0}f_E(\epsilon) G_{11}^\mu(E+\iota\epsilon)\right)G_{12}(E+\iota 0)\phi}\\
&\qquad= \dprod{\psi}{G_{12}(E+\iota0)^\ast\left(\lim_{\epsilon\downarrow0}f_E(\epsilon) G_{11}^\mu(E+\iota\epsilon)\right)G_{12}(E+\iota 0)\phi}
\end{align*}
From the above, if $\phi= G_{12}(E+\iota 0)^{-1} v$ for $v\in \tilde{V}_{E,1}^\mu$, then $\phi\in \tilde{V}^\mu_{E,2}$, giving us that the map $G_{12}(E+\iota 0)^{-1}$ is an injection.
Finally,
$$\lim_{\epsilon\downarrow 0}\frac{tr\left(G^\mu_{22}(E+\iota\epsilon)\right)}{tr\left(G^\mu_{11}(E+\iota\epsilon)\right)}=tr\left(G_{12}(E+\iota0)^\ast\left(\lim_{\epsilon\downarrow0}f_E(\epsilon) G_{11}^\mu(E+\iota\epsilon)\right)G_{12}(E+\iota 0)\right)$$
where the RHS is $L^1$ with respect to the singular part of $\sigma^\mu_1$ by Theorem \ref{PolThm} (Poltoratskii's theorem).
\end{proof}
\begin{cor}\label{SingLem3}
Let $A$ be a self adjoint operator on a Hilbert space $\mathscr{H}$, and let $P_1,P_2$ be two rank $N$ projections. Set $A_{\mu_1,\mu_2}=A+\mu_1 P_1+\mu_2 P_2$, $G_{ij}(z)=P_i(A-z)^{-1}P_j$ and $G^{\mu_1,\mu_2}_{ij}(z)=P_i(A_{\mu_1,\mu_2}-z)^{-1}P_j$ for $i,j=1,2$. Let $E\in S_{12}\cap S_{21}$ (defined as in \eqref{WorkSet2}) be such that $tr(G_{ii}^{\mu_1,\mu_2}(E+\iota\epsilon))^{-1}\xrightarrow{\epsilon\downarrow0}0$ for either $i=1$ or $i=2$; then
$$\tilde{V}_{E,i}^{\mu_1,\mu_2}=ker(\lim_{\epsilon\downarrow 0}tr(G_{ii}^{\mu_1,\mu_2}(E+\iota\epsilon))^{-1}G_{ii}^{\mu_1,\mu_2}(E+\iota\epsilon))^\bot\qquad i=1,2$$
are isomorphic. In particular, the singular parts of the trace measures associated with $G_{ii}^{\mu_1,\mu_2}$ are equivalent to each other.
\end{cor}
\begin{proof}
Define
$$\tilde{V}_{E,i,j}^{\mu_1,\mu_2}=ker(\lim_{\epsilon\downarrow 0}tr(G_{jj}^{\mu_1,\mu_2}(E+\iota\epsilon))^{-1}G_{ii}^{\mu_1,\mu_2}(E+\iota\epsilon))^\bot$$
This is exactly like Corollary \ref{AbsEqu1}. By Lemma \ref{SingPtSpec1} we have
$$\tilde{V}^{\mu_1,\mu_2}_{E,1,1}\hookrightarrow \tilde{V}^{\mu_1,\mu_2}_{E,2,1}\ \&\ \tilde{V}^{\mu_1,\mu_2}_{E,2,2}\hookrightarrow \tilde{V}^{\mu_1,\mu_2}_{E,1,2}$$
where the first is given by $G_{12}^{0,\mu_2}(E+\iota 0)^{-1}$ and the second by $G_{21}^{\mu_1,0}(E+\iota 0)^{-1}$, which are invertible a.e (with respect to the perturbations $\mu_1,\mu_2$) because of Lemma \ref{InvLem2}. Because of the second conclusion of Lemma \ref{SingPtSpec1} we have
$$\lim_{\epsilon\downarrow 0}\frac{tr\left(G^\mu_{11}(E+\iota\epsilon)\right)}{tr\left(G^\mu_{22}(E+\iota\epsilon)\right)}\ \ \text{exists for a.e $E$ w.r.t. the singular part of $tr(P_2 E_{A_\mu}(\cdot)P_2)$},$$
$$\lim_{\epsilon\downarrow 0}\frac{tr\left(G^\mu_{22}(E+\iota\epsilon)\right)}{tr\left(G^\mu_{11}(E+\iota\epsilon)\right)}\ \ \text{exists for a.e $E$ w.r.t. the singular part of $tr(P_1 E_{A_\mu}(\cdot)P_1)$}.$$
So as vector spaces $\tilde{V}_{E,i,j}^{\mu_1,\mu_2}=\tilde{V}_{E,i,i}^{\mu_1,\mu_2}=\tilde{V}^{\mu_1,\mu_2}_{E,i}$ for a.e $E$ w.r.t. the singular part of $tr(P_i E_{A_\mu}(\cdot)P_i)$.
Since we have injections in both directions and the spaces involved are finite dimensional, we get the isomorphism.
\end{proof}
\subsection{Proof of the main theorem}
\begin{proof}
The notation we will use is
$$G^\omega_{nm}(z)=P_n(H^\omega-z)^{-1}P_m\qquad\forall n,m\in\mathcal{N}$$
and for some $p\in\mathscr{M}$ we will denote
$$H^{\omega}_{\mu,p}=H^\omega+\mu P_p$$
and
$$G^{\omega,\mu,p}_{nm}(z)=P_n(H^\omega_{\mu,p}-z)^{-1}P_m\qquad\forall n,m\in\mathscr{M}$$
\begin{enumerate}
\item For $n,m\in\mathscr{M}$, let $\omega\in E_{n,m}$; using Lemma \ref{InvLem1} we get that $G^\omega_{nm}(z)$ is invertible for a.e $z$. For any $p\in\mathcal{N}$, we have $H^\omega_{\mu,p}$, and using Lemma \ref{InvLem2} we get that $G^{\omega,\mu,p}_{nm}(z)$ is also invertible for almost all $\mu$. So we get that if $\omega\in E_{n,m}$, then $\tilde{\omega}\in E_{n,m}$ (where $\tilde{\omega}$ is defined by $\tilde{\omega}_n=\omega_n\ \forall n\in\mathcal{N}\setminus\{p\}$); in other words, $E_{n,m}$ is independent of $\omega_p$ for any $p\in\mathcal{N}$. We can repeat the procedure and show that $E_{n,m}$ is independent of $\{\omega_{p_i}\}_{i=1}^K$ for $p_i\in\mathcal{N}$. So we can use the Kolmogorov $0$-$1$ law to conclude that $\mathbb{P}(E_{n,m})\in\{0,1\}$.
\item For any $n\in\mathscr{M}$, the pair $(H^\omega,\mathscr{H}_{n,\omega})$ is unitarily equivalent to $(M_{id},L^2(\mathbb R,\Sigma^\omega_n,\mathbb C^N))$ (see Theorem \ref{SpecThm}). For $m\in\mathscr{M}$ such that $\mathbb{P}(E_{n,m}\cap E_{m,n})=1$, we have to show that $(\Sigma^\omega_n)_{ac}$ is equivalent to $(\Sigma^\omega_m)_{ac}$. Using part $(5)$ of Theorem \ref{FgetThm1} we have
$$d(\Sigma^\omega_n)_{ac}(E)=\frac{1}{\pi}\Im G_{nn}^\omega(E+\iota 0)dE$$
For $\omega\in E_{n,m}$, we can write the operator $H^{\tilde{\omega}}=H^\omega+\mu_1 P_n+\mu_2 P_m$, and using Corollary \ref{AbsEqu1} we get that $V_n^{{\tilde{\omega}}}$ is isomorphic to $V_m^{{\tilde{\omega}}}$, where
$$V_i^{{\tilde{\omega}}}=ker\left(\Im\left(P_i(H^{\tilde{\omega}}-E-\iota 0)^{-1}P_i\right)\right)^\bot$$
Since $\Im G^\omega_{nn}(E+\iota 0)=\Im\left(P_n(H^\omega-E-\iota0)^{-1}P_n\right)$, the isomorphism gives the equivalence. By the proof of part $(1)$, we know that $E_{n,m}$ is independent of $\omega_n$ and $\omega_m$, so the result holds for a.e $\omega$.
\item Let $n,m\in\mathscr{M}$ be such that $\mathbb{P}(E_{n,m}\cap E_{m,n})=1$ and let $\omega\in E_{n,m}$. Define $H^{\tilde{\omega}}=H^\omega+\mu_n P_n+\mu_m P_m$ (almost always $\tilde{\omega}\in E_{n,m}$); then Corollary \ref{SingLem3} gives the equivalence of the trace measures for the singular part. As for the absolutely continuous part, the second part of the theorem gives the equivalence.
\end{enumerate}
\end{proof}
\section*{Acknowledgement}
I would like to thank M. Krishna for discussions and helpful suggestions. This work is partially supported by IMSc Project 12-R\&D-IMS-5.01-0106.
\section{Introduction}
The increasing availability of modern genetic data offers the possibility of learning more than ever before about the processes which generated it, for example the details of demographic change. However, for stochastic models that incorporate a high level of detail, it is impractically costly to evaluate numerically the probability of a dataset, preventing inference by standard likelihood-based methods. This has motivated the development of likelihood-free approaches, such as approximate Bayesian computation (ABC), which utilise the fact that simulating data from these models is relatively computationally cheap.
There is particular interest in using these methods to choose between explanatory models for observed data. However \cite{Robert:2011} illustrated that applying ABC to model choice problems can produce highly inaccurate results. This paper provides methods to address these concerns and improve the informativeness and efficiency of ABC model choice. We focus on a particular application, inferring the demographic history of \emph{Campylobacter jejuni} in New Zealand from population genetic data. This will be described in detail later.
A simple ABC algorithm operates by simulating data sets $x$ under various model and parameter pairs $(\mathcal{M},\theta)$. Pairs are accepted when $x$ is sufficiently close to the observed data $x_{\text{obs}}$. This produces a sample of independent draws from an approximation to the Bayesian posterior distribution i.e.~that of $\mathcal{M}, \theta | x$. Closeness is judged by the distance between vectors of \emph{summary statistics} $S(x_{\text{obs}})$ and $S(x)$. Previous work (e.g.~\citealt{Blum:2010, Fearnhead:2012}) has shown that the quality of the approximations produced by ABC algorithms decays rapidly with the dimension of $S$. This motivates finding low dimensional summary statistics. However, it is crucial that these are also informative, as otherwise the problem of inaccurate results described by \cite{Robert:2011} can occur.
This paper sets out a method to choose $S(x)$ for use in model selection. We give a theoretical result showing the existence of a low dimensional vector of statistics sufficient for model choice (under an appropriate definition given later). Our method aims to estimate such a vector. The idea is to use an extra simulation step to produce many $(\mathcal{M}, \theta, x)$ triples and then fit simple regression models of $\mathcal{M}$ on $x$. Predictors from the fitted regressions form estimates of low dimensional sufficient statistics, and are used as $S$ in a main ABC analysis. We refer to the approach as the \emph{semi-automatic method} as it adapts the method of the same name in \cite{Fearnhead:2012} which chooses $S$ by regressing $\theta$ on $x$ when the aim is inference of continuous parameters.
We expect that the targeted sufficient statistics are often complicated functions of the data which are hard to estimate globally. To make the task easier, we advise that the regressions are based on data simulated, within each model, from a limited subset of parameter values which is judged by preliminary analysis to hold most of that model's posterior mass. In other words, the simulation step mentioned above performs simulations from the models of interest following a truncation of their parameter supports. The resulting $S$ can only be expected to perform well for choice between these truncated models. A separate theoretical contribution of the paper is to relate results from such a choice to the original model choice problem.
Our approach of performing regressions based on simulated data is similar to \cite{Estoup:2012} who instead use linear discriminant analysis. We expect our other contributions would also be useful to this approach. Other work on ABC summary statistics has focused on validating a particular choice of $S$. One approach is to run ABC analyses on a large number of simulated data sets to check whether $S$ provides accurate results \citep{Sousa:2011, Sjodin:2012}. \cite{Marin:2012} give a complementary approach, identifying necessary and sufficient properties of $S$ for an ABC analysis to be consistent in an asymptotic regime corresponding to highly informative data. Essentially, $S$ must have different asymptotic means under the models. Given a choice of $S$, this property can be tested theoretically or through simulation. Validation techniques are useful, but not sufficient, to choose $S$ for high dimensional genetic data where it is infeasible to compare all possible choices of $S$. Our contribution is a method which can be applied in this setting to propose good choices of $S$.
Ideally the same ABC simulations would be used to provide inference on models and also their parameters. The method we present provides summary statistics suitable for model choice only. It would be desirable to augment them with informative summaries on model parameters, and we give an approach to do this that is specific to our main application. General methods are an interesting topic for future research.
The remainder of the paper is organised as follows. Section \ref{sec:background} describes ABC methods and our notation. Section \ref{sec:theory} gives theoretical results on sufficiency, with proofs delayed until an appendix. Section \ref{sec:method} explains our semi-automatic ABC method, and Section \ref{sec:examples} illustrates it for simple examples. The application to \emph{Campylobacter} data is given in Section \ref{sec:appl}, and the article concludes with a discussion in Section \ref{sec:discussion}. Further theoretical and simulation results are provided as supplementary material \citep{Prangle:2013}.
\section{Background} \label{sec:background}
Denote by $\mathcal{M}$ a random variable which can take values $\mathcal{M}_1, \mathcal{M}_2, \ldots, \mathcal{M}_M$, representing possible models. Let $p_M$ be its prior mass function. In an abuse of notation $\mathcal{M}$ will also denote a generic value of the variable, with usage clear from the context. Each model represents a joint distribution $\pi(x, \theta | \mathcal{M})$ on the data $x$ and parameters $\theta \in \Theta$. This can be written as the product of prior and likelihood terms but we concentrate on the joint form for later convenience and to emphasise that the definition of a model includes a parameter prior. Note that it is possible for the parameters under each model to belong to different spaces, in which case $\Theta$ is their union, and that $\theta$ will also be used to denote both a random variable and generic value.
Bayesian inference concentrates on $\pi(\theta | x, \mathcal{M})$ -- the posterior distribution of parameters under a specific model -- and $\Pr(\mathcal{M} | x)$ -- the posterior model probabilities. Inference on models can also be summarised using \emph{Bayes factors} $B_{ij} = \pi(x | \mathcal{M}_i) / \pi(x | \mathcal{M}_j)$; the ratio of the \emph{evidences} under models $\mathcal{M}_i$ and $\mathcal{M}_j$. The Bayes factor does not involve $p_M$, but incorporating this information allows calculation of the ratio of posterior weights:
\[
\Pr(\mathcal{M}_i | x) / \Pr(\mathcal{M}_j | x) = B_{ij} p_M(\mathcal{M}_i) / p_M(\mathcal{M}_j).
\]
ABC is used in situations where it is possible to simulate $x | \mathcal{M}, \theta$ but evaluation of the density $\pi(x | \mathcal{M}, \theta)$ is impossible or impractically costly. A simple approach to ABC inference is Algorithm \ref{alg:ABC_RS1} \citep{Grelaud:2009}.
\begin{algorithm}[ht]
\rule{\textwidth}{0.7mm}
\begin{tabular}[h]{ll}
{\bf Input:} & Observed data $x_{\text{obs}}$, and a function $S(\cdot)$. \\
& A threshold $h \geq 0$ and a distance function $d(\cdot,\cdot)$. \\
& An integer $N>0$.\\
\\
{\bf Iterate:} & For $i=1,\ldots,N$
\end{tabular}
\begin{enumerate}[topsep=0pt]
\item Simulate $\mathcal{M}^*$ from $p_M(\mathcal{M})$.
\item Simulate $\theta^*$ from $\pi(\theta | \mathcal{M}^*)$.
\item Simulate $x_\text{sim}$ from $\pi(x|\theta^*, \mathcal{M}^*)$.
\item Accept $(\mathcal{M}^*, \theta^*)$ if $d(S(x_{\text{obs}}), S(x_{\text{sim}})) \leq h$.
\end{enumerate}
\begin{tabular}[h]{ll}
{\bf Output:} & A set of accepted model and parameter pairs of the form $(\mathcal{M}^*, \theta^*)$.
\end{tabular}
\rule{\textwidth}{0.7mm}
\caption{Rejection sampling ABC incorporating model choice and parameter inference. \label{alg:ABC_RS1}}
\end{algorithm}
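For concreteness, the following is a minimal Python sketch of Algorithm \ref{alg:ABC_RS1}; the functions \texttt{prior\_model}, \texttt{prior\_theta} and \texttt{simulate} are hypothetical stand-ins for the model prior, parameter prior and simulator of a specific application.
\begin{verbatim}
import numpy as np

def abc_rejection(x_obs, S, dist, h, N,
                  prior_model, prior_theta, simulate):
    """Rejection-sampling ABC for model choice and parameter
    inference. prior_model, prior_theta and simulate are
    user-supplied samplers for p_M, pi(theta | M) and
    pi(x | theta, M) respectively."""
    s_obs = S(x_obs)
    accepted = []
    for _ in range(N):
        m = prior_model()               # step 1: draw a model
        theta = prior_theta(m)          # step 2: draw parameters
        x_sim = simulate(m, theta)      # step 3: simulate data
        if dist(s_obs, S(x_sim)) <= h:  # step 4: accept or reject
            accepted.append((m, theta))
    return accepted
\end{verbatim}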
Letting $\mathbb{I}$ represent an indicator function, define
\begin{align*}
p_\text{ABC}(\mathcal{M} | S(x)) &\propto p_M(\mathcal{M}) \int \pi(S(x) | \mathcal{M}) \mathbb{I}[d(S(x_{\text{obs}}), S(x)) \leq h] dx, \\
\pi_\text{ABC}(\theta | \mathcal{M}, S(x)) &\propto
\pi(\theta | \mathcal{M}) \int \pi(S(x) | \theta, \mathcal{M}) \mathbb{I}[d(S(x_{\text{obs}}), S(x)) \leq h] dx.
\end{align*}
Then the sample of $(\mathcal{M}, \theta)$ values output by Algorithm \ref{alg:ABC_RS1} is drawn from a distribution with conditionals $\pi_{\text{ABC}}(\theta | \mathcal{M}, S(x))$ and marginal $p_{\text{ABC}}(\mathcal{M} | S(x))$.
In the limit $h \to 0$, the ABC target distributions just defined converge on $\Pr(\mathcal{M} | S(x))$ and $\pi(\theta | \mathcal{M}, S(x))$. However, reducing $h$ decreases the output sample size, increasing Monte Carlo approximation error. A \emph{curse of dimensionality} result reviewed in the supplementary material shows that the rate of increase in error rises with the dimension of $S$. This motivates a low dimensional $S$. It is also important that $S$ is informative so that the limiting ABC targets approximate the posterior distributions $\Pr(\mathcal{M} | x)$ and $\pi(\theta | \mathcal{M}, x)$ well. Hence $S$ is a crucial tuning choice.
In practice, the results of Algorithm \ref{alg:ABC_RS1} can be highly variable if some prior model masses are small. Algorithm \ref{alg:ABC_RS2} is a more stable alternative suggested by \cite{Grelaud:2009}.
\begin{algorithm}[ht]
\rule{\textwidth}{0.7mm}
\begin{enumerate}[topsep=0pt]
\item[] As Algorithm \ref{alg:ABC_RS1} except:
\item Set $\mathcal{M}^*$ to $\mathcal{M}_1,\mathcal{M}_2,\ldots,\mathcal{M}_M$ with equal probability.
\end{enumerate}
\rule{\textwidth}{0.7mm}
\caption{A more stable modification of Algorithm \ref{alg:ABC_RS1}.}
\label{alg:ABC_RS2}
\end{algorithm}
Algorithm \ref{alg:ABC_RS2} samples $\mathcal{M}^*$ values from a uniform distribution rather than $p_M$, and it is necessary to correct the results to take this into account. Let $n_i$ be the number of occurrences of $\mathcal{M}_i$ in the output sample. Then $n_i / n_j$ is an estimator of the Bayes factor $B_{ij}$ and $n_i p_M(\mathcal{M}_i) / \sum_{j=1}^M n_j p_M(\mathcal{M}_j)$ is an estimator of $\Pr(\mathcal{M}_i | S(x))$. The asymptotic and curse of dimensionality results outlined above continue to hold. See \cite{Grelaud:2009} and the supplementary material for full details.
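In code, the correction is straightforward; the following sketch assumes \texttt{models} is an array of accepted model indices from Algorithm \ref{alg:ABC_RS2} and \texttt{p\_M} the vector of prior model weights, with every model accepted at least once.
\begin{verbatim}
import numpy as np

def model_choice_estimates(models, p_M):
    """Estimate Bayes factors and posterior model probabilities
    from the output of Algorithm 2."""
    M = len(p_M)
    n = np.array([np.sum(models == i) for i in range(M)])
    B = n[:, None] / n[None, :]        # B[i, j] estimates B_ij
    post = n * p_M / np.sum(n * p_M)   # estimates Pr(M_i | S(x))
    return B, post
\end{verbatim}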
More efficient ABC model choice algorithms have been proposed, mainly based on sequential Monte Carlo (SMC) \citep[e.g.][]{Toni:2010, DelMoral:2011}. However, the tuning issues just described remain. The SMC algorithm of \cite{Toni:2010} is used later and described in the supplementary material. Another approach to improve the quality of ABC results is to \emph{post-process} them. This uses accepted parameters $\theta^{*,1}, \theta^{*,2}, \ldots$, models $\mathcal{M}^{*,1}, \mathcal{M}^{*,2}, \ldots$ and the corresponding simulations $x^{*,1}, x^{*,2}, \ldots$. For parameter inference, \emph{regression adjustment} \citep{Beaumont:2002, Blum/Francois:2010} fits a model $\theta=f(x,e)$, where $f$ is a deterministic function and $e$ a random residual, and outputs adjusted values $\theta'^{,i}=\hat{f}(x_{\text{obs}}, \hat{e}^i)$. Model choice results can be post-processed by fitting a multinomial regression model $\Pr(\mathcal{M}|x) = g(x)$ and returning $\hat{g}(x_{\text{obs}})$ \citep{Beaumont:2008}.
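As a simple illustration of the latter, the following sketch post-processes model choice output with an (unweighted) multinomial logistic regression using scikit-learn; \cite{Beaumont:2008} uses a weighted local version of this idea.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

def postprocess_model_choice(models, summaries, s_obs):
    """Fit Pr(M | S(x)) = g(S(x)) on accepted ABC output and
    return the fitted probabilities at the observed summaries."""
    g = LogisticRegression(multi_class="multinomial", max_iter=1000)
    g.fit(summaries, models)
    return g.predict_proba(np.atleast_2d(s_obs))[0]
\end{verbatim}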
\section{Theory} \label{sec:theory}
A statistic $S(x)$ of data $x$ is said to be \emph{Bayes sufficient} for parameter $\theta$ if $\theta | S(x)$ and $\theta | x$ have the same distribution for any prior distribution and almost all $x$ \citep{Kolmogorov:1942}. This is a natural definition of sufficiency for ABC, as it shows that in an ideal ABC algorithm with $h \to 0$, the ABC target distribution equals the correct posterior when $S$ is used. Throughout later sections of this paper we use ``sufficient'' to mean Bayes sufficient.
Theorem \ref{thm:suff} gives an alternative characterisation of Bayes sufficiency for $\mathcal{M}$ in the setting described in Section \ref{sec:background}.
\begin{theorem} \label{thm:suff}
Let $T(x)=\{ T_1(x), T_2(x), \ldots, T_{M-1}(x) \}$ where
\[
T_i(x) = \Pr(x | \mathcal{M}_i) / [\sum_{j=1}^M \Pr(x | \mathcal{M}_j)].
\]
Then $S$ is Bayes sufficient for $\mathcal{M}$ if and only if there exists a function $g$ such that $g[S(x)] = T(x)$ for almost all $x$.
\end{theorem}
Theorem \ref{thm:suff} shows that for any situation with $M$ models there are sufficient statistics for model choice of dimension $M-1$, namely the vector $T(x)$. Furthermore, vectors $S(x)$ which can be transformed to $T(x)$ are also sufficient.
\paragraph{Proof} See Appendix.
A sketch of the proof is as follows. The theorem states that Bayes sufficiency of $S(x)$ for $\mathcal{M}$ is equivalent to there being a deterministic transformation from $S(x)$ to $T(x)$. The latter vector consists of $M-1$ posterior model probabilities given observations $x$ and a uniform prior $p_M$. Under a uniform $p_M$, conditioning $\mathcal{M}$ on an $S(x)$ satisfying this condition clearly recovers the posterior weights. Reweighting can be used to show that the posterior is also recovered under any other $p_M$. The converse can be shown by construction.
One particular sufficient choice of $S(x)$ used later is a vector of all Bayes factors under a one-to-one transformation. Additionally, we note that a sufficient $S(x)$ may contain summaries which do not contribute to $T(x)$ but are useful for parameter inference.
Theorem \ref{thm:suff} is similar to Theorem 3a of \cite{Fearnhead:2012}, which shows that for continuous parameters $\theta$, $S(x) = E(\theta | x)$ is an optimal choice to estimate parameter means in terms of minimising quadratic error loss. However this $S(x)$ is typically not sufficient for $\theta$. Theorem \ref{thm:suff} is a stronger result for the case of model choice (or, equivalently, for estimating discrete parameters) showing the existence of low dimensional vectors of sufficient statistics.
\section{Method} \label{sec:method}
The low dimensional sufficient statistics described by Theorem \ref{thm:suff} are generally not available. However their existence motivates an approach of approximating them from simulated data, and then using these approximations as $S(x)$ within ABC, as outlined in Algorithm \ref{alg:semiauto1}. Step 2 requires some user input, as will be described in Section \ref{sec:fit}, so the method is referred to as ``semi-automatic ABC''.
\begin{algorithm}[h]
\rule{\textwidth}{0.7mm}
\begin{enumerate}[topsep=0pt]
\item Simulate a large number of $(\mathcal{M}, \theta, x)$ triples.
\item Calculate $S(x)$ by estimating sufficient statistics from simulations.
\item Perform the ABC analysis using $S(x)$.
\end{enumerate}
\rule{\textwidth}{0.7mm}
\caption{Outline of simple semi-automatic ABC for model choice. Full details of the steps are given in Sections \ref{sec:fit} and \ref{sec:method_other}.}
\label{alg:semiauto1}
\end{algorithm}
Sufficient statistics are likely to be highly complicated functions of the data due to the complexity of the models, and thus hard to approximate. To make the task more tractable, we recommend some optional extra steps to give Algorithm \ref{alg:semiauto2}. This simplifies the models by concentrating on the most likely parameter values. We view this as replacing the models $\pi(\theta, x | \mathcal{M}_i)$ with \emph{truncated models}
\begin{equation} \label{eq:trunc}
\pi(\theta, x | \mathcal{M}_i') \propto \pi(\theta, x | \mathcal{M}_i) \mathbb{I}(\theta \in R_i),
\end{equation}
where $R_i$ is a \emph{training region} for model $\mathcal{M}_i$. Calculation of $S$ is performed using data simulated from the truncated models. The resulting $S$ estimates sufficient statistics for the choice between the truncated rather than original models. Therefore the main ABC analysis must be performed between the truncated models, and, as will be shown in Section \ref{sec:method_other}, the results can be used to estimate the model choice posterior for the original problem.
\begin{algorithm}[h]
\rule{\textwidth}{0.7mm}
\begin{enumerate}[topsep=0pt]
\item Perform an ABC \emph{pilot analysis} with ad-hoc summary statistics. Use the output for each model to choose training regions $R_i$ of parameters which contain most of the posterior probability for each model $\mathcal{M}_i$.
\item Simulate a large number of $(\mathcal{M}, \theta, x)$ triples using truncated models.
\item Calculate $S(x)$ by estimating sufficient statistics from simulations.
\item Perform the ABC \emph{main analysis} using $S(x)$ and truncated models.
\item Use truncation correction to estimate posterior probabilities.
\end{enumerate}
\rule{\textwidth}{0.7mm}
\caption{Semi-automatic ABC for model choice with truncation steps. Full details of the steps are given in Sections \ref{sec:fit} and \ref{sec:method_other}.}
\label{alg:semiauto2}
\end{algorithm}
The remainder of this section discusses the implementation of the steps in these algorithms in more detail. Performance is assessed through simulation examples in Section \ref{sec:examples}.
\subsection{Calculating summary statistics} \label{sec:fit}
This section describes a logistic regression based approach to estimating sufficient statistics from simulated \emph{training data}. A motivating example is the case of two models $\mathcal{M}_1$ and $\mathcal{M}_2$, with training data drawn from the joint distribution on $(\mathcal{M},x)$, where $x = (x_1,x_2,\ldots,x_p)$. Define $q(x)=\Pr(\mathcal{M}_1 | x)$. This is clearly a sufficient statistic for $\mathcal{M}$. Logistic regression can be used to fit
\begin{equation} \label{eq:logreg1}
\logit q(x) := \log\{q(x)/[1-q(x)]\} = \beta_0 + \sum_{i=1}^p \beta_i x_i.
\end{equation}
The fitted $\hat{q}(x)$ is an estimate of a sufficient statistic. Note also that $q(x)/[1-q(x)]$ is the Bayes factor multiplied by a constant depending on the prior model weights.
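A minimal sketch of this fit in Python is as follows; \texttt{x\_train} is a hypothetical matrix whose rows are simulated datasets and \texttt{labels} indicates the rows simulated from $\mathcal{M}_1$.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_q(x_train, labels):
    """Fit logit q(x) = beta_0 + sum_i beta_i x_i and return a
    function estimating q(x) = Pr(M_1 | x), an estimate of a
    sufficient statistic for the choice between two models."""
    reg = LogisticRegression(max_iter=1000)
    reg.fit(x_train, labels)
    return lambda x: reg.predict_proba(np.atleast_2d(x))[:, 1]
\end{verbatim}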
To improve on the fit of \eqref{eq:logreg1} and cope with situations where $x$ is very high dimensional or not a fixed-length vector, in practice we fit instead
\begin{equation} \label{eq:logreg2}
\logit q(x) = \beta^T f(x),
\end{equation}
where $f(x)$ is a vector of transformations of $x$, including a constant term. This can perform initial dimension reduction and introduce non-linear functions of the data into the regression. Example choices of $f(\cdot)$ used later are (1) order statistics of the raw data and (2) a large number of summaries of genetic sequence data used in previous literature, together with transformations of these (a constant term is also included in both cases). To assist in the choice of $f(\cdot)$, regression diagnostics can be used, for example to compare the quality of the logistic regression fits for some $f_1(\cdot)$ and $f_2(\cdot)$. The supplementary material gives examples in which cross-validation estimates of the deviance are compared.
In general the aim is to calculate $S$ for choice between models $\mathcal{M}_1, \mathcal{M}_2, \ldots, \mathcal{M}_M$, which for this discussion may represent original or truncated models. Fix a pair of distinct models, $\mathcal{M}_i$ and $\mathcal{M}_j$, and consider the subset of training data made up of only the simulations from these models. Logistic regression can be used as above to estimate $q_{ij}(x) = \Pr(\mathcal{M}_i | x)$ under the $(\mathcal{M},x)$ distribution of this training data subset. This is repeated for each pair of distinct models, and results in a vector of one-to-one transformations of Bayes factors. This target was shown to be sufficient for $\mathcal{M}$ in Section \ref{sec:theory}.
The logistic regression method set out above gives $\dim(S)=M(M-1)/2$, whereas Theorem \ref{thm:suff} shows there are sufficient statistics of dimension $M-1$. Alternative regression methods can be used to give $\dim(S)=M-1$, for example by estimating an appropriate subset of the Bayes factors or by using multinomial regression.
In this paper we consider only examples with $M \leq 3$ so the logistic regression approach has limited excess dimension. We believe it also aids robustness. Even if the logistic regression for one pair of models fits poorly (as is the case in the \emph{Campylobacter} application), the others can still allow a good overall estimate of sufficient statistics.
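A sketch of this pairwise construction, reusing the hypothetical \texttt{fit\_q} function above on covariates $f(x)$, is:
\begin{verbatim}
import numpy as np
from itertools import combinations

def fit_pairwise_summaries(f_train, m_train, M):
    """Fit one logistic regression per distinct model pair (i, j),
    each using only the simulations from those two models. The
    returned function maps covariates f(x) to the M(M-1)/2 fitted
    probabilities, used as the summary vector S(x)."""
    fits = []
    for i, j in combinations(range(M), 2):
        keep = (m_train == i) | (m_train == j)
        fits.append(fit_q(f_train[keep], m_train[keep] == i))
    return lambda fx: np.concatenate([q(fx) for q in fits])
\end{verbatim}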
\subsection{Other steps} \label{sec:method_other}
\paragraph{Pilot analysis}
The pilot ABC analysis uses an ad-hoc choice of summary statistics $S_{\text{pilot}}$. The purpose of the pilot analysis is to roughly identify regions containing most of the posterior mass, so the procedure should be reasonably robust to the choice of $S_{\text{pilot}}$. \cite{Fearnhead:2012} illustrate this argument by example. Validation tests could also be performed to test the quality of ABC output from analysing simulated data using $S_{\text{pilot}}$.
In our implementation the pilot uses an ABC model choice algorithm such as Algorithm \ref{alg:ABC_RS2}. An alternative approach would be to perform a separate pilot run for each model, focusing only on finding training regions, rather than initial model choice analysis. We did not investigate this as a pilot analysis incorporating model choice has useful properties. The estimated posterior can serve as a verification that the final results appear sensible. Also, if the pilot results are sufficiently convincing in showing that certain models are incompatible with the data, they could be ruled out at this stage saving computational resources.
\paragraph{Training region choice}
The training region $R_i$ for model $\mathcal{M}_i'$ should cover most of the posterior mass. Our implementation is to choose a hypercube, with the range of each parameter being the interval spanned by the values sampled in the pilot output.
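In code this amounts to the following, given the accepted pilot samples for a model (rows are samples, columns parameters); the indicator implements the truncation in \eqref{eq:trunc}.
\begin{verbatim}
import numpy as np

def training_region(pilot_samples):
    """Hypercube training region: per-parameter ranges of the
    accepted pilot samples."""
    return pilot_samples.min(axis=0), pilot_samples.max(axis=0)

def in_region(theta, region):
    """Indicator of theta lying in the hypercube, used to
    truncate the parameter support of a model."""
    lo, hi = region
    return np.all((theta >= lo) & (theta <= hi))
\end{verbatim}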
\paragraph{Simulating data}
We generate training data from the distribution on $(\mathcal{M}, \theta, x)$ defined by the priors and models (or truncated models). An alternative model distribution can be used without affecting the arguments in Section \ref{sec:fit} showing that the fitted summary statistics are estimates of sufficient statistics. This would be useful if some prior model weights are too small to fit all regressions well.
\paragraph{Truncation correction}
Results of the main ABC analysis choosing between truncated models can be used to estimate those for the original model choice problem by the following consequence of \eqref{eq:trunc}:
\[
\pi(x | \mathcal{M}_i) = r_i \pi(x | \mathcal{M}_i'),
\quad \text{where } r_i = \Pr(\theta \in R_i | \mathcal{M}_i) / \Pr(\theta \in R_i | x, \mathcal{M}_i).
\]
That is, the evidence of $\mathcal{M}_i$ equals that of $\mathcal{M}_i'$ multiplied by $r_i$, the ratio of the prior and posterior probabilities of $R_i$. This allows estimation of Bayes factors or posterior probabilities for the original models given $r_i$ values. As $R_i$ is chosen with the aim of containing most of the posterior mass, we estimate its posterior probability by 1, giving an estimate $\hat{r}_i = \Pr(\theta \in R_i | \mathcal{M}_i)$. This prior probability can usually be calculated directly when $R_i$ is a hypercube.
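For instance, under independent log-normal priors (as used in Section \ref{sec:appl}) the prior probability of a hypercube factorises across parameters, giving the following sketch; \texttt{mu} and \texttt{sd} are hypothetical vectors of the log-normal location and scale parameters.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def truncation_factor(region, mu, sd):
    """Estimate r_i = Pr(theta in R_i | M_i) for a hypercube R_i
    under independent log-normal priors, approximating the
    posterior probability of R_i by 1."""
    lo, hi = region
    probs = norm.cdf(np.log(hi), mu, sd) - norm.cdf(np.log(lo), mu, sd)
    return np.prod(probs)

# evidence of the original model: pi(x | M_i) = r_i * pi(x | M_i')
\end{verbatim}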
\section{Examples} \label{sec:examples}
To illustrate our semi-automatic ABC method, we apply it to three simple binary model selection examples from the literature \citep{Didelot:2011, Marin:2012}, and extend one of these to a 3 model example. The examples are summarised in Table \ref{tab:examples}. The binary examples are the first two models in each letter group, and the 3 model example is the full A group. In each case the data are 100 independent draws from one of the models and the models have equal prior probabilities. All ABC analyses were performed using Algorithm \ref{alg:ABC_RS2}.
\begin{table}[htp]
\begin{center}
\begin{tabular}{c|ll}
Name & Model & Prior \\
\hline
A1 & Poisson($\theta$) & $\theta \sim \text{Exponential}(1)$ \\
A2 & Geometric($\theta$) & $\theta \sim \text{Uniform}(0,1)$ \\
A3 & Binomial($10, \theta$) & $\theta \sim \text{Beta}(1,9)$ \\
B1 & Laplace($\theta, 1/\sqrt{2}$) & $\theta \sim \text{Normal}(0,2^2)$ \\
B2 & Normal($\theta, 1$) & $\theta \sim \text{Normal}(0,2^2)$ \\
C1 & gk($0,1,0,k$) & $k \sim \text{Unif}(-0.5,5)$ \\
C2 & gk($0,1,g,k$) & $(g,k) \sim \text{Unif}([0,4] \times [-0.5,5])$
\end{tabular}
\caption{Models used in the examples of Section \ref{sec:examples}. For details of the $g$-and-$k$ distribution see \cite{Rayner:2002}.}
\label{tab:examples}
\end{center}
\end{table}
\paragraph{Binary model selection}
The semi-automatic ABC method of Algorithm \ref{alg:semiauto2} was implemented starting with a pilot analysis using $S_{10}(x)=(x^{(5)}, x^{(15)}, \ldots, x^{(95)})$ where $x^{(i)}$ is the $i$th order statistic. Model choice summary statistics were fitted as described in Section \ref{sec:fit} using $f(x)=(1,x^{(1)},x^{(2)},\ldots,x^{(100)})$. No other summaries were added for parameter inference. The analysis used $2 \times 10^4$ simulations, one quarter for the pilot and the rest used for both summary statistic fitting and the main analysis. The pilot and main analysis both accepted 100 simulations. Some alternative ABC analyses on the data were performed, each using the same total number of simulations and acceptances. Firstly, the analysis was repeated using Algorithm \ref{alg:semiauto1}. Secondly, standard ABC analyses were performed with Algorithm \ref{alg:ABC_RS2} using (a) $S=S_{10}$ and (b) $S$ as in \cite{Marin:2012}: the 4th and 6th moments for example B, and the 10\% and 90\% quantiles for example C. All ABC analyses used the following distance
\begin{equation} \label{eq:scaled dist}
d(x,y) = \left[ \sum_{i=1}^p (x_i-y_i)^2 / \hat{\sigma}_i^2 \right]^{1/2},
\end{equation}
i.e.~the Euclidean distance between $p$-dimensional summary statistics normalised by standard deviations, $\hat{\sigma}_i$, estimated from the simulated data.
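In code, with \texttt{sigma} the vector of estimated standard deviations:
\begin{verbatim}
import numpy as np

def scaled_distance(s1, s2, sigma):
    """Euclidean distance between summary vectors normalised by
    estimated standard deviations."""
    return np.sqrt(np.sum(((s1 - s2) / sigma) ** 2))

# sigma is estimated once from the matrix of simulated summaries,
# e.g. sigma = simulated_summaries.std(axis=0)
\end{verbatim}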
Figure \ref{fig:examples} shows estimated posterior probabilities for $S_{10}$ and Algorithm \ref{alg:semiauto2}. Numerical summaries of estimation quality are given in Table \ref{tab:exres}. This reports the entropic loss \citep{Robert:1996},
\[
-\sum_{i=1}^{100} \log \hat{\Pr}(m_{0,i} | x_{\text{obs},i}),
\]
the negative log probability of the correct models $m_{0,1}, \ldots, m_{0,100}$ estimated from the corresponding simulated datasets $x_{\text{obs},1}, \ldots, x_{\text{obs},100}$. Also reported is the misallocation rate; the proportion of datasets where the highest weighted model was not the correct model. Our method provides an improvement in all scenarios, although this is modest for example C. The use of the truncation steps from Algorithm \ref{alg:semiauto2} is shown to sometimes be crucial; when Algorithm \ref{alg:semiauto1}, which omits these, is used instead, the results for example C are the worst of all methods. However the effect is problem dependent; in example B it made little difference. Exact posterior calculations are possible for examples A and B (the required Laplace marginal likelihood calculations are described in Appendix 1 from version 1 of \citealt{Marin:2012}), and in both cases Algorithm \ref{alg:semiauto2} provides comparable results.
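For reference, both performance summaries can be computed as follows from the matrix of estimated posterior probabilities, with rows indexing the 100 simulated datasets and \texttt{true\_models} the data-generating model indices.
\begin{verbatim}
import numpy as np

def entropic_loss(post, true_models):
    """Sum over datasets of the negative log estimated probability
    of the data-generating model; infinite if any such model
    receives zero weight."""
    p = post[np.arange(len(true_models)), true_models]
    return -np.sum(np.log(p))

def misallocation_rate(post, true_models):
    """Proportion of datasets where the highest weighted model is
    not the data-generating one."""
    return np.mean(post.argmax(axis=1) != true_models)
\end{verbatim}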
We attempted to apply post-processing by the method of \cite{Beaumont:2008}. For example A this was usually not possible as there was no variation in the accepted summaries, which were discrete in this case, or because all acceptances were for a single model. For the other examples, it had little effect on entropic loss or misallocation rate, so these are not reported.
\begin{figure}[htp] \begin{center}
\includegraphics[totalheight=4in]{boxplots_paper.eps}
\caption{Boxplots of posterior probabilities of model 1 estimated by ABC (without post-processing) for 100 simulated datasets in each of three binary model comparison examples. The boxplots show quartiles of the values. Within each graph results are split by which model generated the data. The top row uses $S=S_{10}$, and the second row chooses $S$ by semi-automatic ABC (Algorithm \ref{alg:semiauto2}). The columns represent three model choice examples detailed in Table \ref{tab:examples}.} \label{fig:examples}
\end{center} \end{figure}
\begin{table}[htp]
\begin{center}
\begin{tabular}{c|llll}
& \multicolumn{4}{c}{Example} \\
\multicolumn{1}{c|}{Summary statistics} & \multicolumn{1}{c}{Binary A} & \multicolumn{1}{c}{Binary B} & \multicolumn{1}{c}{Binary C} & \multicolumn{1}{c}{3 models} \\
\hline
$S_{10}$ & 33.0 (17\%) & 33.5 (11\%) & 43.0 (16\%) & 70.7 (39\%) \\
From literature & \multicolumn{1}{c}{-} & 55.3 (25\%) & 40.9 (20\%) & \multicolumn{1}{c}{-} \\
From Algorithm \ref{alg:semiauto1} & 30.2 (14\%) & 13.5 (5\%) & $\infty$ (21\%) & \multicolumn{1}{c}{65.9 (42\%)} \\
From Algorithm \ref{alg:semiauto2} & 19.8 (15\%) & 13.9 (7\%) & 38.4 (14\%) & 58.9 (33\%) \\
\hline
Posterior & 19.8 (12\%) & 15.6 (8\%) & \multicolumn{1}{c}{-} & 58.1 (36\%)
\end{tabular}
\caption{Entropic loss and misallocation rate (in brackets) from several ABC analyses of 100 simulated datasets in each of four model comparison examples, detailed in Table \ref{tab:examples}. The final row shows values under the exact posterior, where these are available, for comparison.}
\label{tab:exres}
\end{center}
\end{table}
\begin{figure}[htp] \begin{center}
\includegraphics[totalheight=4in]{PoisGeomBin.eps}
\caption{Plots of true posterior model weight against ABC estimates for 100 simulated datasets in a three model example. Pluses are for $S=S_{10}$ and crosses for $S$ chosen by semi-automatic ABC (Algorithm \ref{alg:semiauto2}).}
\label{fig:PGB}
\end{center} \end{figure}
\paragraph{Selection between three models}
Algorithms \ref{alg:semiauto1} and \ref{alg:semiauto2} were implemented as for the two model examples, with the addition that three summary statistics were fitted, corresponding to three pairs of models. Figure \ref{fig:PGB} plots exact posterior probabilities against ABC estimates, and shows that Algorithm \ref{alg:semiauto2} performs better than the comparison analysis using $S=S_{10}$. This is confirmed by the quantitative summaries in Table \ref{tab:exres}, which also shows that Algorithm \ref{alg:semiauto2} outperforms Algorithm \ref{alg:semiauto1} and achieves comparable results to the true posterior values. Post-processing results are not shown because, as mentioned above, they could usually not be calculated for this example.
\section[Application]{Application} \label{sec:appl}
\emph{Campylobacter jejuni} and \emph{C.~coli} are bacterial pathogens that are a major cause of human gastroenteritis around the world \citep{Humphrey:2007}. They are considered commensals of a wide variety of animals, including poultry, ruminants and wild birds, and human infection occurs as a result of ingesting contaminated food or drinking water and via direct contact with animal faeces \citep{Savill:2003}. New Zealand has very high rates of campylobacteriosis and an investigation into the source of human infection \citep{Mullner:2009} has generated a large dataset of isolates from humans and animals that have been characterised by multilocus sequence typing, MLST \citep{Dingle:2001}. The dataset of \emph{C.~jejuni} and \emph{C.~coli} isolates from New Zealand has been used to inform control policy \citep{Sears:2011} and to estimate evolutionary parameters, such as the rates of mutation and recombination \citep{Yu:2012}. We focus on the question of demographic history, which is of particular interest in New Zealand due to the relatively recent colonisation by man and the unique pattern of animal introductions (both wildlife and domestic animals) \citep{Atkinson:1993}. We ask: can we detect historic growth in the effective population size, and if so, does it correspond to a particular historical event? The relative isolation of this location means that neglecting ongoing exchange with outside populations is reasonably realistic. MLST data are available for over 3000 isolates from a variety of hosts.
We present our methods and results below, with a discussion given in Section \ref{sec:campy discussion}. Some further details are provided as supplementary material.
\subsection[Models and priors]{Models and priors}
We modelled the \emph{C.~jejuni} data with a coalescent model, using the Jukes-Cantor model of DNA substitution and incorporating the gene conversion recombination model of \cite{Wiuf:2000} with exponential demographic growth, as simulation of this scenario is straightforward using existing tools (detailed below). However, simulation of a large dataset is prohibitively slow so we used a random subsample of 100 isolates. Coalescent theory suggests that such a sample size captures much of the information of the full sample \citep{Nordborg:2004}, and simulation based checks on informativeness are detailed in the supplementary material. The selected isolates were confirmed to be \emph{C.~jejuni} using the PubMLST database and through a phylogenetic analysis of these isolates and a representative \emph{C.~coli} sequence. Three models were considered, with equal prior weights: \emph{Model 1} no growth; \emph{Model 2} growth for 50 years (since expansion of the New Zealand poultry industry); \emph{Model 3} growth for 170 years (since introduction of European livestock, primarily from Australia and the UK).
Each model has three biological parameters: a recombination rate, mean track length (i.e.~length of recombining DNA segment) and mutation rate. Models 2 and 3 also have a growth parameter. To aid interpretability we parameterised this as the relative increase in the effective population size during the period of growth. Prior information on parameters is summarised in Table \ref{tab:prior}. Mutation and recombination rates are given per kilobase per $2 N_e g$ years, where $N_e$ is the effective population size and $g$ the generation length in years. \cite{Wilson:2009} estimated the mean time to coalescence, $N_e g$, at $218$ with an interval estimate of $[155,288]$. To simplify our model, we fix $N_e g= 218$. We expect that variations of $N_e g$ within the quoted interval will not affect the detection of growth. Mean recombination length is in kilobase units. The relative growth parameter is unitless as it is a ratio of effective population sizes.
Growth priors are based on demographics of the principal host; poultry for model 2 and sheep/cattle for model 3. Rough estimates of host growth rates are used, based on the data of \cite{Binney:2013}, with variance increased to account for uncertainty of the link between bacterial and host demographics. Biological parameter priors are based on analysis of other \emph{C.~jejuni} data in \cite{Wilson:2009}. This assumed a no growth model, so these priors may not be appropriate for models 2 and 3. Sensitivity analysis detailed in the supplementary material also considers a much less informative biological prior.
\begin{table}[htp] \scriptsize
\begin{center}
\begin{tabular}{*{7}{c}}
&&&&& \multicolumn{2}{c}{Log normal} \\
Parameter & Units & Model & Point estimate & 95\% CI & Mean & Sd \\
\hline
Mutation rate & $kb^{-1} (2N_e g)^{-1}$ & All & 13.7 & $[8.1,23.2]$ & 2.62 & 0.27 \\
Recombination rate & $kb^{-1} (2N_e g)^{-1}$ & All & 1.31 & $[0.03, 51.5]$ & 0.27 & 1.87 \\
Mean track length & $kb$ & All & 4.52 & $[0.1, 209.9]$ & 1.51 & 1.96 \\
Relative growth & - & 2 & 4.06 & $[1.5,10.8]$ & 1.40 & 0.50 \\
Relative growth & - & 3 & 33.1 & $[2.9, 383.8]$ & 3.50 & 1.25
\end{tabular}
\caption{Details of the parameter priors used in Section \ref{sec:appl}. The prior is assumed to be the product of a log-normal prior for each individual parameter. The point estimates are geometric means. The recombination length prior was truncated below 1 base pair, and the recombination rate above $25\,kb^{-1} (2N_e g)^{-1}$, to avoid excessively slow simulations (all estimated posteriors for recombination rate were well below this; see Figure 2 of the supplementary material).}
\label{tab:prior}
\end{center}
\end{table}
\subsection{Methods}
Data sets were simulated using ms \citep{Hudson:2002} and seq-gen \citep{Rambaut:1997}. Genetic summaries required were calculated using R \citep{R:2012}, which was also used to code the inference algorithms.
We implemented semi-automatic ABC (Algorithm \ref{alg:semiauto2}) as follows. First a pilot analysis was performed using the ABC SMC algorithm of \cite{Toni:2010} (detailed in the supplementary material) with 1000 particles. This targeted log-transformed parameters, as on the original scale the target distribution is roughly log-normal and hard for the algorithm to explore. The summary statistics were a set of 15 genetic summaries (these, and other summaries used below, are listed in the supplementary material). The distance function was Equation \eqref{eq:scaled dist}, the Euclidean distance between normalised summary statistics, with standard deviations estimated from 100 datasets simulated from the prior predictive distribution. These simulations were also used to choose an initial ABC threshold: the median of the distances between these datasets and the observations. In subsequent SMC iterations, the threshold was the median of distances for accepted particles in the preceding step. The algorithm terminated after the iteration which reached $2 \times 10^4$ simulated data sets.
To fit summary statistics, $2 \times 10^4$ datasets were simulated using the training regions. Model choice summaries were fitted as described in Section \ref{sec:fit} and summaries for parameter inference by linear regression (detailed shortly). For all regressions the vector of covariates $f(\cdot)$ consisted of 3 cubic B-spline bases for each of 125 genetic summaries, giving a total of 375 covariates, and a constant term. Spline transformations were included to capture non-linear effects. Due to the large number of covariates, $L_1$ penalised versions of logistic and linear regression were used, using the `glmnet' R package \citep{Friedman:2010} with the tuning parameter chosen by cross-validation. Cross-validation estimates of fitting error were used to investigate which genetic summaries were most informative and to validate many of our modelling and tuning choices (details in supplementary material).
Exploratory analysis showed that for each parameter a single estimator could perform reasonably well under all models (details in supplementary material). To keep $\dim(S)$ small, our $S$ is the concatenation of such estimators with model choice statistics.
A single hypercube training region was used for all models to prevent behaviour of a particular model being overrepresented in any region of parameter space. This training region was the product of the parameter ranges from the entire pilot output, regardless of model. The regression responses were log-transformed parameters, supported by exploratory analysis of Box-Cox transformations. The resulting predictors were exponentiated to use in $S$. Regressions for biological parameters were fitted using the simulations from all models, while those for the demographic parameter used simulations from the growth models only.
The final $S$ vector used in the main ABC analysis consisted of four parameter estimators and three statistics for model choice. The analysis used the distance function \eqref{eq:scaled dist} with summary statistic standard deviations estimated from the training data. The analysis used the same SMC ABC algorithm as the pilot run, again with 1000 particles and targeting log-transformed parameters. The initial threshold was the median of distances to the observed data calculated from the training data, with subsequent thresholds chosen as in the pilot run. The algorithm terminated after the iteration which reached $4 \times 10^4$ simulated data sets.
\subsection{Results}
Table \ref{tab:results} summarises the model choice results for the pilot and main analyses, including the effect of regression post-processing as in \cite{Beaumont:2008}. They agree in putting the majority of the weight on model 1, the no growth model. Effective sample sizes \citep{Liu:1996} show that Monte Carlo error is approximately equal to that of a moderately large independent sample.
The supplementary material details sensitivity analyses which vary the parameter priors and the subsample of isolates used as observations. With the exception of some pilot analyses, the weight placed on model 1 remains in the range $80-100\%$. ABC analyses of simulated datasets are also described in the supplementary material. Although only a small number were possible due to the high computational cost, the results suggest that the analyses are capable of distinguishing the no-growth from the growth models, with the main analysis doing so more accurately.
\begin{table}[htp]
\begin{center}
\begin{tabular}{ccc|ccc}
Analysis & ESS & Post-processed? & Model 1 & Model 2 & Model 3 \\
\hline
\multirow{2}{*}{Pilot} & \multirow{2}{*}{348} & No & 0.86 & 0.11 & 0.04 \\
&& Yes & 1.00 & 0.00 & 0.00 \\
\multirow{2}{*}{Main} & \multirow{2}{*}{600} & No & 0.96 & 0.03 & 0.01 \\
&& Yes & 0.92 & 0.03 & 0.05
\end{tabular}
\caption{Estimated posterior probabilities and effective sample sizes
from ABC analyses on \emph{Campylobacter} data.}
\label{tab:results}
\end{center}
\end{table}
Table \ref{tab:param_post} summarises the parameter inference results. Marginal density plots are provided in the supplementary material. The table includes results from applying the regression adjustment of \cite{Beaumont:2002} to model 1 output. This was not applied to other models as there were too few accepted particles to expect it to be stable. The most notable finding is the low estimate of recombination rate, discussed further in Section \ref{sec:campy discussion}. Additionally, informative estimates are made for mutation rate and relative growth. The latter concentrates on low values, providing further evidence against significant growth. Sensitivity analyses detailed in the supplementary material support these qualitative findings, although the numerical values are less robust than those for model choice.
\begin{table}[htp] \scriptsize
\begin{center}
\begin{tabular}{cc|cccc}
&& Recombination rate & Mean track length & Mutation rate & Relative growth \\
&& $kb^{-1} (2N_e g)^{-1}$ & $kb$ & $kb^{-1} (2N_e g)^{-1}$ & \\
\hline
\multirow{3}{*}{Prior} & Model 1 & 1.31 [0.03, 51.5] & 4.52 [0.1, 209.9] & 13.7 [8.1, 23.2] & \\
& Model 2 & 1.31 [0.03, 51.5] & 4.52 [0.1, 209.9] & 13.7 [8.1, 23.2] & 4.06 [1.5, 10.8] \\
& Model 3 & 1.31 [0.03, 51.5] & 4.52 [0.1, 209.9] & 13.7 [8.1, 23.2] & 33.1 [2.9, 383.8] \\
\hline
\multirow{4}{*}{Pilot} & Model 1 & 0.34 [0.02, 5.21] & 2.43 [0.06, 88.2] & 11.4 [7.63, 16.7] & \\
& Model 1 (adjusted) & 0.18 [0.02, 1.87] & 1.04 [0.05, 24.8] & 12.8 [10.2, 16.7] & \\
& Model 2 & 0.28 [0.02, 2.45] & 1.99 [0.09, 24.1] & 12.6 [8.76, 17.2] & 2.12 [1.07, 3.07] \\
& Model 3 & 0.17 [0.01, 0.78] & 1.58 [0.08, 94.9] & 12.2 [8.80, 15.1] & 4.81 [0.97, 19.0] \\
\hline
\multirow{4}{*}{Main} & Model 1 & 0.55 [0.02, 3.74] & 5.81 [0.17, 239.2] & 12.9 [10.1, 16.5] & \\
& Model 1 (adjusted) & 0.22 [0.02, 1.18] & 2.98 [0.22, 63.2] & 13.0 [10.6, 15.9] & \\
& Model 2 & 0.24 [0.01, 3.53] & 5.73 [0.52, 239] & 14.0 [11.6, 16.5] & 1.51 [0.85, 2.71] \\
& Model 3 & 0.34 [0.01, 3.37] & 3.08 [0.40, 128] & 12.6 [9.81, 16.4] & 1.12 [0.41, 2.44]
\end{tabular}
\caption{Parameter point estimates (geometric means) and 95\% credible intervals from prior and ABC analyses on \emph{Campylobacter} data.}
\label{tab:param_post}
\end{center}
\end{table}
The regression and ABC results were also used to find which genetic summaries were particularly informative, and to show that some aspects of the data fitted poorly under any model. These results are given in the supplementary material, and can inform future modelling and analyses.
\section{Discussion} \label{sec:discussion}
\subsection{ABC Methodology}
It is often desirable to perform model choice and parameter inference using the same simulations. Our methodology focuses on producing $S$ appropriate for model choice only. Section \ref{sec:appl} contains an application-specific example of adding a small number of further summaries to $S$ which are informative for parameter inference. General purpose methods to choose such low dimensional summaries would be useful. However, often each model may require separate summaries, so that a choice of $S$ suitable for model choice and parameter inference would be high dimensional. An alternative strategy is to develop ABC methods in which comparisons of simulated and observed data do not always use the same summaries. A simple approach would be to perform separate rejection sampling analyses for model choice and for parameter inference under each model. A possible alternative is an MCMC algorithm which moves between models using only summaries relevant to the model(s) involved in the current step.
There are numerous alternatives to logistic regression to fit summary statistics for model choice, such as linear discriminant analysis \citep{Estoup:2012} and a comparison of their performance within ABC may be interesting.
Other parts of our semi-automatic method could also be varied.
For example, our choice of $S$ is a vector of one-to-one transformations of Bayes factors, and other transformations may perform differently.
Also, other methods could produce a more accurate training region, such as fitting a flexible model to the pilot output.
For simplicity we have used relatively simple ABC algorithms. However, much progress is being made in improving algorithmic efficiency, especially of ABC SMC \citep[e.g.][]{DelMoral:2011}. Our work is complementary to this and it could be used with many such improved algorithms. Indeed ABC SMC algorithms can also be modified to incorporate semi-automatic ABC. For example, recall that in Section \ref{sec:examples} the training data were reused as the simulations needed for ABC rejection sampling. As suggested by \cite{Barnes:2012}, in ABC SMC they could be similarly reused for the first SMC iteration.
\subsection{\emph{Campylobacter} application} \label{sec:campy discussion}
Our main finding is support for a model with no change in the effective population size of \emph{C.~jejuni}. This is surprising over a period where its ecological niche has greatly increased. Analysis in the supplementary material shows some features of the data are poorly fitted under all models, suggesting that more detailed demographic structure is necessary to fit the data well. One potential modification is
subpopulation structure amongst the hosts, which might reveal that only some support growing \emph{C.~jejuni} populations.
Our analysis also produced parameter estimates. Those for mutation rate and mean length of recombination tract are comparable to those from other work. The point estimates of recombination rate are somewhat smaller than those of \cite{Wilson:2009}, who performed a similar ABC analysis on a different dataset. Furthermore our credible intervals are much narrower, and exclude the estimates of \cite{Fearnhead:2005}, \cite{Biggs:2011} and \cite{Yu:2012}, who find recombination and mutation rates to be of the same order of magnitude. The discrepancy with \cite{Wilson:2009} is conceivably due to their use of a heavy tailed prior or ABC tuning differences such as choice of threshold. The others suggest differences in the model or data used. For example, as discussed by \cite{Yu:2012}, their analysis, and that of \cite{Biggs:2011}, is for closely related sequences, and may reveal a high level of recombination that is then removed by purifying selection.
\paragraph{Acknowledgements}
The authors acknowledge the Marsden Fund project 08-MAU-099 (Cows, starlings and \emph{Campylobacter} in New Zealand: unifying phylogeny, genealogy, and epidemiology to gain insight into pathogen evolution) for funding this project. This publication made use of the \emph{Campylobacter} Multi Locus Sequence Typing website (http://pubmlst.org/campylobacter/) developed by Keith Jolley and sited at the University of Oxford (Jolley and Maiden 2010, BMC Bioinformatics, 11:595). The development of this site has been funded by the Wellcome Trust.
\section*{Appendix: Proof of Theorem \ref{thm:suff}}
Bayes sufficiency of $S(x)$ for $\mathcal{M}$ is equivalent to the following being true for all $i$ and $p_M$, and almost any $x$,
\begin{equation}
\Pr(\mathcal{M}_i|S(x)) = \Pr(\mathcal{M}_i|x). \label{eq:suff}
\end{equation}
For convenience we introduce $\bm{p}=(p_M(\mathcal{M}_i))_{1 \leq i \leq M}$ to represent the prior mass function. Also, let $\bm{1}$ be a vector of $M$ components equal to $1$.
First assume $S$ is Bayes sufficient for $\mathcal{M}$. Define $h_i(S(x), \bm{p}) = \Pr_{\bm{p}}(\mathcal{M}_i | S(x))$ (i.e.~the conditional probability under prior $\bm{p}$) and note $h_i(S(x), \bm{p}) = \Pr_{\bm{p}}(\mathcal{M}_i | x)$. The required function is $g(S(x)) = (h_i(S(x), M^{-1} \bm{1}))_{1 \leq i \leq M-1}$.
It remains to prove Bayes sufficiency for $\mathcal{M}$ given a function $g$ of the form described in the theorem. Henceforth we consider only the case $\bm{p} = M^{-1} \bm{1}$, since in this case \eqref{eq:suff} is equivalent to $\Pr(x|\mathcal{M}_i) = k \Pr(S(x)|\mathcal{M}_i)$ for some constant $k$, and applying Bayes' theorem to this proves \eqref{eq:suff} for general $\bm{p}$. It also suffices to show that \eqref{eq:suff} holds for all $i<M$; the case $i=M$ follows as probabilities sum to 1. Fix some $i<M$
and define an indicator variable $Y = \mathbb{I}[\mathcal{M}=\mathcal{M}_i]$. Then $T_i(x) = \Pr(\mathcal{M}_i|x) = E[Y | x]$ and $\Pr(\mathcal{M}_i|S(x)) = E[Y | S(x)]$. To prove \eqref{eq:suff}, we will show that these conditional expectations are almost always equal. Standard properties of conditional expectation give $E[Y | S(x)] = E[E\{Y|x\} | S(x)] = E[T_i(x) | S(x)]$. Finally, $E[T_i(x) | S(x)] = E[g_i(S(x)) | S(x)] = g_i(S(x)) = T_i(x) = E[Y|x]$ for almost all $x$ as required, where $g_i(\cdot)$ represents the $i$th component of the $g(\cdot)$ function.
\label{sec:acknowledgement}
We would like to thank Zafiirah Hosenie for insightful discussions on the Machine Learning model building and training process. We also thank colleagues Boris Leistedt and George Kyriacou for providing feedback and discussing this work during a presentation. A.M. is supported financially by the Imperial College President's scholarship.
\section*{Data Availability}
\label{sec:code_availability}
The code, written in Python, is publicly available on Github at \href{https://github.com/Harry45/emuPK}{https://github.com/Harry45/emuPK} and the documentation is maintained at \href{https://emupk.readthedocs.io/}{https://emupk.readthedocs.io/}. The trained surrogate models are also made available as part of this distribution. One can also follow the instructions in the documentation to train their own emulator based on the desired configurations.
\section{Conclusions}
\label{sec:conclusions}
\begin{figure*}
\centering
\subfloat[The EE power spectrum, $C_{\ell,ij}^{\tm{EE}}$]{{\includegraphics[width=0.30\textwidth]{Figures/cl_ee_gp_class.pdf}}}
\qquad
\subfloat[The II power spectrum, $C_{\ell,ij}^{\tm{II}}$]{{\includegraphics[width=0.30\textwidth]{Figures/cl_ii_gp_class.pdf}}}
\qquad
\subfloat[The GI power spectrum, $C_{\ell,ij}^{\tm{GI}}$]{{\includegraphics[width=0.30\textwidth]{Figures/cl_gi_gp_class.pdf}}}
\caption{\label{fig:ee_ii_gi_gp_class}The left, centre and right panels show the different weak lensing power spectra as calculated by the emulator (broken curves) and the accurate model, CLASS, shown by the solid curves. The different power spectra within each panel correspond to the auto- and cross- power spectra, due to the 2 tomographic redshift distributions in Figure \ref{fig:nz_dist}, hence leading to the 00, 10, and 11 power spectra. These power spectra are then combined, via the intrinsic alignment parameter $A_{\tm{IA}}$, to construct a final model, $C_{\ell,ij}^{\tm{tot}}$, in a weak lensing analysis. See Equation \ref{eq:cl_tot}.}
\end{figure*}
In this paper, we have proposed an emulator for the 3D matter power spectrum as calculated by CLASS across a wide range of cosmological parameters (see Table \ref{tab:prior_range}). The detailed methodology presented in this work entails a multifaceted view of the 3D power spectrum, which is an essential quantity in a weak lensing analysis. In particular, we have successfully demonstrated that as part of this routine, we can compute the linear matter power spectrum at a reference redshift $z_{0}$, the non-linear 3D matter power spectrum with and without the baryon feedback model described in \S\ref{sec:model}, gradients of the 3D matter power spectrum with respect to the input parameters and the different auto- and cross- weak lensing power spectra (EE, GI and II) derived from $P_{\delta}^{\tm{bary}}(k,z)$ and the given tomographic redshift distributions, $n_{i}(z)$. Note that the gradients of the weak lensing power spectra are also straightforward to calculate using the distributive property of gradients (see Equation \ref{eq:wl_general} for a general form for the different weak lensing power spectra), since only $P_{\delta}^{\tm{bary}}(k,z)$ is a function of the cosmological parameters.
The default emulator is built using 1000 training points only and because the mean of the surrogate model is just a linear predictor, the mean function is very quick to compute. In the same spirit, the first and second derivatives involve only element-wise matrix multiplication, and are therefore quick to compute. In the test cases, a full 3D matter power spectrum calculation takes 0.1 seconds compared to an average value of 30 seconds when CLASS is used. While the goal remains to have an emulating method which is faster than the computer model, it is also worth pointing out that it is quite accurate, following the diagnostics we have performed in this work; see Figure \ref{fig:delta_p} as an example. The emulator can be made more accurate and precise as we add more and more training points, but this comes at the expense of an $\mc{O}(N^{3})$ cost at each optimisation step during the training phase. Fortunately, in this work, 1000 training points suffice to yield promising and robust power spectra.
Building an emulator for the 3D power spectrum is deemed to be a challenging task \citep{2020PhRvD.102f3504K}, the main difficulties arising due to the fact that GP models cannot easily handle large datasets ($\sim 10^4$ training points) and it is not trivial to work with vector-valued functions, for example, $P_{\delta}(k,z;\bs{\theta})$ as in this work. Also, techniques such as multi-outputs GP result in large matrices, hence a major computational challenge. Fortunately, the method presented in this work, along with the projection method explained in \S\ref{sec:training_points}, provides a simple and straightforward path towards building emulators.
Moreover, current weak lensing data do not constrain the cosmological parameters to a high precision, hence motivating us to distribute 1000 training points across a large parameter space, according to the current prior distributions (hypercube) used in the literature. In future weak lensing surveys, with improved precision on the parameters, one can choose to use, for example, a multi-dimensional Gaussian prior (hypersphere) which will certainly have a much smaller volume compared to the hypercube used in this work. If we stick with 1000 training points, this will lead to very precise power spectra, or we can also opt to distribute fewer than 1000 training points across the parameter space. Fewer training points also imply that training the emulator will be faster.
The different aspects of the emulation scheme proposed in this work can easily pave their way into different cosmological data analysis problems. A nice example is an analysis combining the MOPED data compression algorithm \citep{2000MNRAS.317..965H}, the emulated 3D matter power spectrum and the $n(z)$ uncertainty in a weak lensing analysis. Moreover, if we want to use a more sophisticated sampler such as Hamiltonian Monte Carlo (HMC), one can leverage the gradients from the emulator to derive an expression for the gradient of the negative log-likelihood (the potential energy function in an HMC scheme) with respect to the input cosmological parameters, under the assumption that such an analytic derivation is possible. Furthermore, the second derivatives can be used in a Fisher Matrix analysis, or the first and second derivatives can be used together in an approximate inference scheme based on Taylor expansion techniques, see for example, the recent work by \cite{2019MNRAS.490.4237L}. In addition, similar concepts behind this work can be extended to build emulators for $P_{\delta}(k,z)$ from N-body simulations.
\section{Gradients}
\label{sec:gradient}
\begin{figure*}
\noindent \begin{centering}
\includegraphics[width=0.9\textwidth]{Figures/all_gradients.pdf}
\par\end{centering}
\caption{\label{fig:gradients}Gradients with respect to the input cosmologies. $\bs{\theta}$ corresponds to the following cosmological parameters: $\bs{\theta}=(\Omega_{\tm{cdm}}h^{2},\,\Omega_{\tm{b}}h^{2},\,\tm{ln}(10^{10}A_{s}),\,n_{s},\,h)$. Note that since we are emulating the 3D power spectrum, the gradient is also a 3D quantity. In this figure, we are showing the predicted function with the GP model in broken blue and the accurate gradient function calculated with CLASS in solid red, at a fixed redshift.}
\end{figure*}
An important by-product from the trained model is the gradient of the emulated function with respect to the input parameters. This can be of paramount importance if we are using a sophisticated Monte Carlo sampling scheme such as Hamiltonian Monte Carlo (HMC) to infer cosmological parameters in a Bayesian analysis. The gradients of the log-likelihood with respect to the cosmological parameters are important in such a sampling scheme. Hence, with some linear algebra and using the gradients of the power spectra generated with the emulator, the desired gradients can be derived. The analytical gradient of the mean function with respect to the inputs, at a fixed redshift and wavenumber, is
\begin{equation}
\label{eq:gp_grad}
\dfrac{\partial\bar{y}_{*}}{\partial\bs{\theta}_{*}} = \dfrac{\partial\bs{\Phi}_{*}}{\partial\bs{\theta}_{*}}\hat{\bs{\beta}} + \left[\bs{k}_{*}\odot\sans{Z}_{*}\bs{\Omega}^{-1}\right]^{\tm{T}}\sans{K}_{y}^{-1}(\bs{y} - \bs{\Phi}\hat{\bs{\beta}})
\end{equation}
\noindent where $\odot$ refers to element-wise multiplication (Hadamard product). $\sans{Z}_{*}\in\bb{R}^{N\times d}$ corresponds to the pairwise difference between the test point, $\bs{\theta}_{*}$ and the training points, that is, $\sans{Z}_{*}=\left[\bs{\theta}_{1}-\bs{\theta}_{*},\,\bs{\theta}_{2}-\bs{\theta}_{*}\ldots\bs{\theta}_{N}-\bs{\theta}_{*}\right]^{\tm{T}}$. Importantly, as seen from equation \ref{eq:gp_grad}, the gradient is the sum of the gradients corresponding to the parametric part and the residual, which is modelled by a kernel. Moreover, higher order derivatives can also be calculated analytically. For example, the second order auto- and cross- derivatives are
\begin{equation}
\dfrac{\partial^{2}\bar{y}_{*}}{\partial\bs{\theta}^{2}_{*}}=\dfrac{\partial^{2}\bs{\Phi}_{*}}{\partial\bs{\theta}^{2}_{*}}\bs{\hat{\beta}}+\left[\bs{\Omega}^{-1}\dfrac{\partial \bs{k}_{*}}{\partial\bs{\theta}_{*}}\sans{Z}_{*}-\bs{\Omega}^{-1}\odot\bs{k}_{*}\right]\sans{K}_{y}^{-1}(\bs{y} - \bs{\Phi}\hat{\bs{\beta}}).
\end{equation}
\noindent As a result of this procedure, one can analytically calculate the first and second derivatives of an emulated function using kernel methods. While the first derivatives are particularly useful in HMC sampling method, the second derivatives are more relevant in the calculation of, for example, the Fisher information matrix.
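For illustration, the following is a minimal numpy sketch of the predictive mean and its first derivative, assuming an RBF kernel with diagonal $\bs{\Omega}$ and omitting the parametric term $\bs{\Phi}_{*}\hat{\bs{\beta}}$ for brevity; \texttt{alpha} stores the pre-computed vector $\sans{K}_{y}^{-1}(\bs{y} - \bs{\Phi}\hat{\bs{\beta}})$.
\begin{verbatim}
import numpy as np

def gp_mean_and_grad(theta_star, train_theta, alpha, amp, ell):
    """GP part of the predictive mean and its gradient with
    respect to the test point theta_star. train_theta has shape
    (N, d); ell is the vector of kernel lengthscales, so that
    Omega = diag(ell**2)."""
    Z = train_theta - theta_star           # pairwise differences
    omega_inv = 1.0 / ell ** 2             # diagonal of Omega^{-1}
    k_star = amp * np.exp(-0.5 * np.sum(Z ** 2 * omega_inv, axis=1))
    mean = k_star @ alpha
    grad = (k_star[:, None] * (Z * omega_inv)).T @ alpha
    return mean, grad
\end{verbatim}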
Once the gradients of each emulated component ($D$, $q$ and $P_{\textrm{lin}}$) with respect to the input parameters are available, the first and second derivatives of the non-linear matter power spectrum, $P_{\delta}=D(1+q)P_{\textrm{lin}}$, follow from the product rule and are given by:
\begin{equation}
\dfrac{\partial P_{\delta}}{\partial\boldsymbol{\theta}}=\dfrac{\partial D}{\partial\boldsymbol{\theta}}(1+q)P_{\textrm{lin}}+D\dfrac{\partial q}{\partial\boldsymbol{\theta}}P_{\textrm{lin}}+D(1+q)\dfrac{\partial P_{\textrm{lin}}}{\partial\boldsymbol{\theta}}
\end{equation}
\noindent and
\begin{equation}
\begin{split}
\dfrac{\partial^{2} P_{\delta}}{\partial\boldsymbol{\theta}^{2}} &= \dfrac{\partial^{2} D}{\partial \boldsymbol{\theta}^{2}}(1+q)P_{\textrm{lin}}+D\dfrac{\partial^{2} q}{\partial\boldsymbol{\theta}^{2}}P_{\textrm{lin}}+D(1+q)\dfrac{\partial^{2}P_{\textrm{lin}}}{\partial\boldsymbol{\theta}^{2}}\\
&+2\dfrac{\partial D}{\partial \boldsymbol{\theta}}\dfrac{\partial q}{\partial \boldsymbol{\theta}}P_{\textrm{lin}}+2\dfrac{\partial D}{\partial\boldsymbol{\theta}}(1+q)\dfrac{\partial P_{\textrm{lin}}}{\partial\boldsymbol{\theta}}+2D\dfrac{\partial q}{\partial\boldsymbol{\theta}}\dfrac{\partial P_{\textrm{lin}}}{\partial \boldsymbol{\theta}}.
\end{split}
\end{equation}
Once $\sans{K}_{y}^{-1}(\bs{y} - \bs{\Phi}\hat{\bs{\beta}})$ is pre-computed (after learning the kernel hyperparameters, $\bs{\nu}$) and stored, the first and second derivatives can be computed very quickly. In the case of finite difference methods, if a poor step size is specified, numerical derivatives can become unstable. This is not the case in this framework. In Figure \ref{fig:gradients}, we show the first derivatives with respect to the input cosmological parameters, $\bs{\theta}=(\Omega_{\tm{cdm}}h^{2},\,\Omega_{\tm{b}}h^{2},\,\tm{ln}(10^{10}A_{s}),\,n_{s},\,h)$. The first derivatives with CLASS (in red) are calculated using the central finite difference method.
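Given the emulated components and their gradients at a fixed $(k,z)$, the first derivative of the non-linear spectrum is assembled directly from the product rule above; a schematic sketch:
\begin{verbatim}
def grad_p_delta(D, q, p_lin, dD, dq, dp_lin):
    """First derivative of P_delta = D (1 + q) P_lin with respect
    to the cosmological parameters; dD, dq and dp_lin are the
    gradient vectors returned by the emulator."""
    return dD * (1 + q) * p_lin + D * dq * p_lin + D * (1 + q) * dp_lin
\end{verbatim}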
\section{Introduction}
\label{sec:introduction}
The 3D matter power spectrum, $P_{\delta}(k,z)$ is a key quantity which underpins most cosmological data analysis, such as galaxy clustering, weak lensing, 21 cm cosmology and various others. Crucially, the calculation of other (derived) power spectra can be fast if $P_{\delta}(k,z)$ is pre-computed. In practice, the latter is the most expensive component and can be calculated either using Boltzmann solvers such as CLASS or CAMB, or via simulations, which can be computationally expensive depending on the resolution of the experiments.
For the past three decades or so, with the advent of better computational facilities, various techniques have been progressively devised and applied to deal with inference in cosmology. In brief, some of these techniques include Monte Carlo (MC) sampling, variational inference, the Laplace approximation and, more recently, new approaches such as density estimation \citep{2018MNRAS.477.2874A, 2019MNRAS.488.4440A, 2019MNRAS.488.5093A}, which make use of tools like the Expectation-Maximisation (EM) algorithm and neural networks (NNs). Recently, \cite{2018PhRvD..97h3004C} designed the information maximizing neural networks (IMNNs) to learn nonlinear functionals of data that maximize Fisher information. In this paper, we explore another branch of Machine Learning (ML) which deals with kernel techniques.
\begin{figure*}
\centering
\subfloat[3D Matter Power Spectrum, $P_{\delta}(k,z)$]{{\includegraphics[width=0.45\textwidth]{Figures/3D_pk.pdf}}}
\qquad
\subfloat[The function $1+q(k,z)$]{{\includegraphics[width=0.45\textwidth]{Figures/3D_qf.pdf}}}
\caption{\label{fig:3d_pk_and_q}The left panel shows the 3D matter power spectrum at a fixed input cosmology to CLASS for $k\in[5\times 10^{-4}, 50]$ and $z\in[0.0, 4.66]$. The grid shows the region where we choose to model the function, that is, 40 wavenumbers, equally spaced in logarithmic scale, and 20 redshifts, equally spaced in linear scale. The right panel shows the corresponding function $1+q(k,z)$, which encodes the non-linear contribution to the power spectrum.}
\end{figure*}
ML techniques such as these are steadily paving their way into various weak lensing (WL) analyses. Indeed, in the analysis of the cosmic microwave background (CMB), \cite{2007ApJ...654....2F} designed the Parameters for the Impatient Cosmologist (PICO) algorithm for interpolating CMB power spectra at test points in the parameter space. In the same spirit, \cite{2007MNRAS.376L..11A} built a neural network algorithm, which they refer to as CosmoNet, for interpolating CMB power spectra. Neural networks have been used in other applications as well, for example, in simulations. \cite{2012MNRAS.424.1409A, 2014MNRAS.439.2102A} used neural networks for interpolating the non-linear matter power spectrum based on 6 cosmological parameters, while \cite{2018MNRAS.475.1213S} used neural networks for emulating the 21cm power spectrum in the context of the epoch of reionisation. In the context of weak lensing analysis, \cite{2020MNRAS.491.2655M} used neural networks for accelerating cosmological parameter inference by combining cosmic shear, galaxy clustering, and tangential shear. While we were finishing this work, the works of \cite{2021arXiv210414568A} and \cite{2021arXiv210501081H}, both related to emulating the matter power spectrum, appeared on arXiv.
On the other hand, Gaussian Processes have been used by the Coyote Universe collaboration \citep{2007PhRvD..76h3503H, 2009ApJ...705..156H, 2010ApJ...715..104H, 2014ApJ...780..111H, 2010ApJ...713.1322L} for emulating the matter power spectrum from large-scale simulations. Recently, \cite{2018PhRvD..98f3511L} used Gaussian Processes in the context of likelihood-free inference, where the data (training points) are augmented in an iterative fashion via Bayesian Optimisation; hence the procedure is referred to as Bayesian Optimisation for Likelihood-Free Inference, BOLFI \citep{JMLR:v17:15-017}. Each emulating scheme has its own pros and cons (we defer to \S\ref{sec:software} for a short discussion on the advantages and possible limitations of Gaussian Processes).
Different emulating schemes have been designed for the matter power spectrum and most of them are based on combining Singular Value Decomposition (SVD) and Gaussian Processes. The emulator from \cite{2007PhRvD..76h3503H} is among the first in the context of large simulations. Emulating $P_{\delta}(k,z)$ is not a trivial task because it is a function of three inputs: $k$, the wavenumber; $z$, the redshift; and $\bs{\theta}$, the cosmological parameters. Neural networks seem to be the obvious choice because they can deal with multiple outputs, but they generally require a large number of training points.
Our contributions in this work are threefold. First, we address the point that we do not always need to assume a zero-mean Gaussian Process model for performing emulation; in other words, one can also include some additional basis functions prior to defining the kernel matrix. This can be useful if we already have an approximate model of our function. Moreover, if we know how a particular function behaves, one can adopt a stringent prior on the regression coefficients for the parametric model, hence allowing us to encode our degree of belief about that specific parametric model. Second, the fact that we use a Radial Basis Function (RBF) kernel, which is infinitely differentiable, enables us to estimate the first and second derivatives of the 3D matter power spectrum. The derived expressions for the derivatives also indicate that there is only element-wise matrix multiplication and no matrix inverse to compute, which makes the gradient calculations very fast. Finally, with the approach that we adopt, we show that the emulator can output various key power spectra, namely, the linear matter power spectrum at a reference redshift $z_{0}$ and the non-linear 3D matter power spectrum with/without an analytic baryon feedback model. Moreover, using the emulated 3D power spectrum and the tomographic redshift distributions, we also show that the weak lensing and intrinsic alignment (II and GI) power spectra can be generated very quickly using existing numerical techniques.
\begin{figure*}
\centering
\subfloat[The growth factor, $D(z)$]{{\includegraphics[width=0.40\textwidth]{Figures/gf.pdf}}}
\qquad
\subfloat[The linear and non-linear matter power spectrum]{{\includegraphics[width=0.40\textwidth]{Figures/p_lin_nl.pdf}}}
\caption{\label{fig:gf_pk_lin}The left panel shows the growth factor as a function of redshift. To generate the training set, the growth factor is calculated at 20 redshifts, equally spaced in linear scale, and the linear matter power spectrum, $P_{\tm{lin}}(k,z_{0})$, is calculated at 40 different wavenumbers, $k$, equally spaced in logarithmic scale (red scatter points in both panels). We also show the non-linear matter power spectrum in (b). These functions are evaluated at different cosmological parameters to build a training set.}
\end{figure*}
In \cite{2020MNRAS.497.2213M}, we found that using the mean of the GP and ignoring the error always results in better posterior densities. This is a known feature when GPs emulate a deterministic function \citep{doi:10.1198/TECH.2009.08019}. As a result, we work only with the mean of the GP in all experiments. Importantly, we use 1000 training points and, once the emulator is trained and stored, it takes about 0.1 seconds to generate the non-linear 3D matter power spectrum, compared to CLASS, which takes about 30 seconds to generate an accurate and smooth power spectrum, assuming the Limber approximation. Hence, the method presented in this paper also opens a new avenue towards building emulators for large-scale simulations, where a single high-resolution forward simulation might take minutes to compute.
The paper is organised as follows: in \S\ref{sec:model}, we describe the 3D power spectrum, which can be decomposed in different components, and the analytic baryon feedback model, which can be used in conjunction with $P_{\delta}(k,z)$. In \S\ref{sec:procedures} and \S\ref{sec:gradient}, we provide a mathematical description for calculating multiple important quantities for the emulator, for example, making predictions at test points, learning the kernel hyperparameters and computing derivatives. In \S\ref{sec:wl}, using a pair of toy $n(z)$ tomographic redshift distributions, we show how the emulator can be used to generate different weak lensing power spectra and in \S\ref{sec:software}, we describe briefly the different functionalities that the code supports and we highlight the main results in \S\ref{sec:results}. Finally, we conclude in \S\ref{sec:conclusions}.
\section{Model}
\label{sec:model}
In this section, we describe the model which we want to emulate. Central to the calculation is the 3D matter power spectrum, $P_{\delta}(k,z;\bs{\theta})$, where $\bs{\theta}$ refers to a vector of cosmological parameters. In what follows, we will drop the $\bs{\theta}$ vector notation for clarity. The matter power spectrum is generally the most expensive part to calculate, especially if one chooses to use large-scale simulations to generate it. In the simplest case, one can just emulate $P_{\delta}(k,z)$, but we consider a different approach, which enables us to include baryon feedback, to calculate the linear matter power spectrum at a reference redshift and to calculate the non-linear 3D matter power spectrum itself.
Baryon feedback is one of the astrophysical systematics which is included in a weak lensing analysis. This process is not very well understood, but it is believed to modify the matter distribution at small scales, hence resulting in a suppression of the matter power spectrum at large wavenumbers (equivalently, large multipoles in the lensing power spectrum). In general, large hydrodynamical simulations provide a proxy to model baryon feedback. In particular, it is quantified via a bias function, $b^{2}(k,z)$, such that the resulting modified 3D matter power spectrum can be written as
\begin{equation}
P_{\delta}^{\tm{bary}}(k,z)=b^{2}(k,z)P_{\delta}(k,z),
\end{equation}
\noindent where $P_{\delta}^{\tm{bary}}(k,z)$ and $P_{\delta}(k,z)$ are the 3D matter power spectra, including and excluding baryon feedback respectively. The bias function is modelled by the fitting formula
\begin{equation}
b^{2}(k,z)=1-A_{\tm{bary}}\left[A_{z}e^{(B_{z}x-C_{z})^{3}}-D_{z}xe^{E_{z}x}\right]\;,
\end{equation}
\noindent where $A_{\tm{bary}}$ is a flexible nuisance parameter and we allow it to vary over the range $A_{\tm{bary}}\in[0.0,2.0]$. The quantity $x=\tm{log}_{10}(k\;[\tm{Mpc}^{-1}])$, and $A_{z}$, $B_{z}$, $C_{z}$, $D_{z}$ and $E_{z}$ depend on the redshift and other constants. See \cite{2015MNRAS.450.1212H} for details and functional forms. Note that setting $A_{\tm{bary}}=0$ implies no baryon feedback. Moreover, since we have a functional form for the baryon feedback model, which is not expensive to compute, we will apply it as a bolt-on function on top of the emulated non-linear 3D matter power spectrum.
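As an illustration, a short Python sketch of the bias function at a single redshift is given below. The redshift-dependent coefficients $A_{z},\ldots,E_{z}$ are passed in as pre-computed numbers, since their functional forms (given in \cite{2015MNRAS.450.1212H}) are not reproduced here; the function name and interface are illustrative.
\begin{verbatim}
import numpy as np

def baryon_feedback_bias(k, coeffs, a_bary=1.0):
    """b^2(k) at one redshift: 1 - A_bary [A e^{(Bx-C)^3} - D x e^{Ex}].

    k      : wavenumbers in Mpc^{-1}
    coeffs : dict with the redshift-dependent constants A, B, C, D, E
    a_bary : free amplitude in [0, 2]; 0 switches baryon feedback off
    """
    x = np.log10(k)
    A, B, C, D, E = (coeffs[key] for key in "ABCDE")
    return 1.0 - a_bary * (A * np.exp((B * x - C)**3)
                           - D * x * np.exp(E * x))
\end{verbatim}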
Next, we consider the non-linear 3D matter power spectrum without baryon feedback. It can be decomposed into three components as follows:
\begin{equation}
P_{\delta}(k,z)=D(z)[1+q(k,z)]P_{\tm{lin}}(k,z_{0})
\end{equation}
\noindent where $D(z)$ is the growth factor, $q(k,z)$ is a 2D function (in terms of $k$ and $z$) representing the non-linear contribution to the power spectrum and $P_{\tm{lin}}(k,z_{0})$ is the linear matter power spectrum at fixed redshift $z_{0}$. See Figures \ref{fig:3d_pk_and_q} and \ref{fig:gf_pk_lin} for an illustration of the decomposition of the 3D matter power spectrum at fixed cosmological parameters. Emulating the three different components separately has the advantage of calculating the linear matter power spectrum at the reference redshift for any given input cosmology.
Following current weak lensing analyses, we define some bounds on the redshifts, $z$, and wavenumbers, $k$. For example, the maximum redshift in the tomographic weak lensing analysis performed by \cite{2017MNRAS.471.4412K} is $\sim 5$ and the maximum wavenumber is set to $50\,h\,\tm{Mpc}^{-1}$. With these numbers in mind, we choose $z\in[0.0,\,5.0]$ and $k\in[5\times 10^{-4},\,50]$. We will elaborate more on these settings in the sections which follow. For the cosmological parameters, we assume the following ranges to generate the training set:
\begin{table}[H]
\footnotesize
\caption{\label{tab:prior_range}Default parameter prior range inputs to the emulator}
\renewcommand\arraystretch{1.5}
\noindent \begin{centering}
\begin{tabularx}{0.45\textwidth} {
| >{\hsize=0.3\textwidth}X
| >{\hsize=0.1\textwidth}X | }
\hline
\textbf{Description} & \textbf{Range}\tabularnewline
\hline
CDM density, $\Omega_{\tm{cdm}}h^{2}$ & $[0.06,\,0.40]$\tabularnewline
Baryon density, $\Omega_{\tm{b}}h^{2}$ & $[0.019,\,0.026]$\tabularnewline
Scalar spectrum amplitude, $\tm{ln}(10^{10}A_{s})$ & $[1.70,\,5.0]$\tabularnewline
Scalar spectral index, $n_{s}$ & $[0.7,\,1.3]$\tabularnewline
Hubble parameter, $h$ & $[0.64,\,0.82]$\tabularnewline
\hline
\end{tabularx}
\par\end{centering}
\end{table}
Current weak lensing analyses also assume a fixed sum of neutrino masses, $\Sigma m_{\nu}$. Hence, in all experiments, $\Sigma m_{\nu}=0.06\,\tm{eV}$. This quantity can be fixed by the user prior to running all experiments with the pipeline we have developed; alternatively, it can be treated as a varying parameter before building the emulator.
\section{Procedures}
\label{sec:procedures}
In the existing likelihood code from \cite{2017MNRAS.471.4412K}, the accurate solver, CLASS, is queried at 39 wavenumbers $k$ and 72 redshifts $z$, corresponding to the centres of each tophat in the $n(z)$ distribution, and a standard spline interpolation is carried out along the $k$ axis. Following a similar approach, we choose to have a model of $P_{\delta}(k,z)$ at 40 values of $k$, equally spaced on a logarithmic grid, and 20 values of redshift, equally spaced in linear scale from 0 to 4.66 (the maximum redshift in the KiDS-450 analysis), and we can perform a standard 2D interpolation, such as spline interpolation, along $k$ and $z$. See Figures \ref{fig:3d_pk_and_q} and \ref{fig:gf_pk_lin} for an illustration.
In this section, we will walk through the steps to build a model for the 3D matter power spectrum. It is organised as follows: in \S\ref{sec:training_points}, we discuss how the input training points are generated, which is crucial for the emulator to work with a reasonable number of training points. In \S\ref{sec:polynomial_regression}, we cover briefly the standard approach of emulating functions via polynomial regression and in \S\ref{sec:model_residuals}, we elaborate on how we can model the residuals, that is, the discrepancy between the actual function and the assumed polynomial function.
We denote the response (or target), that is, the function we want to model, as $y$. In this particular case, we have three different components, namely the growth factor, $D(z)$, the $q(k,z)$ function and the linear matter power spectrum, $P_{\tm{lin}}(k,z_{0})$. We assume we have run the simulator, CLASS, at $N$ design points, $\bs{\theta}$, such that we have a training set, $\{\bs{\theta}, \bs{y}_{i}\}$, where the index $i$ labels the $i^{\tm{th}}$ response. Note that in our application, we model each function independently with the emulating scheme proposed below.
\subsection{Training Points}
\label{sec:training_points}
An important ingredient in designing a robust emulator lies in generating the input training points. Points which are drawn randomly and uniformly from the pre-defined range (see Table \ref{tab:prior_range}) do not exhibit a space-filling property. As a result, in many regions of the high-dimensional space, the distance between any two points can be very large and the emulator will lack information about its neighbourhood. Hence, the prediction can be very poor in these regions.
\begin{figure}[H]
\noindent \begin{centering}
\includegraphics[width=0.4\textwidth]{Figures/illustrate_2d.pdf}
\par\end{centering}
\caption{\label{fig:lhs_illutrate}An example of a Latin Hypercube (LH) design in two dimensions.
5 LH points are drawn randomly using the \texttt{maximin} procedure and each point occupies a single cell, that is, each row and each column contains exactly one point. This procedure remains exactly the same when we generate LH samples from a hypercube.}
\end{figure}
To circumvent these issues, the natural choice is to instead generate Latin Hypercube (LH) samples, which demonstrate a nice space-filling property as shown in Figure \ref{fig:lhs_illutrate}. The idea behind a LH design is that a point will always occupy a single cell. For example, if we consider the design shown in Figure \ref{fig:lhs_illutrate}, each column and row contains precisely one training point (in 2D). Similarly, in a 3D case, each row, column and layer will have one training point and this extends to higher dimensions. Intuitively, for a 2D design, this is analogous to the problem of positioning $n$ rooks on an $n\times n$ chessboard such that they do not attack each other. This ensures that the LH points generated cover the parameter space as much as possible, hence enabling the emulator to predict the targets at test points. \texttt{emuPK} can also be trained on a different set of training points, for example, one which has been generated using a different LH sampling scheme.
In this application, we use the \texttt{LHS} package, available in \texttt{R} to generate the design points. Whilst many different functions are available to generate the LH samples, we choose the \texttt{maximinLHS} procedure, which maximises the minimum distance between the LH points. If we have a set of design points $(x_{i},y_{i})$ where $x_{i}\neq x_{j}$, $y_{i}\neq y_{j}$ and $i\neq j$, in the case of a maximin LH design, for a certain distance measure, $d$, the separation distance, $\tm{min}_{i\neq j}\;d[(x_{i},y_{i}), (x_{j},y_{j})]$ is maximal \citep{van2007maximin}.
The design points generated by the \texttt{maximinLHS} procedure lie between 0 and 1 and are hence scaled according to the range over which we want to distribute them. For example, if $\theta_{\tm{min}}$ and $\theta_{\tm{max}}$ are the minimum and maximum of a particular parameter, the LH points are scaled as $\theta=\theta_{\tm{min}}+r(\theta_{\tm{max}}-\theta_{\tm{min}})$, where $r$ is the LH design point. Alternatively, if we want them to follow a specific distribution, for example, a Gaussian distribution, one can just use the inverse cumulative distribution function to transform the LH points.
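Although we use the \texttt{R} package in practice, an equivalent design can be sketched in Python with \texttt{scipy.stats.qmc} (available from \texttt{scipy} 1.7 onwards); the seed and variable names below are illustrative.
\begin{verbatim}
from scipy.stats import qmc

# Prior ranges from Table 1:
# (Omega_cdm h^2, Omega_b h^2, ln(10^10 A_s), n_s, h)
l_bounds = [0.06, 0.019, 1.70, 0.7, 0.64]
u_bounds = [0.40, 0.026, 5.00, 1.3, 0.82]

sampler = qmc.LatinHypercube(d=5, seed=0)
unit_samples = sampler.random(n=1000)   # LH points in [0, 1]^5
# theta = theta_min + r (theta_max - theta_min)
thetas = qmc.scale(unit_samples, l_bounds, u_bounds)
\end{verbatim}
A maximin-like design can be approximated by generating several such designs and retaining the one with the largest minimum pairwise distance.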
\begin{table}[H]
\footnotesize
\caption{\label{tab:notations}Symbols and notations with corresponding meanings}
\renewcommand\arraystretch{1.5}
\noindent \begin{centering}
\begin{tabularx}{0.45\textwidth} {
| >{\hsize=0.06\textwidth}X
| >{\hsize=0.34\textwidth}X | }
\hline
\textbf{Symbol} & \textbf{Meaning}\tabularnewline
\hline
$N$ & Number of training points\tabularnewline
$m$ & Number of basis functions\tabularnewline
$\bs{y}$ & Target of size $N$\tabularnewline
$\bs{\theta}$ & Inputs to the emulator\tabularnewline
$\bs{\beta}$ & Regression coefficients of size $m$\tabularnewline
$\bs{f}$ & Deterministic error component of size $N$ of the model\tabularnewline
$\bs{\Phi}$ & Design matrix of size $N\times m$\tabularnewline
$\sans{K}$ & Kernel matrix of size $N\times N$\tabularnewline
$\sans{C}$ & Prior covariance matrix of $\bs{\beta}$ of size $m\times m$\tabularnewline
$\bs{\mu}$ & Prior mean of $\bs{\beta}$ of size $m$\tabularnewline
$\sans{D}$ & $\sans{D}=\left[\bs{\Phi},\,\bb{I}\right]$ is a new design matrix of size $N\times (m+N)$\tabularnewline
$\bs{\alpha}$ & $\bs{\alpha}=\left[\bs{\beta},\,\bs{f}\right]^{\tm{T}}$ is a vector of size $m+N$\tabularnewline
$\sans{R}$ & Prior covariance matrix of size $(m+N)\times(m+N)$\tabularnewline
$\bs{\gamma}$ & $\bs{\gamma}=\left[\bs{\mu},\,\bs{0}\right]^{\tm{T}}$ prior mean of size $m+N$\tabularnewline
$\bs{\Sigma}$ & Noise covariance matrix of size $N\times N$\tabularnewline
$d$ & Dimension of the problem\tabularnewline
$\bs{\nu}$ & Kernel hyperparameters \tabularnewline
\hline
\end{tabularx}
\par\end{centering}
\end{table}
As discussed by \cite{2007ApJ...654....2F}, we also want to ensure that there is roughly an equal variation in the power spectrum when we take a step in any direction in parameter space. This condition can be met by pre-whitening the input parameters prior to building the emulator, as follows: the training points are first centred on 0, that is, $\bs{\theta}'\rightarrow \bs{\theta}-\bar{\bs{\theta}}$; the covariance, $\sans{M}$, of this modified training set is computed and Cholesky-decomposed as $\sans{M}=\sans{L}\sans{L}^\tm{T}$; and the whitened inputs, $\tilde{\bs{\theta}}$, are obtained from $\bs{\theta}'=\sans{L}\tilde{\bs{\theta}}$, so that $\tilde{\bs{\theta}}$ has a covariance matrix equal to the identity.
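A minimal sketch of this pre-whitening step is:
\begin{verbatim}
import numpy as np

def prewhiten(thetas):
    """Centre the inputs and transform them to unit covariance."""
    centred = thetas - thetas.mean(axis=0)   # theta' = theta - mean
    M = np.cov(centred, rowvar=False)        # covariance of theta'
    L = np.linalg.cholesky(M)                # M = L L^T
    # Solve L theta_tilde = theta' so cov(theta_tilde) = identity
    theta_tilde = np.linalg.solve(L, centred.T).T
    return theta_tilde, L
\end{verbatim}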
Once we have our training set, our goal is to learn the functional relationship between the function $\bs{y}$ (we have dropped the index $i$ but the same steps apply to the other functions) and the inputs $\bs{\theta}$. In other words, we model the data (simulations), $\bs{y}$, as
\begin{equation}
\bs{y} = \bs{h}(\bs{\theta}) + \bs{\epsilon}
\end{equation}
\noindent where $\bs{h}$ is the underlying assumed model. The output of CLASS at the training points would often be called ``data'' in a ML context. Conceptually, this fitting procedure is analogous to many parameter inference tasks in cosmology, where $\bs{y}$ would be a set of data from observations, for example, a set of band powers, and $\bs{h}$ would be a $\Lambda\tm{CDM}$ model.
\subsection{Polynomial Regression}
\label{sec:polynomial_regression}
In our application, $\bs{h}$ is a deterministic function, but its functional (parametric) form might be unknown to us. A straightforward approach is to assume a polynomial approximation to the data, that is,
\begin{equation}
\label{eq:polynomial_model}
\bs{y} = \bs{\Phi}\bs{\beta} + \bs{\epsilon},
\end{equation}
\noindent where $\bs{\Phi}$ is a design matrix, whose columns contain the basis functions $[1, \bs{\theta}_{1},\ldots\bs{\theta}_{p}^{n}]$ and $n$ is the order of the polynomial. $\bs{\beta}$ is a vector of regression coefficients (also referred to as weights) and $\bs{\epsilon}$ is the noise vector and $\tm{cov}(\bs{\epsilon})=\bs{\Sigma}$. Using Bayes' theorem, the full posterior distribution of the weights is
\begin{equation}
p(\bs{\beta}\left|\bs{y}\right.)=\dfrac{p(\bs{y}\left|\bs{\beta}\right.)p(\bs{\beta})}{p(\bs{y})}.
\end{equation}
\noindent $p(\bs{\beta}\left|\bs{y}\right.)$ is the posterior distribution of $\bs{\beta}$, $p(\bs{y}\left|\bs{\beta}\right.)$ is the likelihood of the data, $p(\bs{\beta})$ is the prior for $\bs{\beta}$ and $p(\bs{y})$ is the marginal likelihood (Bayesian evidence) which does not depend on $\bs{\beta}$. In what follows, the notation $\mc{N}(\bs{x}\left|\right.\bs{\mu},\sans{C})$ denotes a multivariate normal distribution with mean $\bs{\mu}$ and covariance $\sans{C}$.
Assuming a Gaussian likelihood for the data, $\mc{N}(\bs{y}\left|\right.\bs{\Phi}\bs{\beta},\bs{\Sigma})$ and a Gaussian prior for the weights, $\mc{N}(\bs{\beta}\left|\right.\bs{\mu},\sans{C})$, the posterior distribution of $\bs{\beta}$ is another Gaussian distribution, $\mc{N}(\bs{\beta}\left|\right.\bar{\bs{\beta}},\bs{\Lambda})$ with mean and covariance given respectively by
\begin{equation}
\begin{split}
\label{eq:post_poly}
\bar{\bs{\beta}}&=\bs{\Lambda}(\bs{\Phi}^{\tm{T}}\bs{\Sigma}^{-1}\bs{y}+\sans{C}^{-1}\bs{\mu})\\
\bs{\Lambda}&=(\sans{C}^{-1}+\bs{\Phi}^{\tm{T}}\bs{\Sigma}^{-1}\bs{\Phi})^{-1}.
\end{split}
\end{equation}
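The posterior mean and covariance of the weights can be computed directly from equation (\ref{eq:post_poly}); the sketch below uses explicit inverses for readability, although in practice Cholesky solves are preferable.
\begin{verbatim}
import numpy as np

def posterior_weights(Phi, y, Sigma, C, mu):
    """Posterior mean and covariance of the regression coefficients."""
    Sigma_inv = np.linalg.inv(Sigma)
    C_inv = np.linalg.inv(C)
    Lambda = np.linalg.inv(C_inv + Phi.T @ Sigma_inv @ Phi)
    beta_bar = Lambda @ (Phi.T @ Sigma_inv @ y + C_inv @ mu)
    return beta_bar, Lambda
\end{verbatim}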
In general, we are also interested in learning the (posterior) predictive distribution at a given test point $\bs{\theta}_{*}$, that is, $p(y_{*}\left|\right.\bs{y},\bs{\theta}_{*})$ and this is another Gaussian distribution,
\begin{equation}
p(y_{*}\left|\right.\bs{y},\bs{\theta}_{*}) = \mc{N}(y_{*}\left|\right. \bs{\Phi}_{*}\bar{\bs{\beta}},\sigma^{2}_{*}+\bs{\Phi}_{*}\bs{\Lambda}\bs{\Phi}_{*}^{\tm{T}})
\end{equation}
\noindent where $y_{*}$ is the predicted function and $\bs{\Phi}_{*}$ is the set of basis functions evaluated at the test point. For noise-free regression, the noise variance $\sigma^{2}_{*}\approx 0$ and the predictive uncertainty is dominated by the term $\bs{\Phi}_{*}\bs{\Lambda}\bs{\Phi}_{*}^{\tm{T}}$. Moreover, in practice, the noise level at the test point is rarely known, and the predictive uncertainty is hence approximated by $\bs{\Phi}_{*}\bs{\Lambda}\bs{\Phi}_{*}^{\tm{T}}$ alone.
On the other hand, we are also interested in understanding the model, that is, the number of basis functions we would need to fit the data. An important quantity is the marginal likelihood which penalises model complexity \citep{1996ApJ...471...24J, doi:10.1080/00107510802066753}. In this case, this quantity can be analytically derived and is given by
\begin{equation}
p(\bs{y})=\mc{N}(\bs{y}\left|\right.\bs{\Phi}\bs{\mu},\,\bs{\Sigma}+\bs{\Phi}\sans{C}\bs{\Phi}^{\tm{T}}).
\end{equation}
\noindent Note that this quantity is independent of $\bs{\beta}$ and is an integral of the numerator with respect to \textit{all} the variables (in our case $\bs{\beta}$), that is,
\begin{equation}
p(\bs{y})=\int p(\bs{y}\left|\bs{\beta}\right.)p(\bs{\beta})\; \tm{d}\bs{\beta}.
\end{equation}
\noindent To this end, one can compute the Bayesian evidence for a series of (polynomial) models and choose the model which yields the maximum Bayesian evidence \citep{2006PhRvD..74b3503K}.
\subsection{Modelling the residuals}
\label{sec:model_residuals}
The above formalism works well in various cases, but (1) polynomial model fitting is generally a \textit{global} fitting approach, (2) there exists a large number of choices for the basis functions, and (3) the functional relationship between the data and the model might be a very complicated function. In this section, we therefore propose a Bayesian technique which models the residuals, that is, the difference between our proposed polynomial approximation and the underlying model. We will re-write equation (\ref{eq:polynomial_model}) as
\begin{equation}
\label{eq:parametric_gp}
\bs{y} = \bs{\Phi}\bs{\beta} + \bs{f} + \bs{\epsilon},
\end{equation}
\noindent where $\bs{f} = \bs{h} - \bs{\Phi}\bs{\beta}$ is the deterministic error component of the model \citep{10.1093/biomet/62.1.79}. Under the assumption that we have modelled $\bs{y}$ as much as we can with the polynomial model, it is fair to make an a priori assumption for the distribution of $\bs{f}$. In function space, points which are close to each other will have similar values of $f$, and as we move further away from a given design point, it is expected that the degree of similarity will decrease. In other words, the correlation between $f(\bs{\theta}_{i})$ and $f(\bs{\theta}_{j})$ decreases monotonically as the distance between $\bs{\theta}_{i}$ and $\bs{\theta}_{j}$ increases. This prior knowledge can be encapsulated by using a covariance (kernel) function such as the Gaussian function, that is,
\begin{equation}
\tm{cov}(f_{i},f_{j}) = \lambda^{2}\tm{exp}\left[-\dfrac{1}{2}(\bs{\theta}_{i}-\bs{\theta}_{j})^{\tm{T}}\bs{\Omega}^{-1}(\bs{\theta}_{i}-\bs{\theta}_{j})\right],
\end{equation}
\noindent where $\bs{\Omega}=\tm{diag}(\omega^{2}_{1}\ldots\omega^{2}_{d})$ and $\omega^{2}_{i}$ is the characteristic lengthscale for each dimension. $\bs{\nu}=\{\lambda,\omega_{1},\ldots\omega_{d}\}$ is the set of hyperparameters for this kernel. In the same spirit, the full prior distribution for $\bs{f}$ is a multivariate normal distribution, that is,
\begin{equation}
p(\bs{f})=\mc{N}(\bs{f}\left|\right.\bs{0},\sans{K})
\end{equation}
\noindent where the kernel matrix has elements $k_{ij}\equiv\tm{cov}(f_{i},f_{j})$. At this point, we will assume that the hyperparameters are fixed but we will later consider learning them via optimisation.
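For reference, the kernel matrix can be built with a few lines of Python; the vectorised layout below is an illustrative sketch.
\begin{verbatim}
import numpy as np

def rbf_kernel(X1, X2, lambda2, omega2):
    """Gaussian (RBF) kernel with diagonal Omega = diag(omega2)."""
    diff = X1[:, None, :] - X2[None, :, :]   # (N1, N2, d)
    sq = np.sum(diff**2 / omega2, axis=-1)   # squared scaled distance
    return lambda2 * np.exp(-0.5 * sq)

# K = rbf_kernel(thetas, thetas, lambda2, omega2) is the prior
# covariance of f at the training inputs.
\end{verbatim}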
\subsubsection{Inference}
Now that we have a model for the data (training set), we seek the full posterior distribution of the variables $\bs{\beta}$ and $\bs{f}$. We assume a Gaussian prior for $\bs{\beta}$, that is, $p(\bs{\beta})=\mc{N}(\bs{\beta}\left|\right.\bs{\mu},\sans{C})$. Using Bayes' theorem, the posterior distribution of $\bs{\beta}$ and $\bs{f}$ is
\begin{equation}
p(\bs{\beta},\bs{f}\left|\right.\bs{y})=\dfrac{p(\bs{y}\left|\right.\bs{\beta},\bs{f})p(\bs{\beta},\,\bs{f})}{p(\bs{y})}.
\end{equation}
\noindent To simplify the derivation, we will rewrite equation (\ref{eq:parametric_gp}) as
\begin{equation}
\bs{y} = \sans{D}\bs{\alpha} + \bs{\epsilon},
\end{equation}
\noindent where $\sans{D}=[\bs{\Phi},\bb{I}]$ is an augmented, new design matrix, consisting of the existing design matrix $\bs{\Phi}\in\bb{R}^{N\times m}$ and the identity matrix, $\bb{I}$, of size $N\times N$. $\bs{\alpha} = [\bs{\beta},\bs{f}]^{\tm{T}}$ is now a vector of length $N+m$, consisting of both $\bs{\beta}$ and $\bs{f}$. The sampling distribution of $\bs{y}$ is a Gaussian distribution, $\mc{N}(\bs{y}\left|\right.\sans{D}\bs{\alpha}, \bs{\Sigma})$. We can rewrite the full prior distribution of both sets of parameters, $\bs{\beta}$ and $\bs{f}$, as $\mc{N}(\bs{\alpha}\left|\right.\bs{\gamma},\sans{R})$, where
\begin{equation*}
\bs{\gamma}=\left[\begin{array}{c}
\bs{\mu}\\
\bs{0}
\end{array}\right]\;\;\tm{and}\;\;\sans{R}=\left[\begin{array}{cc}
\sans{C} & \sans{0}\\
\sans{0} & \sans{K}
\end{array}\right]
\end{equation*}
Using a similar approach as in the previous section, the posterior of $\bs{\alpha}$ is another Gaussian distribution, that is,
\begin{equation}
p(\bs{\alpha}\left|\right.\bs{y})=\mc{N}(\bs{\alpha}\left|\right.\sans{A}^{-1}\bs{b}, \sans{A}^{-1}),
\end{equation}
\noindent where $\sans{A}=\sans{D}^{\tm{T}}\bs{\Sigma}^{-1}\sans{D} + \sans{R}^{-1}$ and $\bs{b}=\sans{D}^{\tm{T}}\bs{\Sigma}^{-1}\bs{y}+\sans{R}^{-1}\bs{\gamma}$. The covariance of $\bs{\beta}$ and $\bs{f}$ are given respectively by:
\begin{equation}
\label{eq:gp_cov}
\sans{V}_{\bs{\beta}} = \left[\bs{\Phi}^{\tm{T}}\left(\sans{K}+\bs{\Sigma}\right)^{-1}\bs{\Phi}+\sans{C}^{-1}\right]^{-1}
\end{equation}
\noindent and
\begin{equation}
\label{eq:cov_f}
\sans{V}_{\bs{f}} = \left[\sans{K}^{-1} + (\bs{\Sigma}+\bs{\Phi}\sans{C}\bs{\Phi}^{\tm{T}})^{-1}\right]^{-1}
\end{equation}
\noindent Moreover, the posterior mean for $\bs{\beta}$ and $\bs{f}$ can be derived and are given respectively by
\begin{equation}
\label{eq:gp_post_beta}
\hat{\bs{\beta}} = \sans{V}_{\bs{\beta}}\left[\bs{\Phi}^{\tm{T}}\left(\sans{K}+\bs{\Sigma}\right)^{-1}\bs{y} + \sans{C}^{-1}\bs{\mu}\right]
\end{equation}
\noindent and
\begin{equation}
\label{eq:gp_post_f}
\hat{\bs{f}} = \sans{V}_{f}\bs{\Sigma}^{-1}\left[\bs{y} - \bs{\Phi}\bar{\bs{\beta}}\right]
\end{equation}
\noindent Recall that $\bar{\bs{\beta}}$ is the posterior mean of $\bs{\beta}$ when we use the polynomial model only. There are also some useful remarks and sanity checks which we can make from equations (\ref{eq:gp_cov}), (\ref{eq:gp_post_beta}) and (\ref{eq:gp_post_f}). In equation (\ref{eq:gp_cov}), for the covariance of $\bs{\beta}$, if we had ignored the other variables $\bs{f}$, in other words, in the absence of the kernel matrix, $\sans{K}$, we recover the posterior covariance for $\bs{\beta}$ when we use a polynomial model only. A similar argument applies to equation (\ref{eq:gp_post_beta}), in which case we also recover the posterior mean of $\bs{\beta}$ in the polynomial model. Equation (\ref{eq:gp_post_f}) has a nice interpretation: the posterior mean of $\bs{f}$ is a linear combination of the residuals, $\bs{y}-\bs{\Phi}\bar{\bs{\beta}}$.
\subsubsection{Prediction}
Now that we have the full posterior distribution of the variables, another key ingredient is learning the predictive distribution at a given test point, $\bs{\theta}_{*}$. The joint distribution of the data and the function at the test point can be written as
\begin{equation}
\left[\begin{array}{c}
\bs{y}\\
y_{*}
\end{array}\right]\sim\mc{N}\left(\left[\begin{array}{c}
\bs{\Phi}\bs{\beta}\\
\bs{\Phi}_{*}\bs{\beta}
\end{array}\right],\,\left[\begin{array}{cc}
\sans{K}+\bs{\Sigma} & \bs{k}_{*}\\
\bs{k}_{*}^{\tm{T}} & k_{**}+\sigma_{*}^{2}
\end{array}\right]\right)
\end{equation}
\noindent where $\bs{k}_{*}$ is a vector, whose elements are given by calculating the kernel function between each training point and the test point, $\bs{\theta}_{*}$. Similarly, $k_{**}$ is just the kernel function evaluated at the test point only. The conditional distribution of $y_{*}$ is a Gaussian distribution
\begin{equation}
p(y_{*}\left|\right.\bs{y}, \bs{\theta}_{*})=\mc{N}(y_{*}\left|\right.\bar{y}_{*},\tm{var}(y_{*}))
\end{equation}
\noindent where $\bar{y}_{*}$ and $\tm{var}(y_{*})$ are the mean and variance given respectively by
\begin{equation}
\label{eq:prediction}
\begin{split}
\bar{y}_{*} &= \sans{X}_{*}\hat{\bs{\beta}} + f_{*}\\
\tm{var}(y_{*}) &= \sans{X}_{*}\sans{V}_{\beta}\sans{X}_{*}^{\tm{T}} + k_{**} + \sigma_{*}^{2} -\bs{k}_{*}^{\tm{T}}\sans{K}_{y}^{-1} \bs{k}_{*}
\end{split}
\end{equation}
\noindent and we have defined $\sans{K}_{y}=\sans{K}+\bs{\Sigma}$, $\sans{X}_{*}=\bs{\Phi}_{*}-\bs{k}_{*}^{\tm{T}}\sans{K}_{y}^{-1}\bs{\Phi}$ and $f_{*} = \bs{k}_{*}^{\tm{T}}\sans{K}_{y}^{-1}\bs{y}$. This is another interesting result because, if we did not have the parametric polynomial model, the prediction would correspond to that of a zero-mean Gaussian Process (GP) \citep{2006gpml.book.....R}. In our application, once we predict the three components, $D(z)$, $q(k,z)$ and $P_{\tm{lin}}(k,z_{0})$, at a test point $\bs{\theta}_{*}$, the 3D power spectrum can easily be calculated using
\begin{equation}
P_{\delta}(k_{*},z_{*};\,\bs{\theta}_{*})=D(z_{*};\,\bs{\theta}_{*})[1+q(k_{*},z_{*};\,\bs{\theta}_{*})]P_{\tm{lin}}(k_{*},z_{0};\,\bs{\theta}_{*})
\end{equation}
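In code, this recomposition is a single broadcast multiplication once the three predictive means are available; the \texttt{.mean()} interface below is a hypothetical placeholder for the trained emulators.
\begin{verbatim}
def predict_pk(theta_star, emu_D, emu_q, emu_plin):
    """Recompose P_delta from the three emulated components."""
    D = emu_D.mean(theta_star)        # (N_z,) growth factor
    q = emu_q.mean(theta_star)        # (N_k, N_z) non-linear part
    plin = emu_plin.mean(theta_star)  # (N_k,) linear P(k, z0)
    return D[None, :] * (1.0 + q) * plin[:, None]   # (N_k, N_z)
\end{verbatim}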
Until now, we have assumed a fixed set of kernel hyperparameters. In the next section, we will explain how we can learn them via optimisation.
\subsubsection{Kernel Hyperparameters}
An important quantity in learning the kernel hyperparameters is the marginal likelihood (Bayesian evidence), which is obtained by marginalising over all the variables $\bs{\alpha}$ and is given by
\begin{equation}
p(\bs{y}) = \int p(\bs{y}\left|\right.\bs{\alpha})p(\bs{\alpha})\,\tm{d}\bs{\alpha}.
\end{equation}
\noindent Fortunately, the above integration is a convolution of two multivariate normal distributions, $\mc{N}(\bs{y}\left|\sans{D}\bs{\alpha},\,\bs{\Sigma}\right.)$ and $\mc{N}(\bs{\alpha}\left|\bs{\gamma},\,\sans{R}\right.)$ and hence can be calculated analytically, that is,
\begin{equation}
p(\bs{y}) = \mc{N}(\bs{y}\left|\right.\bs{\Phi}\bs{\mu},\,\sans{K}_{y}+\bs{\Phi}\sans{C}\bs{\Phi}^{\tm{T}})
\end{equation}
\noindent and the log-marginal likelihood is
\begin{equation}
\label{eq:marginal_likelihood}
\begin{split}
\tm{log}\,p(\bs{y}) &= -\dfrac{1}{2}(\bs{y}-\bs{\Phi}\bs{\mu})^{\tm{T}}(\sans{K}_{y}+\bs{\Phi}\sans{C}\bs{\Phi}^{\tm{T}})^{-1}(\bs{y}-\bs{\Phi}\bs{\mu})\\
& -\dfrac{1}{2}\tm{log}\left|\sans{K}_{y}+\bs{\Phi}\sans{C}\bs{\Phi}^{\tm{T}}\right| + \tm{constant}.
\end{split}
\end{equation}
The first term in equation (\ref{eq:marginal_likelihood}) encourages the fit to the data while the second term (the determinant term) controls the model complexity. Recall that the kernel matrix, $\sans{K}$ is a function of the hyperparameters $\bs{\nu}=\{\lambda,\omega_{1},\ldots\omega_{d}\}$. We want to maximise the marginal likelihood with respect to the kernel hyperparameters and this step is equivalent to minimising the cost, that is, the negative log-marginal likelihood. In other words,
\begin{equation}
\bs{\nu}_{\tm{opt}} = \underset{\bs{\nu}}{\tm{arg min}}\,J(\bs{\nu})
\end{equation}
\noindent where we have defined $J(\bs{\nu})\equiv -2\tm{log}\,p(\bs{y})$. An important ingredient for the optimisation to perform well is the gradient of the cost with respect to the kernel hyperparameters, which is given by
\begin{equation}
\dfrac{\partial J(\bs{\nu})}{\partial \bs{\nu}_{i}} = \tm{tr}\left[\left((\sans{K}_{y}+\bs{\Phi}\sans{C}\bs{\Phi}^{\tm{T}})^{-1}-\bs{\eta}\bs{\eta}^{\tm{T}}\right)\dfrac{\partial\sans{K}}{\partial\bs{\nu}_{i}}\right],
\end{equation}
\noindent where $\bs{\eta}=(\sans{K}_{y}+\bs{\Phi}\sans{C}\bs{\Phi}^{\tm{T}})^{-1}(\bs{y}-\bs{\Phi}\bs{\mu})$. There are a few computational aspects which we should consider when implementing this method. In particular, a single predictive variance calculation (see equation (\ref{eq:prediction})) requires an $\mc{O}(N^{2})$ operation, whereas training (that is, learning the kernel hyperparameters) requires an $\mc{O}(N^{3})$ operation. On the other hand, the mean is quick to compute since it involves an $\mc{O}(N)$ operation.
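Putting the pieces together, the hyperparameters can be learned with a standard optimiser. The sketch below, which reuses the \texttt{rbf\_kernel} helper from above, works with $\bs{\nu}$ in log-space and adds a small jitter for numerical stability, is a minimal illustration rather than our full pipeline.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def cost(nu, thetas, y, Phi, C, mu, jitter=1e-10):
    """J(nu) = -2 log p(y) up to a constant; nu = log(lambda, omegas)."""
    lambda2, omega2 = np.exp(2 * nu[0]), np.exp(2 * nu[1:])
    K = rbf_kernel(thetas, thetas, lambda2, omega2)
    A = K + jitter * np.eye(len(y)) + Phi @ C @ Phi.T
    L = np.linalg.cholesky(A)          # A = L L^T
    r = np.linalg.solve(L, y - Phi @ mu)
    # quadratic term + log-determinant term
    return r @ r + 2.0 * np.sum(np.log(np.diag(L)))

# res = minimize(cost, nu0, args=(thetas, y, Phi, C, mu))
\end{verbatim}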
\section{Results}
\label{sec:results}
In this section, we highlight the main results, starting from the calculation of the 3D matter power spectrum to the calculation of the different weak lensing power spectra.
In Figure \ref{fig:gradients}, we show the gradient at a fixed cosmological parameter (test point) and fixed redshift, $z=0$. The red curve corresponds to the gradients as calculated by CLASS using the central difference method and the blue curves show the gradients output from the emulator. In particular, this gradient is strictly a 3D quantity, as a function of the wavenumber, $k$, the redshift, $z$, and the cosmological parameters, $\bs{\theta}$. In other words, the gradient calculation from the emulator will be a tensor of size $(N_{k},\,N_{z},\,N_{p})$, where $N_{k}$ is the number of wavenumbers for $k\in[5\times 10^{-4},\,50]$, $N_{z}$ is the number of redshifts for $z\in[0.0,\,4.66]$ and $N_{p}$ is the number of parameters considered. In this case, $N_{p}=5$ and the default values for a finer grid in $k$ and $z$ are $N_{k}=1000$ and $N_{z}=100$.
In Figure \ref{fig:gf_gp_class}, we show the growth factor, $D(z)$ calculated using CLASS (in orange) and the emulator (in blue), while in Figure \ref{fig:pk_nl_gp_class}, we show three important quantities. First, since we are emulating the 3 different components of the non-linear matter power spectrum, we are able to compute the linear matter power spectrum at a test point, at the reference redshift, $z_{0}=0$. Note that the one calculated by CLASS and the one by the emulator agree quite well. Similarly, we can also calculate the 3D non-linear matter power spectrum and in Figure \ref{fig:pk_nl_gp_class}, in orange and blue, we have the power spectrum at a fixed redshift, excluding baryon feedback, calculated using CLASS and the emulator respectively. The same is repeated for the curves in purple and brown, but in this case including baryon feedback. As discussed in \S\ref{sec:model}, we can also see the effect of baryon feedback which alters the power spectrum at large $k$.
Various techniques have been proposed by \cite{doi:10.1198/TECH.2009.08019} to assess the performance of an emulator. These diagnostics are generally based on comparisons between the emulator and simulator runs for new test points in the input parameter space. These test points should cover the input parameter space over which the training points were previously generated. In this application, we randomly choose 100 independent test points from the prior range and evaluate the simulator and the emulator at these points. Since we are emulating the 3D matter power spectrum, we can also generate it on a finer grid, unlike the previous setup where we used 40 wavenumbers and 20 redshifts. Hence, we generate all the power spectra for 1000 wavenumbers, equally spaced in logarithmic scale, $k\in [5\times 10^{-4},\,50]$, and 100 redshifts, $z\in[0.0,\,4.66]$, equally spaced in linear scale. For the 100 test points, this gives us a set of $10^{4}$ power spectra. We define the fractional uncertainty as
\begin{equation}
\dfrac{\Delta P_{\delta}}{P_{\delta}}=\dfrac{P_{\delta}^{\tm{emu}} - P_{\delta}}{P_{\delta}}
\end{equation}
\noindent and, given the set of power spectra we have generated, we compute the mean and variance of $\nicefrac{\Delta P_{\delta}}{P_{\delta}}$. For a robust emulator, the mean should be centred on zero and indeed, as seen from Figure \ref{fig:delta_p}, the mean is centred on 0. The variance, depicted by the $3\sigma$ confidence interval in pale blue, is also quite small.
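The diagnostic itself is straightforward to compute; with the emulated and CLASS spectra stored as arrays (array names and shapes below are illustrative), the mean and confidence band follow from
\begin{verbatim}
import numpy as np

# pk_emu, pk_class : arrays of shape (100 cosmologies, 100 redshifts,
# 1000 wavenumbers) holding the test spectra
frac = (pk_emu - pk_class) / pk_class
mean_frac = frac.mean(axis=(0, 1))   # mean fractional error per k
band = 3.0 * frac.std(axis=(0, 1))   # 3 sigma confidence band
\end{verbatim}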
Finally, in Figure \ref{fig:ee_ii_gi_gp_class}, we show the different types of weak lensing power spectra calculated using CLASS and the emulator. The left, middle and right panels show the auto- and cross- EE, II and GI power spectra due to the two tomographic bins shown in Figure \ref{fig:nz_dist}. In the three panels, the blue, orange and green curves correspond to the auto- and cross- power spectra, $C_{\ell,00}$, $C_{\ell,10}$ and $C_{\ell,11}$, as computed by CLASS. Similarly, the red, purple and brown broken curves are the power spectra generated by the emulator. The power spectra are in agreement when comparing CLASS and the emulator. Note that, in a typical weak lensing analysis, the three different types of power spectra (EE, GI and II) are combined together via the intrinsic alignment parameter, $A_{\tm{IA}}$ (see Equation \ref{eq:cl_tot}).
\begin{figure}[H]
\noindent \begin{centering}
\includegraphics[width=0.45\textwidth]{Figures/res_diff_cosmo.pdf}
\par\end{centering}
\caption{\label{fig:delta_p}To investigate the performance of the emulator, we draw an independent set of cosmological parameters randomly from the prior and calculate the fractional error between the power spectra predicted by the GP model and by CLASS. The mean of $\nicefrac{\Delta P_{\delta}}{P_{\delta}}$ is shown by the broken horizontal line and the $3\sigma$ confidence interval, derived from the standard deviations of all experiments, is shown in pale blue. For an accurate emulator, it is expected that the mean is centred on 0, and this demonstrates the robustness of this method. Note that in this procedure, one can also specify the number of desired power spectra for $z\in[0.0, 4.66]$. For example, for $p$ cosmological parameters and $n$ redshifts, we have $np$ power spectra outputs.}
\end{figure}
\section{Software}
\label{sec:software}
In this section, we briefly elaborate on how the code is set up and the different functionalities one can exploit. Note that any default values mentioned below can be adjusted according to the user's preferences. The default values of the minimum and maximum redshifts are set to 0 and 4.66 respectively and as discussed in \S\ref{sec:procedures}, we also assume 20 redshifts spaced equally in the linear scale. For the wavenumbers in units of $h\,\tm{Mpc}^{-1}$, the minimum is set to $5\times 10^{-4}$ and the maximum to 50, with 40 wavenumbers equally spaced in logarithmic scale. A fixed neutrino mass of 0.06~eV is assumed but this can also be fixed at some other value or it can also be included as part of the emulation strategy. The code supports either choice.
The next step involves generating the training points. We generate 1000 LH design points using the \texttt{maximinLHS} function and we calculate and record the three quantities: the growth factor, $D(z)$, the non-linear function, $q(k,z)$, and the linear matter power spectrum, $P_{\tm{lin}}(k,z_{0})$. At a very small value of $k$, which we refer to as $k_{\tm{min}}$, $q=0$; the non-linear contribution is only relevant for wavenumbers $k_{\tm{nl}}>k_{\tm{min}}$. Hence, the growth factor is just
\begin{equation}
D(z)=\dfrac{P_{\tm{lin}}(k_{\tm{min}},z)}{P_{\tm{lin}}(k_{\tm{min}}, z_{0})}.
\end{equation}
\begin{figure}[H]
\noindent \begin{centering}
\includegraphics[width=0.45\textwidth]{Figures/gf_gp_class.pdf}
\par\end{centering}
\caption{\label{fig:gf_gp_class}The growth factor, $D(z)$ as predicted by the surrogate model (in blue) at a test point in parameter space. The accurate function is also calculated using CLASS and is shown in orange. Recall that the emulator is constructed for $z\in [0.0, 4.66]$, aligned with current weak lensing surveys.}
\end{figure}
\noindent Throughout our analysis, we use $z_{0}=0$. In some regions of the parameter space, we also found that the $q(k,z)$ function was noisy; this can be alleviated by increasing the parameter \texttt{P\_k\_max\_h/Mpc} when running CLASS. If a small value is assumed, the interpolation in the high-dimensional space will not be robust. We set this value to 5000 to ensure the $q(k,z)$ function remains smooth as a function of the inputs. However, this procedure makes CLASS slower: it takes $\sim 30$ seconds on average to do one forward simulation. In our application, it took 520 minutes to generate the targets $(D,\,q,\,P_{\tm{lin}})$ for 1000 input cosmologies. We have also found that CLASS occasionally fails to compute the power spectrum, and this is resolved as follows: we allocate a time frame (60 seconds in this work) for CLASS to attempt to calculate the power spectrum and, if it fails, a small perturbation is added to the input training point parameters and we re-run CLASS until the power spectrum is successfully calculated. In the failing cases, no more than three attempts were required. Moreover, the code currently supports polynomial functions of order 1 and 2; that is, the set of basis functions for an order-2 polynomial is $[1,\,\bs{\theta},\,\bs{\theta}^{2}]$. For example, \cite{2011ApJ...728..137S} implemented first- and second-order polynomial functions to design an emulator for the CMB, while \cite{2007ApJ...654....2F} used a fourth-order polynomial function. In this case, recall that we are also marginalising over the residuals analytically by using the kernel function. Training the emulator, that is, learning the kernel hyperparameters for the different targets, took around 340 minutes. All experiments were conducted on an Intel Core i7-9700 CPU desktop computer.
\begin{figure}[H]
\noindent \begin{centering}
\includegraphics[width=0.45\textwidth]{Figures/pk_nl_gp_class.pdf}
\par\end{centering}
\caption{\label{fig:pk_nl_gp_class}The linear power spectrum at a fixed redshift, $z_{0}$, the 3D non-linear matter power spectrum, $P_{\delta}(k,z)$ and the 3D non-linear matter power spectrum with baryon feedback, $P_{\delta}^{\tm{bary}}(k,z)$ can be calculated with our emulating scheme. The solid curves correspond to predictions from the model while the broken curves show the accurate functions as calculated with CLASS.}
\end{figure}
Note that we do not compute the emulator uncertainty, for various reasons. As argued by \cite{doi:10.1198/TECH.2009.08019}, simulators such as CLASS are deterministic input-output models, that is, running the simulator again at the same input values will give the same outputs, and the error returned by the GP is unreliable \citep{2020MNRAS.497.2213M}.
Moreover, the emulator uncertainty changes as a function of the number of training points, and so do the accuracy and precision of the predicted mean function from the emulator. In a small data regime, for example, band powers for current weak lensing surveys, the emulator uncertainty might have significant undesirable effects on the inference of the cosmological parameters. On a more technical note, storing and calculating the emulator uncertainty is a demanding process, with an $\mc{O}(N^{2})$ memory and computational cost, where $N$ is the number of training points.
Once all these processes (generating the training points and training the emulators) are completed, the emulator is very fast when we compute the 3D matter power spectrum. It takes around 0.1 seconds to do so compared to the average value of 30 seconds by CLASS. Note that the gradient calculation with the emulator is even more efficient compared to finite difference methods, where CLASS would need to be called 10 times for a 5D problem (assuming a central difference method). For an in-depth documentation on the code structure and technical details, we refer the reader to \S\ref{sec:code_availability}, where we provide the links to the code and documentation.
\section{Weak Lensing Power Spectra}
\label{sec:wl}
A crucial application of the 3D matter power spectrum is in a weak lensing analysis, where the calculation of the different types of power spectra is required. In the absence of systematics, most of the cosmological information lies in the curl-free (E-) component of the shear field. The Limber approximation \citep{1953ApJ...117..134L, 2008PhRvD..78l3506L} is typically assumed and, under the assumption of no systematics, the E-mode lensing power spectrum is equal to the convergence power spectrum and is given by:
\begin{equation}
\label{eq:ee_ps}
C_{\ell,\,ij}^{\tm{EE}}=\int_{0}^{\chi_{\tm{H}}}\tm{d}\chi\,\dfrac{w_{i}(\chi)w_{j}(\chi)}{\chi^{2}}\,P_{\delta}^{\tm{bary}}(k,\chi),
\end{equation}
\noindent and
\begin{equation}
w_{i}(\chi)=A\chi(1+z)\int_{\chi}^{\chi_{\tm{H}}}\tm{d}\chi'\,n_{i}(\chi')\left(\dfrac{\chi'-\chi}{\chi'}\right)
\end{equation}
\noindent where $A=3H_{0}^{2}\Omega_{\tm{m}}/(2c^{2})$. $\chi$ is the comoving radial distance, $\chi_{\tm{H}}$ is the comoving distance to the horizon, $H_{0}$ is the present-day Hubble constant and $\Omega_{\tm{m}}$ is the matter density parameter. Under the Limber approximation, the 3D power spectrum is evaluated at the wavenumber $k=(\ell+1/2)/\chi$. $w_{i}$ is the weight function, which depends on the lensing kernel and is a measure of the lensing efficiency for tomographic bin $i$. Moreover, the redshift distribution, $n_{i}(z)$, is related to the comoving distance via a Jacobian term, that is, $n(z)\;\tm{d}z=n(\chi)\;\tm{d}\chi$, and it is normalised as a probability distribution, that is, $\int n(z)\;\tm{d}z = 1$.
\subsection{Intrinsic Alignment Power Spectra}
An important theoretical and astrophysical challenge for weak lensing is intrinsic alignment (IA). It gives rise to preferential and coherent orientations of galaxy shapes, not because of lensing alone but due to other physical effects. Although not very well understood, it is believed to arise via two main mechanisms, namely the gravitational-intrinsic interference (GI) and intrinsic-intrinsic alignment (II) effects, such that the total signal is in fact a biased tracer of the true underlying signal, $C_{\ell,ij}^{\tm{EE}}$, that is,
\begin{equation}
\label{eq:cl_tot}
C_{\ell,ij}^{\tm{tot}}=C_{\ell,ij}^{\tm{EE}}+A_{\tm{IA}}^{2}C_{\ell,ij}^{\tm{II}}-A_{\tm{IA}}C_{\ell,ij}^{\tm{GI}}
\end{equation}
\noindent where $A_{\tm{IA}}$ is a free amplitude parameter, which allows the strength of the intrinsic alignment contribution to be varied. In particular, the II term arises as a result of the alignment of a galaxy in its local environment, whereas the GI term is due to the correlation between the ellipticities of the foreground galaxies and the shear of the background galaxies. Note that the II term contributes positively towards the total lensing signal, whereas the GI term subtracts from it. The II power spectrum is given by
\begin{equation}
\label{eq:ii_ps}
C_{\ell,ij}^{\tm{II}}=\int_{0}^{\chi_{\tm{H}}}\tm{d}\chi\;\dfrac{n_{i}(\chi)\,n_{j}(\chi)}{\chi^{2}}\,P_{\delta}^{\tm{bary}}(k,\chi)\;F^{2}(\chi)
\end{equation}
\noindent and the GI power spectrum is
\begin{equation}
\label{eq:gi_ps}
C_{\ell,ij}^{\tm{GI}}=\int_{0}^{\chi_{\tm{H}}}\tm{d}\chi\,\dfrac{w_{i}(\chi)n_{j}(\chi)+w_{j}(\chi)n_{i}(\chi)}{\chi^{2}}\,P_{\delta}^{\tm{bary}}(k,\chi)\,F(\chi),
\end{equation}
\noindent where $F(\chi)=C_{1}\rho_{\tm{crit}}\Omega_{\tm{m}}/D(\chi)$. $D(\chi)$ is the linear growth factor normalised to unity today, $C_{1}=5\times10^{-14}\;h^{-2}\tm{M}_{\odot}^{-1}\tm{Mpc}^{3}$ and $\rho_{\tm{crit}}$ is the critical density of the Universe today. As seen from Equations \ref{eq:ee_ps}, \ref{eq:ii_ps} and \ref{eq:gi_ps}, they all involve an integration of the form
\begin{equation}
\label{eq:wl_general}
C_{\ell}=\int_{0}^{\chi_{\tm{H}}}\;g(\chi)\,P_{\delta}^{\tm{bary}}(k,\chi)\;\tm{d}\chi.
\end{equation}
Hence, an emulator for $P_{\delta}(k,z)$ enables us to numerically compute all the weak lensing power spectra in a fast way. This will be useful in future weak lensing surveys, where we will require many power spectra calculations as a result of the large number of auto- and cross- tomographic bins. For example, in the recent KiDS-1000 analysis \citep{2021A&A...645A.104A}, five tomographic bins were employed, resulting in 15 power spectra calculations (multiplied by 3 if we are including the intrinsic alignment power spectra). In future surveys, it is expected that the number of redshift bins will be of the order of 10, thus requiring at least 55 power spectra calculations for each power spectrum type (EE, GI and II).
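A minimal numerical sketch of equation (\ref{eq:wl_general}) is shown below; it assumes the common Limber convention $k=(\ell+1/2)/\chi$, a pre-built 2D interpolator of the emulated power spectrum, and simple trapezoidal integration, so it is an illustration rather than our production code.
\begin{verbatim}
import numpy as np

def limber_cl(ells, chi, g, pk_interp):
    """C_ell = int g(chi) P(k, chi) dchi on a comoving-distance grid.

    chi       : (N_chi,) comoving distances spanning (0, chi_H]
    g         : (N_chi,) weight, e.g. w_i w_j / chi^2 for the EE case
    pk_interp : callable (k, chi) -> P_delta^bary(k, chi)
    """
    cls = np.empty(len(ells))
    for i, ell in enumerate(ells):
        k = (ell + 0.5) / chi          # Limber wavenumber
        cls[i] = np.trapz(g * pk_interp(k, chi), chi)
    return cls
\end{verbatim}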
\subsection{Redshift Distribution}
An important quantity for calculating the weak lensing power spectra is the redshift distribution. For an in-depth cosmological data analysis such as the Kilo Degree Survey (KiDS), it is crucial to calibrate the photometric redshift to obtain robust model predictions. For more advanced techniques for estimating the $n(z)$ from photometric redshifts, we refer the reader to techniques such as weighted direct calibration, DIR \citep{2008MNRAS.390..118L, 2016PhRvD..94d2005B}, calibration with cross-correlation, CC \citep{2008ApJ...684...88N} and recalibration of photometric $P(z)$, BOR by \cite{2010MNRAS.406..881B}. Recently \cite{2016MNRAS.460.4258L} developed a hierarchical Bayesian inference method to infer redshift distributions from photometric redshifts.
In this work, we use a toy Gaussian distribution to illustrate how we can use the 3D matter power spectrum, $P_{\delta}(k,z)$ in conjunction with the $n(z)$ distribution to calculate the different weak lensing power spectra. Note that one can just replace this toy $n(z)$ distribution example by any redshift distribution as calculated by any one of the techniques mentioned above.
Different $n(z)$ distributions are available as part of the software. The first two distributions are:
\begin{equation}
\label{eq:nz_model_1}
n(z) = B\,z^{2}\tm{exp}\left(-\dfrac{z}{z_{0}}\right)
\end{equation}
\noindent and
\begin{equation}
\label{eq:nz_model_2}
n(z) = B\,z^{\alpha}\tm{exp}\left[-\left(\dfrac{z}{z_{0}}\right)^{\beta}\right].
\end{equation}
\noindent For a \textit{Euclid}-like survey, $z_{0}\sim 0.7$, $\alpha=2$ and $\beta = 1.5$ \citep{2015MNRAS.449.1146L}. The third distribution implemented is just a Gaussian distribution with mean $z_{0}$ and standard deviation, $\sigma$
\begin{equation}
\label{eq:nz_gaussian}
n(z) = B\,\tm{exp}\left[-\dfrac{1}{2}\left(\dfrac{z-z_{0}}{\sigma}\right)^{2}\right]
\end{equation}
\noindent where $B$ is a normalisation factor such that $\int n(z)\;\tm{d}z=1$ in all cases above. As shown in Figure \ref{fig:nz_dist}, we employ two redshift distributions: the mean and standard deviation for the first distribution are 0.50 and 0.10, respectively, and for the second distribution (in orange), they are set to 0.75 and 0.20.
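For completeness, the toy Gaussian $n(z)$ of equation (\ref{eq:nz_gaussian}) can be set up as follows, with the normalisation computed numerically:
\begin{verbatim}
import numpy as np

def gaussian_nz(z, z0, sigma):
    """Toy Gaussian n(z), normalised so that int n(z) dz = 1."""
    nz = np.exp(-0.5 * ((z - z0) / sigma)**2)
    return nz / np.trapz(nz, z)

z = np.linspace(0.0, 4.66, 100)
nz_bin1 = gaussian_nz(z, z0=0.50, sigma=0.10)
nz_bin2 = gaussian_nz(z, z0=0.75, sigma=0.20)
\end{verbatim}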
\begin{figure}[H]
\noindent \begin{centering}
\includegraphics[width=0.45\textwidth]{Figures/nz.pdf}
\par\end{centering}
\caption{\label{fig:nz_dist}To illustrate the calculation of the weak lensing power spectra, we use two analytic redshift distributions centered at redshift 0.50 and 0.75 respectively. The $n(z)$ distribution assumed here is a normal distribution and is given by Equation \ref{eq:nz_gaussian}. The standard deviations for each normal distribution are set to 0.1 and 0.2 respectively.}
\end{figure}
\section{Introduction}
The area of quantum information and
computation~\cite{NielsenChuang,Hayashi,HolevoBOOK,RMP,HayashiINTRO}
is one of the fastest growing fields. Understanding how quantum
information is transmitted is necessary not only for the
development of a future quantum
Internet~\cite{Kimble2008,HybridINTERNET,ref1,ref3,ref4,ref5,ref6,Meter}
but also for the construction of practical quantum key
distribution (QKD)~\cite{crypt1,crypt2,crypt3,crypt4} networks.
Motivated by this, there is much interest in trying to establish the
optimal performance in the transmission of quantum bits (qubits),
entanglement bits (ebits) and secret bits between two remote
users. This is a theoretical framework which is a direct quantum
generalization of Shannon's theory of
information~\cite{Shannon,Cover&Thomas}. In the quantum setting,
there are different types of maximum rates, i.e., capacities, that
may be defined for a given quantum channel. These include the
classical capacity (transmission of classical bits), the
entanglement distribution capacity (distribution of ebits), the
quantum capacity (transmission of qubits), the private capacity
(transmission of private bits), and the secret-key capacity
(distribution of secret bits). All these capacities may be defined
allowing side local operations (LOs) and classical communication
(CC) either one-way or two-way between the remote parties.
We shall focus on the use of LOs assisted by two-way CC, also known as
\textquotedblleft adaptive LOCCs\textquotedblright. The maximization over
these types of LOCCs leads to the definition of corresponding two-way assisted
capacities. In particular, in this work we are interested in the two-way
quantum capacity $Q_{2}$ (which is equal to the two-way entanglement
distribution capacity $D_{2}$) and the secret-key capacity $K$ (which is equal
to the two-way private capacity $P_{2}$). Generally, these capacities are
extremely difficult to calculate because they involve quantum protocols based
on adaptive LOCCs, where input states and the output measurements are
optimized in an interactive way by the two remote parties. Similar adaptive
protocols may be considered in other tasks, such as quantum hypothesis
testing~\cite{Harrow,PirCo,PBT} and quantum
metrology~\cite{PirCo,Rafal,ReviewMETRO,adaptive2,ada3,ada4,ada5,HWmetro}.
Building on a number of preliminary
tools~\cite{B2main,Gatearray,SamBrassard,HoroTEL,Gottesman,SougatoBowen,Knill,WernerTELE,Cirac,Aliferis,Niset,MHthesis,Wolfnotes,Leung}
and generalizing ideas therein to arbitrary dimension and arbitrary tasks,
Ref.~\cite{Stretching} showed how to use the LOCC simulation~\cite{NoteLOCC}
of a quantum channel to reduce an arbitrary adaptive protocol into a simpler
block version. More precisely, Ref.~\cite{Stretching} showed how the suitable
combination of an adaptive-to-block reduction (teleportation stretching) with
an entanglement measure, such as the relative entropy of entanglement
(REE)~\cite{RMPrelent,VedFORMm,Pleniom}, allows one to reduce the expression
of $Q_{2}$ and $K$ to a computable single-letter version. In this way,
Ref.~\cite{Stretching} established the two-way capacities of several quantum
channels, including the bosonic lossy channel~\cite{RMP}, the quantum-limited
amplifier, the dephasing and the erasure channel~\cite{NielsenChuang}. The
secret-key capacity of the erasure channel was also established in a
simultaneous work~\cite{GEW} by using a different approach based on the
squashed entanglement~\cite{squash}, which also appears to be powerful in the
case of the amplitude damping channel~\cite{GEW,Stretching}. Note that, prior
to these results, only the $Q_{2}$ of the erasure channel was
known~\cite{ErasureChannelm}.
One of the golden rules to apply the previous techniques is teleportation
covariance, first considered for discrete variable (DV)
channels~\cite{MHthesis,Wolfnotes,Leung} and then extended to any dimension,
finite or infinite~\cite{Stretching}. This is the property of a quantum
channel to \textquotedblleft commute\textquotedblright\ with the random
unitaries of quantum teleportation~\cite{tele,teleCV,telereview,teleCV2}.
Because the Holevo-Werner (HW) channels~\cite{counter,Fanneschannel} are
teleportation covariant, we may apply the previous reduction tools
and bound their two-way assisted capacities, $Q_{2}$ and $K$, via
single-letter quantities. These channels are particularly interesting because
the resulting upper bounds, based on relative entropy distances (such as the
REE), are generally non-additive. In fact, we show a regime of parameters
where a multi-letter bound is strictly tighter than a single-letter one.
As a result of this subadditivity, the regularisation of the
upper bound needs to be considered for the capacities $Q_{2}$ and
$K$ of these channels. This is a property that the HW channels
inherit from their Choi matrices, the Werner states~\cite{Werner}.
Recall that these states may be entangled, yet admit a local model
for all measurements~\cite{Werner,Barrett}. They were used to
disprove the additivity of the REE~\cite{VolWer} (the main
property exploited here), and they are also conjectured to
provide examples of undistillable states with negative partial
transpose~\cite{NPTstates}.
Another interesting finding is that the bounds based on the squashed
entanglement compete with the REE bounds, with neither being uniformly
tighter. In fact, we find that the secret-key capacity of an
HW\ channel is better bounded by the REE or by the squashed entanglement
depending on the value of its main defining parameter. To our knowledge,
this feature has not been observed for any other quantum channel.
The structure of this paper is as follows. We begin in Sec.~\ref{Wer_preli} by
introducing the mathematical description of both Werner states and HW
channels. In Sec.~\ref{REEsec} we review the notions of relative entropy
distance with respect to separable states and partial positive transpose (PPT)
states, also discussing their regularised versions. In Sec.~\ref{sec:TC}, we
compute the REE for the overall state consisting of two identical Werner
states, discussing the strict subadditivity of the REE for a subclass of the
family. Then, in Sec.~\ref{Capacities} we give our upper bounds to the $Q_{2}$
and $K$ of the HW channels, which also exhibit the subadditivity property.
Here we also prove a general upper bound for the $Q_{2}$ of any teleportation
covariant channel (at any dimension). In Sec.~\ref{SECnet}\ we extend the
results to repeater chains and quantum networks connected by HW channels. We
then conclude and summarize in Sec.~\ref{Werconclu}.
\begin{figure*}[th]
\begin{center}%
\begin{tabular}
[c]{c|c|c|c|c|c}%
~Representation~ & ~Variable~ & State & $~%
\begin{array}
[c]{c}%
\text{Separable}\\
\text{Extreme}%
\end{array}
~$ & ~Boundary~ & $~%
\begin{array}
[c]{c}%
\text{Entangled}\\
\text{Extreme}%
\end{array}
~$\\\hline
$\alpha$-rep & $\alpha$ & $\frac{1}{d^{2}-d\alpha}\left( \mathbb{I}%
-\alpha\mathbb{F}\right) $ & $-1$ & $\frac{1}{d}$ & $1$\\
Weighting rep & $p$ & $\frac{1-p}{d^{2}+d}\left( \mathbb{I}+\mathbb{F}%
\right) +\frac{p}{d^{2}-d}\left( \mathbb{I}-\mathbb{F}\right) $ & $0$ &
$\frac{1}{2}$ & $1$\\
Expectation rep & $\langle\mathbb{F}\rangle=\eta$ & $~\frac{1}{d^{3}-d}\left[
(d-\eta)\mathbb{I}+(d\eta-1)\mathbb{F}\right] ~$ & $1$ & $0$ & $-1$\\
Anti-rep & $t$ & $t\frac{\mathbb{I}-d\mathbb{F}}{d^{2}(d-1)}+\frac{\mathbb{I}%
}{d^{2}}$ & $-\frac{1}{d-1}$ & $\frac{1}{d+1}$ & $1$%
\end{tabular}
\end{center}
\caption{The various ways in which the set of Werner states of dimension $d$
may be parametrised. All of these are equivalent and may be transformed
into one another. Here $\mathbb{I}$ is the $d^{2}$-dimensional identity operator and
$\mathbb{F}$ is the flip operator.}%
\label{reps}%
\end{figure*}
\section{Werner states and Holevo-Werner channels\label{Wer_preli}}
Werner states are an important family of quantum states which are generally
defined over two qudits of equal dimension $d$. They have the peculiar
property of being invariant under unitaries $U_{d}$ applied identically to both
subsystems, i.e., they satisfy the fixed-point equation%
\begin{equation}
(U_{d}\otimes U_{d})\rho(U_{d}^{\dagger}\otimes U_{d}^{\dagger})=\rho.
\label{invar}%
\end{equation}
There exist several parametrisations of this family, as shown in
Fig.~\ref{reps}. We shall use the \textquotedblleft expectation
representation\textquotedblright, where the Werner state $W_{\eta,d}$ is
parametrised by $\eta\in\lbrack-1,1]$ which is defined by the mean value
\begin{equation}
\eta:=\mathrm{Tr}[W_{\eta,d}\mathbb{F}], \label{Tracecon}%
\end{equation}
where $\mathbb{F}$ is the flip operator acting on two qudits in the
computational basis $\left\{ \ket{i}\right\} _{i=0}^{d-1}$, i.e.,
\begin{equation}
\mathbb{F}:=\sum_{i,j=0}^{d-1}\ket{ij}\bra{ji}.
\end{equation}
If $\eta$ is negative (non-negative), then the Werner state is entangled
(separable). One also has an explicit formula for $W_{\eta,d}$ as a linear
combination of the $\mathbb{F}$ operator and the $d^{2}$-dimensional identity
operator $\mathbb{I}$, i.e.,
\begin{equation}
W_{\eta,d}=\frac{(d-\eta)\mathbb{I}+(d\eta-1)\mathbb{F}}{d^{3}-d}.
\label{stateform}%
\end{equation}
As already mentioned before, Werner states are of much interest to quantum
information theorists due to their properties. For $d\geq3$ there are Werner
states which are entangled, yet admit a local model for all
measurements~\cite{Werner,Barrett}. In particular, the extremal entangled
Werner state $W_{-1,d}$ was used to disprove the additivity of the
REE~\cite{VolWer}. A useful property of the Werner states is that, for a given
dimension, they are simultaneously diagonalisable, i.e., they share a common
eigenbasis. A Werner state $W_{\eta,d}$ has $n_{+}$ ($n_{-}$) eigenvectors
with eigenvalue $\gamma_{+}$ ($\gamma_{-}$), where $n_{\pm}:=d(d\pm1)/2$ and
$\gamma_{\pm}:=(1\pm\eta)[d(d\pm1)]^{-1}$.
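These spectral facts are straightforward to verify numerically. The Python sketch below (our illustration, not taken from the references) builds $W_{\eta,d}$ from Eq.~(\ref{stateform}) and checks Eq.~(\ref{Tracecon}) together with the stated eigenvalue multiplicities:
\begin{verbatim}
import numpy as np

def flip(d):
    # Flip operator F = sum_{ij} |ij><ji| on two qudits.
    F = np.zeros((d * d, d * d))
    for i in range(d):
        for j in range(d):
            F[i * d + j, j * d + i] = 1.0
    return F

def werner(eta, d):
    # Expectation representation of the Werner state.
    return ((d - eta) * np.eye(d * d) + (d * eta - 1) * flip(d)) / (d**3 - d)

d, eta = 3, -0.8
W = werner(eta, d)
assert np.isclose(np.trace(W @ flip(d)), eta)          # Tr[W F] = eta
vals = np.linalg.eigvalsh(W)
gp, gm = (1 + eta) / (d * (d + 1)), (1 - eta) / (d * (d - 1))
assert np.isclose(vals, gp).sum() == d * (d + 1) // 2  # n_+ copies of gamma_+
assert np.isclose(vals, gm).sum() == d * (d - 1) // 2  # n_- copies of gamma_-
\end{verbatim}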
Closely linked with Werner states are the HW
channels~\cite{counter,Fanneschannel}.\ These are defined as those channels
$\mathcal{W}_{\eta,d}$ whose Choi matrices are Werner states $W_{\eta,d}$. In
other words, we have
\begin{equation}
W_{\eta,d}:=\mathbf{I}\otimes\mathcal{W}_{\eta,d}\left(
\ket{\Phi}\bra{\Phi}\right) ,
\end{equation}
where $\mathbf{I}$\ is the $d$-dimensional identity map and
$\ket{\Phi}=d^{-1/2}\sum_{i=0}^{d-1}\ket{ii}$ is a maximally-entangled state.
This is a family of quantum channels whose action can be expressed as%
\begin{equation}
\mathcal{W}_{\eta,d}\left( \rho\right) :=\frac{(d-\eta)\mathbf{I}%
+(d\eta-1)\rho^{T}}{d^{2}-1}, \label{channeldef}%
\end{equation}
where $T$ is transposition (see Fig.~\ref{HW2} for a representation in the
specific case $d=2$). It is known that the minimal output entropy of the HW
channels is additive~\cite{Fanneschannel}, and the extremal HW channel (for
$\eta=-1$) is a counterexample to the additivity of the minimal R\'{e}nyi
entropy~\cite{counter}. HW channels were also studied by Ref.~\cite{Leung}\ in
relation to forward-assisted quantum error correcting codes and
superactivation of quantum capacity.
An important property of the HW channels is their \emph{teleportation
covariance}. A quantum channel $\mathcal{E}$ is called \textquotedblleft
teleportation covariant\textquotedblright\ if, for any teleportation unitary
$U$, there exists some unitary $V$ such that~\cite{Stretching}
\begin{equation}
\mathcal{E}\left( U\rho U^{\dagger}\right) =V\mathcal{E}\left( \rho\right)
V^{\dagger},
\end{equation}
for any state $\rho$. The teleportation unitaries referred to here are the
Weyl-Heisenberg generalisation of the Pauli matrices~\cite{NielsenChuang}.
Note that the output unitary $V$ may belong to a different representation of
the input group. For an HW channel $\mathcal{W}_{\eta,d}$, it is easy to see
that we may write
\begin{equation}
\mathcal{W}_{\eta,d}(U_{d}\rho U_{d}^{\dagger})=U_{d}^{\ast}\mathcal{W}%
_{\eta,d}(\rho)(U_{d}^{\ast})^{\dagger},
\end{equation}
for an arbitrary unitary $U_{d}$. This comes from Eq.~(\ref{channeldef}) and
noting that $\mathbf{I}=U_{d}^{\ast}\mathbf{I}(U_{d}^{\ast})^{\dagger}$ and
$(U_{d}\rho U_{d}^{\dagger})^{T}=U_{d}^{\ast}\rho^{T}(U_{d}^{\ast})^{\dagger}$.
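This covariance is also easy to test numerically. The sketch below (our illustration; the random unitary is merely a generic test case) implements the channel action, confirms that its Choi matrix is the Werner state, and checks the covariance identity above. Note that the $(d-\eta)\mathbf{I}$ term in Eq.~(\ref{channeldef}) carries an implicit factor $\mathrm{Tr}(\rho)$, which extends the action linearly to all inputs:
\begin{verbatim}
import numpy as np
# Reuses flip() and werner() from the previous sketch.

def hw_channel(rho, eta, d):
    # W_{eta,d}(rho); the Tr(rho) factor makes the map linear.
    return ((d - eta) * np.trace(rho) * np.eye(d)
            + (d * eta - 1) * rho.T) / (d**2 - 1)

d, eta = 3, -0.5

# The Choi matrix (I tensor W)(|Phi><Phi|) equals the Werner state.
choi = np.zeros((d * d, d * d), dtype=complex)
for i in range(d):
    for j in range(d):
        E = np.zeros((d, d)); E[i, j] = 1.0
        choi += np.kron(E, hw_channel(E, eta, d)) / d
assert np.allclose(choi, werner(eta, d))

# Covariance: W(U rho U^dag) = U* W(rho) (U*)^dag.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
rho = np.diag([1.0, 0.0, 0.0])
lhs = hw_channel(U @ rho @ U.conj().T, eta, d)
rhs = U.conj() @ hw_channel(rho, eta, d) @ U.T
assert np.allclose(lhs, rhs)
\end{verbatim}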
\begin{figure}[ptb]
\begin{center}
\includegraphics[width=0.23\textwidth]{spheres_v1.pdf}
\end{center}
\par
\vspace{-0.2cm}\caption{An illustration of the qubit HW channel ($d=2$). The
Bloch sphere is shrunk by a factor of $|\frac{2\eta-1}{3}|$, with the state
reflected in the $x$-$z$ axis for $\frac{2\eta-1}{3}>0$, and rotated by $\pi$
around the $y$ axis for $\frac{2\eta-1}{3}<0$.}%
\label{HW2}%
\end{figure}
\section{Relative entropy distances\label{REEsec}}
An important functional of two quantum states $\rho$ and $\sigma$ is their
relative entropy, which is defined as
\begin{equation}
S(\rho||\sigma)=\mathrm{Tr}\left( \rho\mathrm{log}_{2}\rho-\rho
\mathrm{log}_{2}\sigma\right) .
\end{equation}
This is the basis for defining relative entropy distances. Given any compact
and convex set of states $S$ (containing the maximally mixed state), the
relative entropy distance of a state $\rho$ from this set is defined
as~\cite{Horos}%
\begin{equation}
E_{S}\left( \rho\right) :=\inf_{\sigma\in S}S(\rho||\sigma).
\end{equation}
This is known to be asymptotically continuous~\cite{Horos,Donald}. One
possible choice for $S$ is the set of separable (\textrm{SEP}) states, in
which case we have the REE~\cite{RMPrelent,VedFORMm,Pleniom}%
\begin{equation}
E_{R}\left( \rho\right) :=\inf_{\sigma\in\mathrm{Sep}}S(\rho||\sigma).
\end{equation}
Another possible choice is the set of PPT states, in which case we have the
relative entropy distance with respect to PPT states, which we denote by RPPT.
This is defined as%
\begin{equation}
E_{P}\left( \rho\right) :=\inf_{\sigma\in\mathrm{PPT}}S(\rho||\sigma),
\end{equation}
which coincides with the Rains bound~\cite{Rains,Rains2} when $\rho$ is a
Werner state, as shown in Ref.~\cite{Audenert}. Recall that a PPT state
$\sigma$ is such that $\sigma^{\mathrm{PT}}$ has non-negative eigenvalues
(where $\mathrm{PT}$ is transposition over the second subsystem only). This is
a \emph{necessary} condition for $\sigma$ to be separable, but is \emph{not
sufficient}, unless $\sigma$ is a 2-qubit or qubit-qutrit state. Thus, in
general, we have
\begin{equation}
E_{P}(\rho)\leq E_{R}(\rho).
\end{equation}
Both the measures here defined are subadditive, i.e., they have the following
property under tensor product,
\begin{equation}
E_{R(P)}^{2}(\rho):=\frac{E_{R(P)}\left( \rho^{\otimes2}\right) }{2}\leq
E_{R(P)}\left( \rho\right) .
\end{equation}
It was shown that there exist states which are \emph{strictly} subadditive
($<$). In fact, for $d>2$, Ref.~\cite{VolWer} proved that
\begin{equation}
E_{R(P)}^{2}\left( W_{-1,d}\right) <E_{R(P)}(W_{-1,d}).
\end{equation}
This motivates the definition of the regularised quantities
\begin{equation}
E_{R(P)}^{\infty}\left( \rho\right) =\lim_{n\rightarrow\infty}\frac
{E_{R(P)}\left( \rho^{\otimes n}\right) }{n}\leq E_{R(P)}\left(
\rho\right) ,
\end{equation}
i.e., the regularised REE $E_{R}^{\infty}$ and RPPT $E_{P}^{\infty}\leq
E_{R}^{\infty}$.
For an entangled Werner state, the closest separable and PPT state (for one
copy) is the boundary Werner separable state $W_{0,d}$, so that~\cite{VolWer}%
\begin{align}
& E_{R(P)}\left( W_{\eta,d}\right) \label{onecopyREE}\\
& =%
\begin{cases}
0 & \text{ if }\eta\geq0,\\
\frac{1+\eta}{2}\mathrm{log}_{2}\left( 1+\eta\right) +\frac{1-\eta}%
{2}\mathrm{log}_{2}\left( 1-\eta\right) & \text{ if }\eta\leq0.
\end{cases}
\nonumber
\end{align}
Note that the one-copy quantity $E_{R(P)}\left( W_{\eta,d}\right) $ does not
depend on the dimension $d$. Then, for Werner states, the regularised RPPT
$E_{P}^{\infty}$ is known~\cite{Audenert} and reads
\begin{align}
& E_{P}^{\infty}\left( W_{\eta,d}\right) \label{PPTres}\\
= &
\begin{cases}
0 & \text{ if }\eta\geq0,\\
\frac{1+\eta}{2}\mathrm{log}_{2}\left( 1+\eta\right) +\frac{1-\eta}%
{2}\mathrm{log}_{2}\left( 1-\eta\right) & \text{ if }-\frac{2}{d}\leq
\eta\leq0,\\
\mathrm{log}_{2}\left( \frac{d+2}{d}\right) +\frac{1+\eta}{2}\mathrm{log}%
_{2}\left( \frac{d-2}{d+2}\right) & \text{ if }\eta\leq-\frac{2}{d}.
\end{cases}
\nonumber
\end{align}
From the previous equation, we see that we have strict subadditivity
$E_{P}^{\infty}\left( W_{\eta,d}\right) <E_{P}\left( W_{\eta,d}\right) $
in the region $\eta<-2/d$. Note that, in the region $-2/d\leq\eta\leq0$, the
REE is additive, and that the REE, the RPPT, and their regularised versions
all coincide. In fact, using the previous results, one has%
\begin{align}
E_{R}\left( W_{\eta,d}\right) & =E_{P}\left( W_{\eta,d}\right)
=E_{P}^{\infty}\left( W_{\eta,d}\right) \nonumber\\
& \leq E_{R}^{\infty}\left( W_{\eta,d}\right) \leq E_{R}\left( W_{\eta
,d}\right) .
\end{align}
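For later use, the two closed-form expressions above are immediate to code up. The helpers below are a direct transcription (with the convention $0\log_{2}0=0$ made explicit; the last branch assumes $d>2$) and are reused in the network bounds of Sec.~\ref{SECnet}:
\begin{verbatim}
import numpy as np

def xlog2(x):
    # Convention 0 log 0 = 0, needed at eta = -1.
    return 0.0 if x == 0 else x * np.log2(x)

def E_R(eta):
    # One-copy REE/RPPT of W_{eta,d}; independent of d.
    return 0.0 if eta >= 0 else 0.5 * (xlog2(1 + eta) + xlog2(1 - eta))

def E_P_inf(eta, d):
    # Regularised RPPT of W_{eta,d}; last branch requires d > 2.
    if eta >= -2.0 / d:
        return E_R(eta)
    return np.log2((d + 2) / d) + 0.5 * (1 + eta) * np.log2((d - 2) / (d + 2))

assert np.isclose(E_P_inf(-1.0, 4), np.log2(6 / 4))   # antisymmetric extreme
\end{verbatim}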
\section{Relative entropy distance of a two-copy Werner state\label{sec:TC}}
One of the results of Ref.~\cite{VolWer} was to show that the
closest state $\sigma$ minimizing $E_{R(P)}(W_{\eta,d}^{\otimes
n})$ is invariant under the following transformation
\begin{equation}
U_{d}^{1}\otimes U_{d}^{1}\otimes\ldots U_{d}^{n}\otimes U_{d}^{n}\left(
\sigma\right) (U_{d}^{1}\otimes U_{d}^{1}\otimes\ldots U_{d}^{n}\otimes
U_{d}^{n})^{\dagger},
\end{equation}
where each $U_{d}^{i}\otimes U_{d}^{i}$ acts on the $d\times d$ Hilbert space
occupied by the $i^{\mathrm{th}}$ copy of $W_{\eta,d}$. States which are
invariant under this action are of the form
\begin{align}
\sigma_{\mathbf{x}}^{n} & =x_{0}W_{-1,d}^{\otimes n}\nonumber\\
& +\frac{x_{1}}{n}\left( W_{-1,d}^{\otimes n-1}\otimes W_{1,d}+\ldots
+W_{1,d}\otimes W_{-1,d}^{\otimes n-1}\right) \nonumber\\
& +\ldots+\frac{x_{k}}{\binom{n}{k}}\left( W_{-1,d}^{\otimes n-k}\otimes W_{1,d}%
^{\otimes k}+\ldots+W_{1,d}^{\otimes k}\otimes W_{-1,d}^{\otimes n-k}\right) \nonumber\\
& +\ldots+x_{n}W_{1,d}^{\otimes n},
\end{align}
where $\mathbf{x}=\left( x_{0},x_{1},\ldots,x_{n}\right) ^{T}$ is a vector
of probabilities, i.e., $x_{i}\geq0$ and $\sum_{i=0}^{n}x_{i}=1$. We also have
an explicit condition of $\mathbf{x}$ to ensure that $\sigma_{\mathbf{x}}^{n}$
is PPT. This is~\cite{Audenert}
\begin{equation}
\left(
\begin{array}
[c]{cc}%
-1 & 1\\
1 & \frac{d-1}{d+1}%
\end{array}
\right) ^{\otimes n}\mathbf{x^{\prime}}\geq0, \label{PPTcons}%
\end{equation}
where
\begin{equation}
\mathbf{x^{\prime}}=\left( x_{0},\overset{n}{\overbrace{\frac{x_{1}}%
{n},\ldots,\frac{x_{1}}{n}},}\ldots\overset{\binom{n}{k}}{\overbrace
{\frac{x_{k}}{\binom{n}{k}},\ldots,\frac{x_{k}}{\binom{n}{k}}}},\ldots
x_{n}\right) ^{T}.
\end{equation}
For general $n$, it is not known if the PPT states
$\sigma_{\mathbf{x}}^{n}$ satisfying Eq.~(\ref{PPTcons}) are
separable. However, they are known to be equivalent for
$n=2$~\cite{VolWer}, in which case Eq.~(\ref{PPTcons}) simplifies
to
\begin{align}
1-2x_{1} & \geq0,\\
(d-1)-2dx_{0}+(2-d)x_{1} & \geq0,\\
(d-1)^{2}+4dx_{0}+2(d-1)x_{1} & \geq0,
\end{align}
where we have eliminated the dependent variable $x_{2}$. This means that, for
two copies ($n=2$), the state $\sigma_{\mathbf{x}}^{2}$ is the closest state
for the minimization of both $E_{P}(W_{\eta,d}^{\otimes2})$ and $E_{R}%
(W_{\eta,d}^{\otimes2})$. Let us compute the latter quantity.
Assuming the basis where the single-copy Werner state is diagonal, we may
write%
\begin{align}
S\left( W_{\eta,d}^{\otimes n}||\sigma_{\mathbf{x}}^{n}\right) &
=\sum_{i=0}^{n}y_{i}\mathrm{log}_{2}\left( \frac{y_{i}}{x_{i}}\right) ,\\
y_{i} & =\frac{\binom{n}{i}(1-\eta)^{n-i}(1+\eta)^{i}}{2^{n}}.
\end{align}
Therefore, for $n=2$ and in the region $\eta\leq-\frac{2}{d}$, we derive
\begin{gather}
E_{R}^{2}\left( W_{\eta,d}\right) :=\frac{E_{R}\left( W_{\eta,d}^{\otimes
2}\right) }{2}=\min_{x_{0},x_{1}}\left\{ \frac{(1-\eta)^{2}}{8}%
\mathrm{log}_{2}\frac{(1-\eta)^{2}}{4x_{0}}\right. \nonumber\\
+\frac{(1-\eta)(1+\eta)}{4}\mathrm{log}_{2}\frac{(1-\eta)(1+\eta)}{2x_{1}%
}\nonumber\\
\left. +\frac{(1+\eta)^{2}}{8}\mathrm{log}_{2}\frac{(1+\eta)^{2}}{4\left(
1-x_{0}-x_{1}\right) }\right\} , \label{Lagrange}%
\end{gather}
where%
\begin{align}
1-2x_{1} & \geq0,\\
(d-1)-2dx_{0}+(2-d)x_{1} & \geq0,\\
(d-1)^{2}+4dx_{0}+2(d-1)x_{1} & \geq0,\\
x_{0}+x_{1} & \leq1.
\end{align}
We can use Lagrangian optimisation methods to solve this problem. Let us set%
\begin{align}
\theta & :=d^{4}\left( \eta^{2}+1\right) ^{2}-4d^{3}\eta\left( \eta
^{2}-3\right) \\
& -4d^{2}\left( \eta^{4}+3\eta^{2}-1\right) +8d\eta\left( \eta
^{2}-3\right) +4\left( \eta^{2}+1\right) ^{2},\nonumber
\end{align}
then we compute
\begin{align}
x_{0} & =\frac{d^{2}\left( \eta^{2}+1\right) +\sqrt{\theta}-2d(\eta
-2)-2\eta^{2}-2}{8d(d+2)},\\
x_{1} & =-\frac{d^{2}\left( \eta^{2}-3\right) +\sqrt{\theta}-2d\eta
-2\eta^{2}+6}{4\left( d^{2}-4\right) }.
\end{align}
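The closed-form minimiser can be cross-checked against a direct constrained minimisation of Eq.~(\ref{Lagrange}). The sketch below is such an independent numeric check (the \texttt{SLSQP} method and the starting point are arbitrary choices); it assumes $-1<\eta\leq-2/d$ and $d>2$:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def E_R2(eta, d):
    # Two-copy REE of W_{eta,d} by numeric minimisation.
    y = np.array([(1 - eta)**2 / 4, (1 - eta) * (1 + eta) / 2,
                  (1 + eta)**2 / 4])
    obj = lambda x: 0.5 * np.sum(
        y * np.log2(y / np.array([x[0], x[1], 1 - x[0] - x[1]])))
    cons = [{"type": "ineq", "fun": lambda x: 1 - 2 * x[1]},
            {"type": "ineq", "fun": lambda x: (d - 1) - 2 * d * x[0] + (2 - d) * x[1]},
            {"type": "ineq", "fun": lambda x: (d - 1)**2 + 4 * d * x[0] + 2 * (d - 1) * x[1]},
            {"type": "ineq", "fun": lambda x: 1 - x[0] - x[1]}]
    res = minimize(obj, x0=[0.1, 0.4], bounds=[(1e-9, 1), (1e-9, 1)],
                   constraints=cons, method="SLSQP")
    return res.fun

# The optimiser's res.x can be compared against the closed-form x_0, x_1.
\end{verbatim}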
The comparison between the one-copy REE $E_{R}(W_{\eta,d})$ of
Eq.~(\ref{onecopyREE}) and the two-copy REE $E_{R}^{2}\left( W_{\eta
,d}\right) $ of Eq.~(\ref{Lagrange}) is shown in Fig.~\ref{compa}. While
$E_{R}(W_{\eta,d})$ does not depend on the dimension $d$, we see that the
two-copy REE considerably decreases for increasing $d>2$.
\begin{figure}[ptbh]
\vspace{+0.2cm} \includegraphics[width=\columnwidth]{REE_Div_Part.pdf}
\caption{Comparison between the one-copy REE $E_{R}$ and the two-copy REE
$E_{R}^{2}$ of a Werner state $W_{\eta,d}$, for varying dimension $d>2$. In
particular, we consider here $\eta\leq0$ which includes the subadditivity
region $\eta<-2/d$.}%
\label{compa}%
\end{figure}
\section{Two-way assisted capacities of the Holevo-Werner
channels\label{Capacities}}
\subsection{Weak converse bounds based on the relative entropy distances}
We now combine the results in the previous section with the methods of
Ref.~\cite{Stretching} to bound the two-way capacities of the HW channels.
According to Ref.~\cite{Stretching}, the secret-key capacity $K$ of a
teleportation covariant channel $\mathcal{E}$ is upper bounded by the
regularised REE of its Choi Matrix $\chi_{\mathcal{E}}$, i.e.,
\begin{equation}
K\left( \mathcal{E}\right) \leq E_{R}^{\infty}\left( \chi_{\mathcal{E}%
}\right) . \label{upper}%
\end{equation}
Therefore, for an HW channel $\mathcal{W}_{\eta,d}$, we may write the upper
bound%
\begin{equation}
K(\mathcal{W}_{\eta,d})\leq E_{R}^{\infty}\left( W_{\eta,d}\right) ,
\end{equation}
by using its corresponding Werner state $W_{\eta,d}$. From the previous
section, we have that, for $\eta<-2/d$ we may write the following strict
inequality%
\begin{equation}
K(\mathcal{W}_{\eta,d})\leq E_{R}^{2}\left( W_{\eta,d}\right) <E_{R}\left(
W_{\eta,d}\right) ,
\end{equation}
so that the one-copy (single-letter) REE bound is strictly loose. This shows
that the regularised REE is needed to tightly bound (and possibly establish)
the secret-key capacity of an HW channel. As shown in Fig.~\ref{compa}, the
improvement of $E_{R}^{2}$ over $E_{R}$ becomes increasingly pronounced for
increasing dimension $d$.
Let us now consider the two-way quantum capacity $Q_{2}$, which is also known
to be equal to the channel's two-way entanglement distribution capacity
$D_{2}$. In Appendix~\ref{APPprova}, we provide a general proof of the following.
\begin{lemma}
[Channel's RPPT bound]For a teleportation covariant channel $\mathcal{E}$, we
may write
\begin{equation}
Q_{2}(\mathcal{E})\leq E_{P}^{\infty}\left( \chi_{\mathcal{E}}\right)
,\label{RPPTbound}%
\end{equation}
where the Choi matrix $\chi_{\mathcal{E}}$ and the RPPT $E_{P}^{\infty}$ are
meant to be asymptotic if $\mathcal{E}$ is a continuous-variable channel. In
particular, $E_{P}^{\infty}\left( \chi_{\mathcal{E}}\right) $ becomes the
regularisation of
\begin{equation}
E_{P}\left( \chi_{\mathcal{E}}\right) :=\inf_{\sigma^{\mu}}\underset
{\mu\rightarrow+\infty}{\lim\inf}S(\chi_{\mathcal{E}}^{\mu}||\sigma^{\mu
}),\label{asyBBB}%
\end{equation}
where: $\chi_{\mathcal{E}}^{\mu}:=\mathcal{I}\otimes\mathcal{E}(\Phi^{\mu})$
is defined on a two-mode squeezed vacuum state $\Phi^{\mu}$ with energy $\mu$,
and $\sigma^{\mu}$ is a sequence of PPT states converging in trace norm, i.e.,
such that $\left\Vert \sigma^{\mu}-\sigma\right\Vert \overset{\mu}%
{\rightarrow}0$ for some PPT state $\sigma$.
\end{lemma}
By applying the bound of Eq.~(\ref{RPPTbound}) to an HW channel $\mathcal{W}%
_{\eta,d}$, we may write
\begin{equation}
Q_{2}(\mathcal{W}_{\eta,d})\leq E_{P}^{\infty}\left( W_{\eta,d}\right) ,
\label{bb1}%
\end{equation}
where the right hand side is computed as in Eq.~(\ref{PPTres}). Of course we
may also write
\begin{equation}
Q_{2}(\mathcal{W}_{\eta,d})\leq E_{R}^{2}\left( W_{\eta,d}\right) \leq
E_{R}\left( W_{\eta,d}\right) =E_{P}\left( W_{\eta,d}\right) . \label{bb2}%
\end{equation}
The bounds in Eqs.~(\ref{bb1}) and~(\ref{bb2}) are shown and compared in
Fig.~\ref{queuetwocomp} for an HW channel in dimension $d=5$.
\begin{figure}[ptb]
\vspace{+0.1cm} \includegraphics[width=\columnwidth]{Rains_Bound_Usage.pdf}
\caption{Weak converse upper bounds for the two-way quantum capacity $Q_{2}$
of the HW channel $\mathcal{W}_{\eta,5}$ (dimension $d=5$). We compare the
one-copy REE bound $E_{R}(=E_{P})$, the two-copy REE bound $E_{R}^{2}\left(
=E_{P}^{2}\right) $, and the regularised RPPT bound $E_{P}^{\infty}$, which
is the tightest. Note that $E_{R}$ and $E_{R}^{2}$ also bound the secret-key
capacity $K$ of the channel.}%
\label{queuetwocomp}%
\end{figure}
\subsection{Weak converse bounds based on the squashed entanglement}
Whilst the relative entropy distances provide useful upper bounds, we may also
consider other functionals. In particular, we may consider the squashed
entanglement. For an arbitrary bipartite state $\rho_{AB}$, this is defined
as~\cite{squash,HayashiINTRO}
\begin{equation}
E_{sq}(\rho_{AB}):=\frac{1}{2}\min_{\rho_{ABE}^{\prime}\in\Omega_{AB}%
}S(A:B|E),
\end{equation}
where $\Omega_{AB}$ is the set of density matrices $\rho_{ABE}^{\prime}$
satisfying $\mathrm{Tr}_{E}(\rho_{ABE}^{\prime})=\rho_{AB}$, and $S(A:B|E)$ is
the conditional mutual information
\begin{equation}
S(A:B|E):=S(\rho_{AE}^{\prime})+S(\rho_{BE}^{\prime})-S(\rho_{E}^{\prime})-S(\rho
_{ABE}^{\prime}),
\end{equation}
with $S(...)$ being the von Neumann entropy~\cite{NielsenChuang}.
The squashed entanglement can be combined with teleportation
stretching~\cite{Stretching} to provide a single-letter bound to the
secret-key capacity. In fact, it satisfies all the required conditions. It
normalises, so that $E_{sq}(\phi_{m})\geq mR_{m}$ for a private state
$\phi_{m}$ with $mR_{m}$ private bits~\cite{squash}. It is continuous, and
monotonic under LOCC~\cite{squash}. Furthermore, it is additive over
tensor-product states, which means that there is no need to regularize over
many copies. For a teleportation covariant discrete-variable channel
$\mathcal{E}$, we may therefore write
\begin{equation}
K(\mathcal{E})\leq E_{sq}(\chi_{\mathcal{E}}).\label{Esq}%
\end{equation}
This is a direct consequence of Proposition 6 of Ref.~\cite{Stretching},
according to which we may write
\begin{equation}
K(\mathcal{E})=K(\chi_{\mathcal{E}}),
\end{equation}
where the latter is the distillable key of the Choi matrix $\chi_{\mathcal{E}%
}$. Then, using Ref.~\cite{squash}, we may write $K(\chi_{\mathcal{E}})\leq
E_{sq}(\chi_{\mathcal{E}})$, which leads to Eq.~(\ref{Esq})~\cite{ProvaCV}.
However, there is some difficulty in optimizing over $\rho_{ABE}^{\prime}$
such that $\mathrm{Tr}_{E}(\rho_{ABE}^{\prime})=\chi_{\mathcal{E}}$, since the
dimension of the environment system $E$ is generally unbounded. In order to
provide an analytical upper bound, we simply choose the purification
$\tilde{\chi}_{\mathcal{E}}$ of $\chi_{\mathcal{E}}$. In the case of an HW
channel $\mathcal{E}=\mathcal{W}_{\eta,d}$, we have $\chi_{\mathcal{E}%
}=W_{\eta,d}$ and we may write%
\begin{align}
K(\mathcal{W}_{\eta,d}) & \leq E_{sq}(W_{\eta,d})\nonumber\\
& \leq\tilde{E}_{sq}(W_{\eta,d}):=\frac{1}{2}S(A:B|E)_{\tilde{W}_{\eta,d}%
}\nonumber\\
& =\log_{2}d+\frac{1+\eta}{4}\log_{2}\frac{1+\eta}{d(d+1)}\nonumber\\
& +\frac{1-\eta}{4}\log_{2}\frac{1-\eta}{d(d-1)}, \label{sqKKK}%
\end{align}
which is positive only if $\eta\leq0$.
We can find a further upper bound by exploiting the convexity property of the
squashed entanglement. First note that
\begin{align}
W_{\eta,d} & =\frac{\left( d-\eta\right) \mathbb{I}+\left( d\eta
-1\right) \mathbb{F}}{d^{3}-d}\nonumber\\
& =\left( 1+\eta\right) W_{0,d}+\left( -\eta\right) W_{-1,d},
\end{align}
which means that for $-1\leq\eta\leq0$ the state $W_{\eta,d}$ can be written
as a convex combination of the separable state $W_{0,d}$ and the extremal
Werner state $W_{-1,d}$. Second, note that we have $E_{sq}\left(
W_{0,d}\right) =0$ (since it is a separable state) and, for the extremal
state, we may write~\cite{entanglementantisymmetricstate}%
\begin{equation}
E_{sq}\left( W_{-1,d}\right) \leq%
\begin{cases}
\log_{2}\left( \frac{d+2}{d}\right) & \text{if }d\text{ even,}\\
\frac{1}{2}\log_{2}\left( \frac{d+3}{d-1}\right) & \text{if }d\text{
odd.}%
\end{cases}
\end{equation}
Using the convexity property of the squashed entanglement~\cite{squash}
\begin{align}
& E_{sq}\left[ p\rho_{1}+(1-p)\rho_{2}\right] \nonumber\\
& \leq pE_{sq}\left( \rho_{1}\right) +(1-p)E_{sq}\left( \rho_{2}\right) ,
\end{align}
we find that
\begin{equation}
K(\mathcal{W}_{\eta,d})\leq E_{sq}\left( W_{\eta,d}\right) \leq E_{sq}%
^{\ast}\left( W_{\eta,d}\right) ,
\end{equation}
where we define
\begin{equation}
E_{sq}^{\ast}\left( W_{\eta,d}\right) =%
\begin{cases}
-\eta\log_{2}\left( \frac{d+2}{d}\right) & \text{if }d\text{ even,}\\
-\frac{\eta}{2}\log_{2}\left( \frac{d+3}{d-1}\right) & \text{if }d\text{
odd,}%
\end{cases}
\label{convexBBB}%
\end{equation}
for $-1\leq\eta\leq0$ and zero otherwise.
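Both squashed-entanglement bounds are elementary to evaluate. The sketch below transcribes Eqs.~(\ref{sqKKK}) and~(\ref{convexBBB}) and combines them with the REE helpers from the previous sketches (the guards implement the $0\log_{2}0=0$ convention):
\begin{verbatim}
import numpy as np

def E_sq_pur(eta, d):
    # Purification bound (sqKKK).
    t = np.log2(d)
    if eta > -1:
        t += 0.25 * (1 + eta) * np.log2((1 + eta) / (d * (d + 1)))
    if eta < 1:
        t += 0.25 * (1 - eta) * np.log2((1 - eta) / (d * (d - 1)))
    return t

def E_sq_star(eta, d):
    # Convexity bound (convexBBB); nonzero only for -1 <= eta <= 0.
    if eta >= 0:
        return 0.0
    if d % 2 == 0:
        return -eta * np.log2((d + 2) / d)
    return -eta / 2 * np.log2((d + 3) / (d - 1))

def K_bound(eta, d):
    # Best computable key bound; uses E_R, E_R2 from earlier sketches.
    ree = E_R2(eta, d) if eta < -2.0 / d else E_R(eta)
    return min(ree, E_sq_pur(eta, d), E_sq_star(eta, d))
\end{verbatim}
Scanning $\eta\in[-1,0]$ for $d=4$ with these helpers can be used to reproduce the crossover behaviour of Fig.~\ref{squashPIC}(b).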
These bounds are compared in Fig.~\ref{squashPIC} for the case of an HW
channel with dimension $d=4$. We can see that one bound is better than another
depending on the value of $\eta$. In particular, the secret-key capacity is in
the gray area of Fig.~\ref{squashPIC}(a) or, equivalently, below the
composition of bounds shown in Fig.~\ref{squashPIC}(b).
\begin{figure}[ptb]
\begin{center}
\vspace{-1.2cm}
\includegraphics[width=0.5\textwidth]{Squashed_Comparison.pdf}
\end{center}
\begin{center}
\vspace{-2.0cm}
\includegraphics[width=0.5\textwidth]{KeyBound4.pdf}
\end{center} \vspace{-0.6cm} \caption{Comparison of the capacity
bounds for the HW channel $\mathcal{W}_{\eta,4}$. (a) The
regularised RPPT bound $E_{P}^{\infty}$ is the lowest (red-dashed)
curve and bounds the two-way quantum capacity $Q_{2}$ of the
channel. The secret-key capacity of the channel $K$ is in the gray
area. Depending on the value of $\eta$, this is upper-bounded by
the two-copy REE bound $E_{R}^{2}(=E_{P}^{2})$ (better than
$E_{R}(=E_{P})$) or by the squashed entanglement bounds
$\tilde{E}_{sq}$ and $E_{sq}^{\ast}$. We see that $\tilde{E}_{sq}$
coincides with $E_{R}^{2}$ for $\eta=-1$. (b) We show the
competing upper bounds for the secret-key capacity $K$ of the HW
channel $\mathcal{W}_{\eta,4}$, explicitly drawing which bound is
better at which value of $\eta$. We see that the squashed
entanglement bounds perform better at lower $\eta$, while the REE
bounds are better for
higher $\eta$. }%
\label{squashPIC}%
\end{figure}
\section{Holevo-Werner Repeater Chains and Quantum Networks\label{SECnet}}
\subsection{Repeater chains}
In this section, we apply the results of Ref.~\cite{ref1} to bound the
end-to-end capacities of quantum networks in which the edges between nodes are
HW channels. First, we consider the simplest multi-hop quantum network which
consists of a linear chain of $N$ repeaters between the two end-parties. Such
a setup is depicted in Fig.~\ref{tikzchain}.
\begin{figure}[ptbh]
\begin{center}
\vspace{-0.9cm} \includegraphics[width=0.5\textwidth]{Crop6.pdf}
\vspace{-1.2cm}
\end{center}
\caption{Alice (A) and Bob (B) are connected by $N$ quantum repeaters $r_{1}%
$,\ldots, $r_{N}$ in a linear chain; each connection (edge) in the chain is a
$d$-dimensional HW channel with a generally-different parameter $\eta_{i}$.}%
\label{tikzchain}%
\end{figure}
For a linear chain of $N$ quantum repeaters, whose $N+1$ connecting channels
$\{\mathcal{E}_{i}\}_{i=0}^{N}$ are teleportation covariant, we have that the
secret-key capacity $K$ of the chain and its two-way quantum capacity $Q_{2}$ are
bounded by~\cite{ref1}
\begin{align}
Q_{2} & \leq K\leq\min_{i}E_{R}^{\infty}\left( \chi_{\mathcal{E}_{i}%
}\right) \nonumber\\
& \leq\min_{i}E_{R}^{2}\left( \chi_{\mathcal{E}_{i}}\right) \leq\min
_{i}E_{R}\left( \chi_{\mathcal{E}_{i}}\right) ,
\end{align}
with $\chi_{\mathcal{E}_{i}}$ the Choi matrix of the $i^{\text{th}}$ channel.
Similarly, we may use the squashed entanglement and write~\cite{ref1}%
\begin{align}
Q_{2} & \leq K\leq\min_{i}E_{sq}\left( \chi_{\mathcal{E}_{i}}\right)
\nonumber\\
& \leq\min\{\min_{i}\tilde{E}_{sq}\left( \chi_{\mathcal{E}_{i}}\right)
,\min_{i}E_{sq}^{\ast}\left( \chi_{\mathcal{E}_{i}}\right) \}.
\end{align}
In general, we may write
\begin{equation}
Q_{2}\leq K\leq\min_{E}\min_{i}E\left( \chi_{\mathcal{E}_{i}}\right) ,
\end{equation}
where the bound is also minimized over the type of entanglement measure. In
particular, we may consider the \textquotedblleft ideal\textquotedblright\ set
$E\in\{E_{R}^{\infty},E_{sq}\}$ or the \textquotedblleft
computable\textquotedblright\ one $E\in\{E_{R}^{2}\leq E_{R},\tilde{E}%
_{sq},E_{sq}^{\ast}\}$. Then, if the task of the parties is to transmit qubits
(or distill ebits), we may use the regularised RPPT and write~\cite{ref1}%
\begin{equation}
Q_{2}=D_{2}\leq\min_{i}E_{P}^{\infty}\left( \chi_{\mathcal{E}_{i}}\right) .
\end{equation}
Let us apply these results to a linear repeater chain connected by $N+1$
iso-dimensional HW channels $\left\{ \mathcal{W}_{\eta_{i},d}\right\}
=\left\{ \mathcal{W}_{\eta_{0},d},\ldots,\mathcal{W}_{\eta_{N},d}\right\} $,
i.e., with the same dimension $d$ but generally different $\eta$'s. We may
simplify the previous bounds ($E_{R}$, $E_{R}^{2}$, $\tilde{E}_{sq}$,
$E_{sq}^{\ast}$, and $E_{P}^{\infty}$) by exploiting the fact that they are
monotonically decreasing in $\eta$, so that the maximum value $\eta
_{\text{max}}:=\max\left\{ \eta_{i}\right\} $ determines the bottleneck of
the chain, i.e., $\min_{i}E=E(W_{\eta_{\text{max}},d})$. In particular, for
$\eta_{\text{max}}\geq0$, we certainly have $Q_{2}=D_{2}=K=0$ because
$E_{R}(W_{\eta_{\text{max}}\geq0,d})=0$ from Eq.~(\ref{onecopyREE}). By
contrast, if $\eta_{\text{max}}\leq0$, then we may write the following bounds
for the secret-key capacity and two-way quantum capacity of the repeater chain%
\begin{align}
K\left( \left\{ \mathcal{W}_{\eta_{i},d}\right\} \right) & \leq\min
_{E}E\left( W_{\eta_{\text{max}},d}\right) ,\label{Kcompat}\\
Q_{2}\left( \left\{ \mathcal{W}_{\eta_{i},d}\right\} \right) & \leq
E_{P}^{\infty}\left( W_{\eta_{\text{max}},d}\right) .\label{Qcompact}%
\end{align}
In Eq.~(\ref{Kcompat}), the optimal entanglement measure $E$ can be computed
from the set $\{E_{R}^{2}\leq E_{R},\tilde{E}_{sq},E_{sq}^{\ast}\}$, where
$E_{R}$ is given in Eq.~(\ref{onecopyREE}), $E_{R}^{2}$ in Eq.~(\ref{Lagrange}%
), $\tilde{E}_{sq}$ in Eq.~(\ref{sqKKK}), $E_{sq}^{\ast}$ in
Eq.~(\ref{convexBBB}). In Eq.~(\ref{Qcompact}), we compute $E_{P}^{\infty}$
from Eq.~(\ref{PPTres}).
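In code, the bottleneck reduction of Eqs.~(\ref{Kcompat}) and~(\ref{Qcompact}) is a one-liner on top of the previous helpers (a sketch with placeholder parameters):
\begin{verbatim}
def chain_bounds(etas, d):
    # Only eta_max = max(etas) matters for the chain bounds.
    eta_max = max(etas)
    if eta_max >= 0:
        return 0.0, 0.0                    # K = Q_2 = 0
    return K_bound(eta_max, d), E_P_inf(eta_max, d)
\end{verbatim}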
\subsection{Single-path routing in quantum networks}
We may then extend the results to an arbitrary quantum network, where there
exist many possible paths between the two end-parties, Alice and Bob. Assuming
single-path routing, a single chain of repeaters is used for each use of the
network and this may differ from use to use. For a network connected by
teleportation covariant channels, we may bound the single-path secret-key
capacity of the network as~\cite{ref1}%
\begin{equation}
K\leq\min_{C}E(C),~E(C):=\max_{\mathcal{E}\in\tilde{C}}E(\chi_{\mathcal{E}%
}),\label{cutset}%
\end{equation}
where $E$ is a suitable entanglement measure, here to be optimized in
$\{E_{R},E_{sq}\}$~\cite{remark}, and $\tilde{C}$ is a \textquotedblleft
cut-set\textquotedblright\ associated with the cut~\cite{Slepian,netflow}.
The cut-set $\tilde{C}$ can be described as a set of channels such that, if
those channels were removed by the cut, then the network would be
bi-partitioned, with Alice and Bob in separate sets of nodes. Therefore the
meaning of Eq.~(\ref{cutset}) is that: (i)~we perform an arbitrary cut $C$ of
the network; (ii)~we consider the channels $\mathcal{E}$ in the cut-set
$\tilde{C}$; (iii)~we compute the entanglement measure $E$ of their Choi
matrices $\chi_{\mathcal{E}}$; (iv)~we take the maximum so as to compute
$E(C)$; (v)~we finally minimize over all the possible Alice-Bob cuts $C$ of
the network.
In the case of a quantum network connected by HW channels, we may write the
following bound for the single-path secret-key capacity
\begin{equation}
K\leq\min_{C}\max_{\mathcal{W}_{\eta,d}\in\tilde{C}}E(W_{\eta,d}).
\end{equation}
If the HW channels are iso-dimensional (as in the example of
Fig.~\ref{DiamondNet}), then we may simplify the previous bound into the
following%
\begin{equation}
K\leq\min_{C}E(W_{\eta_{\text{min}(C)},d}),
\end{equation}
where $\eta_{\text{min}(C)}$ is the smallest expectation parameter belonging
to the cut-set $\tilde{C}$. In particular, we may also minimize $E$ over
$\{E_{R},\tilde{E}_{sq},E_{sq}^{\ast}\}$ by computing $E_{R}$ as in
Eq.~(\ref{onecopyREE}), $\tilde{E}_{sq}$ as in Eq.~(\ref{sqKKK}), and
$E_{sq}^{\ast}$ as in Eq.~(\ref{convexBBB}).
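For small networks, the cut-set bound of Eq.~(\ref{cutset}) can be evaluated by brute force. The sketch below (our illustration; it enumerates all $2^{|V|-2}$ bipartitions, so it is meant only for a handful of nodes) takes an aggregator argument, so that the same routine also covers the flooding bounds of the next subsection, where the maximum over the cut-set is replaced by a sum:
\begin{verbatim}
from itertools import combinations

def cut_bound(nodes, edges, a, b, E, agg=max):
    # edges = {(u, v): eta}; E maps eta to an entanglement bound;
    # agg=max gives the single-path bound, agg=sum the multipath one.
    inner = [v for v in nodes if v not in (a, b)]
    best = float("inf")
    for k in range(len(inner) + 1):
        for side in combinations(inner, k):
            A = {a, *side}                              # Alice's side
            cut = [E(eta) for (u, v), eta in edges.items()
                   if (u in A) != (v in A)]             # cut-set values
            best = min(best, agg(cut))
    return best

# Diamond network with hypothetical (placeholder) parameters:
nodes = ["A", "n1", "n2", "B"]
edges = {("A", "n1"): -0.9, ("A", "n2"): -0.8, ("n1", "n2"): -0.7,
         ("n1", "B"): -0.9, ("n2", "B"): -0.8}
print(cut_bound(nodes, edges, "A", "B", lambda e: K_bound(e, 4)))
\end{verbatim}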
\begin{figure}[ptbh]
\begin{center}
\vspace{-0.8cm} \includegraphics[width=0.5\textwidth]{Crop7.pdf}
\vspace{-1.4cm}
\end{center}
\caption{Alice (A) and Bob (B) as end-nodes of a diamond network connected by
iso-dimensional HW\ channels with generally-different expectation parameters
$\eta$. In red we show a possible path between the end-nodes.}%
\label{DiamondNet}%
\end{figure}
\subsection{Multi-path routing in quantum networks}
Finally we may also consider multipath routing. In this case, each use of the
network corresponds to a simultaneous use of all the channels, allowing for
simultaneous pathways between Alice and Bob (e.g., see Fig.~\ref{DiaFlood}).
This is also known as a flooding protocol~\cite{flooding} and represents a
crucial requirement in order to extend the max-flow/min-cut
theorem~\cite{Harris,Ford,ShannonFLOW} to the quantum setting~\cite{ref1}.
\begin{figure}[ptbh]
\begin{center}
\vspace{-0.4cm} \includegraphics[width=0.5\textwidth]{Crop8.pdf}
\vspace{-1.5cm}
\end{center}
\caption{Example of multipath routing in a diamond network (with
iso-dimensional HW channels). With respect to Fig.~\ref{DiamondNet} all the
channels are used in a single use of the network (flooding protocol). Dashed
lines represent the advantage over the previous single-path routing protocol.}%
\label{DiaFlood}%
\end{figure}
For a network connected by teleportation-covariant channels, the multi-path
secret-key capacity $K^{\text{m}}\geq K$ is bounded as~\cite{ref1}%
\begin{align}
K^{\text{m}} & \leq\min_{C}\Sigma^{\infty}(C)\leq\cdots\leq\min_{C}\Sigma
^{r}(C)\nonumber\\
& \leq\cdots\leq\min_{C}\Sigma^{1}(C),
\end{align}
where, for any integer $r=1,\cdots,\infty$,%
\begin{equation}
\Sigma^{r}(C):=\sum_{\mathcal{E}\in\tilde{C}}E^{r}(\chi_{\mathcal{E}}),
\end{equation}
and $E^{r}$ is a suitable $r$-copy entanglement measure. In particular, we may
optimize over the multi-copy REE $E^{r}=E_{R}^{r}$ or the squashed
entanglement $E^{r}=E_{sq}$ (the latter being additive). For the multipath
two-way quantum capacity, we may correspondingly write%
\begin{equation}
Q_{2}^{\text{m}}\leq\min_{C}\Sigma_{P}^{\infty}(C)\leq\cdots\leq\min_{C}%
\Sigma_{P}^{1}(C),
\end{equation}
where
\begin{equation}
\Sigma_{P}^{r}(C):=\sum_{\mathcal{E}\in\tilde{C}}E_{P}^{r}(\chi_{\mathcal{E}%
}),
\end{equation}
and $E_{P}^{r}$ is the $r$-copy RPPT.
For a network connected by HW channels $\mathcal{W}_{\eta,d}$, we may specify
the previous bounds to one- and two-copy REE, so that we may write%
\begin{equation}
K^{\text{m}}\leq\min_{C}\sum_{\mathcal{W}_{\eta,d}\in\tilde{C}}E_{R}%
^{2}(W_{\eta,d})\leq\min_{C}\sum_{\mathcal{W}_{\eta,d}\in\tilde{C}}%
E_{R}(W_{\eta,d}),\label{Kemme}%
\end{equation}
where $E_{R}$ is in Eq.~(\ref{onecopyREE}), and $E_{R}^{2}$ in
Eq.~(\ref{Lagrange}). The first bound in Eq.~(\ref{Kemme}) is
certainly tighter than the second one if the channels have
$\eta<-2/d$.\ More generally, we write
\begin{equation}
K^{\text{m}}\leq\min_{E}\min_{C}\sum_{\mathcal{W}_{\eta,d}\in\tilde{C}%
}E(W_{\eta,d}),
\end{equation}
where $E$ is minimized in the computable set $\{E_{R}^{2}\leq E_{R},\tilde
{E}_{sq},E_{sq}^{\ast}\}$. Finally,\ we may write
\begin{equation}
Q_{2}^{\text{m}}\leq\min_{C}\sum_{\mathcal{W}_{\eta,d}\in\tilde{C}}%
E_{P}^{\infty}(W_{\eta,d})\leq\min_{C}\sum_{\mathcal{W}_{\eta,d}\in\tilde{C}%
}E_{P}(W_{\eta,d}),\label{multiQ2}%
\end{equation}
where $E_{P}$ is in Eq.~(\ref{onecopyREE}) and $E_{P}^{\infty}$ in
Eq.~(\ref{PPTres}). The first bound in Eq.~(\ref{multiQ2}) is computable from
the regularised RPPT in Eq.~(\ref{PPTres}) and is certainly strictly tighter
than the second bound if the channels have $\eta<-2/d$.
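With the same enumeration routine, the computable multipath bounds follow by switching the aggregator (a usage sketch reusing the helpers and the placeholder diamond network defined earlier):
\begin{verbatim}
# Flooding bounds on the diamond network from the previous sketch:
K_m  = cut_bound(nodes, edges, "A", "B",
                 lambda e: min(E_R2(e, 4), E_R(e)), agg=sum)
Q2_m = cut_bound(nodes, edges, "A", "B",
                 lambda e: E_P_inf(e, 4), agg=sum)
print(K_m, Q2_m)
\end{verbatim}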
\section{Conclusions\label{Werconclu}}
In this work we have considered quantum and private communication over the
class of (teleportation-covariant) Holevo-Werner channels. We have computed
suitable upper bounds for their two-way assisted capacities in terms of
relative entropy distances, i.e., the relative entropy of entanglement (REE)
and its variant with respect to PPT\ states (RPPT), and also in terms of the
squashed entanglement (using the identity isometry and then the convexity
property).
We have shown that there is a general competing behaviour between these
bounds, so that an optimization over the entanglement measure is in order.
These calculations were done not only for point-to-point communication, but
also for chains of quantum repeaters and, more generally, quantum networks
under different types of routings.
In all cases, we have also pointed out the subadditivity behaviour of the REE
and RPPT bounds, so that their two-copy and regularised versions perform
strictly better than their simpler one-copy expressions, under suitable
conditions of the parameters. From this point of view, our paper clearly shows
how the subadditivity properties of the Werner states can be fully mapped to
the corresponding Holevo-Werner channels in configurations of adaptive quantum
and private communication.
\smallskip
\textit{Acknowledgements}.--This work has been supported by the EPSRC via the
`UK Quantum Communications Hub' (EP/M013472/1) and by the Innovation Fund
Denmark (Qubiz project). The authors would like to thank David Elkouss for feedback.
\bigskip
\section*{Introduction}
Let ${\mathbb K}$ be an algebraically closed field
of characteristic zero.
A first aim of this paper is to
determine all finitely generated factorial
${\mathbb K}$-algebras $R$ with an effective complexity
one multigrading
$R = \oplus_{u \in M} R_u$ satisfying $R_0 = {\mathbb K}$;
here effective complexity one multigrading
means that with $d := \dim \, R$ we have
$M \cong {\mathbb Z}^{d-1}$ and
the $u \in M$ with $R_u \ne 0$ generate $M$
as a ${\mathbb Z}$-module.
Our result extends work by
Mori~\cite{Mo} and Ishida~\cite{Is},
who settled the cases $d=2$ and $d=3$.
An obvious class of multigraded factorial
algebras as above is given by polynomial rings.
A much larger class is obtained as follows.
Take a sequence $A = (a_0, \ldots, a_r)$
of vectors $a_i \in {\mathbb K}^2$
such that $(a_i,a_k)$ is linearly independent
whenever $k \ne i$,
a sequence $\mathfrak{n} = (n_0, \ldots, n_r)$
of positive integers
and a family $L = (l_{ij})$
of positive integers,
where $0 \le i \le r$ and
$1 \le j \le n_i$.
For every $0 \le i \le r$, we
define a monomial
$$
f_i
\ := \
T_{i1}^{l_{i1}} \cdots T_{in_i}^{l_{in_i}}
\ \in \
{\mathbb K}[T_{ij}; \; 0 \le i \le r, \; 1 \le j \le n_i],
$$
for any two indices $0 \le i,j \le r$,
we set $\alpha_{ij} := \det(a_i,a_j)$,
and for any three indices $0 \le i < j < k \le r$,
we define a trinomial
$$
g_{i,j,k}
\ := \
\alpha_{jk}f_i
\ + \
\alpha_{ki}f_j
\ + \
\alpha_{ij}f_k
\ \in \
{\mathbb K}[T_{ij}; \; 0 \le i \le r, \; 1 \le j \le n_i].
$$
Note that the coefficients of $g_{i,j,k}$ are
all nonzero.
The triple $(A,\mathfrak{n},L)$ then defines a
${\mathbb K}$-algebra
$$
R(A,\mathfrak{n},L)
\ := \
{\mathbb K}[T_{ij}; \; 0 \le i \le r, \; 1 \le j \le n_i]
\ / \
\bangle{g_{i,i+1,i+2}; \; 0 \le i \le r-2}.
$$
It turns out that $R(A,\mathfrak{n},L)$ is a normal
complete intersection,
see Proposition~\ref{prop:RAnLnormal}.
In particular, it is of dimension
\begin{eqnarray*}
\dim \, R(A,\mathfrak{n},L)
& = &
n_0 + \ldots + n_r \ - \ r \ + \ 1.
\end{eqnarray*}
If the triple $(A,\mathfrak{n},L)$
is {\em admissible\/}, i.e.,
the numbers $\gcd(l_{i1}, \ldots, l_{in_i})$,
where $0 \le i \le r$, are pairwise
coprime, then $R(A,\mathfrak{n},L)$
admits a canonical effective complexity
one grading by a lattice $K$,
see Construction~\ref{constr:Kgrading}.
Our first result is the following.
\goodbreak
\begin{introthm1}
Up to isomorphy, the finitely generated
factorial ${\mathbb K}$-algebras with an effective
complexity one grading $R = \oplus_M R_u$
and $R_0 = {\mathbb K}$ are
\begin{enumerate}
\item
the polynomial algebras ${\mathbb K}[T_1, \ldots, T_d]$
with a grading $\deg(T_i) = u_i \in {\mathbb Z}^{d-1}$
such that $u_1, \ldots, u_d$ generate ${\mathbb Z}^{d-1}$
as a lattice and the convex cone in ${\mathbb Q}^{d-1}$
generated by $u_1, \ldots, u_d$ is pointed,
\item
the $(K \times {\mathbb Z}^m)$-graded
algebras $R(A,\mathfrak{n},L)[S_1,\ldots,S_m]$,
where $R(A,\mathfrak{n},L)$ is the $K$-graded
algebra defined by an admissible triple
$(A,\mathfrak{n},L)$ and $\deg \, S_j \in {\mathbb Z}^m$
is the $j$-th canonical base vector.
\end{enumerate}
\end{introthm1}
The remainder of this paper is devoted to
normal (possibly singular) $d$-dimensional
Fano varieties $X$ with an effective action
of an algebraic torus $T$.
In the case $\dim \, T = d$, we have the
meanwhile extensively studied class of
toric Fano varieties, see~\cite{Bat1},
\cite{WaWa} and~\cite{Bat2} for the
initiating work.
Our aim is to show that the above
Theorem provides an approach
to classification results for the
case $\dim \, T = d-1$,
that means Fano varieties with a
complexity one torus action.
Here, we treat the case of divisor class group
$\operatorname{Cl}(X) \cong {\mathbb Z}$; note that in the toric setting
this gives precisely the weighted projective
spaces. The idea is to consider the Cox ring
\begin{eqnarray*}
\mathcal{R}(X)
& = &
\bigoplus_{D \in \operatorname{Cl}(X)} \Gamma(X, \mathcal{O}_X(D)).
\end{eqnarray*}
The ring $\mathcal{R}(X)$ is factorial,
finitely generated as a ${\mathbb K}$-algebra and
the $T$-action on $X$ gives rise to an effective
complexity one multigrading of $\mathcal{R}(X)$
refining the $\operatorname{Cl}(X)$-grading,
see~\cite{BeHa1} and~\cite{HaSu}.
Consequently, $\mathcal{R}(X)$ is one of the rings
listed in the first Theorem.
Moreover, $X$ can be easily reconstructed from
$\mathcal{R}(X)$; it is the homogeneous
spectrum with respect to the $\operatorname{Cl}(X)$-grading
of $\mathcal{R}(X)$.
Thus, in order to construct Fano varieties,
we firstly have to figure out the Cox rings
among the rings occurring in the first Theorem
and then find those, which belong to a Fano variety;
this is done in Propositions~\ref{prop:coxchar}
and~\ref{Prop:FanoPicard}.
In order to produce classification results
via this approach,
we need explicit bounds on the number
of deformation types of Fano varieties with
prescribed discrete invariants.
Besides the dimension, in our setting,
a suitable invariant is the
{\em Picard index\/} $[\operatorname{Cl}(X):\operatorname{Pic}(X)]$.
Denoting by $\xi(\mu)$ the number of
primes less or equal to $\mu$,
we obtain the following bound,
see Corollary~\ref{cor:finitefanos}:
for any pair $(d,\mu) \in {\mathbb Z}^2_{>0}$,
the number $\delta(d,\mu)$ of different
deformation types of $d$-dimensional
Fano varieties with a complexity one
torus action such that
$\operatorname{Cl}(X) \cong {\mathbb Z}$ and $\mu = [\operatorname{Cl}(X):\operatorname{Pic}(X)]$
hold is bounded by
\begin{eqnarray*}
\delta(d,\mu)
& \le &
(6d\mu)^{2\xi(3d\mu)+d-2}\mu^{\xi(\mu)^2 + 2\xi((d+2)\mu)+2d+2}.
\end{eqnarray*}
In particular, we conclude that for fixed
$\mu \in {\mathbb Z}_{>0}$, the number $\delta(d)$
of different deformation types of $d$-dimensional
Fano varieties with a complexity one
torus action $\operatorname{Cl}(X) \cong {\mathbb Z}$
and Picard index $\mu$
is asymptotically bounded by
$d^{Ad}$ with a constant~$A$
depending only on~$\mu$,
see~Corollary~\ref{cor:asymptoticsmu}.
In fact, in Theorem~\ref{Th:FiniteIndex} we
even obtain explicit bounds for the discrete input
data of the rings $R(A,\mathfrak{n},L)[S_1,\ldots,S_m]$.
This allows us to construct all
Fano varieties $X$ with prescribed dimension
and Picard index that come with an effective
complexity one torus action and have divisor
class group ${\mathbb Z}$.
Note that this approach gives us the Cox
rings of the resulting Fano varieties $X$
for free.
In Section~\ref{sec:tables}, we give
some explicit classifications.
We list all non-toric surfaces $X$ with
Picard index at most six and the
non-toric threefolds~$X$
with Picard index at most two.
They all have a Cox ring defined by a single
relation;
in fact, for surfaces the first
Cox ring with more than one relation
occurs for Picard index~29, and for the
threefolds this happens with Picard index~3,
see Proposition~\ref{prop:fano22rel}
as well as Examples~\ref{ex:fanosurf2rel}
and~\ref{ex:fano32rel}.
Moreover, we determine all locally
factorial fourfolds~$X$, i.e.
those of Picard index one:
67 of them are sporadic and
there are two one-dimensional families.
Here comes the result on the locally factorial
threefolds; in the table, we denote by $w_i$
the $\operatorname{Cl}(X)$-degree of the variable $T_i$.
\goodbreak
\begin{introthm2}
The following table lists the
Cox rings $\mathcal{R}(X)$
of the three-dimensional
locally factorial non-toric Fano
varieties $X$ with an effective two-torus
action and $\operatorname{Cl}(X) = {\mathbb Z}$.
\begin{center}
\begin{longtable}[htbp]{llll}
\toprule
No.
&
$\mathcal{R}(X)$
&
$(w_1,\ldots, w_5)$
&
$(-K_X)^3$
\\
\midrule
1
\hspace{.5cm}
&
$
{\mathbb K}[T_1, \ldots, T_5]
\ / \
\bangle{T_1T_2^5 + T_3^3 + T_4^2}
$
\hspace{.5cm}
&
$(1,1,2,3,1)$
\hspace{.5cm}
&
$8$
\\
\midrule
2
&
$
{\mathbb K}[T_1, \ldots, T_5]
\ / \
\bangle{T_1T_2T_3^4 + T_4^3 + T_5^2}
$
&
$(1,1,1,2,3)$
&
$8$
\\
\midrule
3
&
$
{\mathbb K}[T_1, \ldots, T_5]
\ / \
\bangle{T_1T_2^2T_3^3 + T_4^3 + T_5^2}
$
&
$(1,1,1,2,3)$
&
$8$
\\
\midrule
4
&
$
{\mathbb K}[T_1, \ldots, T_5]
\ / \
\bangle{T_1T_2 + T_3T_4 + T_5^2}
$
&
$(1,1,1,1,1)$
&
$54$
\\
\midrule
5
&
$
{\mathbb K}[T_1, \ldots, T_5]
\ / \
\bangle{T_1T_2^2 + T_3T_4^2 + T_5^3}
$
&
$(1,1,1,1,1)$
&
$24$
\\
\midrule
6
&
$
{\mathbb K}[T_1, \ldots, T_5]
\ / \
\bangle{T_1T_2^3 + T_3T_4^3 + T_5^4}
$
&
$(1,1,1,1,1)$
&
$4$
\\
\midrule
7
&
$
{\mathbb K}[T_1, \ldots, T_5]
\ / \
\bangle{T_1T_2^3 + T_3T_4^3 + T_5^2}
$
&
$(1,1,1,1,2)$
&
$16$
\\
\midrule
8
&
$
{\mathbb K}[T_1, \ldots, T_5]
\ / \
\bangle{T_1T_2^5 + T_3T_4^5 + T_5^2}
$
&
$(1,1,1,1,3)$
&
$2$
\\
\midrule
9
&
$
{\mathbb K}[T_1, \ldots, T_5]
\ / \
\bangle{T_1T_2^5 + T_3^3T_4^3 + T_5^2}
$
&
$(1,1,1,1,3)$
&
$2$
\\
\bottomrule
\end{longtable}
\end{center}
\end{introthm2}
Note that each of these varieties $X$ is
a hypersurface in the respective
weighted projective space
${\mathbb P}(w_1, \ldots, w_5)$.
Except for number~4, none of them is
quasismooth in the sense that
${\rm Spec} \, \mathcal{R}(X)$ is
singular at most in the origin;
quasismooth hypersurfaces of
weighted projective spaces were studied
in~\cite{JoKo} and~\cite{CCC}.
In Section~\ref{sec:geom3folds}, we take
a closer look at the singularities
of the threefolds listed above.
It turns out that numbers~1, 3, 5, 7 and 9 are singular
with only canonical singularities and all of them
admit a crepant resolution.
Numbers~6 and 8 are singular with
non-canonical singularities but
admit a smooth relative minimal model.
Number two is singular with only canonical singularities,
one of them of type $\mathbf{cA_1}$,
and it admits only a singular relative minimal model.
Moreover, in all cases, we determine the Cox rings
of the resolutions.
The authors would like to thank Ivan Arzhantsev
for helpful comments and discussions and also the
referee for valuable remarks and many references.
\section{UFDs with complexity one multigrading}
\label{sec:factrings}
As mentioned before, we work over an algebraically
closed field ${\mathbb K}$ of characteristic zero.
In Theorem~\ref{thm:factrings}, we describe all
factorial finitely generated ${\mathbb K}$-algebras~$R$
with an effective complexity one grading and $R_0={\mathbb K}$.
Moreover, we characterize the possible Cox rings
among these algebras, see Proposition~\ref{prop:coxchar}.
First we recall the construction sketched in the
introduction.
\begin{construction}
\label{constr:triple2ring}
Consider a sequence
$A = (a_0, \ldots, a_r)$
of vectors $a_i = (b_i,c_i)$ in ${\mathbb K}^2$
such that any pair $(a_i,a_k)$ with
$k \ne i$ is linearly independent,
a sequence
$\mathfrak{n} = (n_0, \ldots, n_r)$
of positive integers
and a family $L = (l_{ij})$
of positive integers,
where $0 \le i \le r$ and
$1 \le j \le n_i$.
For every $0 \le i \le r$, define a monomial
$$
f_i
\ := \
T_{i1}^{l_{i1}} \cdots T_{in_i}^{l_{in_i}}
\ \in \
{\mathbb K}[T_{ij}; \; 0 \le i \le r, \; 1 \le j \le n_i],
$$
for any two indices $0 \le i,j \le r$,
set $\alpha_{ij} := \det(a_i,a_j) = b_ic_j-b_jc_i$
and for any three indices
$0 \le i < j < k \le r$ define
a trinomial
$$
g_{i,j,k}
\ := \
\alpha_{jk}f_i
\ + \
\alpha_{ki}f_j
\ + \
\alpha_{ij}f_k
\ \in \
{\mathbb K}[T_{ij}; \; 0 \le i \le r, \; 1 \le j \le n_i].
$$
Note that the coefficients of this trinomial are
all nonzero.
The triple $(A,\mathfrak{n},L)$ then defines a ring
$$
R(A,\mathfrak{n},L)
\ := \
{\mathbb K}[T_{ij}; \; 0 \le i \le r, \; 1 \le j \le n_i]
\ / \
\bangle{g_{i,i+1,i+2}; \; 0 \le i \le r-2}.
$$
\end{construction}
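To make the construction concrete, the following \texttt{sympy} sketch (our example; the chosen triple is a small admissible one, not taken from the text) writes down the single defining trinomial for $r=2$:
\begin{verbatim}
import sympy as sp

# Hypothetical data: A = ((1,0), (0,1), (1,1)), n = (1,1,2),
# l_0 = (2), l_1 = (3), l_2 = (1,1); gcds 2, 3, 1 pairwise coprime.
a = [(1, 0), (0, 1), (1, 1)]
l = [(2,), (3,), (1, 1)]
T = [[sp.Symbol("T_%d%d" % (i, j + 1)) for j in range(len(l[i]))]
     for i in range(3)]

f = [sp.Mul(*[t**e for t, e in zip(T[i], l[i])]) for i in range(3)]
alpha = lambda i, j: a[i][0] * a[j][1] - a[j][0] * a[i][1]

g_012 = alpha(1, 2) * f[0] + alpha(2, 0) * f[1] + alpha(0, 1) * f[2]
print(g_012)    # e.g. -T_01**2 - T_11**3 + T_21*T_22
\end{verbatim}
Here $R(A,\mathfrak{n},L)$ is cut out by the single relation $T_{01}^{2}+T_{11}^{3}=T_{21}T_{22}$ and has dimension $n-r+1 = 4-2+1 = 3$, in accordance with Proposition~\ref{prop:RAnLnormal} below.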
\begin{proposition}
\label{prop:RAnLnormal}
For every triple $(A,\mathfrak{n},L)$
as in~\ref{constr:triple2ring},
the ring $R(A,\mathfrak{n},L)$ is a
normal complete intersection of
dimension
$$
\dim \, R(A,\mathfrak{n},L)
\ = \
n-r+1,
\qquad\qquad
n \ := \ n_0 + \ldots + n_r.
$$
\end{proposition}
\begin{lemma}
\label{lem:alltrins}
In the setting of~\ref{constr:triple2ring},
one has for any $0 \le i < j < k < l \le r$
the identities
$$
g_{i,k,l}
\ = \
\alpha_{kl} \cdot g_{i,j,k} + \alpha_{ik} \cdot g_{j,k,l},
\qquad\qquad
g_{i,j,l}
\ = \
\alpha_{jl} \cdot g_{i,j,k} + \alpha_{ij} \cdot g_{j,k,l}.
$$
In particular, every trinomial $g_{i,j,k}$,
where $0 \le i < j < k \le r$
is contained in the ideal
$\bangle{g_{i,i+1,i+2}; \; 0 \le i \le r-2}$.
\end{lemma}
\begin{proof}
The identities are easily obtained by direct
computation;
note that for this one may assume
$a_j = (1,0)$ and $a_k = (0,1)$, so that $\alpha_{jk} = 1$;
for a general sequence $A$ they hold with the left hand
sides multiplied by $\alpha_{jk} \ne 0$.
The supplement then follows by repeated
application of the identities.
\end{proof}
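These identities (in their normalisation-free form, i.e., with the factor $\alpha_{jk}$ from the proof reinstated on the left hand sides) can be verified symbolically. The following \texttt{sympy} sketch treats the monomials $f_i$ as independent symbols, since they enter only linearly:
\begin{verbatim}
import sympy as sp

b, c = sp.symbols("b0:4"), sp.symbols("c0:4")
f = sp.symbols("f0:4")

alpha = lambda i, j: b[i] * c[j] - b[j] * c[i]
g = lambda i, j, k: alpha(j, k) * f[i] + alpha(k, i) * f[j] + alpha(i, j) * f[k]

i, j, k, l = 0, 1, 2, 3
assert sp.expand(alpha(j, k) * g(i, k, l)
                 - alpha(k, l) * g(i, j, k) - alpha(i, k) * g(j, k, l)) == 0
assert sp.expand(alpha(j, k) * g(i, j, l)
                 - alpha(j, l) * g(i, j, k) - alpha(i, j) * g(j, k, l)) == 0
\end{verbatim}
For the normalisation $a_j = (1,0)$, $a_k = (0,1)$ one has $\alpha_{jk}=1$, and the two checks reduce to the displayed identities.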
\begin{lemma}
\label{lem:twotrinszero}
In the notation of~\ref{constr:triple2ring}
and~\ref{prop:RAnLnormal},
set $X := V({\mathbb K}^n; g_0, \ldots, g_{r-2})$, and let
$z \in X$.
If we have $f_i(z) = f_j(z) = 0$ for
two $0 \le i < j \le r$,
then $f_k(z) = 0$ holds for all
$0 \le k \le r$.
\end{lemma}
\begin{proof}
If $i<k<j$ holds, then, according to
Lemma~\ref{lem:alltrins},
we have $g_{i,k,j}(z)=0$,
which implies $f_k(z)=0$.
The cases $k<i$ and $j<k$ are
obtained similarly.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:RAnLnormal}]
Set $X := V({\mathbb K}^n; g_0, \ldots, g_{r-2})$,
where $g_i := g_{i,i+1,i+2}$.
Then we have to show that $X$ is
a connected complete intersection with
at most normal singularities.
In order to see that $X$ is connected,
set $\ell := \prod n_i \prod l_{ij}$ and
$\zeta_{ij} := \ell n_i^{-1}l_{ij}^{-1}$.
Then $X \subseteq {\mathbb K}^n$ is invariant
under the ${\mathbb K}^*$-action given by
\begin{eqnarray*}
t \cdot z
& := &
(t^{\zeta_{ij}} z_{ij})
\end{eqnarray*}
and the point $0 \in {\mathbb K}^n$ lies in the closure of
any orbit ${\mathbb K}^* \! \cdot \! x \subseteq X$,
which implies connectedness.
To proceed, consider the Jacobian $J_g$
of $g := (g_0, \ldots, g_{r-2})$.
According to Serre's criterion,
we have to show that the set of points of
$z \in X$ with $J_g(z)$ not of full rank
is of codimension at least two in $X$.
Note that the Jacobian $J_g$ is of the shape
\begin{eqnarray*}
J_g
& = &
\left(
\begin{array}{rrrrrcrrrrr}
\delta_{0 \, 0} & \delta_{0 \, 1} & \delta_{0 \, 2} & 0 & & &&&&& 0
\\
0 & \delta_{1 \, 1} & \delta_{1 \, 2} & \delta_{1 \, 3} & 0 & &&&&&
\\
&&&&& \vdots &&&&&
\\
\\
&&&&& & 0 & \delta_{r-3 \, r-3} & \delta_{r-3 \, r-2} & \delta_{r-3 \, r-1} & 0
\\
0 &&&&& & & 0 & \delta_{r-2 \, r-2} & \delta_{r-2 \, r-1} & \delta_{r-2 \, r}
\end{array}
\right)
\end{eqnarray*}
where $\delta_{ti}$ is a nonzero multiple
of the gradient $\delta_i := {\rm grad} \, f_i$.
Consider $z \in X$
with $J_g(z)$ not of full rank.
Then $\delta_i(z) = 0 = \delta_k(z)$
holds with some $0 \le i < k \le r$.
This implies
$z_{ij} = 0 = z_{kl}$
for some $1 \le j \le n_i$ and
$1 \le l \le n_k$.
Thus, we have $f_i(z) = 0 = f_k(z)$.
Lemma~\ref{lem:twotrinszero} gives
$f_s(z) = 0$, for all $0 \le s \le r$.
Thus, some coordinate
$z_{st}$ must vanish
for every $0 \le s \le r$.
This shows that $z$ belongs to a
closed subset of $X$ having
codimension at least two in $X$.
\end{proof}
\begin{lemma}
\label{lem:tijprime}
Notation as in~\ref{constr:triple2ring}.
Then the variable $T_{ij}$ defines a prime ideal
in $R(A,\mathfrak{n},L)$ if and only if
the numbers $\gcd(l_{k1}, \ldots, l_{kn_k})$,
where $k \ne i$, are pairwise
coprime.
\end{lemma}
\begin{proof}
We treat the case of $T_{01}$ as an example.
Using Lemma~\ref{lem:alltrins},
we see that the ideal of relations of
$R(A,\mathfrak{n},L)$ can be
presented as follows
\begin{eqnarray*}
\bangle{g_{s,s+1,s+2}; \; 0 \le s \le r-2}
& = &
\bangle{g_{0,s,s+1}; \; 1 \le s \le r-1}.
\end{eqnarray*}
Thus, the ideal
$\bangle{T_{01}} \subseteq R(A,\mathfrak{n},L)$
is prime if and only if the following
binomial ideal is prime
$$
\mathfrak{a}
\ := \
\bangle{\alpha_{s+1 \, 0}f_s + \alpha_{0 s}f_{s+1}; \; 1 \le s \le r-1}
\ \subseteq \
{\mathbb K}[T_{ij}; \; (i,j) \ne (0,1)].
$$
Set $l_i := (l_{i1}, \ldots, l_{in_i})$.
Then the ideal $\mathfrak{a}$ is prime if and
only if the following family
can be complemented to a lattice basis
$$
(l_1,-l_2,0,\ldots,0),
\
\ldots,
\
(0,\ldots,0,l_{r-1},-l_r).
$$
This in turn is equivalent to the
statement that the numbers
$\gcd(l_{k1}, \ldots, l_{kn_k})$,
where $1 \le k \le r$, are pairwise
coprime.
\end{proof}
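The lattice-basis criterion used in the proof is easy to test in examples. The sketch below (our illustration) decides whether the rows of an integer matrix extend to a ${\mathbb Z}$-basis, via the classical criterion that the gcd of all maximal minors equals one:
\begin{verbatim}
from itertools import combinations
from math import gcd
import sympy as sp

def extends_to_basis(rows):
    # Rows extend to a ZZ-basis iff the gcd of all maximal minors is 1.
    M = sp.Matrix(rows)
    r, n = M.shape
    minors = [int(abs(M[:, list(cols)].det()))
              for cols in combinations(range(n), r)]
    return gcd(*minors) == 1

# l_1 = (2), l_2 = (3), l_3 = (5, 1): gcds 2, 3, 5 pairwise coprime.
print(extends_to_basis([[2, -3, 0, 0], [0, 3, -5, -1]]))   # True
# l_1 = (2), l_2 = (4), l_3 = (1): gcds 2 and 4 share a factor.
print(extends_to_basis([[2, -4, 0], [0, 4, -1]]))          # False
\end{verbatim}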
\begin{definition}
We say that a triple $(A,\mathfrak{n},L)$
as in~\ref{constr:triple2ring} is
{\em admissible\/} if the
numbers $\gcd(l_{i1}, \ldots, l_{in_i})$,
where $0 \le i \le r$, are pairwise
coprime.
\end{definition}
\begin{construction}
\label{constr:Kgrading}
Let $(A,\mathfrak{n},L)$ be an admissible triple
and consider the following free abelian groups
$$
E
\quad := \quad
\bigoplus_{i=0}^r \bigoplus_{j=1}^{n_i} {\mathbb Z} \! \cdot \! e_{ij},
\qquad
\qquad
K
\quad := \quad
\bigoplus_{j=1}^{n_0} {\mathbb Z} \! \cdot \! u_{0j}
\ \oplus \
\bigoplus_{i=1}^r \bigoplus_{j=1}^{n_i-1} {\mathbb Z} \! \cdot \! u_{ij}
$$
and define vectors
$u_{in_i}
:=
u_{01} + \ldots + u_{0n_0} - u_{i1} - \ldots - u_{in_i-1}
\in K$.
Then there is an epimorphism $\lambda \colon E \to K$
fitting into a commutative diagram with exact rows
$$
\xymatrix{
0
\ar[rr]
&&
E
\ar[rr]_{\alpha}^{e_{ij} \mapsto l_{ij} e_{ij}}
\ar[d]^{\eta}_{e_{ij} \mapsto u_{ij}}
&&
E
\ar[rr]^{e_{ij} \mapsto \b{e}_{ij} \qquad}
\ar[d]^{\lambda}
&&
{\bigoplus_{i,j} {\mathbb Z} / l_{ij} {\mathbb Z}}
\ar[rr]
\ar@{<->}[d]^{\cong}
&&
0
\\
0
\ar[rr]
&&
K
\ar[rr]_{\beta}
&&
K
\ar[rr]
&&
{\bigoplus_{i,j} {\mathbb Z} / l_{ij} {\mathbb Z}}
\ar[rr]
&&
0
}
$$
Define a $K$-grading of
${\mathbb K}[T_{ij}; \; 0 \le i \le r, \; 1 \le j \le n_i]$
by setting $\deg \, T_{ij} := \lambda(e_{ij})$.
Then every
$f_i = T_{i1}^{l_{i1}} \cdots T_{in_i}^{l_{in_i}}$
is $K$-homogeneous of degree
$$
\deg \, f_i
\ = \
l_{i1} \lambda(e_{i1}) + \ldots + l_{in_i}\lambda(e_{in_i})
\ = \
l_{01} \lambda(e_{01}) + \ldots + l_{0n_0}\lambda(e_{0n_0})
\ \in \
K.
$$
Thus, the polynomials $g_{i,j,k}$
of~\ref{constr:triple2ring}
are all $K$-homogeneous of the same degree
and we obtain an effective
$K$-grading of complexity one
of $R(A,\mathfrak{n},L)$.
\end{construction}
\begin{proof}
Only the existence of the commutative diagram
requires an argument.
Write for short
$l_i := (l_{i1}, \ldots, l_{in_i})$.
By the admissibility condition,
the vectors
$v_i := (0, \ldots, 0,l_i,-l_{i+1},0,\ldots,0)$,
where $0 \le i \le r-1$,
can be completed to a lattice basis for $E$.
Consequently, we find an epimorphism
$\lambda \colon E \to K$ having precisely
${\rm lin}(v_0, \ldots, v_{r-1})$ as its kernel.
By construction, $\ker(\lambda)$ equals
$\alpha(\ker(\eta))$.
Using this, we obtain the induced
morphism $\beta \colon K \to K$ and
the desired properties.
\end{proof}
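For a concrete instance of the construction — our illustration, chosen to match an example appearing further below — consider the admissible triple with $r = 2$, $\mathfrak{n} = (1,1,1)$ and $l_{01} = 2$, $l_{11} = 3$, $l_{21} = 5$.
Then $E = {\mathbb Z}^3$ holds, $K$ is of rank one and the kernel of $\lambda$ has to be the sublattice spanned by $v_0 = (2,-3,0)$ and $v_1 = (0,3,-5)$.
The surjection $\lambda = (15,10,6) \colon {\mathbb Z}^3 \to {\mathbb Z}$ does the job:
$$
2 \cdot 15 - 3 \cdot 10 \ = \ 0,
\qquad
3 \cdot 10 - 5 \cdot 6 \ = \ 0,
\qquad
\gcd(15,10,6) \ = \ 1.
$$
Thus, $\deg \, T_{01} = 15$, $\deg \, T_{11} = 10$ and $\deg \, T_{21} = 6$ hold and the relation $g_{0,1,2}$ is $K$-homogeneous of degree $30$.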
\begin{lemma}
\label{lem:pwnonassoc}
Notation as in~\ref{constr:Kgrading}.
Then $R(A,\mathfrak{n},L)_0 = {\mathbb K}$
and $R(A,\mathfrak{n},L)^* = {\mathbb K}^*$ hold.
Moreover, the $T_{ij}$ define pairwise nonassociated
prime elements in $R(A,\mathfrak{n},L)$.
\end{lemma}
\begin{proof}
All elements of degree zero are constant
because the degrees
$\deg \, T_{ij} \in K$
are non-zero and generate a pointed convex
cone in $K_{{\mathbb Q}}$.
As a consequence, we obtain that all units
in $R(A,\mathfrak{n},L)$ are constant.
The $T_{ij}$ are prime by the admissibility
condition and Lemma~\ref{lem:tijprime},
and they are pairwise nonassociated
because they have pairwise different degrees
and all units are constant.
\end{proof}
\goodbreak
\begin{theorem}
\label{thm:factrings}
Up to isomorphy, the finitely generated
factorial ${\mathbb K}$-algebras with an effective
complexity one grading $R = \oplus_{u \in M} R_u$
and $R_0 = {\mathbb K}$ are
\begin{enumerate}
\item
the polynomial algebras ${\mathbb K}[T_1, \ldots, T_d]$
with a grading $\deg(T_i) = u_i \in {\mathbb Z}^{d-1}$
such that $u_1, \ldots, u_d$ generate ${\mathbb Z}^{d-1}$
as a lattice and the convex cone in ${\mathbb Q}^{d-1}$
generated by $u_1, \ldots, u_d$ is pointed,
\item
the $(K \times {\mathbb Z}^m)$-graded
algebras $R(A,\mathfrak{n},L)[S_1,\ldots,S_m]$,
where $R(A,\mathfrak{n},L)$ is the $K$-graded
algebra defined by an admissible triple
$(A,\mathfrak{n},L)$ as in~\ref{constr:triple2ring}
and~\ref{constr:Kgrading}
and $\deg\, S_j \in {\mathbb Z}^m$
is the $j$-th canonical base vector.
\end{enumerate}
\end{theorem}
\begin{proof}
We first show that for any admissible triple
$(A,\mathfrak{n},L)$ the ring $R(A,\mathfrak{n},L)$
is a unique factorization domain.
If $l_{ij} = 1$ holds for all $i,j$,
then, by~\cite[Prop.~2.4]{HaSu},
the ring $R(A,\mathfrak{n},L)$
is the Cox ring of a space ${\mathbb P}_1(A,\mathfrak{n})$
and hence is a unique factorization domain.
Now, let $(A,\mathfrak{n},L)$ be arbitrary
admissible data and let $\lambda \colon E \to K$
be an epimorphism
as in~\ref{constr:Kgrading}.
Set $n := n_0 + \ldots + n_r$
and consider the diagonalizable groups
$$
{\mathbb T}^n \ := \ {\rm Spec} \, {\mathbb K}[E],
\qquad
H \ := \ {\rm Spec} \, {\mathbb K}[K],
\qquad
H_0 \ := \ {\rm Spec} \, {\mathbb K}[\oplus_{i,j} {\mathbb Z} / l_{ij} {\mathbb Z}].
$$
Then ${\mathbb T}^n = ({\mathbb K}^*)^n$ is the standard $n$-torus
and $H_0$ is the direct product
of the cyclic subgroups
$H_{ij} := {\rm Spec} \, {\mathbb K}[{\mathbb Z} / l_{ij} {\mathbb Z}]$.
Moreover, the diagram in~\ref{constr:Kgrading}
gives rise to a commutative diagram
with exact rows
$$
\xymatrix{
0
&&
{{\mathbb T}^n}
\ar[ll]
&&
{{\mathbb T}^n}
\ar[ll]_{(t_{ij}^{l_{ij}}) \mapsfrom (t_{ij})}
&&
\ar[ll]
H_0
&&
0
\ar[ll]
\\
0
&&
{H}
\ar[ll]
\ar[u]^{\imath}
&&
{H}
\ar[ll]
\ar[u]^{\jmath}
&&
H_0
\ar[ll]
\ar@{<->}[u]_{\cong}
&&
0
\ar[ll]
}
$$
where $t_{ij} = \chi^{e_{ij}}$ are the coordinates
of ${\mathbb T}^n$ corresponding to the
characters $e_{ij} \in E$
and the maps
$\imath$, $\jmath$ are the closed embeddings
corresponding to the epimorphisms $\eta$,
$\lambda$ respectively.
Setting $\deg \, T_{ij} := e_{ij}$ defines
an action of ${\mathbb T}^n$ on
${\mathbb K}^n = {\rm Spec} \, {\mathbb K}[T_{ij}]$;
in terms of the coordinates $z_{ij}$
corresponding to $T_{ij}$
this action is given by
$t \! \cdot \! z = (t_{ij} z_{ij})$.
The torus $H$ acts effectively on ${\mathbb K}^n$ via
the embedding $\jmath \colon H \to {\mathbb T}^n$.
The generic isotropy group of $H$ along
$V({\mathbb K}^n,T_{ij})$ is the subgroup
$H_{ij} \subseteq H$
corresponding to
$K \to K/\lambda(E_{ij})$,
where $E_{ij} \subseteq E$
denotes the sublattice generated
by all $e_{kl}$ with $(k,l) \ne (i,j)$;
recall that we have
$K/\lambda(E_{ij}) \cong {\mathbb Z} / l_{ij}{\mathbb Z}$.
Now, set $l_{ij}' := 1$
for all $i,j$ and
consider the spectra
$X := {\rm Spec} \, R(A,\mathfrak{n},L)$
and
$X' := {\rm Spec} \, R(A,\mathfrak{n},L')$.
Then the canonical surjections
${\mathbb K}[T_{ij}] \to R(A,\mathfrak{n},L)$
and
${\mathbb K}[T_{ij}] \to R(A,\mathfrak{n},L')$
define embeddings
$X \to {\mathbb K}^n$ and $X' \to {\mathbb K}^n$.
These embeddings fit into
the following commutative diagram
$$
\xymatrix{
{{\mathbb K}^n}
\ar@{<-}[rrr]_{\pi}^{(z_{ij}^{l_{ij}}) \mapsfrom (z_{ij})}
&
&
&
{{\mathbb K}^n}
\\
X'
\ar@{<-}[rrr]
\ar[u]
&
&
&
X
\ar[u]
}
$$
The action of $H$ leaves $X$ invariant
and the induced $H$-action on $X$
is the one given by the $K$-grading of
$R(A,\mathfrak{n},L)$.
Moreover, $\pi \colon {\mathbb K}^n \to {\mathbb K}^n$
is the quotient map for the induced action
of $H_0 \subseteq H$ on ${\mathbb K}^n$,
we have $X = \pi^{-1}(X')$, and hence
the restriction $\pi \colon X \to X'$
is a quotient map for the induced action
of $H_0$ on $X$.
Removing all subsets $V(X;T_{ij},T_{kl})$,
where $(i,j) \ne (k,l)$, from $X$,
we obtain an open subset $U \subseteq X$.
By Lemma~\ref{lem:pwnonassoc},
the complement $X \setminus U$
is of codimension at least two
and each $V(U,T_{ij})$ is irreducible.
By construction, the only isotropy groups
of the $H$-action on $U$ are
the groups $H_{ij}$ of the points of
$V(U,T_{ij})$.
The image $U' := \pi(U)$ is open in
$X'$,
the complement $X' \setminus U'$
is as well of codimension at least two
and $H/H_0$ acts freely on $U'$.
According to~\cite[Cor.~5.3]{KKV},
we have two exact sequences fitting
into the following diagram
$$
\xymatrix{
&
&
1
\ar[d]
&
\\
&
&
{\operatorname{Pic}}(U')
\ar[d]^{\pi^*}
&
\\
1 \ar[r]
&
{{\mathbb X}(H_0)} \ar[r]^{\alpha}
&
{\operatorname{Pic}_{H_0}}(U) \ar[r]^{\beta}
\ar[d]^{\delta}
&
{\operatorname{Pic}}(U)
\\
&
&
{\prod_{i,j}} {\mathbb X}(H_{ij})
&
}
$$
Since $X'$ is factorial, the Picard group
$\operatorname{Pic}(U')$ is trivial
and we obtain that $\delta$ is injective.
Since $H_0$ is the direct product
of the isotropy groups $H_{ij}$
of the Luna strata $V(U,T_{ij})$,
we see that
$\delta \circ \alpha$ is an isomorphism.
It follows that $\delta$ is surjective
and hence an isomorphism.
This in turn shows that $\alpha$ is an
isomorphism.
Now, every line bundle on $U$ is $H$-linearizable.
Since $H_0$ acts as a subgroup of $H$,
we obtain that every line bundle is $H_0$-linearizable.
It follows that $\beta$ is surjective and hence
$\operatorname{Pic}(U)$ is trivial.
We conclude $\operatorname{Cl}(X) = \operatorname{Pic}(U) = 0$,
which means that $R(A,\mathfrak{n},L)$ admits unique
factorization.
It remains to show that
any finitely generated factorial ${\mathbb K}$-algebra
$R$ with an effective complexity one multigrading
satisfying $R_0 = {\mathbb K}$ is as claimed.
Consider the action of the torus $G$ on
$X = {\rm Spec}\, R$ defined by the multigrading,
and let $X_0 \subseteq X$ be the set of
points having finite isotropy $G_x$.
Then~\cite[Prop~3.3]{HaSu}
provides a graded splitting
\begin{eqnarray*}
R
& \cong &
R'[S_1, \ldots, S_m],
\end{eqnarray*}
where the variables $S_j$ are identified with
the homogeneous functions defining the prime
divisors $E_j$ inside the boundary
$X \setminus X_0$ and $R'$ is the ring of
functions of $X_0$, which are invariant
under the subtorus $G_0 \subseteq G$
generated by the generic isotropy groups
$G_j$ of $E_j$.
Since $R'_0 = R_0 = {\mathbb K}$ holds, the orbit
space $X_0/G$ has only constant functions and
thus is a space ${\mathbb P}_1(A,\mathfrak{n})$
as constructed in~\cite[Section~2]{HaSu}.
This allows us to proceed exactly as in the
proof of~\cite[Thm.~1.3]{HaSu} and
gives $R' = R(A,\mathfrak{n},L)$.
The admissibility condition follows
from Lemma~\ref{lem:tijprime} and the
fact that each $T_{ij}$ defines a prime
element in $R'$.
\end{proof}
\begin{remark}
\label{rem:mori}
Let $(A,\mathfrak{n},L)$ be an admissible triple
with $\mathfrak{n} =(1,\ldots, 1)$.
Then $K = {\mathbb Z}$ holds, the admissibility condition
just means that the numbers $l_{ij}$ are pairwise coprime
and we have
$$
\dim \, R(A,\mathfrak{n},L)
\ = \
n_0 + \ldots + n_r - r + 1
\ = \
2.
$$
Consequently, for two-dimensional rings,
Theorem~\ref{thm:factrings} specializes to Mori's
description of almost geometrically graded
two-dimensional
unique factorization domains provided in~\cite{Mo}.
\end{remark}
\begin{proposition}
\label{prop:coxchar}
Let $(A,\mathfrak{n},L)$ be an admissible triple,
consider the associated
$(K \times {\mathbb Z}^m)$-graded ring
$R(A,\mathfrak{n},L)[S_1, \ldots, S_m]$
as in Theorem~\ref{thm:factrings}
and let $\mu \colon K \times {\mathbb Z}^m \to K'$ be a surjection
onto an abelian group $K'$.
Then the following statements are equivalent.
\begin{enumerate}
\item
The $K'$-graded ring
$R(A,\mathfrak{n},L)[S_1, \ldots, S_m]$
is the Cox ring of a projective variety $X'$ with
$\operatorname{Cl}(X') \cong K'$.
\item
For every pair $i,j$ with $0 \le i \le r$ and
$1 \le j \le n_i$, the group $K'$ is generated
by the elements $\mu(\lambda(e_{kl}))$ and $\mu(e_s)$,
where $(i,j) \ne (k,l)$ and $1 \le s \le m$,
for every $1 \le t \le m$, the group $K'$ is generated
by the elements $\mu(\lambda(e_{ij}))$ and $\mu(e_s)$,
where $0 \le i \le r$, $1 \le j \le n_i$ and $s \ne t$,
and, finally the following
cone is of full dimension in $K'_{{\mathbb Q}}$:
$$
\bigcap_{(k,l)} {\rm cone}(\mu(\lambda(e_{ij})), \mu(e_s); \; (i,j) \ne (k,l))
\ \cap \
\bigcap_{t} {\rm cone}(\mu(\lambda(e_{ij})), \mu(e_s); \; s \ne t).
$$
\end{enumerate}
\end{proposition}
\begin{proof}
Suppose that~(i) holds,
let $p \colon \rq{X}' \to X'$
denote the universal torsor
and let $X'' \subseteq X'$ be the set
of smooth points.
According to~\cite[Prop.~2.2]{Ha2},
the group $H' = {\rm Spec} \, {\mathbb K}[K']$ acts
freely on $p^{-1}(X'')$, which
is a big open subset of the total
coordinate space
${\rm Spec} \, R(A,\mathfrak{n},L)[S_1, \ldots, S_m]$.
This implies the first condition of~(ii).
Moreover, by~\cite[Prop.~4.1]{Ha2}, the displayed cone
is the moving cone of $X'$ and
hence of full dimension.
Conversely, if~(ii) holds,
then the $K'$-graded ring
$R(A,\mathfrak{n},L)[S_1, \ldots, S_m]$
can be made into a bunched ring and
hence is the Cox ring of a projective variety,
use~\cite[Thm.~3.6]{Ha2}.
\end{proof}
\section{Bounds for Fano varieties}
We consider $d$-dimensional Fano varieties
$X$ that come with a complexity one torus
action and have divisor class group
$\operatorname{Cl}(X) \cong {\mathbb Z}$.
Then the Cox ring $\mathcal{R}(X)$ of
$X$ is factorial~\cite[Prop.~8.4]{BeHa1}
and has an effective complexity one
grading,
which refines the $\operatorname{Cl}(X)$-grading,
see~\cite[Prop.~2.6]{HaSu}.
Thus, according to Theorem~\ref{thm:factrings},
it is of the form
\begin{eqnarray*}
\mathcal R(X)
& \cong &
{\mathbb K}[T_{ij}; \; 0 \le i \le r, \; 1 \le j \le n_i][S_1,\ldots, S_m]
\ / \
\bangle{g_{i,i+1,i+2}; \; 0 \le i \le r-2},
\\
g_{i,j,k}
& := &
\alpha_{jk} T_{i1}^{l_{i1}} \cdots T_{in_i}^{l_{in_i}}
\ + \
\alpha_{ki} T_{j1}^{l_{j1}} \cdots T_{jn_{j}}^{l_{jn_{j}}}
\ + \
\alpha_{ij}T_{k1}^{l_{k1}} \cdots T_{kn_{k}}^{l_{kn_{k}}}.
\end{eqnarray*}
Here, we may (and will) assume $n_0 \ge \ldots \ge n_r \ge 1$.
With $n := n_0 + \ldots + n_r$, we have
$n + m = d + r$.
For the degrees of the variables in
$\operatorname{Cl}(X) \cong {\mathbb Z}$, we write
$w_{ij} := \deg \, T_{ij}$ for
$0 \le i \leq r$, $1 \le j \le n_i$
and $u_k = \deg \, S_k$ for $1 \le k \le m$.
Moreover, for $\mu \in {\mathbb Z}_{>0}$, we denote
by $\xi(\mu)$ the number of primes in
$\{2, \ldots, \mu\}$; for example, $\xi(6) = 3$ and $\xi(30) = 10$.
The following result provides bounds for the
discrete data of the Cox ring.
\begin{theorem}
\label{Th:FiniteIndex}
In the above situation,
fix the dimension $d = \dim(X)$
and the Picard index
$\mu = [\operatorname{Cl}(X):\operatorname{Pic}(X)]$.
Then we have
$$
u_k \ \le \ \mu \quad \text{for } 1 \le k \le m.
$$
Moreover, for the degree $\gamma$
of the relations, the weights
$w_{ij}$ and the exponents $l_{ij}$,
where $0 \le i \le r$ and $1 \le j \le n_i$,
one obtains the following.
\begin{enumerate}
\item
Suppose that $r = 0,1$ holds.
Then $n + m \le d+1$ holds
and one has the bounds
$$
w_{ij} \ \le \ \mu
\quad\text{for } 0 \le i \le r \text{ and } 1 \le j \le n_i,
$$
and the Picard index is given by
$$
\mu
\ = \
\mathrm{lcm}(w_{ij},u_k; \;0 \le i \le r, 1 \le j \le n_i, 1 \le k \le m ).
$$
\item
Suppose that $r \ge 2$ and $n_0=1$ hold.
Then $r \le \xi(\mu)-1$ and $n=r+1$ and
$m=d-1$ hold and one has
$$
w_{i1} \ \le \ \mu^r
\quad
\text{for } 0 \le i \le r,
\qquad
l_{01} \cdots l_{r1} \ \mid \ \mu,
\qquad
l_{01} \cdots l_{r1} \ \mid \ \gamma \ \le \ \mu^{r+1},
$$
and the Picard index is given by
$$
\mu
\ = \
\mathrm{lcm}(\gcd(w_{j1}; \; j \neq i), u_k;\; 0 \le i \le r, 1\le k \le m).
$$
\item
Suppose that $r \ge 2$ and $n_0 > n_1=1$ hold.
Then we may assume $l_{11} > \ldots > l_{r1} \ge 2$,
we have $r \le \xi(3d\mu)-1$ and
$n_0+m = d$ and the bounds
$$
w_{01},\ldots,w_{0n_0} \ \le \ \mu,
\qquad
l_{01},\ldots,l_{0n_0} \ < \ 6d\mu,
$$
$$
w_{11},l_{21} \ < \ 2d\mu,
\qquad
w_{21},l_{11} \ < \ 3d\mu,
$$
$$
w_{i1} \ < \ 6d\mu,
\quad
l_{i1} \ < \ 2d\mu
\quad
\text{for } 2 \le i \le r,
$$
$$
l_{11} \cdots l_{r1} \ \mid \ \gamma \ < \ 6d\mu,
$$
and the Picard index is given by
$$
\mu
\ = \
\mathrm{lcm}(w_{0j}, \gcd(w_{11},\ldots,w_{r1}), u_k; \;
1 \le j \le n_0, 1 \le k \le m ).
$$
\item
Suppose that $n_1 > n_2 = 1$ holds.
Then we may assume $l_{21} > \ldots > l_{r1} \ge 2$,
we have $r \le \xi(2(d+1)\mu)-1$
and $n_0+n_1+m = d+1$ and the bounds
$$
w_{ij} \ \le \ \mu
\quad
\text{for } i=0,1 \text{ and } 1 \le j \le n_i,
\qquad
w_{21} \ < \ (d+1)\mu,
$$
$$
w_{ij}, l_{ij} \ < \ 2(d+1)\mu
\quad
\text{for } 0 \le i \le r \text{ and } 1 \le j \le n_i,
$$
$$
l_{21} \cdots l_{r1} \ \mid \ \gamma \ < \ 2(d+1)\mu,
$$
and the Picard index is given by
$$
\mu
\ = \
\mathrm{lcm}(w_{ij}, u_k; \; 0 \le i\le 1, 1 \le j \le n_i, 1 \le k \le m).
$$
\item
Suppose that $n_2 > 1$ holds
and let $s$ be the maximal number with $n_{s}>1$.
Then one may assume $l_{s+1,1} > \ldots > l_{r1} \ge 2$,
we have $r \le \xi((d+2)\mu)-1$ and
$n_0+ \ldots + n_s+m = d+s$ and the bounds
$$
w_{ij} \ \le \ \mu
\quad \text{for } 0 \le i \le s \text{ and } 1 \le j \le n_i,
$$
$$
w_{ij}, l_{ij} \ < \ (d+2)\mu
\quad \text{for } 0 \le i \le r \text{ and } 1 \le j \le n_i,
$$
$$
l_{s+1,1} \cdots l_{r1} \ \mid \ \gamma \ < \ (d+2)\mu,
$$
and the Picard index is given by
$$
\mu
\ = \
\mathrm{lcm}(w_{ij}, u_k; \; 0 \le i \le s, 1 \le j \le n_i, 1 \le k \le m).
$$
\end{enumerate}
\end{theorem}
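Before proceeding, we test Assertion~(ii) against a concrete ring; the verification is ours.
For the ${\mathbb K}^*$-surface with Cox ring ${\mathbb K}[T_0,\ldots,T_3]/\bangle{T_0^2+T_1^3+T_2^5}$ and weights $(15,10,6,1)$ treated in an example later on, we have $r = 2$, $m = 1$, $d = 2$ and $\mu = 30$.
Indeed, the Picard index formula gives
$$
\mathrm{lcm}(\gcd(10,6), \, \gcd(15,6), \, \gcd(15,10), \, 1)
\ = \
\mathrm{lcm}(2,3,5)
\ = \
30,
$$
and the bounds $w_{i1} \le \mu^2$, $l_{01}l_{11}l_{21} = 30 \mid \mu$ and $\gamma = 30 \le \mu^3$ are all satisfied.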
Putting all the bounds of the theorem together,
we obtain the following (raw) bound for the number
of deformation types.
\begin{corollary}
\label{cor:finitefanos}
For any pair $(d,\mu) \in {\mathbb Z}^2_{>0}$,
the number $\delta(d,\mu)$ of different
deformation types of $d$-dimensional
Fano varieties with a complexity one
torus action such that
$\operatorname{Cl}(X) \cong {\mathbb Z}$ and $[\operatorname{Cl}(X):\operatorname{Pic}(X)]=\mu$
hold is bounded by
\begin{eqnarray*}
\delta(d,\mu)
& \le &
(6d\mu)^{2\xi(3d\mu)+d-2}\mu^{\xi(\mu)^2 + 2\xi((d+2)\mu)+2d+2}.
\end{eqnarray*}
\end{corollary}
\begin{proof}
By Theorem~\ref{Th:FiniteIndex} the
discrete data $r$, $\mathfrak{n}$,
$L$ and $m$ occurring in $\mathcal{R}(X)$
are bounded as in the assertion.
The continuous data in $\mathcal{R}(X)$
are the coefficients $\alpha_{ij}$;
they stem from the family
$A = (a_0, \ldots, a_r)$
of points $a_i \in {\mathbb K}^2$.
Varying the $a_i$ provides flat
families of Cox rings and hence,
by passing to the homogeneous spectra,
flat families of the resulting
Fano varieties $X$.
\end{proof}
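To get a feeling for the size of this raw bound — the following evaluation is ours — consider $d = 2$ and $\mu = 1$: with $\xi(6) = 3$, $\xi(4) = 2$ and $\xi(1) = 0$, the bound evaluates to
$$
\delta(2,1)
\ \le \
12^{2 \cdot 3 + 2 - 2} \cdot 1^{0 + 2 \cdot 2 + 4 + 2}
\ = \
12^{6}
\ \approx \
3 \cdot 10^{6},
$$
whereas the classification in Section~\ref{sec:tables} yields, apart from the projective plane, exactly one such surface.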
\begin{corollary}
\label{cor:asymptoticsd}
Fix $d \in {\mathbb Z}_{>0}$. Then
the number $\delta(\mu)$ of different
deformation types of $d$-dimensional
Fano varieties with a complexity one
torus action, $\operatorname{Cl}(X) \cong {\mathbb Z}$
and Picard index
$\mu := [\operatorname{Cl}(X):\operatorname{Pic}(X)]$
is asymptotically bounded by
$\mu^{A \mu^2 / \log^2 \mu}$
with a constant~$A$ depending only
on~$d$.
\end{corollary}
\begin{corollary}
\label{cor:asymptoticsmu}
Fix $\mu \in {\mathbb Z}_{>0}$. Then
the number $\delta(d)$ of different
deformation types of $d$-dimensional
Fano varieties with a complexity one
torus action, $\operatorname{Cl}(X) \cong {\mathbb Z}$
and Picard index
$\mu := [\operatorname{Cl}(X):\operatorname{Pic}(X)]$
is asymptotically bounded by
$d^{Ad}$ with a constant~$A$
depending only on~$\mu$.
\end{corollary}
We first recall the necessary facts on
Cox rings; for details, we refer
to~\cite{Ha2}.
Let $X$ be a complete $d$-dimensional
variety with divisor class group
$\operatorname{Cl}(X) \cong {\mathbb Z}$.
Then the Cox ring $\mathcal{R}(X)$
is finitely generated and the total
coordinate space $\b{X} := {\rm Spec} \, \mathcal{R}(X)$
is a factorial affine variety coming
with an action of ${\mathbb K}^*$ defined by
the $\operatorname{Cl}(X)$-grading of $\mathcal{R}(X)$.
Choose a system $f_1, \ldots, f_\nu$ of
homogeneous pairwise nonassociated
prime generators for $\mathcal{R}(X)$.
This provides a ${\mathbb K}^*$-equivariant
embedding
$$
\b{X} \ \to \ {\mathbb K}^{\nu},
\qquad
\b{x} \ \mapsto \ (f_1(\b{x}), \ldots, f_{\nu}(\b{x})),
$$
where ${\mathbb K}^*$ acts diagonally with the
weights $w_i = \deg(f_i) \in \operatorname{Cl}(X) \cong {\mathbb Z}$
on ${\mathbb K}^{\nu}$.
Moreover, $X$ is the geometric
${\mathbb K}^*$-quotient of
$\rq{X} := \b{X} \setminus \{0\}$,
and the quotient map
$p \colon \rq{X} \to X$ is a universal
torsor.
By the local divisor class group $\operatorname{Cl}(X,x)$
of a point $x \in X$, we mean the group of
Weil divisors $\operatorname{WDiv}(X)$ modulo those that
are principal near~$x$.
\begin{proposition}
\label{Prop:FanoPicard}
For any
$\b{x} =(\b{x}_1,\ldots,\b{x}_{\nu}) \in \rq{X}$
the local divisor class group $\operatorname{Cl}(X,x)$
of $x := p(\b{x})$
is finite of order $\gcd(w_i; \; \b{x}_i \ne 0)$.
The index of the Picard group $\operatorname{Pic}(X)$ in
$\operatorname{Cl}(X)$ is given by
\begin{eqnarray*}
[\operatorname{Cl}(X):\operatorname{Pic}(X)]
& = &
\mathrm{lcm}_{x \in X}( |\operatorname{Cl}(X,x)| ).
\end{eqnarray*}
Suppose that the ideal of $\b{X} \subseteq {\mathbb K}^{\nu}$
is generated by $\operatorname{Cl}(X)$-homogeneous
polynomials $g_1, \ldots, g_{\nu-d-1}$
of degree $\gamma_j := \deg(g_j)$.
Then one obtains
$$
-\mathcal{K}_X
\ = \
\sum_{i=1}^{\nu} w_i - \sum_{j=1}^{\nu-d-1} \gamma_j,
\qquad
(-\mathcal{K}_X )^d
\ = \
\left(\sum_{i=1}^{\nu} w_i - \sum_{j=1}^{\nu-d-1} \gamma_j\right)^d
\frac{\gamma_1 \cdots \gamma_{\nu-d-1}}{w_1 \cdots w_\nu}
$$
for the anticanonical class
$-\mathcal{K}_X \in \operatorname{Cl}(X) \cong {\mathbb Z}$.
In particular, $X$ is a Fano variety
if and only if the following inequality holds
\begin{eqnarray*}
\sum_{j=1}^{\nu-d-1} \gamma_j
& < &
\sum_{i=1}^{\nu} w_i.
\end{eqnarray*}
\end{proposition}
\begin{proof}
Using~\cite[Prop.~2.2, Thm.~4.19]{Ha2}, we observe
that $X$ arises from the bunched ring
$(R,\mathfrak{F},\Phi)$,
where $R = \mathcal{R}(X)$,
$\mathfrak{F} = (f_1, \ldots, f_\nu)$
and $\Phi = \{{\mathbb Q}_{\ge 0}\}$.
The descriptions of local class groups, the
Picard index and the anticanonical class are
then special cases
of~\cite[Prop.~4.7, Cor.~4.9 and Cor.~4.16]{Ha2}.
The anticanonical self-intersection number
is easily computed in the ambient weighted
projective space ${\mathbb P}(w_1, \ldots, w_\nu)$,
use~\cite[Constr.~3.13, Cor.~4.13]{Ha2}.
\end{proof}
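As an illustration of the formulas — the computation is ours — consider the ring ${\mathbb K}[T_1,\ldots,T_4]/\bangle{T_1T_2^5+T_3^3+T_4^2}$ with weights $(w_1, \ldots, w_4) = (1,1,2,3)$, appearing as no.~1 in the surface classification of Section~\ref{sec:tables}.
Here $\nu = 4$, $d = 2$ and $\gamma_1 = 6$ hold, and we obtain
$$
-\mathcal{K}_X \ = \ 1+1+2+3-6 \ = \ 1,
\qquad
(-\mathcal{K}_X)^2 \ = \ 1^2 \cdot \frac{6}{1 \cdot 1 \cdot 2 \cdot 3} \ = \ 1;
$$
in particular, $X$ is Fano, in accordance with the table.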
\begin{remark}
If the ideal of $\b{X} \subseteq {\mathbb K}^{\nu}$
is generated by $\operatorname{Cl}(X)$-homogeneous
polynomials $g_1, \ldots, g_{\nu-d-1}$,
then~\cite[Constr.~3.13, Cor.~4.13]{Ha2}
show that $X$ is a well formed
complete intersection in the weighted
projective space ${\mathbb P}(w_1, \ldots, w_\nu)$ in the
sense of~\cite[Def.~6.9]{IaFl}.
\end{remark}
We turn back to the case that $X$ comes
with a complexity one torus action as at
the beginning of this section.
We consider the case $n_0 = \ldots = n_r = 1$,
that is, each relation $g_{i,j,k}$
of the Cox ring $\mathcal{R}(X)$ depends
on only three variables.
Then we may write $T_i$ instead of $T_{i1}$
and $w_i$ instead of $w_{i1}$, etc.
In this setting,
we obtain the following bounds
for the numbers of possible varieties~$X$
(Fano or not).
\begin{proposition}
\label{prop:Finite3Var}
For any pair $(d,\mu) \in {\mathbb Z}^2_{>0}$
there is, up to deformation,
only a finite number of
complete $d$-dimensional
varieties with divisor class group
${\mathbb Z}$,
Picard index $[\operatorname{Cl}(X):\operatorname{Pic}(X)] = \mu$
and Cox ring
$$
{\mathbb K}[T_0,\ldots, T_r,S_1,\ldots, S_m]
\ / \
\bangle{
\alpha_{i+1,i+2} T_i^{l_i}
+
\alpha_{i+2,i} T_{i+1}^{l_{i+1}}
+
\alpha_{i,i+1} T_{i+2}^{l_{i+2}};
\; 0 \le i \le r-2}.
$$
In this situation we have $r \le \xi(\mu)-1$.
Moreover, for the weights $w_i := \deg\, T_i$,
where $0 \le i \le r$
and $u_k := \deg\, S_k$, where $1 \le k \le m$,
the exponents $l_i$
and the degree $\gamma := l_0w_0$ of the relation
one has
$$
l_0 \cdots l_r \ \mid \ \gamma,
\qquad
l_0 \cdots l_r \ \mid \ \mu,
\qquad
w_i \ \le \ \mu^{\xi(\mu)-1},
\qquad
u_k \ \le \ \mu.
$$
\end{proposition}
\begin{proof}
Consider the total coordinate space
$\b{X} \subseteq {\mathbb K}^{r+1+m}$ and
the universal torsor $p \colon \rq{X} \to X$
as discussed before.
For each $0 \le i \le r$ fix a point
$\b{x}(i) = (\b{x}_0, \ldots, \b{x}_r, 0, \ldots, 0)$
in $\rq{X}$ such that $\b{x}_i = 0$ and
$\b{x}_j \ne 0$ for $j \ne i$ hold.
Then, denoting $x(i) := p(\b{x}(i))$, we obtain
$$
\gcd(w_j; j \ne i)
\ = \
\vert \operatorname{Cl}(X,x(i)) \vert
\ \mid \ \mu.
$$
Consider $i,j$ with $j \ne i$.
Since all relations are homogeneous
of the same degree,
we have $l_iw_i = l_jw_j$.
Moreover, by the admissibility condition,
$l_i$ and $l_j$ are coprime.
We conclude $l_i \vert w_j$
for all $j \ne i$ and hence
$l_i \vert \gcd(w_j; \; j \ne i)$.
This implies
$$
l_0 \cdots l_r \ \mid \ l_0w_0 \ = \ \gamma,
\qquad\qquad
l_0 \cdots l_r \ \mid \ \mu.
$$
We turn to the bounds for the $w_i$,
and first verify $w_0 \le \mu^r$.
Using the relation $l_iw_i = l_0w_0$,
we obtain for every $l_i$ a presentation
$$
l_i
\ = \
l_0 \cdot \frac{w_0 \cdots w_{i-1}}{w_1 \cdots w_i}
\ = \
\eta_i \cdot \frac{\gcd(w_0, \ldots ,w_{i-1})}{\gcd(w_0, \ldots, w_i)}
$$
with suitable integers $1 \le \eta_i \le \mu$.
In particular, each of the fractions
$\gcd(w_0, \ldots ,w_{i-1})/\gcd(w_0, \ldots, w_i)$
is bounded by $\mu$.
This gives the desired estimate:
$$
w_0
=
\frac{w_{0}}{\gcd(w_{0},w_{1})}
\cdot
\frac{\gcd(w_{0},w_1)}{\gcd(w_{0},w_{1},w_2)}
\cdots
\frac
{\gcd(w_{0},\ldots,w_{r-2})}
{\gcd(w_{0},\ldots,w_{r-1})}
\cdot
\gcd(w_{0},\ldots,w_{r-1})
\le
\mu^r.
$$
Similarly, we obtain $w_i \le \mu^r$
for $1 \le i \le r$.
Then we only have to show that
$r+1$ is bounded by $\xi(\mu)$,
but this follows immediately from
the fact that $l_0, \ldots, l_r$
are pairwise coprime.
Finally, to estimate the $u_k$,
consider the points $\b{x}(k) \in \rq{X}$
having the coordinate corresponding to $S_k$ equal to one
and all others zero. Set $x(k) := p(\b{x}(k))$.
Then $\operatorname{Cl}(X,x(k))$ is of order $u_k$,
which implies $u_k \le \mu$.
\end{proof}
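The telescoping estimate becomes transparent in a small case; the numbers below are ours.
For $(l_0,l_1,l_2) = (2,3,5)$ with weights $(w_0,w_1,w_2) = (15,10,6)$, $m = 1$, $u_1 = 1$ and $\mu = 30$, as in the example at the end of this section, we obtain
$$
w_0
\ = \
\frac{w_0}{\gcd(w_0,w_1)} \cdot \gcd(w_0,w_1)
\ = \
3 \cdot 5
\ = \
15
\ \le \
900
\ = \
\mu^{r},
$$
and, moreover, $l_0l_1l_2 = 30$ divides both $\gamma = 30$ and $\mu = 30$, as predicted.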
\goodbreak
\begin{lemma}
\label{Lem:1relation}
Consider the ring
${\mathbb K}[T_{ij}; \; 0 \le i \le 2, \; 1 \le j \le n_i][S_1,\ldots,S_m]
/
\bangle{g}
$
where $n_0 \ge n_1 \ge n_2 \ge 1$ holds.
Suppose that $g$ is homogeneous
with respect to a ${\mathbb Z}$-grading
of ${\mathbb K}[T_{ij},S_k]$ given by
$\deg \, T_{ij} = w_{ij} \in {\mathbb Z}_{>0}$
and $\deg \, S_k = u_k \in {\mathbb Z}_{>0}$,
and assume
\begin{eqnarray*}
\deg \, g
& < &
\sum_{i=0}^2\sum_{j=1}^{n_i}w_{ij}
\ + \
\sum_{i=1}^m u_i.
\end{eqnarray*}
Let $\mu \in {\mathbb Z}_{>1}$, assume
$w_{ij} \le \mu$ whenever $n_i > 1$,
$1 \le j \le n_i$ and $u_k \le \mu$
for $1 \le k \le m$ and
set $d := n_0+n_1+n_2+m-2$.
Depending on the shape of $g$,
one obtains the following bounds.
\begin{enumerate}
\item
Suppose that
$g
=
\eta_0 T_{01}^{l_{01}} \cdots T_{0n_0}^{l_{0n_0}}
+
\eta_1 T_{11}^{l_{11}}
+
\eta_2 T_{21}^{l_{21}}$
with $n_0 > 1$ and coefficients
$\eta_i \in {\mathbb K}^*$ holds,
where $l_{11} \ge l_{21} \ge 2$
and $l_{11}$, $l_{21}$ are coprime.
Then, one has
$$
\qquad\qquad
w_{11}, l_{21} \ < \ 2d\mu,
\qquad
w_{21}, l_{11} \ < \ 3d\mu,
\qquad
\deg \, g \ < \ 6d\mu.
$$
\item
Suppose that
$g
=
\eta_0 T_{01}^{l_{01}} \cdots T_{0n_0}^{l_{0n_0}}
+
\eta_1 T_{11}^{l_{11}} \cdots T_{1n_1}^{l_{1n_1}}
+
\eta_2 T_{21}^{l_{21}}$
with $n_1 > 1$ and coefficients
$\eta_i \in {\mathbb K}^*$ holds,
where $l_{21} \ge 2$.
Then one has
$$
\qquad\qquad
w_{21}
\ < \
(d+1)\mu,
\qquad\qquad
\deg \, g
\ < \
2(d+1)\mu.
$$
\end{enumerate}
\end{lemma}
\begin{proof}
We prove~(i). Set for short
$c := (n_0+m)\mu = d\mu$.
Then, using homogeneity of $g$
and the assumed inequality, we obtain
$$
l_{11}w_{11}
\ = \
l_{21}w_{21}
\ = \
\deg \, g
\ < \
\sum_{i=0}^2\sum_{j=1}^{n_i}w_{ij}
+
\sum_{i=1}^m u_i
\ \le \
c+w_{11}+w_{21}.
$$
Since $l_{11}$ and $l_{21}$
are coprime, we have
$l_{11} > l_{21} \ge 2$.
Plugging this into the above inequalities,
we arrive at
$2 w_{11} < c + w_{21}$ and
$w_{21} < c + w_{11}$.
We conclude $w_{11} < 2c$ and $w_{21} < 3c$.
Moreover, $l_{11}w_{11} = l_{21}w_{21}$ and
$\gcd(l_{11},l_{21}) = 1$ imply
$l_{11} \vert w_{21}$ and $l_{21} \vert w_{11}$.
This shows $l_{11} < 3c$ and $l_{21} < 2c$.
Finally, we obtain
$$
\deg \, g
\ < \
c + w_{11} + w_{21}
\ < \
6c.
$$
We prove (ii).
Here we set $c := (n_0+n_1+m)\mu = (d+1)\mu$.
Then the assumed inequality gives
$$
l_{21}w_{21}
\ = \
\deg g
\ < \
\sum_{i=0}^1\sum_{j=1}^{n_i}w_{ij}+
\sum_{i=1}^m u_i+ w_{21}
\ \le \
c+w_{21}.
$$
Since we assumed $l_{21} \geq 2$,
we can conclude $w_{21} < c$.
This in turn gives us
$\deg \, g < 2c$
for the degree of the relation.
\end{proof}
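Before entering the proof of Theorem~\ref{Th:FiniteIndex}, we check Assertion~(i) of the lemma against a concrete ring; the test is ours.
The ring ${\mathbb K}[T_1,\ldots,T_5]/\bangle{T_1^4T_2+T_3^3+T_4^2}$ with weights $(1,2,2,3,1)$ occurs as no.~1 in Theorem~\ref{thm:3fano2}; in the notation of the lemma, $n_0 = 2$, $m = 1$, $d = 3$, $\mu = 2$, $w_{11} = 2$, $w_{21} = 3$, $l_{11} = 3$, $l_{21} = 2$ and $\deg \, g = 6$ hold.
The asserted bounds $w_{11}, l_{21} < 12$, $w_{21}, l_{11} < 18$ and $\deg \, g < 36$ are met comfortably.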
\begin{proof}
[Proof of Theorem~\ref{Th:FiniteIndex}]
As before, we denote by $\b{X} \subseteq {\mathbb K}^{n+m}$
the total coordinate space and by
$p \colon \rq{X} \to X$ the universal torsor.
We first consider the case that
$X$ is a toric variety.
Then the Cox ring is a polynomial ring,
$\mathcal R(X) = {\mathbb K}[S_1,\ldots,S_m]$.
For each $1 \le k \le m$,
consider the point
$\b{x}(k) \in \rq{X}$
having the $k$-th coordinate one
and all others zero and
set $x(k) := p(\b{x}(k))$.
Then, by~Proposition~\ref{Prop:FanoPicard},
the local class group
$\operatorname{Cl}(X,x(k))$ is of order $u_k$
where $u_k := \deg \, S_k$.
This implies
$u_k \le \mu$ for $1 \le k \leq m$
and settles Assertion~(i).
Now we treat the non-toric case,
which means $r \ge 2$.
Note that we have $n \ge 3$.
The case $n_0=1$ is done in
Proposition~\ref{prop:Finite3Var}.
So, we are left with $n_0>1$.
For every $i$ with $n_i > 1$
and every $1 \le j \le n_i$,
there is the point $\b{x}(i,j) \in \rq{X}$
with $ij$-coordinate $T_{ij}$ equal
to one and all others equal to
zero, and thus we have the point
$x(i,j) := p(\b{x}(i,j)) \in X$.
Moreover, for every $1 \le k \le m$, we have
the point $\b{x}(k) \in \rq{X}$
having the $k$-coordinate $S_k$ equal to one
and all others zero; we
set $x(k):=p(\b{x}(k))$.
Proposition~\ref{Prop:FanoPicard}
provides the bounds
$$
w_{ij}
\ = \
\deg \, T_{ij}
\ = \
\vert \operatorname{Cl}(X,x(i,j)) \vert
\ \le \
\mu
\qquad
\text{for }
n_i > 1, \, 1 \le j \le n_i,
$$
$$
u_k
\ = \
\deg \, S_k
\ = \
\vert \operatorname{Cl}(X,x(k)) \vert
\ \le \
\mu
\qquad
\text{for }
1 \le k \le m.
$$
Let $0 \le s \le r$ be the maximal number with
$n_{s} > 1$. Then $g_{s-2,s-1,s}$ is the last
polynomial such that each of its three monomials
depends on more than one variable.
For any $t \ge s$, we have the ``cut ring''
\begin{eqnarray*}
R_t
& := &
{\mathbb K}[T_{ij}; \; 0 \le i \le t, \; 1 \le j \le n_i]
[S_1,\ldots,S_m]
\ / \
\bangle{g_{i,i+1,i+2}; \; 0 \le i \le t-2}
\end{eqnarray*}
where the relations $g_{i,i+1,i+2}$ depend on
only three variables as soon as $i > s$ holds.
For the degree $\gamma$ of the relations we have
\begin{eqnarray*}
(r-1)\gamma
& = &
(t-1)\gamma \ + \ (r-t)\gamma
\\
& = &
(t-1)\gamma \ + \ l_{t+1,1}w_{t+1,1} + \ldots + l_{r1}w_{r1}
\\
& < &
\sum_{i=0}^r\sum_{j=1}^{n_i}w_{ij}
\ + \
\sum_{i=1}^m u_i
\\
& = &
\sum_{i=0}^t \sum_{j=1}^{n_i}w_{ij}
\ + \
w_{t+1,1}+ \ldots + w_{r1}
\ + \
\sum_{i=1}^m u_i.
\end{eqnarray*}
Since $l_{i1}w_{i1} > w_{i1}$ holds in
particular for $t+1 \le i \le r$,
we derive from this the inequality
\begin{eqnarray*}
\gamma
& < &
\frac{1}{t-1}
\left(
\sum_{i=0}^t\sum_{j=1}^{n_i}w_{ij}
\ + \
\sum_{i=1}^m u_i
\right).
\end{eqnarray*}
To obtain the bounds in
Assertions~(iii) and~(iv),
we consider the cut ring $R_t$
with $t=2$ and apply
Lemma~\ref{Lem:1relation};
note that we have
$d = n_0+n_1+n_2+m-2$
for the dimension $d = \dim(X)$
and that $l_{21} \ge 2$ holds,
since $X$ is non-toric.
The bounds $w_{ij}, l_{0j} < 6d\mu$ in
Assertion~(iii) follow from
$l_{ij}w_{ij} = \gamma < 6 d\mu$,
and $l_{i1} < 2d\mu$ follows from
$l_{i1} \mid w_{11}$ for $3 \le i \le r$.
Moreover, $l_{i1} \mid w_{11}$
for $2 \le i \le r$ implies
$l_{11} \cdots l_{r1} \mid \gamma = l_{11}w_{11}$.
Similarly, $w_{ij},l_{ij} < 2(d+1)\mu$
in Assertion~(iv) follow from
$l_{ij}w_{ij} = \gamma < 2(d+1)\mu$
and $l_{21} \cdots l_{r1} \mid \gamma = l_{21}w_{21}$
follows from $l_{i1} \mid w_{21}$ for $3 \le i \le r$.
The bounds on $r$ in~(iii) and~(iv) are
likewise consequences of the admissibility
condition.
To obtain the bounds in Assertion~(v),
we consider the cut ring $R_t$
with $t=s$.
Using $n_i=1$ for $i \ge t+1$, we
can estimate the degree of the relation
as follows:
$$
\gamma
\ \le \
\frac{(n_0 + \ldots + n_t + m) \mu}{t-1}
\ = \
\frac{(d + t) \mu}{t-1}
\ \le \
(d + 2) \mu.
$$
Since we have $w_{ij}l_{ij} \le \gamma$
for any $0 \le i \le r$ and any $1 \le j \le n_i$,
we see that all $w_{ij}$ and $l_{ij}$
are bounded by $(d+2)\mu$.
As before, $l_{s+1,1} \cdots l_{r1} \mid \gamma$
is a consequence of $l_{i1} \mid \gamma$
for $i = s+2, \ldots, r$
and also the bound on $r$ follows
from the admissibility condition.
Finally, we have to express the Picard index
$\mu$ in terms of the weights $w_{ij}$ and $u_k$
as claimed in the Assertions.
This is a direct application of the formula of
Proposition~\ref{Prop:FanoPicard}.
Observe that it suffices to work with the
$p$-images of the following points:
For every $0 \le i \le r$ with $n_i > 1$
take a point $\b{x}(i,j) \in \rq{X}$
with $ij$-coordinate $T_{ij}$ equal
to one and all others equal to
zero,
for every $0 \le i \le r$ with $n_i = 1$
take $\b{x}(i,j) \in \rq{X}$
with $ij$-coordinate $T_{ij}$ equal
to zero, all other $T_{st}$ equal to
one and coordinates $S_k$ equal to zero,
and, for every $1 \le k \le m$,
take a point $\b{x}(k) \in \rq{X}$
having the $k$-coordinate $S_k$ equal to one
and all others zero.
\end{proof}
We conclude the section by discussing some
aspects of the not necessarily Fano
varieties of Proposition~\ref{prop:Finite3Var}.
Recall that we considered admissible triples
$(A,\mathfrak{n},L)$ with
$n_0 = \ldots = n_r =1$ and thus
rings $R$ of the form
$$
{\mathbb K}[T_0,\ldots, T_r,S_1,\ldots, S_m]
\ / \
\bangle{\alpha_{i+1,i+2} T_i^{l_i}
+
\alpha_{i+2,i} T_{i+1}^{l_{i+1}}
+
\alpha_{i,i+1} T_{i+2}^{l_{i+2}};
\; 0 \le i \le r-2}.
$$
\begin{proposition}
\label{prop:MoriCox}
Suppose that the ring $R$ as above
is the Cox ring
of a non-toric variety $X$
with $\operatorname{Cl}(X) = {\mathbb Z}$.
Then we have $m \ge 1$ and
$\mu := [\operatorname{Cl}(X):\operatorname{Pic}(X)] \ge 30$.
Moreover, if $X$ is a surface, then we
have $m=1$ and $w_i= l_i^{-1} l_0 \cdots l_r$.
\end{proposition}
\begin{proof}
The homogeneity condition
$l_{i}w_{i}=l_{j}w_{j}$
together with the
admissibility condition
$\gcd(l_{i},l_{j})=1$
for $0 \le i \ne j\leq r$
gives us
$l_{i} \mid \gcd(w_{j}; j \ne i)$.
Moreover, by Proposition~\ref{prop:coxchar},
any $m+r$ of the $m+r+1$ weights
$w_0, \ldots, w_r, u_1, \ldots, u_m$
have to generate the class group ${\mathbb Z}$,
so they must have
greatest common divisor one.
Since $X$ is non-toric,
$l_{i} \ge 2$ holds and we obtain
$m \ge 1$.
To proceed, we infer
$l_0 \cdots l_r \mid \mu$
and
$l_0 \cdots l_r \mid \deg g_{ijk}$
from Proposition~\ref{Prop:FanoPicard}.
As a consequence,
the minimal possible value for $\mu$ and $\deg g_{ijk}$
is $2\cdot3\cdot5=30$;
this value is actually attained,
as the example below shows.
Now suppose that $X$ is a surface.
Then $m=1$ and $\gcd(w_{i}; \; 0\le i\le r)=1$ hold.
Writing $\gamma := \deg g_{ijk}$,
the relations $l_iw_i=l_jw_j$ give $w_i = \gamma/l_i$
and thus
$1 = \gcd(w_i; \; 0 \le i \le r) = \gamma/(l_0 \cdots l_r)$,
because the $l_i$ are pairwise coprime;
hence $\deg g_{ijk} = l_0 \cdots l_r$ and
$w_i= l_i^{-1} l_0 \cdots l_r$.
\end{proof}
The bound $[\operatorname{Cl}(X):\operatorname{Pic}(X)] \ge 30$ given
in the above proposition is even sharp;
the surface discussed below realizes it.
\begin{example}
Consider $X$ with
$\mathcal R(X)=
{\mathbb K}[T_{0},T_{1},T_{2},T_{3}]/
\langle g\rangle$
with
$g=T_{0}^2+T_{1}^3+T_{2}^5$
and the grading
$$
\deg \, T_0 \ = \ 15,
\quad
\deg \, T_1 \ = \ 10,
\quad
\deg \, T_2 \ = \ 6,
\quad
\deg \, T_3 \ = \ 1.
$$
Then we have $\gcd(15,10)=5$, $\gcd(15,6)=3$
and $\gcd(10,6)=2$
and therefore $[\operatorname{Cl}(X):\operatorname{Pic}(X)]=30$.
Further $X$ is Fano because of
$$
\deg \, g
\ = \
30
\ < \
32
\ = \
\deg \, T_0 + \ldots + \deg \, T_3.
$$
\end{example}
Let us have a look at the geometric meaning
of the condition $n_0 = \ldots = n_r = 1$.
For a variety $X$ with an action of a torus $T$,
we denote by $X_0 \subseteq X$ the
union of all orbits with at most finite isotropy.
Then there is a possibly non-separated orbit space
$X_0/T$; we call it the maximal orbit space.
From~\cite{HaSu}, we infer that
$n_0 = \ldots = n_r = 1$ holds if and only
if $X_0/T$ is separated.
Combining this with Propositions~\ref{prop:Finite3Var}
and~\ref{prop:MoriCox} gives the following.
\begin{corollary}
For any pair $(d,\mu) \in {\mathbb Z}^2_{>0}$
there is, up to deformation,
only a finite number of
$d$-dimensional complete
varieties $X$
with a complexity one torus action
having divisor class group ${\mathbb Z}$,
Picard index $[\operatorname{Cl}(X):\operatorname{Pic}(X)] = \mu$
and maximal orbit space ${\mathbb P}_1$
and for each of these varieties
the complement
$X \setminus X_0$ contains divisors.
\end{corollary}
Finally, we present a couple of examples
showing that there are also non-Fano
varieties with a complexity one torus action
having divisor class group ${\mathbb Z}$
and maximal orbit space ${\mathbb P}_1$.
\begin{example}
Consider $X$ with
$\mathcal R(X)= {\mathbb K}[T_{0},T_{1},T_{2},T_{3}]/ \langle g\rangle$
with $g=T_{0}^2+T_{1}^3+T_{2}^{7}$
and the grading
$$
\deg \, T_0 \ = \ 21,
\quad
\deg \, T_1 \ = \ 14,
\quad
\deg \, T_2 \ = \ 6,
\quad
\deg \, T_3 \ = \ 1.
$$
Then we have $\gcd(21,14)=7$, $\gcd(21,6)=3$
and $\gcd(14,6)=2$
and therefore $[\operatorname{Cl}(X):\operatorname{Pic}(X)]=42$.
Moreover, $X$ is not Fano,
because its canonical class $\mathcal{K}_X$ is
trivial
$$
\mathcal{K}_X
\ = \
\deg \, g - \deg \, T_0 - \ldots -\deg \, T_3
\ = \
0.
$$
\end{example}
\begin{example}
Consider $X$ with
$\mathcal R(X)= {\mathbb K}[T_{0},T_{1},T_{2},T_{3}]/ \langle g\rangle$
with $g=T_{0}^2+T_{1}^3+T_{2}^{11}$
and the grading
$$
\deg \, T_0 \ = \ 33,
\quad
\deg \, T_1 \ = \ 22,
\quad
\deg \, T_2 \ = \ 6,
\quad
\deg \, T_3 \ = \ 1.
$$
Then we have $\gcd(22,33)=11$, $\gcd(33,6)=3$
and $\gcd(22,6)=2$
and therefore $[\operatorname{Cl}(X):\operatorname{Pic}(X)]=66$.
The canonical class $\mathcal{K}_X$ of
$X$ is even ample:
$$
\mathcal{K}_X
\ = \
\deg \, g - \deg \, T_0 - \ldots - \deg \, T_3
\ = \
4.
$$
\end{example}
The following series of examples shows
that the Fano assumption is
essential for the finiteness results
in Theorem~\ref{Th:FiniteIndex}.
\begin{remark}
For any pair
$p,q$ of coprime positive integers,
we obtain a locally factorial
${\mathbb K}^*$-surface $X(p,q)$ with
$\operatorname{Cl}(X) = {\mathbb Z}$ and Cox ring
$$
\mathcal{R}(X(p,q))
\ = \
{\mathbb K}[T_{01},T_{02},T_{11},T_{21}]
\ / \
\bangle{g},
\qquad\qquad
g \ = \
T_{01}T_{02}^{pq-1}
+T_{11}^{q}
+T_{21}^{p};
$$
the $\operatorname{Cl}(X)$-grading is given by
$\deg \, T_{01} = \deg \, T_{02} =1 $,
$\deg \, T_{11} = p$ and $\deg \, T_{21} = q$.
Note that $\deg \, g =pq$ holds
and for $p,q\geq 3$, the canonical class
$\mathcal{K}_X$ satisfies
$$
\mathcal{K}_X
\ = \
\deg \, g - \deg \, T_{01} - \deg \, T_{02} - \deg \, T_{11} - \deg \, T_{21}
\ = \
pq - 2 - p - q
\ \ge \
0.
$$
\end{remark}
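For instance — the concrete numbers are ours — the choice $p = 3$, $q = 4$ leads to
$$
g \ = \ T_{01}T_{02}^{11} + T_{11}^{4} + T_{21}^{3},
\qquad
\deg \, T_{01} = \deg \, T_{02} = 1,
\quad
\deg \, T_{11} = 3,
\quad
\deg \, T_{21} = 4,
$$
where $\deg \, g = 12$ and $\mathcal{K}_X = 12 - 9 = 3$ hold.
Letting $p,q$ run through the coprime pairs with $p,q \ge 3$ thus produces infinitely many locally factorial non-Fano surfaces with divisor class group ${\mathbb Z}$.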
\section{Classification results}
\label{sec:tables}
In this section, we give classification results
for Fano varieties~$X$ with $\operatorname{Cl}(X) \cong {\mathbb Z}$
that come with a complexity one torus action;
note that they are necessarily rational.
The procedure to obtain classification
lists for prescribed dimension $d = \dim \, X$
and Picard index $\mu = [\operatorname{Cl}(X) : \operatorname{Pic}(X)]$
is always the following.
By Theorem~\ref{thm:factrings}, we know that their
Cox rings are of the form
$\mathcal{R}(X) \cong R(A,\mathfrak{n},L)[S_1,\ldots,S_m]$
with admissible triples $(A,\mathfrak{n},L)$.
Note that for the family
$A = (a_0, \ldots, a_r)$ of points $a_i \in {\mathbb K}^2$,
we may assume
$$
a_0 \ = \ (1,0),
\qquad
a_1 \ = \ (1,1),
\qquad
a_2 \ = \ (0,1).
$$
The bounds on the input data of $(A,\mathfrak{n},L)$
provided by Theorem~\ref{Th:FiniteIndex}
as well as the criteria of
Propositions~\ref{prop:coxchar}
and~\ref{Prop:FanoPicard}
allow us to generate all the possible Cox rings
$\mathcal{R}(X)$ of the Fano varieties
$X$ in question for fixed dimension~$d$
and Picard index~$\mu$.
Note that $X$ can be reconstructed from
$\mathcal{R}(X)= R(A,\mathfrak{n},L)[S_1,\ldots,S_m]$
as the homogeneous spectrum
with respect to the $\operatorname{Cl}(X)$-grading.
Thus $X$ is classified by its Cox ring
$\mathcal{R}(X)$.
In the following tables, we present the
Cox rings as ${\mathbb K}[T_1, \ldots, T_s]$
modulo relations and fix the
${\mathbb Z}$-gradings by giving the weight vector
$(w_1, \ldots, w_s)$, where $w_i := \deg \, T_i$.
The first classification result concerns
surfaces.
\begin{theorem}
Let $X$ be a non-toric Fano surface
with an effective ${\mathbb K}^*$-action such that
$\operatorname{Cl}(X)={\mathbb Z}$ and $[\operatorname{Cl}(X):\operatorname{Pic}(X)]\leq 6$ hold.
Then its Cox ring is precisely one of the following.
\begin{center}
\begin{longtable}[htbp]{llll}
\multicolumn{4}{c}{\bf $[\operatorname{Cl}(X):\operatorname{Pic}(X)] = 1$}
\\[1ex]
\toprule
No.
&
$\mathcal{R}(X)$
&
$(w_1,\ldots,w_4)$
&
$(-K_X)^2$
\\
\midrule
1
\hspace{.5cm}
&
${\mathbb K}[{T_1,\ldots,T_4}]/ \bangle{T_1T_2^5+T_3^3+T_4^2}$
\hspace{.5cm}
&
$(1,1,2,3)$
\hspace{.5cm}
&
$1$
\\
\bottomrule
\\[2ex]
\multicolumn{4}{c}{\bf $[\operatorname{Cl}(X):\operatorname{Pic}(X)] = 2$}
\\[1ex]
\toprule
No.
&
$\mathcal{R}(X)$
&
$(w_1,\ldots,w_4)$
&
$(-K_X)^2$
\\
\midrule
2
&
${\mathbb K}[{T_1,\ldots,T_4}]/ \bangle{T_1^4T_2+T_3^3+T_4^2}$
&
$(1,2,2,3)$
&
$2$
\\
\bottomrule
\\[2ex]
\multicolumn{4}{c}{\bf $[\operatorname{Cl}(X):\operatorname{Pic}(X)] = 3$}
\\[1ex]
\toprule
No.
&
$\mathcal{R}(X)$
&
$(w_1,\ldots,w_4)$
&
$(-K_X)^2$
\\
\midrule
3
&
${\mathbb K}[{T_1,\ldots,T_4}]/ \bangle{T_1^3T_2+T_3^3+T_4^2}$
&
$(1,3,2,3)$
&
$3$
\\
\midrule
4
&
${\mathbb K}[{T_1,\ldots,T_4}] / \bangle{T_1T_2^3+T_3^5+T_4^2}$
&
$(1,3,2,5)$
&
$1/3$
\\
\midrule
5
&
${\mathbb K}[{T_1,\ldots,T_4}]/ \bangle{T_1^7T_2+T_3^5+T_4^2}$
&
$(1,3,2,5)$
&
$1/3$
\\
\bottomrule
\\[2ex]
\multicolumn{4}{c}{\bf $[\operatorname{Cl}(X):\operatorname{Pic}(X)] = 4$}
\\[1ex]
\toprule
No.
&
$\mathcal{R}(X)$
&
$(w_1,\ldots,w_4)$
&
$(-K_X)^2$
\\
\midrule
6
&
${\mathbb K}[{T_1,\ldots,T_4}] / \bangle{T_1^2T_2+T_3^3+T_4^2}$
&
$(1,4,2,3)$
&
$4$
\\
\midrule
7
&
${\mathbb K}[{T_1,\ldots,T_4}] / \bangle{T_1^6T_2+T_3^5+T_4^2}$
&
$(1,4,2,5)$
&
$1$
\\
\bottomrule
\\[2ex]
\multicolumn{4}{c}{\bf $[\operatorname{Cl}(X):\operatorname{Pic}(X)] = 5$}
\\[1ex]
\toprule
No.
&
$\mathcal{R}(X)$
&
$(w_1,\ldots,w_4)$
&
$(-K_X)^2$
\\
\midrule
8
&
${\mathbb K}[{T_1,\ldots,T_4}] / \bangle{T_1T_2+T_3^3+T_4^2}$
&
$(1,5,2,3)$
&
$5$
\\
\midrule
9
&
${\mathbb K}[{T_1,\ldots,T_4}] / \bangle{T_1^5T_2+T_3^5+T_4^2}$
&
$(1,5,2,5)$
&
$9/5$
\\
\midrule
10
&
${\mathbb K}[{T_1,\ldots,T_4}] / \bangle{T_1^9T_2+T_3^7+T_4^2}$
&
$(1,5,2,7)$
&
$1/5$
\\
\midrule
11
&
${\mathbb K}[{T_1,\ldots,T_4}] / \bangle{T_1^7T_2+T_3^4+T_4^3}$
&
$(1,5,3,4)$
&
$1/5$
\\
\bottomrule
\\[2ex]
\multicolumn{4}{c}{\bf $[\operatorname{Cl}(X):\operatorname{Pic}(X)] = 6$}
\\[1ex]
\toprule
No.
&
$\mathcal{R}(X)$
&
$(w_1,\ldots,w_4)$
&
$(-K_X)^2$
\\
\midrule
12
&
${\mathbb K}[{T_1,\ldots,T_4}] / \bangle{T_1^4T_2+T_3^5+T_4^2}$
&
$(1,6,2,5)$
&
$8/3$
\\
\midrule
13
&
${\mathbb K}[{T_1,\ldots,T_4}] / \bangle{T_1^8T_2+T_3^7+T_4^2}$
&
$(1,6,2,7)$
&
$2/3$
\\
\midrule
14
&
${\mathbb K}[{T_1,\ldots,T_4}] / \bangle{T_1^6T_2+T_3^4+T_4^3}$
&
$(1,6,3,4)$
&
$2/3$
\\
\midrule
15
&
${\mathbb K}[{T_1,\ldots,T_4}] / \bangle{T_1^9T_2+T_3^3+T_4^2}$
&
$(1,3,4,6)$
&
$2/3$
\\
\bottomrule
\end{longtable}
\end{center}
\end{theorem}
\begin{proof}
As mentioned, Theorems~\ref{thm:factrings},
\ref{Th:FiniteIndex} and
Propositions~\ref{prop:coxchar}, \ref{Prop:FanoPicard}
produce a list of all Cox rings of surfaces
with the prescribed data.
Doing this computation, we obtain the list
of the assertion.
Note that none of the Cox rings listed
is a polynomial ring and hence none of the
resulting surfaces $X$ is a toric variety.
To show that different members of the
list are not isomorphic to each other,
we use the following two facts.
Firstly, observe that any two minimal
systems of homogeneous generators of
the Cox ring have, up to reordering,
the same list of degrees; thus this
list is an isomorphism invariant.
Secondly, by Construction~\ref{constr:Kgrading},
the exponents $l_{ij} >1$ are precisely the
orders of the non-trivial isotropy groups
of one-codimensional orbits of the action
of the torus $T$ on $X$.
Using both principles and going through the
list, we see that different members $X$
cannot be $T$-equivariantly isomorphic to
each other.
Since all listed $X$ are non-toric,
the effective complexity one torus action
on each $X$ corresponds to a maximal torus in
the linear algebraic group ${\rm Aut}(X)$.
Any two maximal tori in the automorphism
group are conjugate, and thus we can conclude
that two members are isomorphic if and only if they
are $T$-equivariantly isomorphic.
\end{proof}
We remark that in~\cite[Section~4]{tfano},
log del Pezzo surfaces with an effective
${\mathbb K}^*$-action and Picard number 1 and
Gorenstein index less than 4 were classified.
The above list contains six such surfaces,
namely no. 1-4, 6 and~8;
these are exactly the ones where
the maximal exponents of the monomials
form a platonic triple, i.e.,
are of the form $(1,k,l)$, $(2,2,k)$, $(2,3,3)$,
$(2,3,4)$ or $(2,3,5)$.
The remaining ones, i.e., no. 5, 7, and~9-15
have non-log-terminal and thus non-rational
singularities; to check this one may compute
the resolutions via resolution of the ambient
weighted projective space as
in~\cite[Ex.~7.5]{Ha2}.
With the same scheme of proof
as in the surface case, one establishes
the following classification results
on Fano threefolds.
\goodbreak
\begin{theorem}
\label{thm:3fano}
Let $X$ be a three-dimensional
locally factorial non-toric Fano
variety with an effective action of a
two-dimensional torus such that $\operatorname{Cl}(X) = {\mathbb Z}$ holds.
Then its Cox ring is precisely
one of the following.
\begin{center}
\begin{longtable}[htbp]{llll}
\toprule
No.
&
$\mathcal{R}(X)$
&
$(w_1,\ldots, w_5)$
&
$(-K_X)^3$
\\
\midrule
1
\hspace{.5cm}
&
$
{\mathbb K}[T_1, \ldots, T_5]
\ / \
\bangle{T_1T_2^5 + T_3^3 + T_4^2}
$
\hspace{.5cm}
&
$(1,1,2,3,1)$
\hspace{.5cm}
&
$8$
\\
\midrule
2
&
$
{\mathbb K}[T_1, \ldots, T_5]
\ / \
\bangle{T_1T_2T_3^4 + T_4^3 + T_5^2}
$
&
$(1,1,1,2,3)$
&
$8$
\\
\midrule
3
&
$
{\mathbb K}[T_1, \ldots, T_5]
\ / \
\bangle{T_1T_2^2T_3^3 + T_4^3 + T_5^2}
$
&
$(1,1,1,2,3)$
&
$8$
\\
\midrule
4
&
$
{\mathbb K}[T_1, \ldots, T_5]
\ / \
\bangle{T_1T_2 + T_3T_4 + T_5^2}
$
&
$(1,1,1,1,1)$
&
$54$
\\
\midrule
5
&
$
{\mathbb K}[T_1, \ldots, T_5]
\ / \
\bangle{T_1T_2^2 + T_3T_4^2 + T_5^3}
$
&
$(1,1,1,1,1)$
&
$24$
\\
\midrule
6
&
$
{\mathbb K}[T_1, \ldots, T_5]
\ / \
\bangle{T_1T_2^3 + T_3T_4^3 + T_5^4}
$
&
$(1,1,1,1,1)$
&
$4$
\\
\midrule
7
&
$
{\mathbb K}[T_1, \ldots, T_5]
\ / \
\bangle{T_1T_2^3 + T_3T_4^3 + T_5^2}
$
&
$(1,1,1,1,2)$
&
$16$
\\
\midrule
8
&
$
{\mathbb K}[T_1, \ldots, T_5]
\ / \
\bangle{T_1T_2^5 + T_3T_4^5 + T_5^2}
$
&
$(1,1,1,1,3)$
&
$2$
\\
\midrule
9
&
$
{\mathbb K}[T_1, \ldots, T_5]
\ / \
\bangle{T_1T_2^5 + T_3^3T_4^3 + T_5^2}
$
&
$(1,1,1,1,3)$
&
$2$
\\
\bottomrule
\end{longtable}
\end{center}
\end{theorem}
The singular threefolds listed in this theorem
are rational degenerations of smooth Fano threefolds
from~\cite{fano3}.
The (smooth) general Fano threefolds of
the corresponding families are non-rational;
see~\cite{Gri} for no.~1-3,
\cite{CG} for no.~5,
\cite{IM} for no.~6,
\cite{voi,tim}~for no.~7
and \cite{Isk80} for no. 8-9.
Even if one allows certain mild singularities,
one still has non-rationality in some cases,
see \cite{Gri2}, \cite{Co,Pu}, \cite{CM}, \cite{CP}.
\begin{theorem}
\label{thm:3fano2}
Let $X$ be a three-dimensional non-toric Fano
variety
with an effective action of a two-dimensional torus such that
$\operatorname{Cl}(X)={\mathbb Z}$ and $[\operatorname{Cl}(X):\operatorname{Pic}(X)]=2$ hold.
Then its Cox ring is precisely one of the
following.
\begin{center}
\begin{longtable}[htbp]{llll}
\toprule
No. \hspace{.5cm}
&
$\mathcal{R}(X)$ \hspace{.5cm}
&
$(w_1,\ldots,w_5)$ \hspace{.5cm}
&
$(-K_X)^3$
\\
\midrule 1 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^4T_2+T_3^3+T_4^2 \rangle$
&
$(1,2,2,3,1)$
&
$27/2$
\\
\midrule 2 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^4T_2^3+T_3^5+T_4^2 \rangle$
&
$(1,2,2,5,1)$
&
$1/2$
\\
\midrule 3 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^8T_2+T_3^5+T_4^2 \rangle$
&
$(1,2,2,5,1)$
&
$1/2$
\\
\midrule 4 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^4T_2+T_3^3+T_4^2 \rangle$
&
$(1,2,2,3,2)$
&
$16$
\\
\midrule 5 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^4T_2^3+T_3^5+T_4^2 \rangle$
&
$(1,2,2,5,2)$
&
$2$
\\
\midrule 6 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^8T_2+T_3^5+T_4^2 \rangle$
&
$(1,2,2,5,2)$
&
$2$
\\
\midrule 7 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2^5+T_3^3+T_4^2 \rangle$
&
$(1,1,2,3,2)$
&
$27/2$
\\
\midrule 8 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2^9+T_3^5+T_4^2 \rangle$
&
$(1,1,2,5,2)$
&
$1/2$
\\
\midrule 9 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^3T_2^7+T_3^5+T_4^2 \rangle$
&
$(1,1,2,5,2)$
&
$1/2$
\\
\midrule 10 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2^{11}+T_3^3+T_4^2 \rangle$
&
$(1,1,4,6,1)$
&
$1/2$
\\
\midrule 11 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^5T_2^7+T_3^3+T_4^2 \rangle$
&
$(1,1,4,6,1)$
&
$1/2$
\\
\midrule 12 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2^{11}+T_3^3+T_4^2 \rangle$
&
$(1,1,4,6,2)$
&
$2$
\\
\midrule 13 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^5T_2^7+T_3^3+T_4^2 \rangle$
&
$(1,1,4,6,2)$
&
$2$
\\
\midrule 14 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^2T_2^5+T_3^3+T_4^2 \rangle$
&
$(1,2,4,6,1)$
&
$2$
\\
\midrule 15 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^{10}T_2+T_3^3+T_4^2 \rangle$
&
$(1,2,4,6,1)$
&
$2$
\\
\midrule 16 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2^2+T_3^3+T_4^2 \rangle$
&
$(2,2,2,3,1)$
&
$16$
\\
\midrule 17 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2^4+T_3^5+T_4^2 \rangle$
&
$(2,2,2,5,1)$
&
$2$
\\
\midrule 18 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^2T_2^3+T_3^5+T_4^2 \rangle$
&
$(2,2,2,5,1)$
&
$2$
\\
\midrule 19 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2^2+T_3T_4+T_5^3 \rangle$
&
$(1,1,1,2,1)$
&
$81/2$
\\
\midrule 20 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2^4+T_3T_4^2+T_5^5 \rangle$
&
$(1,1,1,2,1)$
&
$5/2$
\\
\midrule 21 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^2T_2^3+T_3T_4^2+T_5^5 \rangle$
&
$(1,1,1,2,1)$
&
$5/2$
\\
\midrule 22 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2^3+T_3^2T_4+T_5^4 \rangle$
&
$(1,1,1,2,1)$
&
$16$
\\
\midrule 23 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2^4+T_3^3T_4+T_5^5 \rangle$
&
$(1,1,1,2,1)$
&
$5/2$
\\
\midrule 24 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^2T_2^3+T_3^3T_4+T_5^5 \rangle$
&
$(1,1,1,2,1)$
&
$5/2$
\\
\midrule 25 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2^3+T_3^2T_4+T_5^2 \rangle$
&
$(1,1,1,2,2)$
&
$27$
\\
\midrule 26 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2^5+T_3^2T_4^2+T_5^3 \rangle$
&
$(1,1,1,2,2)$
&
$3/2$
\\
\midrule 27 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2^5+T_3^4T_4+T_5^3 \rangle$
&
$(1,1,1,2,2)$
&
$3/2$
\\
\midrule 28 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^2T_2^4+T_3^4T_4+T_5^3 \rangle$
&
$(1,1,1,2,2)$
&
$3/2$
\\
\midrule 29 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2^5+T_3^4T_4+T_5^2 \rangle$
&
$(1,1,1,2,3)$
&
$8$
\\
\midrule 30 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^3T_2^3+T_3^4T_4+T_5^2 \rangle$
&
$(1,1,1,2,3)$
&
$8$
\\
\midrule 31 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2^7+T_3^2T_4^3+T_5^2 \rangle$
&
$(1,1,1,2,4)$
&
$1$
\\
\midrule 32 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^3T_2^5+T_3^2T_4^3+T_5^2 \rangle$
&
$(1,1,1,2,4)$
&
$1$
\\
\midrule 33 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2^7+T_3^6T_4+T_5^2 \rangle$
&
$(1,1,1,2,4)$
&
$1$
\\
\midrule 34 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^3T_2^5+T_3^6T_4+T_5^2 \rangle$
&
$(1,1,1,2,4)$
&
$1$
\\
\midrule 35 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2^3+T_3T_4+T_5^4 \rangle$
&
$(1,1,2,2,1)$
&
$27$
\\
\midrule 36 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2^5+T_3T_4^2+T_5^6 \rangle$
&
$(1,1,2,2,1)$
&
$3/2$
\\
\midrule 37 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2^3+T_3T_4+T_5^2 \rangle$
&
$(1,1,2,2,2)$
&
$32$
\\
\midrule 38 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2^5+T_3T_4^2+T_5^3 \rangle$
&
$(1,1,2,2,2)$
&
$6$
\\
\midrule 39 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^2T_2^4+T_3T_4^2+T_5^3 \rangle$
&
$(1,1,2,2,2)$
&
$6$
\\
\midrule 40 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^3T_2^3+T_3T_4^2+T_5^2 \rangle$
&
$(1,1,2,2,3)$
&
$27/2$
\\
\midrule 41 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^3T_2^5+T_3T_4^3+T_5^2 \rangle$
&
$(1,1,2,2,4)$
&
$4$
\\
\midrule 42 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2^5+T_3T_4^2+T_5^2 \rangle$
&
$(1,1,2,2,3)$
&
$27/2$
\\
\midrule 43 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2^7+T_3T_4^3+T_5^2 \rangle$
&
$(1,1,2,2,4)$
&
$4$
\\
\midrule 44 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2^9+T_3T_4^4+T_5^2 \rangle$
&
$(1,1,2,2,5)$
&
$1/2$
\\
\midrule 45 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2^9+T_3^2T_4^3+T_5^2 \rangle$
&
$(1,1,2,2,5)$
&
$1/2$
\\
\midrule 46 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^3T_2^7+T_3T_4^4+T_5^2 \rangle$
&
$(1,1,2,2,5)$
&
$1/2$
\\
\midrule 47 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^3T_2^7+T_3^2T_4^3+T_5^2 \rangle$
&
$(1,1,2,2,5)$
&
$1/2$
\\
\midrule 48 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^5T_2^5+T_3T_4^4+T_5^2 \rangle$
&
$(1,1,2,2,5)$
&
$1/2$
\\
\midrule 49 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^5T_2^5+T_3^2T_4^3+T_5^2 \rangle$
&
$(1,1,2,2,5)$
&
$1/2$
\\
\midrule 50 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2+T_3T_4+T_5^3 \rangle$
&
$(1,2,1,2,1)$
&
$48$
\\
\midrule 51 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^2T_2+T_3^2T_4+T_5^4 \rangle$
&
$(1,2,1,2,1)$
&
$27$
\\
\midrule 52 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2^2+T_3T_4^2+T_5^5 \rangle$
&
$(1,2,1,2,1)$
&
$10$
\\
\midrule 53 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2^2+T_3^3T_4+T_5^5 \rangle$
&
$(1,2,1,2,1)$
&
$10$
\\
\midrule 54 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^3T_2+T_3^3T_4+T_5^5 \rangle$
&
$(1,2,1,2,1)$
&
$10$
\\
\midrule 55 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^4T_2+T_3^4T_4+T_5^6 \rangle$
&
$(1,2,1,2,1)$
&
$3/2$
\\
\midrule 56 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^2T_2+T_3^2T_4+T_5^2 \rangle$
&
$(1,2,1,2,2)$
&
$32$
\\
\midrule 57 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^2T_2^2+T_3^4T_4+T_5^3 \rangle$
&
$(1,2,1,2,2)$
&
$6$
\\
\midrule 58 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^4T_2+T_3^4T_4+T_5^3 \rangle$
&
$(1,2,1,2,2)$
&
$6$
\\
\midrule 59 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^4T_2+T_3^4T_4+T_5^2 \rangle$
&
$(1,2,1,2,3)$
&
$27/2$
\\
\midrule 60 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^2T_2^3+T_3^2T_4^3+T_5^2 \rangle$
&
$(1,2,1,2,4)$
&
$4$
\\
\midrule 61 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^2T_2^3+T_3^6T_4+T_5^2 \rangle$
&
$(1,2,1,2,4)$
&
$4$
\\
\midrule 62 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^6T_2+T_3^6T_4+T_5^2 \rangle$
&
$(1,2,1,2,4)$
&
$4$
\\
\midrule 63 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^4T_2^3+T_3^4T_4^3+T_5^2 \rangle$
&
$(1,2,1,2,5)$
&
$1/2$
\\
\midrule 64 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^8T_2+T_3^4T_4^3+T_5^2 \rangle$
&
$(1,2,1,2,5)$
&
$1/2$
\\
\midrule 65 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^8T_2+T_3^8T_4+T_5^2 \rangle$
&
$(1,2,1,2,5)$
&
$1/2$
\\
\midrule 66 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^2T_2+T_3T_4+T_5^4 \rangle$
&
$(1,2,2,2,1)$
&
$32$
\\
\midrule 67 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^4T_2+T_3T_4^2+T_5^6 \rangle$
&
$(1,2,2,2,1)$
&
$6$
\\
\midrule 68 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^4T_2+T_3T_4^2+T_5^2 \rangle$
&
$(1,2,2,2,3)$
&
$16$
\\
\midrule 69 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^4T_2^3+T_3T_4^4+T_5^2 \rangle$
&
$(1,2,2,2,5)$
&
$2$
\\
\midrule 70 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^4T_2^3+T_3^2T_4^3+T_5^2 \rangle$
&
$(1,2,2,2,5)$
&
$2$
\\
\midrule 71 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^8T_2+T_3T_4^4+T_5^2 \rangle$
&
$(1,2,2,2,5)$
&
$2$
\\
\midrule 72 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^8T_2+T_3^2T_4^3+T_5^2 \rangle$
&
$(1,2,2,2,5)$
&
$2$
\\
\midrule 73 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2T_3^{10}+T_4^3+T_5^2 \rangle$
&
$(1,1,1,4,6)$
&
$1/2$
\\
\midrule 74 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2^2T_3^9+T_4^3+T_5^2 \rangle$
&
$(1,1,1,4,6)$
&
$1/2$
\\
\midrule 75 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2^3T_3^8+T_4^3+T_5^2 \rangle$
&
$(1,1,1,4,6)$
&
$1/2$
\\
\midrule 76 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2^4T_3^7+T_4^3+T_5^2 \rangle$
&
$(1,1,1,4,6)$
&
$1/2$
\\
\midrule 77 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2^5T_3^6+T_4^3+T_5^2 \rangle$
&
$(1,1,1,4,6)$
&
$1/2$
\\
\midrule 78 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^2T_2^3T_3^7+T_4^3+T_5^2 \rangle$
&
$(1,1,1,4,6)$
&
$1/2$
\\
\midrule 79 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^2T_2^5T_3^5+T_4^3+T_5^2 \rangle$
&
$(1,1,1,4,6)$
&
$1/2$
\\
\midrule 80 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^3T_2^4T_3^5+T_4^3+T_5^2 \rangle$
&
$(1,1,1,4,6)$
&
$1/2$
\\
\midrule 81 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2T_3^2+T_4^3+T_5^2 \rangle$
&
$(1,1,2,2,3)$
&
$27/2$
\\
\midrule 82 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2^3T_3+T_4^3+T_5^2 \rangle$
&
$(1,1,2,2,3)$
&
$27/2$
\\
\midrule 83 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^2T_2^2T_3+T_4^3+T_5^2 \rangle$
&
$(1,1,2,2,3)$
&
$27/2$
\\
\midrule 84 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2T_3^4+T_4^5+T_5^2 \rangle$
&
$(1,1,2,2,5)$
&
$1/2$
\\
\midrule 85 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2^3T_3^3+T_4^5+T_5^2 \rangle$
&
$(1,1,2,2,5)$
&
$1/2$
\\
\midrule 86 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2^5T_3^2+T_4^5+T_5^2 \rangle$
&
$(1,1,2,2,5)$
&
$1/2$
\\
\midrule 87 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2^7T_3+T_4^5+T_5^2 \rangle$
&
$(1,1,2,2,5)$
&
$1/2$
\\
\midrule 88 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^2T_2^2T_3^3+T_4^5+T_5^2 \rangle$
&
$(1,1,2,2,5)$
&
$1/2$
\\
\midrule 89 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^2T_2^6T_3+T_4^5+T_5^2 \rangle$
&
$(1,1,2,2,5)$
&
$1/2$
\\
\midrule 90 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^3T_2^3T_3^2+T_4^5+T_5^2 \rangle$
&
$(1,1,2,2,5)$
&
$1/2$
\\
\midrule 91 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^3T_2^5T_3+T_4^5+T_5^2 \rangle$
&
$(1,1,2,2,5)$
&
$1/2$
\\
\midrule 92 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^4T_2^4T_3+T_4^5+T_5^2 \rangle$
&
$(1,1,2,2,5)$
&
$1/2$
\\
\midrule 93 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2T_3^5+T_4^3+T_5^2 \rangle$
&
$(1,1,2,4,6)$
&
$2$
\\
\midrule 94 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2^3T_3^4+T_4^3+T_5^2 \rangle$
&
$(1,1,2,4,6)$
&
$2$
\\
\midrule 95 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2^5T_3^3+T_4^3+T_5^2 \rangle$
&
$(1,1,2,4,6)$
&
$2$
\\
\midrule 96 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2^7T_3^2+T_4^3+T_5^2 \rangle$
&
$(1,1,2,4,6)$
&
$2$
\\
\midrule 97 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1T_2^9T_3+T_4^3+T_5^2 \rangle$
&
$(1,1,2,4,6)$
&
$2$
\\
\midrule 98 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^2T_2^4T_3^3+T_4^3+T_5^2 \rangle$
&
$(1,1,2,4,6)$
&
$2$
\\
\midrule 99 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^2T_2^8T_3+T_4^3+T_5^2 \rangle$
&
$(1,1,2,4,6)$
&
$2$
\\
\midrule 100 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^3T_2^5T_3^2+T_4^3+T_5^2 \rangle$
&
$(1,1,2,4,6)$
&
$2$
\\
\midrule 101 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^3T_2^7T_3+T_4^3+T_5^2 \rangle$
&
$(1,1,2,4,6)$
&
$2$
\\
\midrule 102 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^4T_2^6T_3+T_4^3+T_5^2 \rangle$
&
$(1,1,2,4,6)$
&
$2$
\\
\midrule 103 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^5T_2^5T_3+T_4^3+T_5^2 \rangle$
&
$(1,1,2,4,6)$
&
$2$
\\
\midrule 104 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^2T_2T_3+T_4^3+T_5^2 \rangle$
&
$(1,2,2,2,3)$
&
$16$
\\
\midrule 105 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^2T_2T_3^3+T_4^5+T_5^2 \rangle$
&
$(1,2,2,2,5)$
&
$2$
\\
\midrule 106 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^4T_2T_3^2+T_4^5+T_5^2 \rangle$
&
$(1,2,2,2,5)$
&
$2$
\\
\midrule 107 &
${\mathbb K}[{T_1,\ldots,T_5}]/
\langle T_1^6T_2T_3+T_4^5+T_5^2 \rangle$
&
$(1,2,2,2,5)$
&
$2$
\\
\bottomrule
\end{longtable}
\end{center}
\end{theorem}
The varieties nos.~2, 3 and 25, 26 are rational
degenerations of quasismooth varieties from
the list in \cite{IaFl}.
In \cite{CPR} the non-rationality of a general
(quasismooth) element of the corresponding family
was proved.
The varieties listed so far might
suggest that we always obtain only
one relation in the Cox ring.
We now discuss some examples showing
that, for sufficiently large Picard index,
we need in general more than one
relation; here this always refers
to a presentation as in
Theorem~\ref{thm:factrings}~(ii).
\begin{example}
\label{ex:fanosurf2rel}
A Fano ${\mathbb K}^*$-surface $X$ with $\operatorname{Cl}(X)={\mathbb Z}$
such that the Cox ring $\mathcal{R}(X)$ needs
two relations.
Consider the ${\mathbb Z}$-graded ring
\begin{eqnarray*}
R & = &
{\mathbb K}[T_{01},T_{02},T_{11},T_{21},T_{31}]/\bangle{g_0,g_1},
\end{eqnarray*}
where the degrees of $T_{01},T_{02},T_{11},T_{21},T_{31}$
are $29,1,6,10,15$, respectively,
and the relations $g_0,g_1$ are given by
$$
g_0 \ := \ T_{01}T_{02}+T_{11}^5+T_{21}^3,
\qquad\qquad
g_1 \ := \
\alpha_{23} T_{11}^5+\alpha_{31} T_{21}^3+\alpha_{12}T_{31}^2
$$
Then $R$ is the Cox ring of a Fano ${\mathbb K}^*$-surface.
Note that the Picard index is given by
$
[\operatorname{Cl}(X):\operatorname{Pic}(X)]= \mathrm{lcm}(29,1)=29.
$
\end{example}
\begin{proposition}
\label{prop:fano22rel}
Let $X$ be a non-toric Fano surface with
an effective ${\mathbb K}^*$-action such that
$\operatorname{Cl}(X) \cong {\mathbb Z}$ and $[\operatorname{Cl}(X):\operatorname{Pic}(X)] < 29$
hold.
Then the Cox ring of $X$ is of the form
\begin{eqnarray*}
\mathcal{R}(X)
& \cong &
{\mathbb K}[T_1, \ldots, T_4]/\bangle{T_1^{l_1}T_2^{l_2} + T_3^{l_3} + T_4^{l_4}}.
\end{eqnarray*}
\end{proposition}
\begin{proof}
The Cox ring $\mathcal{R}(X)$ is as in
Theorem~\ref{thm:factrings}, and, in the
notation used there, we have
$n_0 + \ldots + n_r + m = 2+r$.
This leaves us with the possibilities
$n_0=m=1$ and $n_0=2$, $m=0$.
In the first case, Proposition~\ref{prop:MoriCox}
tells us that the Picard index of $X$
is at least $30$.
So, consider the case $n_0=2$ and $m=0$.
Then, according to Theorem~\ref{thm:factrings},
the Cox ring $\mathcal{R}(X)$ is
${\mathbb K}[T_{01},T_{02},T_1, \ldots, T_r]$
divided by relations
$$
g_{0,1,2}=T_{01}^{l_{01}}T_{02}^{l_{02}} + T_1^{l_1} + T_2^{l_2},
\quad
g_{i,i+1,i+2}=
\alpha_{i+1,i+2}T_i^{l_i} +
\alpha_{i+2,i}T_{i+1}^{l_{i+1}} +
\alpha_{i,i+1}T_{i+2}^{l_{i+2}},
$$
where $1 \le i \le r-2$.
We have to show that $r=2$ holds.
Set $\mu := [\operatorname{Cl}(X):\operatorname{Pic}(X)]$ and
let $\gamma \in {\mathbb Z}$ denote the degree
of the relations. Then we have
$\gamma = w_il_i$ for $1 \le i \le r$,
where $w_i := \deg \, T_i$.
With $w_{0i} := \deg \, T_{0i}$,
Proposition~\ref{Prop:FanoPicard} gives us
\begin{eqnarray*}
(r-1) \gamma
& < &
w_{01} + w_{02} + w_1 + \ldots + w_r.
\end{eqnarray*}
We claim that $w_{01}$ and $w_{02}$ are coprime.
Otherwise, they would have a common prime divisor $p$.
This $p$ divides $\gamma = l_iw_i$.
Since $l_1,\ldots,l_r$ are pairwise coprime,
$p$ divides at least $r-1$ of the weights
$w_1,\ldots, w_r$.
This contradicts the Cox ring condition that
any $r+1$ of the $r+2$ weights generate the class
group ${\mathbb Z}$.
Thus, $w_{01}$ and $w_{02}$ are coprime and
we obtain
$$
\mu \ \ge \ \mathrm{lcm}(w_{01},w_{02})
\ = \ w_{01}\cdot w_{02}
\ \ge \ w_{01}+w_{02}-1.
$$
Now assume that $r \ge 3$ holds. Then we can conclude
$$
2 \gamma
\ < \
w_{01} + w_{02} + w_1 + w_2 + w_3
\ \le \
\mu + 1 +
\gamma \left( \frac{1}{l_1} + \frac{1}{l_2} + \frac{1}{l_3} \right)
$$
Since the numbers $l_i$ are pairwise coprime,
we obtain $l_1 \ge 5$, $l_2 \ge 3$ and $l_3 \ge 2$.
Moreover, $l_iw_i = l_jw_j$ implies $l_i \mid w_j$
and hence $l_1l_2l_3 \mid \gamma$. Thus, we have
$\gamma \ge 30$. Plugging this into the above
inequality gives
$$
\mu
\ > \
\gamma\left(2- \frac{1}{l_1} - \frac{1}{l_2} - \frac{1}{l_3} \right)-1
\ \ge \
30 \cdot \frac{29}{30} - 1
\ = \
28,
$$
and hence $\mu \ge 29$, a contradiction. Thus $r = 2$ holds.
\end{proof}
The Fano assumption is essential in this result;
if we omit it, then we may even construct locally
factorial surfaces with a Cox ring that needs more
than one relation.
\begin{example}
A locally factorial
${\mathbb K}^*$-surface $X$ with $\operatorname{Cl}(X)={\mathbb Z}$
such that the Cox ring $\mathcal{R}(X)$
needs two relations.
Consider the ${\mathbb Z}$-graded ring
\begin{eqnarray*}
R & = &
{\mathbb K}[T_{01},T_{02},T_{11},T_{21},T_{31}]/\bangle{g_0,g_1},
\end{eqnarray*}
where the degrees of $T_{01},T_{02},T_{11},T_{21},T_{31}$
are $1,1,6,10,15$, respectively,
and the relations $g_0,g_1$ are given by
$$
g_0 \ := \ T_{01}^7T_{02}^{23}+T_{11}^5+T_{21}^3,
\qquad\qquad
g_1 \ := \
\alpha_{23} T_{11}^5+\alpha_{31}T_{21}^3+\alpha_{12}T_{31}^2
$$
Then $R$ is the Cox ring of a non-Fano ${\mathbb K}^*$-surface~$X$
of Picard index one, i.e., $X$ is locally factorial.
\end{example}
For non-toric Fano threefolds~$X$
with an effective 2-torus action
and $\operatorname{Cl}(X) \cong {\mathbb Z}$,
the classifications~\ref{thm:3fano}
and~\ref{thm:3fano2}
show that for Picard indices one and
two we only obtain hypersurfaces as
Cox rings.
The following example shows that
this stops at Picard index three.
\begin{example}
\label{ex:fano32rel}
A Fano threefold $X$ with $\operatorname{Cl}(X)={\mathbb Z}$
and a 2-torus action such that
the Cox ring $\mathcal{R}(X)$
needs two relations.
Consider
\begin{eqnarray*}
R
& = &
{\mathbb K}[T_{01},T_{02},T_{11},T_{12},T_{21},T_{31}]/
\bangle{g_0,g_1}
\end{eqnarray*}
where the degrees of $T_{01},T_{02},T_{11},T_{12},T_{21},T_{31}$
are $1,1,3,3,2,3$, respectively,
and the relations are given by
$$
g_0
\ = \
T_{01}^5T_{02}+T_{11}T_{12}+T_{21}^3,
\qquad
g_1
\ = \
\alpha_{23} T_{11}T_{12}+\alpha_{31}T_{21}^3+\alpha_{12}T_{31}^2.
$$
Then $R$ is the Cox ring of a Fano threefold
with a 2-torus action.
Note that the Picard index is given by
$$
[\operatorname{Cl}(X):\operatorname{Pic}(X)]
\ = \
\mathrm{lcm}(1,1,3,3)
\ = \ 3.
$$
\end{example}
Finally, we turn to locally factorial
Fano fourfolds.
Here we observe more than one relation
in the Cox ring even though the
varieties are locally factorial.
\begin{theorem}
Let $X$ be a four-dimensional locally factorial non-toric
Fano variety with an effective three-torus action such that
$\operatorname{Cl}(X)={\mathbb Z}$ holds.
Then its Cox ring is precisely one of the following.
\begin{center}
\begin{longtable}[htbp]{llll}
\toprule
No.
&
$\mathcal{R}(X)$
&
$(w_1,\ldots,w_6)$
&
$(-K_X)^4$
\\
\midrule 1 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2^5+T_3^3+T_4^2 \rangle$
&
$(1,1,2,3,1,1)$
&
$81$
\\
\midrule 2 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2^9+T_3^2+T_4^5 \rangle$
&
$(1,1,2,5,1,1)$
&
$1$
\\
\midrule 3 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1^3T_2^7+T_3^2+T_4^5 \rangle$
&
$(1,1,2,5,1,1)$
&
$1$
\\
\midrule 4 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2T_3^4+T_4^3+T_5^2 \rangle$
&
$(1,1,1,2,3,1)$
&
$81$
\\
\midrule 5 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2^2T_3^3+T_4^3+T_5^2 \rangle$
&
$(1,1,1,2,3,1)$
&
$81$
\\
\midrule 6 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2T_3^8+T_4^5+T_5^2 \rangle$
&
$(1,1,1,2,5,1)$
&
$1$
\\
\midrule 7 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2^2T_3^7+T_4^5+T_5^2 \rangle$
&
$(1,1,1,2,5,1)$
&
$1$
\\
\midrule 8 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2^3T_3^6+T_4^5+T_5^2 \rangle$
&
$(1,1,1,2,5,1)$
&
$1$
\\
\midrule 9 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2^4T_3^5+T_4^5+T_5^2 \rangle$
&
$(1,1,1,2,5,1)$
&
$1$
\\
\midrule 10 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1^2T_2^3T_3^5+T_4^5+T_5^2 \rangle$
&
$(1,1,1,2,5,1)$
&
$1$
\\
\midrule 11 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1^3T_2^3T_3^4+T_4^5+T_5^2 \rangle$
&
$(1,1,1,2,5,1)$
&
$1$
\\
\midrule 12 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2+T_3T_4+T_5^2 \rangle$
&
$(1,1,1,1,1,1)$
&
$512$
\\
\midrule 13 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2^2+T_3T_4^2+T_5^3 \rangle$
&
$(1,1,1,1,1,1)$
&
$243$
\\
\midrule 14 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2^3+T_3T_4^3+T_5^4 \rangle$
&
$(1,1,1,1,1,1)$
&
$64$
\\
\midrule 15 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2^4+T_3T_4^4+T_5^5 \rangle$
&
$(1,1,1,1,1,1)$
&
$5$
\\
\midrule 16 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2^4+T_3^2T_4^3+T_5^5 \rangle$
&
$(1,1,1,1,1,1)$
&
$5$
\\
\midrule 17 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1^2T_2^3+T_3^2T_4^3+T_5^5 \rangle$
&
$(1,1,1,1,1,1)$
&
$5$
\\
\midrule 18 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2^3+T_3T_4^3+T_5^2 \rangle$
&
$(1,1,1,1,2,1)$
&
$162$
\\
\midrule 19 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2^5+T_3T_4^5+T_5^3 \rangle$
&
$(1,1,1,1,2,1)$
&
$3$
\\
\midrule 20 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2^5+T_3^2T_4^4+T_5^3 \rangle$
&
$(1,1,1,1,2,1)$
&
$3$
\\
\midrule 21 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2^5+T_3T_4^5+T_5^2 \rangle$
&
$(1,1,1,1,3,1)$
&
$32$
\\
\midrule 22 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2^5+T_3^3T_4^3+T_5^2 \rangle$
&
$(1,1,1,1,3,1)$
&
$32$
\\
\midrule 23 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2^7+T_3T_4^7+T_5^2 \rangle$
&
$(1,1,1,1,4,1)$
&
$2$
\\
\midrule 24 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2^7+T_3^3T_4^5+T_5^2 \rangle$
&
$(1,1,1,1,4,1)$
&
$2$
\\
\midrule 25 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1^3T_2^5+T_3^3T_4^5+T_5^2 \rangle$
&
$(1,1,1,1,4,1)$
&
$2$
\\
\midrule 26 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2T_3T_4^3+T_5^3+T_6^2 \rangle$
&
$(1,1,1,1,2,3)$
&
$81$
\\
\midrule 27 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2T_3^2T_4^2+T_5^3+T_6^2 \rangle$
&
$(1,1,1,1,2,3)$
&
$81$
\\
\midrule 28 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2T_3T_4^7+T_5^5+T_6^2 \rangle$
&
$(1,1,1,1,2,5)$
&
$1$
\\
\midrule 29 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2T_3^2T_4^6+T_5^5+T_6^2 \rangle$
&
$(1,1,1,1,2,5)$
&
$1$
\\
\midrule 30 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2T_3^3T_4^5+T_5^5+T_6^2 \rangle$
&
$(1,1,1,1,2,5)$
&
$1$
\\
\midrule 31 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2T_3^4T_4^4+T_5^5+T_6^2 \rangle$
&
$(1,1,1,1,2,5)$
&
$1$
\\
\midrule 32 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2^2T_3^2T_4^5+T_5^5+T_6^2 \rangle$
&
$(1,1,1,1,2,5)$
&
$1$
\\
\midrule 33 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2^2T_3^3T_4^4+T_5^5+T_6^2 \rangle$
&
$(1,1,1,1,2,5)$
&
$1$
\\
\midrule 34 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2^3T_3^3T_4^3+T_5^5+T_6^2 \rangle$
&
$(1,1,1,1,2,5)$
&
$1$
\\
\midrule 35 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1^2T_2^2T_3^3T_4^3+T_5^5+T_6^2 \rangle$
&
$(1,1,1,1,2,5)$
&
$1$
\\
\midrule 36 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2T_3+T_4T_5^2+T_6^3 \rangle$
&
$(1,1,1,1,1,1)$
&
$243$
\\
\midrule 37 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2T_3^2+T_4T_5^3+T_6^4 \rangle$
&
$(1,1,1,1,1,1)$
&
$64$
\\
\midrule 38 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2T_3^3+T_4T_5^4+T_6^5 \rangle$
&
$(1,1,1,1,1,1)$
&
$5$
\\
\midrule 39 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2T_3^3+T_4^2T_5^3+T_6^5 \rangle$
&
$(1,1,1,1,1,1)$
&
$5$
\\
\midrule 40 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2^2T_3^2+T_4T_5^4+T_6^5 \rangle$
&
$(1,1,1,1,1,1)$
&
$5$
\\
\midrule 41 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2^2T_3^2+T_4^2T_5^3+T_6^5 \rangle$
&
$(1,1,1,1,1,1)$
&
$5$
\\
\midrule 42 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2T_3^2+T_4T_5^3+T_6^2 \rangle$
&
$(1,1,1,1,1,2)$
&
$162$
\\
\midrule 43 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2T_3^4+T_4T_5^5+T_6^3 \rangle$
&
$(1,1,1,1,1,2)$
&
$3$
\\
\midrule 44 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2T_3^4+T_4^2T_5^4+T_6^3 \rangle$
&
$(1,1,1,1,1,2)$
&
$3$
\\
\midrule 45 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2^2T_3^3+T_4T_5^5+T_6^3 \rangle$
&
$(1,1,1,1,1,2)$
&
$3$
\\
\midrule 46 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2^2T_3^3+T_4^2T_5^4+T_6^3 \rangle$
&
$(1,1,1,1,1,2)$
&
$3$
\\
\midrule 47 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1^2T_2^2T_3^2+T_4T_5^5+T_6^3 \rangle$
&
$(1,1,1,1,1,2)$
&
$3$
\\
\midrule 48 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2^2T_3^3+T_4^3T_5^3+T_6^2 \rangle$
&
$(1,1,1,1,1,3)$
&
$32$
\\
\midrule 49 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2^2T_3^3+T_4T_5^5+T_6^2 \rangle$
&
$(1,1,1,1,1,3)$
&
$32$
\\
\midrule 50 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2T_3^4+T_4^3T_5^3+T_6^2 \rangle$
&
$(1,1,1,1,1,3)$
&
$32$
\\
\midrule 51 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2T_3^4+T_4T_5^5+T_6^2 \rangle$
&
$(1,1,1,1,1,3)$
&
$32$
\\
\midrule 52 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2T_3^6+T_4T_5^7+T_6^2 \rangle$
&
$(1,1,1,1,1,4)$
&
$2$
\\
\midrule 53 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2T_3^6+T_4^3T_5^5+T_6^2 \rangle$
&
$(1,1,1,1,1,4)$
&
$2$
\\
\midrule 54 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2^2T_3^5+T_4T_5^7+T_6^2 \rangle$
&
$(1,1,1,1,1,4)$
&
$2$
\\
\midrule 55 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2^2T_3^5+T_4^3T_5^5+T_6^2 \rangle$
&
$(1,1,1,1,1,4)$
&
$2$
\\
\midrule 56 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2^3T_3^4+T_4T_5^7+T_6^2 \rangle$
&
$(1,1,1,1,1,4)$
&
$2$
\\
\midrule 57 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2^3T_3^4+T_4^3T_5^5+T_6^2 \rangle$
&
$(1,1,1,1,1,4)$
&
$2$
\\
\midrule 58 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1^2T_2^3T_3^3+T_4T_5^7+T_6^2 \rangle$
&
$(1,1,1,1,1,4)$
&
$2$
\\
\midrule 59 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1^2T_2^3T_3^3+T_4^3T_5^5+T_6^2 \rangle$
&
$(1,1,1,1,1,4)$
&
$2$
\\
\midrule 60 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2+T_3T_4+T_5T_6 \rangle$
&
$(1,1,1,1,1,1)$
&
$512$
\\
\midrule 61 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2^2+T_3T_4^2+T_5T_6^2 \rangle$
&
$(1,1,1,1,1,1)$
&
$243$
\\
\midrule 62 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2^3+T_3T_4^3+T_5T_6^3 \rangle$
&
$(1,1,1,1,1,1)$
&
$64$
\\
\midrule 63 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2^3+T_3T_4^3+T_5^2T_6^2 \rangle$
&
$(1,1,1,1,1,1)$
&
$64$
\\
\midrule 64 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2^4+T_3T_4^4+T_5T_6^4 \rangle$
&
$(1,1,1,1,1,1)$
&
$5$
\\
\midrule 65 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2^4+T_3T_4^4+T_5^2T_6^3 \rangle$
&
$(1,1,1,1,1,1)$
&
$5$
\\
\midrule 66 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1T_2^4+T_3^2T_4^3+T_5^2T_6^3 \rangle$
&
$(1,1,1,1,1,1)$
&
$5$
\\
\midrule 67 &
${\mathbb K}[{T_1,\ldots,T_6}]/
\langle T_1^2T_2^3+T_3^2T_4^3+T_5^2T_6^3 \rangle$
&
$(1,1,1,1,1,1)$
&
$5$
\\
\midrule 68
&
${\mathbb K}[T_1,\ldots,T_7] /
\left\langle
\begin{smallmatrix}
T_1T_2 + T_3T_4 + T_5T_6,
\\
\alpha T_3T_4 + T_5T_6 + T_7^2
\end{smallmatrix}
\right\rangle
$
&
$(1,1,1,1,1,1,1)$
&
$324$
\\
\midrule 69
&
${\mathbb K}[T_1,\ldots,T_7] /
\left\langle
\begin{smallmatrix}
T_1T_2^2 + T_3T_4^2 + T_5T_6^2,
\\
\alpha T_3T_4^2 + T_5T_6^2 + T_7^3
\end{smallmatrix}
\right\rangle
$
&
$(1,1,1,1,1,1,1)$
&
$9$
\\
\bottomrule
\end{longtable}
\end{center}
where in the last two rows of the table the parameter
$\alpha$ can be any element from ${\mathbb K}^* \setminus \{1\}$.
\end{theorem}
By the result of \cite{Pu2}, the singular
quintics of this list are rational degenerations
of smooth non-rational Fano fourfolds.
\section{Geometry of the locally factorial threefolds}
\label{sec:geom3folds}
In this section, we take a closer look
at the (factorial) singularities of the
Fano varieties~$X$ listed in Theorem~\ref{thm:3fano}.
Recall that the discrepancies of a resolution
$\varphi \colon \t{X} \to X$ of a
singularity are the coefficients
of $K_{\t{X}} - \varphi^* K_X$, where
$K_X$ and $K_{\t{X}}$ are canonical divisors
such that $K_{\t{X}} - \varphi^* K_X$
is supported on the exceptional locus
of $\varphi$.
A resolution is called crepant if its
discrepancies vanish, and a singularity
is called canonical (terminal)
if it admits a resolution with
nonnegative (positive) discrepancies.
By a relative minimal model we mean a
projective morphism $\t{X} \to X$ such that
$\t{X}$ has at most terminal singularities
and its relative canonical divisor is
relatively nef.
\begin{theorem}
\label{prop:3foldsing}
For the nine 3-dimensional Fano varieties
listed in Theorem~\ref{thm:3fano}, we have
the following statements.
\begin{enumerate}
\item
No.~4 is a smooth quadric in ${\mathbb P}^4$.
\item
Nos.~1,3,5,7 and 9 are singular with only
canonical singularities and all admit
a crepant resolution.
\item
Nos.~6 and 8 are singular with
non-canonical singularities but
admit a smooth relative minimal model.
\item
No.~2 is singular with only canonical singularities,
one of them of type $\mathbf{cA_1}$,
and admits only a singular relative minimal model.
\end{enumerate}
The Cox ring of the relative minimal model $\widetilde{X}$
as well as the Fano degree of $X$ itself are
given in the following table.
\begin{center}
\begin{longtable}[htbp]{llc}
\toprule
No.
&
\hspace{3.5cm}$\mathcal{R}(\widetilde{X})$
&
$(-K_X)^3$
\\
\midrule
1
&
${\mathbb K}[T_1,\ldots,T_{14}]/\langle T_1T_2T_3^2T_4^3T_5^4T_6^5+T_7^3T_8^2T_9+T_{10}^2T_{11}\rangle$
&
$8$
\\
\cmidrule{1-3}
2
&
${\mathbb K}[T_1,\ldots,T_9]/\langle T_1T_2T_3^2T_4^4+T_5T_6^2T_7^3+T_8^2 \rangle$
&
$8$
\\
\cmidrule{1-3}
3
&
${\mathbb K}[T_1,\ldots,T_8]/ \langle T_1T_2^2T_3^3+T_4T_5^3+T_6T_7^2\rangle$
&
$8$
\\
\cmidrule{1-3}
4
&
${\mathbb K}[T_1, \ldots, T_5]/\bangle{T_1T_2 + T_3T_4 + T_5^2}$
&
$54$
\\
\cmidrule{1-3}
5
&
${\mathbb K}[T_1,\ldots,T_6]/\langle T_1T_2^2+T_3T_4^2+T_5^3T_6\rangle$
&
$24$
\\
\cmidrule{1-3}
6
&
${\mathbb K}[T_1,\ldots,T_6]/ \langle T_1T_2^3+T_3T_4^3+T_5^4T_6 \rangle$
&
$4$
\\
\cmidrule{1-3}
7
&
${\mathbb K}[T_1, \ldots, T_7]/\bangle{T_1T_2^3 + T_3T_4^3 + T_5^2T_6}$
&
$16$
\\
\cmidrule{1-3}
8
&
${\mathbb K}[T_1, \ldots, T_7]/\bangle{T_1T_2^5 + T_3T_4^5 + T_5^2T_6}$
&
$2$
\\
\cmidrule{1-3}
9
&
$\displaystyle {\mathbb K}[T_1,\ldots,T_{46}]/
\left\langle
\begin{smallmatrix}
T_1T_2T_3T_4^2T_5^2T_6^3T_7^3T_8^4T_9^4T_{10}^5
\;+\; \\ T_{11} \cdots T_{18} T_{19}^2\cdots T_{24}^2 T_{25}^3 T_{26}^3
\;+\; T_{27}\cdots T_{32} T_{33}^2
\end{smallmatrix}
\right\rangle$
&
$2$\\
\bottomrule
\end{longtable}
\end{center}
\end{theorem}
For the proof, it is convenient to work
in the language of polyhedral divisors introduced
in~\cite{MR2207875} and~\cite{divfans}.
As we are interested in rational varieties
with a complexity one torus action,
we only have to consider polyhedral divisors
on the projective line $Y = {\mathbb P}^1$.
This considerably simplifies the general
definitions and allows us to give a
short summary.
In the sequel, $N \cong {\mathbb Z}^n$ denotes a lattice
and $M = {\rm Hom}(N,{\mathbb Z})$ its dual.
For the associated rational vector spaces we write
$N_{\mathbb Q}$ and $M_{\mathbb Q}$.
A {\em polyhedral divisor\/} on the projective line
$Y := {\mathbb P}^1$ is a formal sum
\begin{eqnarray*}
\mathcal{D}
& = &
\sum_{y \in Y} \mathcal{D}_y \cdot y,
\end{eqnarray*}
where the coefficients $\mathcal{D}_y \subseteq N_{{\mathbb Q}}$
are (possibly empty) convex polyhedra
all sharing the same tail (i.e.~recession)
cone $\mathcal{D}_Y = \sigma \subseteq N_{\mathbb Q}$,
and only finitely many $\mathcal{D}_y$ differ from
$\sigma$.
The {\em locus\/} of $\mathcal{D}$ is the open subset
$Y(\mathcal{D}) \subseteq Y$ obtained by removing all
points $y \in Y$ with $\mathcal{D}_y = \emptyset$.
For every $u \in \sigma^{\vee} \cap M$ we have the
{\em evaluation\/}
\begin{eqnarray*}
\mathcal{D}(u)
& := &
\sum_{y \in Y} \min_{v \in \mathcal{D}_y} \bangle{u ,v} \! \cdot \! y,
\end{eqnarray*}
which is a usual rational divisor on $Y(\mathcal{D})$.
We call the polyhedral divisor $\mathcal{D}$ on $Y$
{\em proper\/} if $\deg \, \mathcal{D} \subsetneq \sigma$
holds, where the {\em polyhedral degree\/}
is defined by
\begin{eqnarray*}
\deg \, \mathcal{D}
& := &
\sum_{y \in Y} \mathcal{D}_y.
\end{eqnarray*}
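To make these notions concrete, here is a small toy example of our own (it is hypothetical and not taken from the classifications above). Take $N = {\mathbb Z}$ and the polyhedral divisor
$$
\mathcal{D}
\ = \
\left(\tfrac{1}{2}+\sigma\right) \cdot 0
\ + \
\left(\tfrac{1}{4}+\sigma\right) \cdot \infty,
\qquad
\sigma \ = \ {\mathbb Q}_{\ge 0}.
$$
Then $\deg \, \mathcal{D} = \tfrac{3}{4} + \sigma \subsetneq \sigma$, so $\mathcal{D}$ is proper, and for every $u \in \sigma^\vee \cap M = {\mathbb Z}_{\ge 0}$ the evaluation is the rational divisor $\mathcal{D}(u) = \tfrac{u}{2} \cdot 0 + \tfrac{u}{4} \cdot \infty$ on $Y$.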
Every proper polyhedral divisor
$\mathcal{D}$ on $Y$ defines a
normal affine variety $X(\mathcal{D})$
of dimension ${\rm rk}\,(N)+1$
coming with an effective action of
the torus $T = {\rm Spec} \, {\mathbb K}[M]$:
set $X(\mathcal{D}) := {\rm Spec} \, A(\mathcal{D})$, where
$$
A(\mathcal{D})
\ := \
\bigoplus_{u \in \sigma^\vee \cap M} \Gamma(Y(\mathcal{D}),\mathcal{O}(\mathcal{D}(u)))
\ \subseteq \
\bigoplus_{u \in M} {\mathbb K}(Y) \cdot \chi^u.
$$
A {\em divisorial fan\/},
is a finite set $\Xi$ of polyhedral divisors
$\mathcal{D}$ on $Y$, all having their
polyhedral coefficients $\mathcal{D}_y$ in
the same $N_{{\mathbb Q}}$
and fulfilling certain compatibility
conditions, see~\cite{divfans}.
In particular, for every point $y \in Y$,
the {\em slice\/}
\begin{eqnarray*}
\Xi_y
& := &
\left\{\mathcal{D}_y; \; \mathcal{D} \in \Xi \right\}
\end{eqnarray*}
must be a polyhedral subdivision.
The {\em tail fan\/} is
the set $\Xi_Y$ of the tail cones $\mathcal{D}_Y$
of the $\mathcal{D} \in \Xi$; it is a fan in the
usual sense.
Given a divisorial fan $\Xi$,
the affine varieties $X(\mathcal{D})$, where
$\mathcal{D} \in \Xi$, glue equivariantly
together to a normal variety $X(\Xi)$,
and we obtain every rational normal
variety with a
complexity one torus action this way.
Smoothness of $X = X(\Xi)$
is checked locally.
For a proper polyhedral divisor $\mathcal{D}$ on $Y$,
we infer the following from~\cite[Theorem~3.3]{tfano}.
If $Y(\mathcal{D})$ is affine,
then $X(\mathcal{D})$ is smooth
if and only if
${\rm cone}(\{1\} \times \mathcal{D}_y) \subseteq {\mathbb Q} \times N_{{\mathbb Q}}$,
the convex, polyhedral cone
generated by $\{1\} \times \mathcal{D}_y$,
is regular for every $y \in Y(\mathcal{D})$.
If $Y(\mathcal{D}) = Y$ holds, then
$X(\mathcal{D})$ is smooth
if and only if there are $y,z \in Y$
such that $\mathcal{D} = \mathcal{D}_y y + \mathcal{D}_z z$
holds and
${\rm cone}(\{1\} \times \mathcal{D}_y)
+
{\rm cone}(\{-1\} \times \mathcal{D}_z)$
is a regular cone in
${\mathbb Q} \times N_{{\mathbb Q}}$.
Similarly to toric geometry, singularities
of $X(\mathcal{D})$ are resolved by means of
subdividing~$\mathcal{D}$.
This means to consider divisorial fans $\Xi$
such that for any $y \in Y$, the slice $\Xi_y$
is a subdivision of $\mathcal{D}_y$.
Such a $\Xi$ defines a dominant morphism
$X(\Xi) \rightarrow X(\mathcal{D})$
and a slight generalization
of~\cite[Thm.~7.5.]{divfans}
yields that this morphism is proper.
\goodbreak
\begin{proposition}
\label{sec:prop-divfans}
The 3-dimensional Fano varieties
No. 1-8 listed in Theorem~\ref{thm:3fano}
and their relative minimal models
arise from divisorial fans having the
following slices and tail cones.
\myrule{1}{
\threefoldG
}
\myrule{2}{
\threefoldE
}
\myrule{3}{
\threefoldF
}
\myrule{4}{
\threefoldA
}
\myrule{5}{
\threefoldB
}
\myrule{6}{
\threefoldC
}
\myrule{7}{
\threefoldH
}
\myrule{8}{
\threefoldD
}
\noindent
\end{proposition}
The above table should be interpreted as follows.
The first three pictures in each row are the slices at
$0$, $1$ and $\infty$ and the last one is the tail fan.
The divisorial fan of the Fano variety itself is
given by the solid polyhedra in the pictures.
Here, all polyhedra of the same gray scale
belong to the same polyhedral divisor.
The subdivisions for the relative minimal models
are sketched with dashed lines.
In general, polyhedra with the same tail cone all belong
to a unique polyhedral divisor with complete locus.
For the white cones inside the tail fan we have another rule:
for every polyhedron $\Delta \in \Xi_y$ with the given
white cone as its tail there is a polyhedral divisor
$\Delta \cdot y + \emptyset \cdot z \in \Xi$,
with $z \in \{0,1,\infty\} \setminus \{y\}$.
Here, different choices of $z$ lead to isomorphic
varieties, only the affine covering given by the $X(\mathcal{D})$ changes.
In order to prove Theorem~\ref{prop:3foldsing},
we also have to understand invariant divisors
on $X = X(\Xi)$ in terms of $\Xi$,
see~\cite[Prop.~4.11 and~4.12]{HaSu}
for details.
A first type of invariant prime divisors
is in bijection $D_{y,v} \leftrightarrow (y,v)$
with the vertices $(y,v)$, where $y \in Y$
and $v \in \Xi_y$ is of dimension zero.
The order of the generic isotropy group
along $D_{y,v}$ equals the minimal positive integer
$\mu(v)$ with $\mu(v) v \in N$.
A second type of invariant prime divisors
is in bijection
$D_{\varrho} \leftrightarrow \varrho$
with the extremal rays $\varrho \in \Xi_Y$,
where a ray $\varrho \in \Xi_Y$
is called extremal if there is a
$\mathcal{D} \in \Xi$ such that
$\varrho \subseteq \mathcal{D}_Y$
and $\deg \, \mathcal{D} \cap \varrho = \emptyset$
holds.
The set of extremal rays is denoted by $\Xi_Y^\times$.
The divisor of a semi-invariant function $f \cdot \chi^u \in {\mathbb K}(X)$
is then given by
\begin{eqnarray*}
{\rm div}(f \cdot \chi^u)
& = &
- \sum_{y \in Y} \sum_{v \in \Xi_y^{(0)}}
\mu(v) \cdot (\langle v, u \rangle + {\rm ord}_y f) \cdot D_{y,v}
\ - \
\sum_{\varrho \in \Xi_Y^\times} \langle n_\varrho, u \rangle \cdot D_\varrho.
\end{eqnarray*}
Next we describe the canonical divisor.
Choose a point $y_0 \in Y$ such that
$\Xi_{y_0} = \Xi_Y$ holds, and let $s$ denote the
number of points $y \in Y$ with $\Xi_y \ne \Xi_Y$.
Then a canonical divisor on $X = X(\Xi)$ is given by
\begin{eqnarray*}
K_X
& = &
(s - 2) \cdot y_0
\ - \
\sum_{\Xi_y \ne \Xi_Y} \sum_{v \in \Xi_y^{(0)}} D_{y,v}
\ - \
\sum_{\varrho \in \Xi_Y^\times} E_\varrho.
\end{eqnarray*}
\begin{proposition}
\label{prop:discrepancies}
Let $\mathcal{D}$ be a proper polyhedral divisor
with $Y(\mathcal{D}) = {\mathbb P}^1$,
let $\Xi$ be a refinement of $\mathcal{D}$
and denote by $y_1, \ldots, y_s \in Y$
the points with $\Xi_{y_i} \ne \Xi_Y$.
Then the associated morphism
$\varphi \colon X(\Xi) \to X(\mathcal{D})$
satisfies the following.
\begin{enumerate}
\item
The prime divisors in the exceptional
locus of $\varphi$ are the divisors
$D_{y_i,v}$ and $D_{\varrho}$
corresponding to
$v \in \Xi_{y_i}^{(0)} \setminus \mathcal{D}_{y_i}^{(0)}$
and
$\varrho \in \Xi_Y^\times \setminus \mathcal{D}^\times$
respectively.
\item
The discrepancies along
the prime divisors
$D_{y_i,v}$ and $D_{\varrho}$
of~(i) are computed as
\[
d_{y_i,v}
\ = \
-\mu(v)\cdot (\langle v, u' \rangle + \alpha_{y_i}) - 1,
\qquad\qquad
d_{\varrho}
\ = \
-\langle n_\varrho , u' \rangle - 1,
\]
where the numbers $\alpha_{y_i}$ are determined by
\begin{eqnarray*}
\begin{pmatrix}
-1 & -1 & \ldots & -1& 0 \\
\hline
\mu(v_{1}^1) & 0 & \ldots & 0 & \mu(v_{1}^1) v_{1}^1 \\
\vdots & \vdots & & \vdots &\vdots \\
\mu(v_{1}^{r_1})& 0 & \ldots & 0 & \mu(v_{1}^{r_1}) v_{1}^{r_1} \\
& & \ddots & & \\
0 & 0 & \ldots & \mu(v_{s}^1) & \mu(v_{s}^1) v_{s}^{1} \\
\vdots & \vdots & & \vdots &\vdots \\
0 & 0 & \ldots & \mu(v_{s}^{r_s}) & \mu(v_{s}^{r_s}) v_{s}^{r_s} \\
\hline
0 & 0 & \ldots & 0 & n_{\varrho_1} \\
\vdots & \vdots & & \vdots &\vdots \\
0 & 0 & \ldots & 0 & n_{\varrho_{r}}
\end{pmatrix}
\ \cdot \
\begin{pmatrix}
\alpha_{y_1}\\
\vdots\\
\alpha_{y_s}\\
u
\end{pmatrix}
& = &
\begin{pmatrix}
2-s\\
1 \\
\vdots \\
1\\
\hline
1\\
\vdots\\
1\\
\end{pmatrix}
\end{eqnarray*}
\end{enumerate}
\end{proposition}
\begin{proof}
The first claim is obvious by the characterization
of invariant prime divisors.
For the second claim note that by~\cite[Theorem~3.1]{tidiv}
every Cartier divisor on $X(\mathcal{D})$ is principal.
Hence, we may assume
$$
\ell\cdot K_X
\ = \
{\rm div}(f \cdot \chi^{u}),
\qquad\qquad
{\rm div}(f)
\ = \
\sum_y \alpha_y \cdot y.
$$
Then our formul{\ae} for ${\rm div}(f \cdot \chi^{u})$
and $K_X$ provide a row for every vertex
$v_{i}^j \in \Xi_{y_i}$, $i=0,\ldots,s$,
and for every extremal ray $\varrho_i \in \Xi^\times$,
and ${\ell}^{-1}(\alpha,u)$
is the (unique) solution of the above system.
\end{proof}
Note that, in the above proposition,
the variety $X(\mathcal{D})$ is ${\mathbb Q}$-Gorenstein if and only
if the linear system of equations has a solution.
\begin{proof}[Proof of Theorem~\ref{prop:3foldsing}
and Proposition~\ref{sec:prop-divfans}]
As an example, we discuss variety
number eight.
Recall that its Cox ring is given as
\begin{eqnarray*}
\mathcal{R}(X)
& = &
\mathbb{K}[T_1,\ldots,T_5]/\langle T_1T_2^5+T_3T_4^5+T_5^2 \rangle
\end{eqnarray*}
with the degrees $1,1,1,1,3$.
In particular, $X$ is a hypersurface
of degree $6$ in ${\mathbb P}(1,1,1,1,3)$,
and the self-intersection of the anti-canonical
divisor can be calculated as
$$
(-K_X)^3
\ = \
6 \cdot \frac{(1+1+1+1+3-6)^3}{1\cdot 1\cdot 1\cdot 1\cdot 3}
\ = \
2.
$$
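The same degree formula can be used to spot-check other hypersurface entries of the preceding tables. The following small script is ours and not part of the original text; it simply evaluates $d\,(\sum_i w_i - d)^n/\prod_i w_i$ for a hypersurface of degree $d$ and dimension $n$ with weights $w_i$, as used above.
\begin{verbatim}
from fractions import Fraction
from math import prod

def minus_K_power(weights, d, n):
    # d * (sum(w) - d)^n / prod(w) for a degree-d
    # hypersurface of dimension n in P(w_1, ..., w_k).
    return Fraction(d * (sum(weights) - d) ** n, prod(weights))

print(minus_K_power((1, 1, 1, 1, 3), 6, 3))     # 2   (variety no. 8)
print(minus_K_power((1, 2, 1, 2, 1), 6, 3))     # 3/2 (threefold row 55)
print(minus_K_power((1, 1, 2, 3, 1, 1), 6, 4))  # 81  (fourfold no. 1)
\end{verbatim}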
The embedding $X \subseteq {\mathbb P}(1,1,1,1,3)$ is equivariant,
and thus we can use the technique described
in~\cite[Sec.~11]{MR2207875} to calculate a divisorial
fan $\Xi$ for $X$.
The result is the following divisorial fan;
we draw its slices and indicate the polyhedral
divisors with affine locus by colouring their
tail cones $\mathcal{D}_Y \in \Xi_Y$ white:
\threefoldDplain
\noindent
One may also use~\cite[Cor.~4.9.]{HaSu} to verify
that $\Xi$ is the right divisorial fan:
it computes the Cox ring in terms of $\Xi$,
and, indeed, we obtain again $\mathcal{R}(X)$.
Now we subdivide and obtain a divisorial fan
having the refined slices as indicated
in the following picture.
\threefoldD
\noindent
Here, the white ray ${\mathbb Q}_{\geq 0}\cdot (1,0)$ indicates
that the polyhedral divisors with that tail have affine loci.
According to~\cite[Cor.~4.9.]{HaSu}, the corresponding Cox ring
is given by
\begin{eqnarray*}
\mathcal{R}(\widetilde{X})
& = &
{\mathbb K}[T_1, \ldots, T_7]/\bangle{T_1T_2^5 + T_3T_4^5 + T_5^2T_6}.
\end{eqnarray*}
We have to check that $\widetilde{X}$ is smooth.
Let us do this explicitly for the affine chart
defined by the polyhedral divisor $\mathcal{D}$
with tail cone $\mathcal{D}_Y = {\rm cone}((1,2),(3,1))$.
Then $\mathcal{D}$ is given by
\begin{eqnarray*}
\mathcal{D}
& = &
\left(\left(\frac{3}{5},\frac{1}{5}\right) + \sigma\right) \cdot \{0\}
\ + \
\left(\left[-\frac{1}{2},0\right] \times \{0\} + \sigma\right)\cdot \{\infty\}.
\end{eqnarray*}
Thus, ${\rm cone}(\{1\} \times \mathcal{D}_0) + {\rm cone}(\{-1\} \times \mathcal{D}_\infty)$
is generated by $(5,3,1)$, $(-2,-1,0)$ and $(-1,0,0)$;
in particular, it is a regular cone.
This implies smoothness of the affine chart $X(\mathcal{D})$.
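(As a quick numerical aside of our own: regularity of this simplicial cone amounts to its three generators forming a lattice basis, i.e., to the determinant having absolute value one.)
\begin{verbatim}
import numpy as np
M = np.array([[5, 3, 1], [-2, -1, 0], [-1, 0, 0]])
print(abs(round(np.linalg.det(M))))  # 1, hence the cone is regular
\end{verbatim}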
Furthermore, we look at the affine charts
defined by the polyhedral divisors
$\mathcal{D}$ with tail cone
$\mathcal{D}_Y = {\rm cone}(1,0)$.
Since they have affine locus,
we have to check
${\rm cone}(\{1\} \times \mathcal{D}_y)$,
where $y \in Y$.
For $y \neq 0, 1$,
we have $\mathcal{D}_y = \mathcal{D}_Y$.
In this case,
${\rm cone}(\{1\} \times \mathcal{D}_y)$ is
generated by $(1,1,0)$, $(0,1,0)$
and thus is regular.
For $y=0$, we obtain
that ${\rm cone}(\{1\} \times \mathcal{D}_y)$
is generated by $(5,3,1)$, $(1,0,0)$, $(0,1,0)$
and this is regular.
For $y=1$ we get the same result.
Hence, the polyhedral divisors
with tail cone $\mathcal{D}_y = {\rm cone}(1,0)$
give rise to smooth affine charts.
Now we compute the discrepancies according to
Proposition~\ref{prop:discrepancies}.
The resolution has two exceptional divisors
$D_{\infty, \mathbf{0}}$ and $E_{(1,0)}$.
We work in the chart defined by
the divisor $\mathcal{D} \in \Xi$ with tail cone
$\mathcal{D}_Y = {\rm cone}((1,2),(1,0))$.
The resulting system of linear equations
and its unique solution are given by
\[
\left(\begin{array}{ccccc|c}
-1 & -1 & -1 & 0 & 0 & -1\\
5 & 0 & 0 & 3 & 1 & 1\\
0 & 1 & 0 & 0 & 0 & 1\\
0 & 5 & 0 & 0 & -1 & 1\\
0 & 0 & 2 & -1 & 0 & 1
\end{array}\right),
\qquad\qquad
\begin{pmatrix}
\alpha_0\\
\alpha_1\\
\alpha_\infty\\
\hline
u
\end{pmatrix}
\ = \
\begin{pmatrix}
0\\
1\\
0 \\
\hline
-1\\
4
\end{pmatrix}.
\]
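As a plausibility check (ours, not part of the original computation), a few lines of NumPy confirm that the displayed vector solves the system:
\begin{verbatim}
import numpy as np

# Coefficient matrix and right-hand side of the system above;
# the unknowns are (alpha_0, alpha_1, alpha_infty, u_1, u_2).
A = np.array([[-1., -1., -1.,  0.,  0.],
              [ 5.,  0.,  0.,  3.,  1.],
              [ 0.,  1.,  0.,  0.,  0.],
              [ 0.,  5.,  0.,  0., -1.],
              [ 0.,  0.,  2., -1.,  0.]])
b = np.array([-1., 1., 1., 1., 1.])
print(np.linalg.solve(A, b))  # [ 0.  1.  0. -1.  4.]
\end{verbatim}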
The formula for the discrepancies yields
$d_{\infty,\mathbf{0}}= -1$ and $d_{(1,0)}= -2$.
In particular, $X$ has non-canonical singularities.
By a criterion from~\cite[Sec.~3.4.]{tidiv},
we know that $D_{\infty, \mathbf{0}} + 2 \cdot E_{(1,0)}$
is a nef divisor.
It follows that $\t{X}$ is a minimal model over $X$.
\end{proof}
|
1,116,691,499,788 | arxiv | \section{Introduction}
Acute Kidney Injury (AKI), the abrupt decline in kidney function due to temporary or permanent injury, is associated with increased mortality, morbidity, length of stay, and hospital cost~\cite{chertow2005MortLOSCost}. There exist a variety of preventative strategies~\cite{lameire2008prevention}---some types of AKI (e.g., from non-steroidal anti-inflammatory medication, radiocontrast, chemotherapy, or aminoglycoside antibiotics) can be prevented outright by altering treatment or close monitoring.
For this reason, there is particular interest in modeling AKI with electronic health record (EHR) data~\cite{sutherland2016utilizingAKIADQI}.
Prior hospitalizations generate enormous amounts of data, some of it inaccessible to clinicians via current interfaces; we hope to gain insight into the way this data might be leveraged to help predict and prevent AKI. Of particular interest as a predictor is serum creatinine (sCr). Creatinine is a metabolic waste product that accumulates in the serum if kidney filtration is reduced, acting as a surrogate for the true measure of kidney function, glomerular filtration rate (GFR). In this study we process sCr with recurrent neural networks (RNNs)~\cite{rumelhart1986sequential} and multilayer perceptrons (MLPs) using different input structures.
\section{Related work and background}
AKI prediction is an active area of research, with special emphasis on features from
EHR data~\cite{koyner2016development,sutherland2016utilizingAKIADQI}.
In particular, there is interest in
the construction of models that apply to a broad patient population~\cite{sutherland2016utilizingAKIADQI}.
Many current models focus on AKI in the context of cardiac procedures,
the critically ill,
the elderly,
liver
and lung
transplant patients, and rhabdomyolysis.
Most
use logistic regression, although
some use decision trees
or ensemble methods.
These models use features from the current hospitalization; in contrast, in line with \cite{choi2016doctor}, we use features from previous visits to estimate the probability of AKI in a rehospitalization given data from prior hospitalizations, focusing on the cohort of patients who are rehospitalized and also have previous sCr measurements. This particular cohort has a high prior probability of AKI, and therefore a predictive model could have real application; e.g., many rehospitalized patients present with conditions that may benefit from certain medications best administered when AKI risk is low. By considering only longitudinal sCr data, we also explore a much simpler, more interpretable model space than other studies. The EHR data flow is complex (Figure~\ref{EHR_fig}), and implementing models that depend on many features might be difficult; the models described here might more easily be implemented into an EHR, facilitating translation into the clinic.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.7\linewidth]{EHR_diag.png}
\caption{{\bf EHR diagram.} Blue lines are lab data flow, green billing and scheduling, and red medications. Dotted lines correspond to data files and solid to direct Health Level 7 integration.}
\label{EHR_fig}
\end{figure}
RNNs are popular in medical research; they are used for phenotyping in~\cite{lipton2015learning} and heart failure detection and next visit prediction in~\cite{choi2016doctor,choi2016HF}.
In~\cite{choi2016doctor}, the authors predicted AKI among other diagnosis codes, inspiring us to pursue it further with laboratory values in the inpatient setting. In~\cite{choi2016doctor}, features are collected per unit of time, whereas here we only consider the order of measurements, but not their actual time stamps.
Since we process sequences of measurements, there is related work in natural language modeling.
As phrases can be represented by averaging words~\cite{mikolov2013efficient}, hospitalizations might be represented by averaging measurements.
Hierarchical RNN architectures have been employed ~\cite{sordoni2015hierarchical,chung2016hierarchical} for context awareness and boundary inference.
\section{Methods}
We use an IRB-approved, de-identified adult dataset from an inpatient EHR system. The full dataset contains roughly six million laboratory records. We assigned a positive label if AKI was coded (according to the International Statistical Classification of Diseases, $9^{th}$ edition, ICD-9) or sCr trajectories met the most current diagnostic criteria~\cite{ERBPpositionState}.
The data for each patient consists of a variable-length sequence of hospitalizations, where each hospitalization is itself a variable-length sequence of measurements. We have no information on these patients outside of their hospitalizations.
We restrict ourselves to a single feature, sCr,
to build a simple, interpretable model.
We forward-fill missing sCrs (since we processed sequences, there were very few of these, so we did not use a more sophisticated method; these were truly missing at random because a test was recorded, but no result value was present). We denote the sCr measurement from hospitalization $a$ at time $t$ as $m_{(a)}^{(t)}$. Each patient could therefore be represented as a list of $A$ hospitalizations, each of which contained a list of $\tau_{(a)}$ sCr measurements.
$$[
1:[m_{(1)}^{(1)}, ... ,m_{(1)}^{(\tau_{(1)})}],\\
... ,\\
A:[m_{(A)}^{(1)}, ... ,m_{(A)}^{(\tau_{(A)})}]]$$
There are $A$ labels, where each is an indicator of whether AKI occurred in the hospitalization directly following hospitalization $a$.
$$[l_{1}, ... ,l_{A}]$$
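Concretely, in Python this structure is one nested list of measurements plus one label list per patient. The values and names below are ours, invented purely for illustration:
\begin{verbatim}
# One patient: A = 3 hospitalizations, each a variable-length
# list of sCr values (values are made up for illustration).
patient_measurements = [
    [0.9, 1.1, 1.0],       # hospitalization 1
    [1.4, 1.8, 2.1, 1.9],  # hospitalization 2
    [1.2],                 # hospitalization 3
]
# labels[a] indicates whether AKI occurred in the
# hospitalization directly following hospitalization a.
labels = [0, 1, 0]
\end{verbatim}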
\subsection{Input feature structures}
The basic formulations are as follows (further details are provided in the Appendix): each hospitalization-label pair can be considered independently (MARKOV)
\footnote{We denote this one as MARKOV because it is memoryless past one hospitalization}; all prior measurements can be concatenated (CONCAT); we can consider all observations from the same patient as a single, nested sequence (NEST).
For processing by MLP, the measurements must be aggregated. We can choose MARKOV or CONCAT before aggregating and then employ MEAN, SUM, or MAX. Data processing was performed in Pandas~\cite{mckinney2010data}.
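As a sketch (function names and details are ours, not the original implementation), the MARKOV and CONCAT structures and the MLP aggregations can be derived from the nested lists as follows; the NEST structure is simply the nested list itself, paired with the full label sequence.
\begin{verbatim}
import numpy as np

def markov(measurements, labels):
    # One (sequence, label) pair per hospitalization.
    return list(zip(measurements, labels))

def concat(measurements, labels):
    # All measurements up to and including hospitalization a,
    # paired with label l_a.
    pairs, prefix = [], []
    for m, l in zip(measurements, labels):
        prefix = prefix + m
        pairs.append((list(prefix), l))
    return pairs

def aggregate(pairs, how):
    # Collapse each sequence to a scalar feature for the MLP.
    fn = {"SUM": np.sum, "MEAN": np.mean, "MAX": np.max}[how]
    return [(float(fn(seq)), l) for seq, l in pairs]

# e.g. MARKOV-SUM features for the patient shown earlier:
# aggregate(markov(patient_measurements, labels), "SUM")
\end{verbatim}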
\subsection{Architectures}
We do not review the standard RNN architecture in its entirety, but we try to follow the notation in Goodfellow et al.~\cite{GoodFellowBengio2015deep}, which provides more background. For more detail on our methods, see the Appendix. We use cross-entropy loss and hyperbolic tangent activation in all experiments. We use a many-to-one RNN
for MARKOV and CONCAT.
For the NEST input structure we require an RNN that processes nested, variable-length sequences where each inner sequence produces only a single output. We therefore modified the RNN architecture to process nested sequences by chaining together multiple instances of the many-to-one network. We suspected that information might be passed differently from one hospitalization to the next than it is from one measurement to the next, so we introduced a new ``rehospitalization'' parameter, $R$, that acts only between hospitalizations. Since our data consists of hospitalization sequences that contain measurement sequences, we can index the new equations by $1\leq a \leq A$ where each $a$ has $\tau_{(a)}$ measurements. With $R$, we have the forward equations
$$
z^{(t)}_{(a)} = \left\{
\begin{array}{ll}
Wm^{(t)} _{(a)}+Rh^{(\tau _ {(a-1)})}_{(a-1)}+r & \quad t= 1 \\
Wm^{(t)} _{(a)}+Uh^{(t-1)}_{(a)}+b & \quad t \neq 1 \\
\end{array}
\right.
$$
$$h^{(t)} _{(a)}=\tanh(z^{(t)} _{(a)})$$
$$y _{(a)}= Vh^{(\tau _ {(a)})} _{(a)} + c$$
The network is now unrolled in time over measurements and over hospitalizations before backpropagation. All models were implemented using NumPy~\cite{walt2011numpy}.
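For concreteness, a minimal NumPy sketch of this nested forward pass follows. It is our own illustration (with made-up shapes and a zero initial state before the very first measurement); the original implementation may differ in details.
\begin{verbatim}
import numpy as np

def nested_forward(patient, W, U, R, V, b, r, c):
    # patient: list of hospitalizations, each a list of scalar
    # sCr values. Returns one logit y_(a) per hospitalization.
    h_prev = np.zeros(U.shape[0])  # state handed over between stays
    logits = []
    for measurements in patient:
        h = h_prev
        for t, m in enumerate(measurements):
            if t == 0:
                z = W * m + R @ h_prev + r  # rehospitalization step
            else:
                z = W * m + U @ h + b       # within-stay step
            h = np.tanh(z)
        logits.append(V @ h + c)
        h_prev = h
    return np.array(logits)

# Probabilities of AKI in the next hospitalization:
# p = 1.0 / (1.0 + np.exp(-nested_forward(...)))
\end{verbatim}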
\section{Experiments}
The dataset contains 135,862 sCr measurements from 12,491 unique patients who generated 26,606 hospitalizations. AKI occurred in rehospitalization after 15.7\% of the hospitalizations. Throughout the dataset, there were on average 2.1 $\pm$ 2.4 hospitalizations per patient and on average 5.1 $\pm$ 9.9 sCr measurements per hospitalization. Over 100 trials, we compare the three input structures MARKOV, CONCAT, and NEST where NEST has the inter-hospitalization variable R (otherwise, it is identical to CONCAT). We also evaluate an MLP acting on MARKOV or CONCAT with aggregation function SUM, MEAN, or MAX. Each trial has identical (random) parameter initializations and the same shuffled training/validation dataset. Note that since the MLP has no parameter $U$ linking hidden states, we were not able to initialize it with the same parameters as the RNNs, but inter-MLP comparisons have identical parameter initializations as well. We explore different numbers of hidden units (HUs) (10, 50, and 100).
Twenty percent of the full dataset was held out as test data and 20\% of the remaining training data was held out as validation data (the prevalence of AKI upon rehospitalization in the train and test sets was roughly equal at 16\% and 15\%, respectively). For all splits, we ensure that no hospitalizations from the same patient are in both the training and testing sets. We trained each model for 20 epochs using AdaGrad~\cite{duchi2011adaptive} and then tested the model with the best validation set AUROC of the 100 trials on the test set.
The distribution of area under the receiver operating characteristic (AUROC), area under the precision recall curve (AUPRC), and logistic loss (LL) for the validation set over the 100 trials with different numbers of hidden units are shown in the Appendix (Figure \ref{fig3}). We report untouched test set performance for the best models in Table \ref{tr}. All metrics are computed at the hospitalization (not patient) level, and therefore patients with multiple hospitalizations are represented multiple times. Evaluation metrics were computed with functions from scikit-learn~\cite{pedregosa2011scikit}.
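These metrics are one-liners with scikit-learn; the wrapper below is a sketch with our own function name:
\begin{verbatim}
from sklearn.metrics import (average_precision_score, log_loss,
                             roc_auc_score)

def evaluate(y_true, y_prob):
    # Hospitalization-level LL, AUPRC and AUROC.
    return {"AUROC": roc_auc_score(y_true, y_prob),
            "AUPRC": average_precision_score(y_true, y_prob),
            "LL": log_loss(y_true, y_prob)}
\end{verbatim}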
\begin{table}[ht]
\caption{Held-out test set performance}
\label{tr}
\tiny
\centering
\begin{tabular}{lllrrr}
\toprule
\# HU & Model & Input Struct & LL & AUPRC & AUROC \\
\midrule
10 & RNN & NEST & 0.438376 & 0.460836 & 0.831527 \\
& & MARKOV & 0.422027 & 0.614550 & 0.900993 \\
& & CONCAT & 0.367677 & 0.460945 & 0.831687 \\
\cmidrule{3-6}
& MLP & MARKOV-MAX & 0.373659 & 0.507971 & 0.843675 \\
& & CONCAT-MAX & 0.381322 & 0.470231 & 0.832244 \\
& & MARKOV-MEAN & 0.391130 & 0.416539 & 0.765329 \\
& & CONCAT-MEAN & 0.392883 & 0.402276 & 0.758172 \\
& & MARKOV-SUM & 0.281967 & \textbf{0.697588} & \textbf{0.919228} \\
& & CONCAT-SUM & 0.394754 & 0.479620 & 0.844405 \\
\cmidrule{2-6}
50 & RNN & NEST & 0.475782 & 0.430490 & 0.786414 \\
& & MARKOV & 0.371971 & 0.560443 & 0.878528 \\
& & CONCAT & 0.421936 & 0.213510 & 0.690209 \\
\cmidrule{3-6}
& MLP & MARKOV-MAX & 0.375280 & 0.507971 & 0.843675 \\
& & CONCAT-MAX & 0.383305 & 0.470231 & 0.832244 \\
& & MARKOV-MEAN & 0.392830 & 0.416530 & 0.765314 \\
& & CONCAT-MEAN & 0.394410 & 0.402274 & 0.758167 \\
& & MARKOV-SUM & \textbf{0.281964} & 0.697587 & 0.919227 \\
& & CONCAT-SUM & 0.374631 & 0.479623 & 0.844408 \\
\cmidrule{2-6}
100 & RNN & NEST & 0.495673 & 0.471420 & 0.816330 \\
& & MARKOV & 0.425833 & 0.596208 & 0.879722 \\
& & CONCAT & 0.390785 & 0.211309 & 0.679677 \\
\cmidrule{3-6}
& MLP & MARKOV-MAX & 0.373959 & 0.507971 & 0.843675 \\
& & CONCAT-MAX & 0.380864 & 0.470231 & 0.832244 \\
& & MARKOV-MEAN & 0.392619 & 0.416536 & 0.765317 \\
& & CONCAT-MEAN & 0.393588 & 0.402281 & 0.758176 \\
& & MARKOV-SUM & 0.282694 & 0.697584 & 0.919224 \\
& & CONCAT-SUM & 0.387263 & 0.479623 & 0.844405 \\
\bottomrule
\end{tabular}
HU = hidden units; LL = log loss; AUPRC = area under PR curve; AUROC = area under ROC curve
\end{table}
\section{Conclusion}
Using only sCr, RNNs and MLPs predict AKI in rehospitalizations with high AUROC and AUPRC.
The best-performing model was very simple. MARKOV, where we treated each hospitalization as an independent sequence, was the best-performing input structure for the RNN; MARKOV-SUM, where we treated each hospitalization as an independent sequence and aggregated it using the sum, gave the best results for the MLP and in general.
Although this is still preliminary work and requires validation at different institutions, this is an exceedingly simple model and might be easily integrated into an EHR (since performance was insensitive to the number of HUs, a lower-capacity model like logistic regression could be appropriate). Further, it should be noted that this model was actually a baseline and therefore its discovery was happenstance---it was not the original intent of the authors to evaluate it; a new study intended to externally validate the model would help ensure that the finding was not unique to this particular dataset. Such a model is enticing, however, because the sum of sCr takes into account the sCr values (reflecting renal function), the frequency of the orders (reflecting physician concern), and the length of the hospitalization (more work should be done to disentangle these factors, but capturing them all in a single input allows us to estimate only one parameter).
Our study also sheds light on how to model inpatient data when outpatient measurements are unknown. This is a common occurrence in medical datasets, many of which are generated by a single hospital lacking complete outpatient data. For 10 HU, inserting a different parameter R between hospitalizations did not seem to affect results. For more than 10 HU, inserting R appeared to improve AUROC and AUPRC considerably but worsen LL. LL might have worsened because AUROC, not LL, was optimized for in model selection and the selected model was not required to be properly calibrated, but further experiments are needed to more fully explore this phenomenon.
These preliminary results, however, might suggest that a similar architecture or hierarchical RNN might be best for data streams with missing outpatient measurements---information flows differently between measurements than between hospitalizations.
We stress our general finding of the benefits of investigating different permutations of input data structure (varying time window and aggregation function) and models for medical time-series prediction tasks. The simple models yielded from this process might be very interpretable (simply summing all previous sCr measurements is easy to explain and could even be performed manually) and easy to implement into an EHR, which could allow quick adoption and facilitate the transition to automated "data-driven" healthcare practice, making way for more sophisticated techniques in the future. Future directions include incorporating time stamps because more recent measurements might be most important and employing more sophisticated RNN architectures.
\subsubsection*{Acknowledgments}
The project described in this publication was supported by the University of Rochester CTSA award number TL1 TR 002000 from the National Center for Advancing Translational Sciences of the National Institutes of Health. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The authors acknowledge Robert White for obtaining the data and the anonymous reviewers for help in improving the manuscript.
\subsection*{Appendix}
\subsubsection*{Input structures}
\label{Appendix1}
\begin{itemize}
\item Each hospitalization-label pair can be considered independently (MARKOV).
$[
([m_{(1)}^{(1)}, ... ,m_{(1)}^{(\tau_{(1)})}],[l_{1}]), ... , ([m_{(A)}^{(1)}, ... ,m_{(A)}^{(\tau_{(A)})}],[l_{A}])]$
This corresponds to the time window approach with lag 1 visit in~\cite{choi2015doctor}.
This kind of Markovian approach ignores measurements from all but the most recent hospitalization, and also does not take into account that hospitalizations are from the same patient. Each sequence of measurements, however, is relatively short, alleviating concerns about vanishing or exploding gradients~\cite{bengio1994learning,hochreiter2001gradient}.
\item All prior measurements can be concatenated (CONCAT).
$[
([m_{(1)}^{(1)}, ... ,m_{(1)}^{(\tau_{(1)})}],[l_{1}]),... , ([m_{(1)}^{(1)}, ... ,m_{(1)}^{(\tau_{(1)})},...,m_{(A)}^{(1)}, ... ,m_{(A)}^{(\tau_{(A)})}],[l_{A}])
]$
Unlike MARKOV, CONCAT has memory for previous hospitalization measurements, but it still ignores that these hospitalizations are from the same patient.
\item We can consider all observations from the same patient as a single, nested sequence (NEST).
$(
[
[
m_{(1)}^{(1)}, ... ,m_{(1)}^{(\tau_{(1)})}],
... ,
[m_{(A)}^{(1)}, ... ,m_{(A)}^{(\tau_{(A)})}]
]
,
[
l_{1},...,l_{A}]
)$
NEST should theoretically be preferable to MARKOV and CONCAT as all measurement information for each hospitalization is retained and we also take into account that the hospitalizations are from the same patient. An RNN processing NEST requires nested backpropagation through time (nBPTT) for training. NEST produces a single sequence of the same length as the longest sequence in CONCAT.
\end{itemize}
For processing by MLP, the measurements must be aggregated. We can choose MARKOV or CONCAT before aggregating and then employ MEAN, SUM, or MAX.
For example, to aggregate a MARKOV hospitalization using SUM, we just convert:\\
$$[
m_{(j)}^{(1)}, ... ,m_{(j)}^{(\tau_{(j)})}] \rightarrow
\sum_{i=1}^{\tau_{(j)}} m_{(j)}^{(i)}$$
Benefits of the aggregation approach are that it is parsimonious and SUM elegantly represents the number of measurements (weighted by magnitude)---this could be highly predictive since the frequency of medical testing often reflects a clinician's anxiety over a patient's status, which might be a strong indicator of deterioration, but the value of the test is also relevant.
\subsubsection*{RNNs}
\label{Appendix2}
Again we follow the notation in Goodfellow et al.~\cite{GoodFellowBengio2015deep}. We only process a single scalar sCr, but provide the matrix equations in case of multi-dimensional input. Each sCr measurement is denoted $m$.
We use a many-to-one RNN
for MARKOV and CONCAT. Given a loss $L$ (we use cross entropy), an initial state $h^{(0)}$, and $\tau$ timesteps each with a sCr measurement $m^{(t)}$, the forward equations with hyperbolic tangent activation are
$$z^{(t)}=Wm^{(t)}+Uh^{(t-1)}+b$$
$$h^{(t)}=\tanh(z^{(t)})$$
$$y^{(\tau )}= Vh^{(\tau)} + c$$
Relative to a many-to-many RNN, the gradient of the hidden state becomes
$$
\nabla _{h^{(t)}}L = \left\{
\begin{array}{ll}
\left( \dfrac{\partial{y^{}}}{\partial{h^{(t)}}} \right)^{T} \nabla _{y^{}}L
= V^{T} \nabla _{y^{}}L & \quad t=\tau \\
\left( \dfrac{\partial{h^{(t+1)}}}{\partial{h^{(t)}}}\right )^{T} \nabla _{h^{(t+1)}} L
=
U^{T}J \nabla _{h^{(t+1)}}L & \quad t \neq \tau
\end{array}
\right.
$$
where $J$ is the Jacobian of the hyperbolic tangent.
For the NEST input structure, we require an RNN that processes nested, variable-length sequences where each inner sequence produces only a single output. We therefore construct an RNN
for nested sequences by chaining together multiple instances of the many-to-one network (similar to a hierarchical RNN~\cite{chung2016hierarchical}). We suspect that information might be passed differently from one hospitalization to the next than from one measurement to the next, so we introduce a new ``rehospitalization'' parameter, $R$, that acts only between hospitalizations. Since our data consists of hospitalization sequences that contain measurement sequences, we can index the new equations by $1\leq a \leq A$ where each $a$ has $\tau _ {(a)}$ measurements. As mentioned in the body of the paper, with $R$, we have the forward equations
$$
z^{(t)}_{(a)} = \left\{
\begin{array}{ll}
Wm^{(t)} _{(a)}+Rh^{(\tau _ {(a-1)})}_{(a-1)}+r & \quad t= 1 \\
Wm^{(t)} _{(a)}+Uh^{(t-1)}_{(a)}+b & \quad t \neq 1 \\
\end{array}
\right.
$$
$$h^{(t)} _{(a)}=\tanh(z^{(t)} _{(a)})$$
$$y _{(a)}= Vh^{(\tau _ {(a)})} _{(a)} + c$$
The loss is now summed over hospitalizations from the same patient
$$\mathbb{L}=\sum_{a}L_{(a)} $$
The network is now unrolled in time over measurements and over hospitalizations before backpropagation~\cite{rumelhart1988learning} (automatic differentiation~\cite{rall1981automatic} could also be used here). Since we use at each hospitalization an RNN with a single output, the hidden state gradient is
$$
\nabla _{h^{(t)} _{(a)}}L = \left\{
\begin{array}{ll}
\left( \dfrac{\partial{h^{(t+1)}_{(a)}}}{\partial{h^{(t)} _{(a)}}}\right )^{T} \nabla _{h^{(t+1)}_{(a)}} L
=
U^{T}J \nabla _{h^{(t+1)}_{(a)}}L & \quad t \neq \tau _{(a)}\\
\left( \dfrac{\partial{y_{(a)}}}{\partial{h^{(t)} _{(a)}}} \right)^{T} \nabla _{y_{(a)}}L + \left( \dfrac{\partial{h^{(1)}_{(a+1)}}}{\partial{h^{(t)} _{(a)}}}\right )^{T} \nabla _{h^{(1)}_{(a+1)}} L
=
V^{T} \nabla _{y_{(a)}}L + R^{T}J \nabla _{h^{(1)}_{(a+1)}}L & \quad t = \tau _{(a)}, a \neq A \\
\left( \dfrac{\partial{y_{(a)}}}{\partial{h^{(t)} _{(a)}}} \right)^{T} \nabla _{y_{(a)}}L
= V^{T} \nabla _{y_{(a)}}L & \quad t=\tau _{(a)}, a = A \\
\end{array}
\right.
$$
The parameter gradients, collected now over time and hospitalizations, are accessible through the node gradients
$$\nabla _{c}L = \sum_{a} \left( \dfrac{\partial{y^{(\tau_{(a)})} _{(a)}}}{\partial{c^{(\tau_{(a)})} _{(a)}}} \right)^{T}\nabla _{y^{(\tau_{(a)})} _{(a)}}L = \sum_{a} \nabla _{y^{(\tau_{(a)})} _{(a)}}L$$
$$\nabla _{V}L
=\sum_{a} \nabla _{y(\tau_{(a)})}L \left(\dfrac{\partial{y^{(\tau_{(a)})} _{(a)}}}{\partial{V^{(\tau_{(a)})} _{(a)}}}\right)
=\sum_{a} \nabla _{y(\tau_{(a)})}L {h^{(\tau_{(a)})} _{(a)}}^T$$
$$\nabla _{b}L
= \sum_{a} \sum _{t>1} \left( \dfrac{\partial{h^{(t)} _{(a)}}}{\partial{b^{(t)} _{(a)}}} \right)^{T}\nabla _{h^{(t)} _{(a)}}L = \sum_{a} \sum _{t>1}J\nabla _{h^{(t)} _{(a)}}L$$
$$\nabla _{W}L
= \sum_{a} \sum _{t}\nabla_{h^{(t)} _{(a)}}L \left(\dfrac{\partial{h^{(t)}_{(a)} }}{\partial{W^{(t)}_{(a)}}} \right)
= \sum_{a} \sum_{t}J \nabla _{h^{(t)} _{(a)}}L {m^{(t)} _{(a)}}^{T}
$$
$$\nabla _{U}L
= \sum_{a} \sum _{t>1}\nabla_{h^{(t)} _{(a)}}L \left(\dfrac{\partial{h^{(t)} _ {(a)} }}{\partial{U^{(t)}_{(a)}}} \right)
= \sum_{a} \sum_{t>1}J \nabla _{h^{(t)} _{(a)}}L {h^{(t-1)}_{(a)}}^{T}
$$
$$\nabla _{R}L
= \sum_{a} \nabla_{h^{(1)} _{(a)}}L \left(\dfrac{\partial{h^{(1)} _ {(a)} }}{\partial{R^{(1)}_{(a)}}} \right)
= \sum_{a} J \nabla _{h^{(1)} _{(a)}}L {h^{(\tau _{(a-1)})}_{(a-1)}}^{T}
$$
$$\nabla _{r}L
= \sum_{a} \nabla_{h^{(1)} _{(a)}}L \left(\dfrac{\partial{h^{(1)} _ {(a)} }}{\partial{r^{(1)}_{(a)}}} \right)
= \sum_{a} J \nabla _{h^{(1)} _{(a)}}L
$$
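These expressions are easy to sanity-check numerically. The sketch below is ours (it uses a sigmoid output with binary cross-entropy, and checks only the simple many-to-one case); it compares the analytic $\nabla_U L$ for one sequence against centered finite differences.
\begin{verbatim}
import numpy as np

def forward(ms, W, U, b, V, c):
    h = np.zeros(U.shape[0]); hs = [h]
    for m in ms:                       # h_t = tanh(W m_t + U h_{t-1} + b)
        h = np.tanh(W * m + U @ h + b)
        hs.append(h)
    return V @ h + c, hs               # y = V h_tau + c

def loss(y, label):                    # binary cross-entropy
    p = 1.0 / (1.0 + np.exp(-y))
    return -(label * np.log(p) + (1 - label) * np.log(1 - p))

def grad_U(ms, label, W, U, b, V, c):
    y, hs = forward(ms, W, U, b, V, c)
    dy = 1.0 / (1.0 + np.exp(-y)) - label   # dL/dy
    dh = V * dy                             # V^T grad_y L
    dU = np.zeros_like(U)
    for t in range(len(ms), 0, -1):
        dz = (1.0 - hs[t] ** 2) * dh        # J = diag(1 - h_t^2)
        dU += np.outer(dz, hs[t - 1])
        dh = U.T @ dz
    return dU

rng = np.random.default_rng(1)
H, ms, label = 4, [0.9, 1.3, 2.0], 1.0
W, b, V = rng.normal(size=H), rng.normal(size=H), rng.normal(size=H)
U, c = rng.normal(size=(H, H)), 0.1

num, eps = np.zeros_like(U), 1e-6
for i in range(H):
    for j in range(H):
        Up, Um = U.copy(), U.copy()
        Up[i, j] += eps; Um[i, j] -= eps
        num[i, j] = (loss(forward(ms, W, Up, b, V, c)[0], label) -
                     loss(forward(ms, W, Um, b, V, c)[0], label)) / (2 * eps)

print(np.max(np.abs(num - grad_U(ms, label, W, U, b, V, c))))  # ~1e-9
\end{verbatim}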
\newpage
\subsubsection*{Validation Set Distributions}
\begin{figure}[!ht]
\begin{center}
10 HU
\end{center}
\begin{minipage}{0.33\textwidth}
\centering
\includegraphics[width=\linewidth]{both10valAUROC.png}
\end{minipage}
\begin{minipage}{0.33\textwidth}
\centering
\includegraphics[width=\linewidth]{both10valAUPRC.png}
\end{minipage}
\begin{minipage}{0.33\textwidth}
\centering
\includegraphics[width=\linewidth]{both10valLL.png}
\end{minipage}
\begin{center}
50 HU
\end{center}
\begin{minipage}{0.33\textwidth}
\centering
\includegraphics[width=\linewidth]{both50valAUROC.png}
\end{minipage}
\begin{minipage}{0.33\textwidth}
\centering
\includegraphics[width=\linewidth]{both50valAUPRC.png}
\end{minipage}
\begin{minipage}{0.33\textwidth}
\centering
\includegraphics[width=\linewidth]{both50valLL.png}
\end{minipage}
\begin{center}
100 HU
\end{center}
\begin{minipage}{0.33\textwidth}
\centering
\includegraphics[width=0.9\linewidth]{both100valAUROC.png}
\end{minipage}
\begin{minipage}{0.33\textwidth}
\centering
\includegraphics[width=0.9\linewidth]{both100valAUPRC.png}
\end{minipage}
\begin{minipage}{0.33\textwidth}
\centering
\includegraphics[width=0.9\linewidth]{both100valLL.png}
\end{minipage}
\caption{{\bf Distributions of validation set AUROC, AUPRC, and LL for 10, 50, and 100 hidden unit MLP and RNN models processing different input structures over 100 trials.} HU = hidden units; LL = log loss; AUPRC = area under PR curve; AUROC = area under ROC curve}
\label{fig3}
\end{figure}
\section{Introduction\label{sec:intr}}
The theory of General Relativity (GR) and its surrounding paradigm are unmatched for predictive success. Not even quantum field theory can boast agreement with experiment over such a vast range of scales \cite{Baker:2014zba}. Perhaps a victim of its own success, theorists have grown more and more focused on its shortfalls: the theory is non-renormalizable, divergent in the ultraviolet regime and, when applied to cosmology, makes the uncomfortable prediction that 95\% of the matter in the universe is exotic, dark and intractable \cite{Copeland:2006wr}. Proposed modifications and upgrades abound. A popular set of such modifications are models of gravity in which two fields, not one, mediate the force, and of this popular set, a subset are the disformal theories, in which the gravitational geometry, $\hat{g}_{\mu\nu}$, and the matter geometry, $\tilde{g}_{\mu\nu}$, are related via the \emph{disformal} transformations
\begin{equation}
\tilde{g}_{\mu\nu} = \hat{g}_{\mu\nu} + D(\phi)\phi,_{\mu}\phi,_{\nu}
\end{equation}
for some additional gravitational scalar field $\phi$ \cite{Bekenstein:1992pj}\footnote{We have not written the most general transformation here. There could be a conformal factor in front of $\hat{g}_{\mu\nu}$ as well, but we are dealing with electromagnetism in this paper, so that term is irrelevant.}.
Disformal terms have served a multiplicity of purposes in the history of gravity theories. They appeared as geometric corrections to GR in compactifications of higher-dimensional brane-world gravity theories, and they were also utilized to vary the relationship between the speed of light and the speed of gravitational waves, which could solve the horizon and flatness problems of early-universe cosmology without recourse to a potential-driven inflationary phase \cite{Clayton:2001rt,Kaloper:2003yf,Magueijo:2008sx,Magueijo:2010zc,vandeBruck:2015tna}. (Such theories are now very tightly constrained by observations \cite{Magueijo:2003gj}.) This second aspect of disformal theories -- their tendency to distort the light cones of fundamental fields with respect to each other -- is what concerns us in this work; however, here we focus on the late, rather than inflationary, universe.
A universal disformal coupling to all matter has been constrained quite severely via global tests in cosmology \cite{Brax:2014vla}, local tests in the solar system \cite{Ip:2015qsa} and laboratory tests \cite{Brax:2015hma,Brax:2016did}, which has led some to postulate that disformal couplings can, for example, only be between the scalar and dark matter \cite{Zumalacarregui:2012us,vandeBruck:2015ida}. As the nature of dark matter is poorly understood, the constraints on disformal couplings to it are rather weak. This idea of species selectivity opens the door, though, to varying interaction strengths with respect to varying types of matter (dark, baryonic, electromagnetic sector, etc.); if strengths can vary from species to species, there is little theoretical motivation to assume that the coupling to the standard model particles is negligibly small. Relaxing this assumption will invariably lead to observable deviations from standard matter theory.
A handful of these deviations must be in the form of novel radiation processes. Due to the variation in the relative speeds of photons and gravitons in disformal theories, it remains an open question whether charged particles, disformally coupled, can Cherenkov radiate in vacuum. In this work we unambiguously demonstrate that this is indeed the case, and deduce the conditions that must be met in order for this to occur. We will also discover that another radiative interaction channel opens under those same model conditions, a channel that depends on the dynamics of the theory's speed of light. For reasons that will become clear, we dub this interaction \emph{vacuum bremsstrahlung}.
In \cite{vandeBruck:2013yxa} it was shown that in order to induce spectral distortions in the CMB via gravity modifications, a necessary ingredient was that the geometry of space-time experienced by photons and that of the rest of the Standard Model must vary disformally with respect to each other. Hence, we consider the following action
\begin{equation}\label{eq:action1}
\mathcal{S}
= \mathcal{S}_{\mathrm{grav}}[\hat{g}_{\mu\nu},\phi]
+\mathcal{S}_{\mathrm{matter}}[\tilde{g}^{(m)}_{\mu\nu}]
+ \mathcal{S}_{\mathrm{EM}}[\tilde{g}^{(r)}_{\mu\nu},A^{\mu}]
+ \mathcal{S}_{\mathrm{int}}~,
\end{equation}
where the definition of the interaction terms will be clarified in the next section, and the metrics relate in the following way
\begin{subequations}
\begin{eqnarray}
\tilde{g}^{(m)}_{\mu\nu} &=& \hat{g}_{\mu\nu} + D^{(m)}(\phi)\phi,_{\mu}\phi,_{\nu} \\
\tilde{g}^{(r)}_{\mu\nu} &=& \hat{g}_{\mu\nu} + D^{(r)}(\phi)\phi,_{\mu}\phi,_{\nu}.
\end{eqnarray}
\end{subequations}
We refer to $\tilde{g}^{(m)}$ as the \emph{matter metric}, $\tilde{g}^{(r)}$ as the \emph{electromagnetic metric}, and $\hat{g}$ the \emph{gravitational metric}.
In the next section we refine this action and restrict our attention to a minimal subsystem in which to cleanly explore novel radiative processes, but in the meantime this schematic action, Eq. \eqref{eq:action1}, highlights a key point: there are three metrics in our theory, all related by disformal transformations, so there are three different frames within which to make calculations, and three representations of each field. In standard scalar-tensor theory, it is commonplace to perform computations in the Einstein frame, where the gravitational action is of Einstein-Hilbert form (quantities are defined with respect to $\hat{g}$), and reserve physical interpretation for the Jordan frame (everything expressed in terms of $\tilde{g}^{(m)}$), however, for this work, we will find that while physical interpretation is easiest in the Jordan frame, it is in fact the \emph{Electromagnetic frame} (expressing the full action in terms of $\tilde{g}^{(r)}$) in which calculations are simplest. This will hopefully become clear as we unveil the calculation.
As we have mentioned above, we find that two radiation channels are open to a disformally coupled charged particle, provided certain radiation conditions are satisfied: vacuum Cherenkov and bremsstrahlung radiation. Both of which we consider in what follows. In section \ref{sec:cher} we introduce the gravitational part of the action \eqref{eq:action1}, and specify a small charged particle-and-field subsystem -- adequate to determine the conditions under which vacuum Cherenkov radiation occurs in a disformal theory.
There we present Maxwell's equations with disformal couplings, then solutions, and finally constraints on model parameters from collider-based experiments. In section \ref{sec:brem} we present the case for bremsstrahlung, define the relevant parts of the action, derive equations of motion, and discuss the conditions to be met for its presence. We do not use vacuum bremsstrahlung to place theory constraints in this paper, but simply offer an illustration of the scale of the effect in a cosmological setting using cosmic rays. Our conclusions can be found in section \ref{sec:conc}.
\section{Vacuum Cherenkov radiation\label{sec:cher}}
\subsection{Action\label{sec:action}}
The salient feature of our model is the disformal coupling to radiation; we ask what novel changes this detail will introduce into the theory of electromagnetism. The electromagnetic sector is specified by the terms $\mathcal{S}_{\mathrm{field}} + \mathcal{S}_{\mathrm{interaction}}$ which we write as
\begin{subequations}\label{eq:action2}
\begin{equation}
{\cal S}_{\rm field} =
-\frac{1}{4\mu_0} \int d^4 x \sqrt{-g^{(r)}} g_{(r)}^{\mu\nu} g_{(r)}^{\alpha\beta} F_{\mu\alpha}F_{\nu\beta}
\end{equation}
and
\begin{equation}
{\cal S}_{\rm interaction} =
\int d^4 x \sqrt{-g^{(m)}} j^\mu A_\mu,
\end{equation}
\end{subequations}
where $j^\mu$ is a four--current, describing the motion of a charged particle. Note that gauge invariance implies charge conservation, i.e. we have $\nabla_\mu j^\mu = 0$, where the covariant derivative is with respect to the metric $g_{\mu\nu}^{(m)}$. As it will be useful for the subsequent calculations, we will write the action in terms of the matter metric $g^{(m)}_{\mu\nu}$. Note that
\begin{equation}\label{eq:trans}
g^{(r)}_{\mu\nu}
= g_{\mu\nu}^{(m)} + \left( D^{(r)} - D^{(m)} \right)\phi_{,\mu}\phi_{,\nu}
:= g_{\mu\nu} + B\phi_{,\mu}\phi_{,\nu} ,
\end{equation}
where in the last equation we have dropped the tilde and written $g_{\mu\nu} := g_{\mu\nu}^{(m)}$ to simplify notation. We emphasise that $B$ measures the difference between the disformal couplings $D^{(r)}$ and $D^{(m)}$. Then, in terms of this metric the electromagnetic sector becomes
\begin{equation}\label{eq:field_action}
{\cal S}_{\rm field} = -\frac{1}{4\mu_0} \int d^4 x \sqrt{-g}Z\left[g^{\mu\nu}g^{\alpha\beta} - 2 \gamma^2 g^{\mu\nu}\phi^{,\alpha}\phi^{,\beta} \right]F_{\mu\alpha}F_{\nu\beta}~,
\end{equation}
with
\begin{equation}\label{def:Z}
Z := \sqrt{\frac{g^{(r)}}{g}} = \sqrt{ 1 + Bg^{\mu\nu}\partial_\mu \phi \partial_\nu \phi }
\end{equation}
and
\begin{equation}
\gamma^2 = \frac{B}{1+Bg^{\mu\nu}\partial_\mu \phi \partial_\nu \phi}~.
\end{equation}
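For the reader's convenience we record how $Z$ and $\gamma^2$ arise; both follow from standard rank-one update identities. The matrix determinant lemma gives
\[
\det\left(g_{\mu\nu}+B\phi_{,\mu}\phi_{,\nu}\right)
= \det(g_{\mu\nu})\left(1+Bg^{\mu\nu}\phi_{,\mu}\phi_{,\nu}\right),
\]
the square of \eqref{def:Z}, while the Sherman--Morrison formula gives the inverse metric
\[
g_{(r)}^{\mu\nu}
= g^{\mu\nu} - \frac{B\,\phi^{,\mu}\phi^{,\nu}}{1+Bg^{\alpha\beta}\phi_{,\alpha}\phi_{,\beta}}
= g^{\mu\nu}-\gamma^2\phi^{,\mu}\phi^{,\nu}.
\]
Inserting these into $\sqrt{-g^{(r)}}\, g_{(r)}^{\mu\nu} g_{(r)}^{\alpha\beta} F_{\mu\alpha}F_{\nu\beta}$, the $\gamma^4$ term vanishes by the antisymmetry of $F_{\mu\nu}$ and the two cross terms are equal, reproducing \eqref{eq:field_action}.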
Note that the dynamics of $\phi$ are not specified at this point; it is a generic scalar field. We have also not yet specified the gravitational sector. The work below holds for generic modified gravity theories; only later, when we discuss constraints on the theory, will we be specific.
\subsection{Disformal Maxwell's equations}
The electromagnetic field equation can be readily obtained from this action:
\begin{eqnarray}\label{generalfield}
\nabla_\epsilon\left(Z F^{\epsilon\rho} \right) - \nabla_\epsilon \left( Z\gamma^2 \phi^{,\beta}\left( g^{\epsilon\nu}\phi^{,\rho} - g^{\rho\nu}\phi^{,\epsilon} \right)F_{\nu\beta} \right) = - \mu_0j^\rho~.
\end{eqnarray}
From now on, we will consider the case of flat space, i.e. $g_{\mu\nu} = \eta_{\mu\nu}$ (we remind the reader that matter moves on geodesics with respect to this metric) and write $A^\mu = (\Phi/c,{\bf A)}$ and $j^\mu = (c\rho,{\bf j})$. Then, working in the disformal Lorenz gauge $\nabla\cdot{\bf A} = - \dot{\Phi}/(cZ)^2$, the dot denoting the derivative with respect to time $t$, Eq. \eqref{generalfield} becomes
\begin{subequations}
\begin{eqnarray}
\left( \nabla^2 - \frac{1}{c^2Z^2}\frac{\partial^2}{\partial t^2} \right) \Phi
&=& - \frac{Z}{\epsilon_0} \rho \label{Phieq0} \\
\left( \nabla^2 - \frac{1}{c^2Z^2}\frac{\partial^2}{\partial t^2} \right) {\bf A}
&+& \frac{1}{c^2}\frac{\dot Z}{Z}\left( \nabla \dot\Phi + \dot {\bf A} \right)
= - \frac{\mu_0}{Z} {\bf j} \label{Aeq0}~.
\end{eqnarray}
\end{subequations}
In deriving the last equation, we made use of the identity $\nabla (\nabla\cdot {\bf V}) = \nabla^2 {\bf V} + \nabla \times (\nabla \times {\bf V})$ and defined in the usual way $\epsilon_0 = 1/\mu_0 c^2$. For the case that the scalar is time dependent only, Maxwell's equations read
\begin{subequations}\label{eq:max}
\begin{eqnarray}
\nabla \cdot {\bf E} &=& \frac{Z}{\epsilon_0} \rho \\
\nabla \times {\bf B} &=& \frac{\mu_0}{Z}{\bf j} + \frac{\mu_0}{Z}\frac{\partial}{\partial t} \left( \frac{\epsilon_0}{Z}{\bf E} \right)\\
\nabla \cdot {\bf B} &=& 0 \\
\nabla \times {\bf E} + \frac{\partial {\bf B}}{\partial t} &=& 0~,
\end{eqnarray}
\end{subequations}
where ${\bf E}$ and ${\bf B}$ are defined in the usual way:
\begin{eqnarray}
{\bf E} = -\nabla \Phi - \frac{\partial {\bf A}}{\partial t}~~~{\rm and}~~~ {\bf B} = \nabla\times {\bf A} ~.
\end{eqnarray}
Momentarily considering a vacuum (i.e. $\rho = 0$, ${\bf j}= {\bf 0}$), and assuming that $Z$ is constant, from Maxwell's equations we can derive the following wave equations for the fields ${\bf E}$ and ${\bf B}$
\begin{subequations}
\begin{eqnarray}
-\frac{1}{c^2Z^2} \frac{\partial^2 {\bf E}}{\partial t^2} + \nabla^2 {\bf E} &=& 0~ \\
-\frac{1}{c^2Z^2} \frac{\partial^2 {\bf B}}{\partial t^2} + \nabla^2 {\bf B} &=& 0~,
\end{eqnarray}
\end{subequations}
which shows that, in the absence of charges and with $Z$ constant, electromagnetic fields propagate with a modified speed\footnote{It was shown in \cite{vandeBruck:2015rma} that the fine-structure coupling `constant' is not constant in this theory.}. Prompted by this observation, we define more generally
\begin{equation}\label{sol}
c_s(t) := c Z(t) = \left( c^2 - B\dot\phi^2 \right)^{1/2}.
\end{equation}
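As a quick check of \eqref{sol}: for a homogeneous field $\phi=\phi(t)$ in flat space with signature $(-,+,+,+)$ one has $g^{\mu\nu}\partial_\mu \phi\, \partial_\nu \phi = -\dot\phi^2/c^2$, so that $Z=\sqrt{1-B\dot\phi^2/c^2}$ and hence $c_s = cZ = (c^2-B\dot\phi^2)^{1/2}$; the photon cone is narrower than the gravitational one whenever $B\dot\phi^2>0$.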
The set of equations \eqref{eq:max} quite clearly suggests that we can go further; an effective speed of light arises here because the disformal couplings modify the spacetime geometry and hence distort the electromagnetic vacuum, producing an \emph{effective medium} for the electromagnetic field, in which the permeability and permittivity of free space, $\mu_0$ and $\epsilon_0$, are modified by the scalar interaction. We thus also make the definitions
\begin{equation}
\mu(t) := \frac{\mu_0}{Z(t)} ~~~{\rm and}~~~ \epsilon(t) := \frac{\epsilon_0}{Z(t)}
\end{equation}
to physically characterize this new effective vacuum, and, subsequently, the auxiliary fields
\begin{equation}
{\bf H}:=\frac{1}{\mu(t)} {\bf B} ~~~{\rm and}~~~ {\bf D}:=\epsilon(t) {\bf E}.
\end{equation}
Given this effective medium formulation, we can now ask how the energy density will change in the field due to time evolution of our scalar field. In terms of the auxiliary fields the first two Maxwell equations simplify as follows:
\begin{subequations}
\begin{eqnarray}
\nabla\cdot{\bf D} = \rho \\
\nabla\times{\bf H} - \bf{\dot{D}} = {\bf j},
\end{eqnarray}
\end{subequations}
from which we obtain Poynting's theorem in our theory:
\begin{equation}\label{eq:poynt}
\frac{d}{dt}(U_E + U_H) = \frac{\dot{Z}}{Z}( U_E + U_H ) - {\bf E}\cdot{\bf j}
-\nabla\cdot(\underbrace{{\bf E}\times{\bf H}}_{{\bf S}})~.
\end{equation}
Here we have defined the field energy densities
\begin{equation}
U_E:=\frac{1}{2}\epsilon(t) |{\bf E}|^2, ~~~~~ U_H:=\frac{1}{2}\mu(t)|{\bf H}|^2~,
\end{equation}
and identified the standard Poynting vector $\bf{S} = \bf{E}\times\bf{H}$, which we will use to compute the energy lost by a charged particle in superluminal flight in the next section.
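The derivation of \eqref{eq:poynt} is the textbook one, with the only new ingredient being the time dependence of $\epsilon$ and $\mu$: since $\dot\epsilon/\epsilon = \dot\mu/\mu = -\dot{Z}/Z$, one finds
\[
{\bf E}\cdot\dot{\bf D} = \dot{U}_E - \frac{\dot{Z}}{Z}U_E, \qquad
{\bf H}\cdot\dot{\bf B} = \dot{U}_H - \frac{\dot{Z}}{Z}U_H,
\]
and substituting these into the identity ${\bf E}\cdot\dot{\bf D}+{\bf H}\cdot\dot{\bf B} = -{\bf E}\cdot{\bf j}-\nabla\cdot({\bf E}\times{\bf H})$, which follows from the auxiliary-field Maxwell equations above, yields \eqref{eq:poynt}.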
To summarize, we have found that when the scalar is time dependent only, our field theory with disformal couplings reduces to that of an electromagnetic field in an effective linear medium, whose resistance to the formation and evolution of field disturbances ($\epsilon$, $\mu$) depends on $Z(t)$, the square root of the ratio of the two metric determinants. This establishes an interesting conceptual link between the geometry of space and the physical response of the fields defined on it. The link should strengthen the reader's intuition that many analogues of electrodynamics in linear media should carry through to this model.
\subsection{Field solutions and the Cherenkov radiation condition}
As a first application of the model, we will investigate under which circumstances Cherenkov radiation can occur. We follow the calculation in \cite{lecturenotes}. The speed of light $c_s$, given by Eq. (\ref{sol}), is smaller than the bare speed of light $c$ if the field evolves in time, i.e. if $\dot\phi$ is non-vanishing. A charged particle can then move faster than $c_s$ and this is the situation which we will now study. Let us therefore consider a moving particle with charge $q$, for which
\begin{subequations}
\begin{eqnarray}
\rho({\bf x},t) &=& q \delta({\bf x} - {\bf x}_p(t))\\
{\bf j}({\bf x}, t) &=& \rho {\bf v}~,
\end{eqnarray}
\end{subequations}
with ${\bf x}_p(t)$ the time dependent position in 3-space of the moving particle, and ${\bf v} = \dot{{\bf x}}_p$ the velocity. Furthermore, we assume in this section that $\phi = \phi(t)$ with $c_s=cZ={\rm constant}$.
Then, considering the Fourier space components, one obtains from Eq.~(\ref{Phieq0})
\begin{equation}
\Phi_{k} = \frac{2\pi q }{\epsilon} \frac{\delta(\omega - {\bf k}\cdot {\bf v})}{k^2 - \dfrac{\omega^2}{c_s^2}},
\end{equation}
and Eq.~(\ref{Aeq0}) can be solved to find
\begin{equation}\label{A_k}
{\bf A}_k = 2 \pi q\mu \frac{\delta(\omega - {\bf k}\cdot{\bf v})}{k^2 - \dfrac{\omega^2}{c_s^2}}{\bf v}~.
\end{equation}
As a consistency check, these solutions imply the Lorenz--gauge condition ${\bf k}\cdot{\bf A}_k = \omega \Phi_k/c_s^2$.
The Fourier coefficients of ${\bf B}$ are related to ${\bf A}_k$ via ${\bf B}_k = i{\bf k}\times {\bf A}_k$ and the Fourier coefficients of ${\bf E}$ are given by ${\bf E}_k = -i{\bf k} \Phi_k + i \omega {\bf A}_k$. We find
\begin{equation}
{\bf B_k}({\bf k},\omega) = 2\pi i q \mu \frac{{\bf k}\times{\bf v}}{k^2 - \dfrac{\omega^2}{c_s^2}}\delta(\omega - {\bf k}\cdot{\bf v})
\end{equation}
and
\begin{equation}
{\bf E}_k({\bf k},\omega) = - \frac{2\pi iq}{\epsilon} \frac{{\bf k} - \dfrac{\omega}{c_s^2}{\bf v}}{k^2 - \dfrac{\omega^2}{c_s^2}}\delta(\omega - {\bf k}\cdot{\bf v})~.
\end{equation}
To find the energy loss along the particle's trajectory, we assume without loss of generality that the particle moves along the $z$-axis with velocity ${\bf v} = (0,0,v)$, and that the observer is located at a distance $r$ from the $z$-axis. The energy loss per unit length is then given by the integral
\begin{equation}\label{energyloss}
- \frac{d{\cal E}}{dz} = -2\pi r \int E_z({\bf r},t) H_{\phi}({\bf r},t)dt = -r \int E_z({\bf r},\omega) H^*_{\phi}({\bf r},\omega)d\omega ,
\end{equation}
where
\begin{eqnarray}
E_z({\bf r},\omega) &=& \frac{1}{(2\pi)^3} \int d^3 k E_z ({\bf k},\omega) e^{i{\bf k}\cdot {\bf r}} {~~~~{\rm and}} \nonumber \\
H_\phi({\bf r},\omega) &=& \frac{1}{(2\pi)^3} \int d^3 k H_\phi ({\bf k},\omega) e^{i{\bf k}\cdot {\bf r}}.
\end{eqnarray}
Evaluating the integrals for $\beta = v/c_s > 1$, we find
\begin{eqnarray}
E_z({\bf r},\omega) &=&
\frac{iq\mu\omega}{2\pi}\left[ 1 - \frac{1}{\beta^2} \right] e^{i\omega z/\beta c_s} K_0(\alpha r), \\
H_\phi({\bf r}, \omega) &=&
\frac{\alpha q}{2\pi}e^{iz\omega/\beta c_s} K_{1}\left(\alpha r \right)~,
\end{eqnarray}
where $\alpha = -(i\omega/c_s)\sqrt{1-\beta^{-2}}$. Note that for large $\alpha r$, $K_0 (\alpha r) \approx \sqrt{\pi/(2\alpha r)} \exp(-\alpha r)$, so these represent outgoing waves if $\beta>1$. The expressions for $E_z$ and $H_\phi$ are identical to those for electromagnetic waves propagating through a medium, leading to Cherenkov radiation for $v>c_s$. The integral (\ref{energyloss}) can be evaluated for $|\alpha | r \gg1$, giving
\begin{equation}
- \frac{d{\cal E}}{dz} = \frac{1}{4\pi \epsilon_0}\frac{q^2}{c^2} \int \omega \left( 1 - \frac{1}{\beta^2} \right) d\omega~.
\end{equation}
\subsection{Constraints\label{sec:constraints}}
We will now discuss constraints on the model. So far, the scalar field has been completely unspecified. The only assumption we have made is that it is disformally coupled to the electromagnetic sector. To specify the dynamics of the field, we have to specify its action, and in the following we assume that the gravitational sector is of standard Einstein form, together with a canonical scalar field. We then choose the form of $\mathcal{S}_{\mathrm{grav}}$ in equation \eqref{eq:action1} to be
\begin{equation}\label{eq:grav_action}
\mathcal{S}_{\mathrm{grav}}
=\int d^4x\sqrt{-g}\left[ \frac{R(g)}{2\kappa} - \frac{1}{2}g^{\mu\nu}\phi,_{\mu}\phi,_{\nu} - V(\phi) \right],
\end{equation}
where we assume that $\phi$ is the scalar field responsible for the accelerated expansion of the universe at late times and we assume $\hat{g}_{\mu\nu} = g_{\mu\nu}$, which implies that we set $D^{(m)}=0$.
There are direct constraints on isotropic deviations of the speed of light from unity from laboratory experiments \cite{Michimura:2013kca, Baynes:2012zz} at the level of $|1- c_s/c|< 10^{-10}$; however, stronger constraints arise from searches for Cherenkov radiation from particles in vacuum. These can be done in terrestrial experiments, with bounds $|1- c_s/c|< 10^{-11}$ coming from the absence of vacuum Cherenkov radiation from $104.5 \mbox{ GeV}$ electrons and positrons at LEP \cite{Hohensee:2009zk}. Indeed, the energetics of the LEP beam were so well understood that measurements of the synchrotron emission rate constrain any deviation of the speed of photons by $|1-c_s/c|< 5 \times 10^{-15}$ \cite{Altschul:2009xh}. Observations of high energy cosmic rays provide significantly tighter constraints; the lack of vacuum Cherenkov radiation from high energy electrons and neutrinos propagating over astronomical distances constrains $|1-c_s/c|< 10^{-20}$ \cite{Stecker:2014xja, Diaz:2013wia, Stecker:2013jfa}, although these constraints come with some uncertainty about the high energy dynamics of the source of the cosmic ray.
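These numbers can be checked with a one-line estimate: an ultra-relativistic particle of Lorentz factor $\gamma$ satisfies the radiation condition $v>c_s$ when $1-c_s/c \gtrsim 1/(2\gamma^2)$; for the $104.5 \mbox{ GeV}$ LEP electrons $\gamma \approx 2\times 10^5$, giving sensitivity to $1-c_s/c$ of order $10^{-11}$, as quoted above.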
To translate these constraints into constraints on disformal electrodynamics, we assume now that the scalar field is slowly evolving and plays the role of dark energy. First, we adopt the constraint $|1-c_s/c|<5\times 10^{-15}$. The speed of light $c_s$ should not deviate drastically from $c$, so we can expand $Z \approx 1 - B\dot\phi^2/2c^2$.
The Friedmann equation evaluated today reads
\begin{equation}
3H^2_0 = \kappa \left( \rho c^4 + \frac{1}{2}\dot{\phi}^2 + c^2V \right)
\end{equation}
for the bare speed $c$. If we assume that the scalar $\phi$ plays the role of dark energy then we have
\begin{equation}\label{eq:OmegaDE}
\Omega_{\rm DE}
= \frac{\kappa}{6}\left(\frac{\dot{\phi}}{H_0}\right)^2+\frac{\kappa c^2V}{3H_0^2} \simeq 0.7,
\end{equation}
where $\Omega_{\rm DE}$ is the dark energy density parameter. The equation of state of dark energy is
\begin{equation}
\omega_{\rm DE,0} = \frac{\dot\phi^2 - 2c^2V}{\dot\phi^2 + 2c^2V}
\end{equation}
which, combined with equation \eqref{eq:OmegaDE} gives
\begin{equation}
\frac{\kappa\dot\phi^2}{2c^2} = \frac{3}{2}\Omega_{\rm DE}H_0^2(1+\omega_{\rm DE,0}).
\end{equation}
Hence, the constraint can be written as $B\dot\phi^2/2c^2 < 5\times 10^{-15}$ or, expressed as a dimensionless ratio:
\begin{equation}\label{eq:constraintonM}
\frac{B_0 H_0^2}{\kappa} < \frac{10^{-14}}{3\Omega_{\rm DE}(1+\omega_{\rm DE,0})}.
\end{equation}
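For clarity, the intermediate step is
\[
\frac{B_0\dot\phi^2}{2c^2}
= \frac{B_0}{\kappa}\cdot\frac{\kappa\dot\phi^2}{2c^2}
= \frac{3}{2}\,\frac{B_0 H_0^2}{\kappa}\,\Omega_{\rm DE}(1+\omega_{\rm DE,0})
< 5\times 10^{-15},
\]
which rearranges to \eqref{eq:constraintonM}.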
In Fig. \ref{fig:cherenkov} we show the constraint on the energy scale:
\begin{equation}\label{eq:M}
M := \left(\frac{c \hbar^3}{B_0}\right)^{1/4}
\end{equation}
as a function of the dark energy equation of state $\omega_{\rm DE,0}$, measured today, setting $\Omega_{\rm DE} = 0.7$. We remind the reader that constraints of this type will always place limits on the difference between the disformal couplings to matter and radiation, since $B = D^{(r)} - D^{(m)}$ (see eq. (\ref{eq:trans})), though we have set $D^{(m)}=0$ here.
\begin{figure}
\begin{center}
\includegraphics[width=0.7\textwidth]{cher_plot.pdf}
\caption{\label{fig:cherenkov}Cherenkov radiation in vacuum constrains the energy scale $M$, defined in Eq. (\ref{eq:M}), as a function of the current dark energy equation of state $\omega_{\rm DE,0}$. The shaded region is ruled out by bounds coming from the LEP constraint $|1-c_s/c|< 5 \times 10^{-15}$. As the dark energy equation of state approaches $-1$, $\dot\phi$ approaches 0, hence $c_s \rightarrow c$ and the constraint on $M$ vanishes in this limit.}
\end{center}
\end{figure}
\section{Vacuum Bremsstrahlung\label{sec:brem}}
We have seen that the particle will emit Cherenkov radiation in vacuum if the effective speed of light $c_s$ drops below the particle speed $v$. A natural question to ask, given the close resemblance our model bears at the classical level to that of a linear dielectric medium, is whether or not other radiative channels are open in the presence of a disformal coupling. In this section we derive the conditions for vacuum bremsstrahlung.
We are particularly interested in the possibility that charged cosmic rays might emit bremsstrahlung due to the evolution of the scalar $\phi$ in the cosmological background. Therefore we generalize our calculations to an expanding background with $c_s$ now time dependent in what follows.
\subsection{Action\label{sec:action_brem}}
We consider again a subsystem of the action in \eqref{eq:action1} where a single charged particle in flight couples to an electromagnetic field: $\mathcal{S}_{\mathrm{field}}+\mathcal{S}_{\mathrm{int}}$ (see equation \eqref{eq:action2}); however, we now work on an expanding background, and so we choose comoving coordinates such that
\begin{equation}
g_{\mu\nu} = a^2(\tau)\eta_{\mu\nu},
\end{equation}
where $\tau$ is the conformal time, related to the physical time by $dt = ad\tau$, hence
\begin{equation}
g^{(r)}_{\mu\nu} = a^2(\tau)\left(\eta_{\mu\nu} + \frac{B}{a^2}\phi_{,\mu}\phi_{,\nu}\right)
:= a^2 h^{(r)}_{\mu\nu}.
\end{equation}
The gravitational action is still given by Eq. \eqref{eq:grav_action} and we assume that the scalar field $\phi$ depends on time only.
Then, as $\mathcal{S}_{\mathrm{field}}$ is conformally invariant, we have
\begin{equation}
{\cal S}_{\rm field} =
-\frac{1}{4\mu_0} \int d^4 x ~Z h_{(r)}^{\mu\nu} h_{(r)}^{\alpha\beta} F_{\mu\alpha}F_{\nu\beta},
\end{equation}
where, recalling the definition of $Z$ (Eq. \eqref{def:Z}), we have now
\begin{equation}
Z=\sqrt{-h^{(r)}} = \sqrt{1+\frac{B}{a^2}\eta^{\mu\nu}\phi,_{\mu}\phi,_{\nu}}~.
\end{equation}
For the interaction term, we define the \emph{comoving current}
\begin{equation}
J^{\mu} := \sqrt{-g} j^{\mu},
\end{equation}
so that
\begin{equation}
{\cal S}_{\rm int} =
\int d^4 x J^\mu A_\mu.
\end{equation}
As $\nabla_{\mu}j^{\mu}=0$ (see section \ref{sec:action}), we have that the comoving current is conserved with respect to the flat metric $\eta_{\mu\nu}$, i.e.
\begin{equation}
\partial_{\mu}J^{\mu}
= \partial_{\mu}\left( \sqrt{-g} j^{\mu} \right)
= \sqrt{-g} \nabla_{\mu}j^{\mu} = 0,
\end{equation}
where we have used that $\sqrt{-g}\nabla_{\mu}v^{\mu} = \partial_{\mu}\left( \sqrt{-g} v^{\mu} \right)$ for any 4-vector $v^{\mu}$ and metric $g$. Lastly, we consider a point-like charged particle whose motion can be described by a curve ${\bf x}_p(\tau)$, and, since $\partial_{\mu}J^{\mu}=0$, we can define $J^{\mu} = (c\Omega,{\bf J})$ such that
\begin{subequations}
\begin{eqnarray}
\Omega({\bf x}, \tau) &:=& Q\delta({\bf x} - {\bf x}_p(\tau))\\
{\bf J}({\bf x}, \tau) &:=& \Omega{\bf V}
\end{eqnarray}
\end{subequations}
for ${\bf V} := d{\bf x}_p / d\tau$ and $Q$ the charge of the particle. By construction this ansatz satisfies the continuity equation. Comparing this to the physical current, expressed in terms of the physical time, $t$, it is straightforward to show that $j^{\mu\prime} = (c\Omega/a^3, {\bf v} \Omega/a^3)$, and hence the charge density dilutes as $a^{-3}$, as it must in isotropically expanding space. It is also clear that, for $a(\tau)$ an arbitrary function, light still propagates with velocity
\begin{equation}
c_s(\tau) = Z(\tau)c.
\end{equation}
\subsection{Disformal Maxwell's equations in expanding space}
The electromagnetic field equations can be readily obtained from the action specified in section \ref{sec:action_brem} as before; they are the expanding-space counterpart to Eq.s \eqref{eq:max}:
\begin{subequations}\label{eq:max_brem}
\begin{eqnarray}
\nabla \cdot {\bf E} &=& \frac{Z}{\epsilon_0} \Omega~, \\
\nabla \times {\bf B} &=& \frac{\mu_0}{Z}{\bf J} + \frac{\mu_0}{Z}\frac{\partial}{\partial \tau} \left( \frac{\epsilon_0}{Z}{\bf E} \right)~,\\
\nabla \cdot {\bf B} &=& 0~, \\
\nabla \times {\bf E} + \frac{\partial {\bf B}}{\partial \tau} &=& 0~.
\end{eqnarray}
\end{subequations}
In these equations, $\nabla$ is the \emph{flat} 3-space derivative operator. Even though space is expanding, this is valid, as the dependence of the system on the scale factor $a$ was absorbed by the field redefinitions in the previous section.
From definition \eqref{sol} we see that if the speed of light were to vary in time in some coordinate system with time $t$, there would naturally exist some new system of coordinates such that this speed remains constant. If the particle were to travel with fixed velocity in the original system, it would appear to accelerate with respect to these new coordinates in which $c_s$ is constant. The electric field thus `sees' an accelerating charge. We would expect such a field to radiate accordingly, and indeed this is what we will find.
To make this intuition mathematically precise, we must consider a case more general than in the previous sections, whereby $Z(\tau)$ now becomes an arbitrary function of time. Some suitable field and coordinate redefinitions will help us find solutions in this new case. Considering again the disformal Maxwell's equations, \eqref{eq:max_brem}, the following redefinitions are useful:
\begin{equation}\label{eq:redefs}
\overset{\sim}{\bf{E}}:=\frac{\bf E}{Z(\tau)}, \quad \tilde{\bf J}:=\frac{\bf J}{Z(\tau)}, \quad d\tilde{\tau} := Z(\tau)d\tau.
\end{equation}
These fields obey the following equations:
\begin{subequations}\label{eq:maxJF}
\begin{eqnarray}
\nabla \cdot \overset{\sim}{\bf{E}} &=& \frac{\Omega}{\epsilon_0}~, \\
\nabla \times {\bf B} &=& \mu_0\tilde{\bf J} + \mu_0\epsilon_0\frac{\partial}{\partial {\tilde \tau}} \left( \overset{\sim}{\bf{E}} \right)~,\\
\nabla \cdot {\bf B} &=& 0~, \\
\nabla \times \overset{\sim}{\bf{E}} + \frac{\partial {\bf B}}{\partial \tilde{\tau}} &=& 0~.
\end{eqnarray}
\end{subequations}
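One can verify directly that the redefinitions \eqref{eq:redefs} absorb all factors of $Z$: using $\partial_\tau = Z\partial_{\tilde\tau}$, the second of equations \eqref{eq:max_brem} becomes
\[
\nabla\times{\bf B}
= \frac{\mu_0}{Z}{\bf J} + \frac{\mu_0\epsilon_0}{Z}\,\frac{\partial}{\partial\tau}\!\left(\frac{{\bf E}}{Z}\right)
= \mu_0\tilde{\bf J}+\mu_0\epsilon_0\,\frac{\partial\overset{\sim}{\bf{E}}}{\partial\tilde\tau}\,,
\]
and the remaining equations follow in the same way.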
This set of equations allows us to make the standard gauge field definitions: ${\bf B} = \nabla\times{\bf A}$ as before, and now
\begin{equation}\label{eq:Epot}
\overset{\sim}{\bf{E}} = -\nabla \tilde{\Phi} - \accentset{\circ}{\bf A}
\end{equation}
where `$\accentset{\circ}{~}$' denotes the derivative with respect to $\tilde{\tau}$. Then, working again in the disformal Lorenz gauge: $\nabla \cdot {\bf A} = - \accentset{\circ}{\tilde{\Phi}} / c^2$, we arrive at the field-potential equations of motion:
\begin{subequations}\label{eq:system}
\begin{eqnarray}
\left( \nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial \tilde{\tau}^2} \right) \tilde{\Phi}
&=& - \frac{\Omega}{\epsilon_0} \label{Phieq} \\
\left( \nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial \tilde{\tau}^2} \right) {\bf A}
&=& - \mu_0 \tilde{\bf J} \label{Aeq}.
\end{eqnarray}
\end{subequations}
The system (\ref{eq:system}) is closed, and is now instantly recognizable from classical electrodynamics, hence easily solvable. Important to note here is that the tilde variables and coordinates just defined are exactly those we would have had we originally expressed the action \eqref{eq:action1} entirely in terms of the electromagnetic metric $g^{(r)}$ -- we are now working in the electromagnetic frame.
\subsection{Field solutions and the bremsstrahlung condition}
The system of equations, \eqref{eq:system}, is readily satisfied by the Li\'enard--Wiechert potentials \cite{2007classical}. In terms of our electromagnetic frame quantities -- recalling that our coordinates are all comoving -- these solutions read
\begin{subequations}\label{eq:LW}
\begin{eqnarray}
\tilde{\Phi}({\bf x}, \tilde{\tau}) &=&
\frac{Q}{4\pi \epsilon_0}~
\frac{1}{[1-{\bf n}(\tilde{\tau}') \cdot \boldsymbol\beta(\tilde{\tau}')]}~
\frac{1}{X(\tilde{\tau}')} \\
\mathbf{A}({\bf x},\tilde{\tau}) &=&
\mu_0 \epsilon_0{\bf V}(\tilde{\tau}')\tilde{\Phi}
\end{eqnarray}
\end{subequations}
where $\tilde{\tau}'$ is the retarded electromagnetic-frame time, defined by the implicit equation
\begin{equation}\label{eq:ret}
(\tilde{\tau}' - \tilde{\tau})c + X(\tilde{\tau}') = 0,
\end{equation}
and we have made the following definitions
\begin{equation}\label{eq:defs}
X(\tau) := |{\bf x} - {\bf x}_p(\tau)|, \quad
{\bf n}(\tau) := \frac{{\bf x} - {\bf x}_p(\tau)}{X(\tau)}, \quad
\boldsymbol\beta(\tau) := \frac{\tilde{\bf V}(\tau)}{c} = \frac{{\bf V}(\tau)}{c_s(\tau)}.
\end{equation}
Combining \eqref{eq:Epot} with \eqref{eq:LW} and reversing the field redefinitions gives the following electric field profile in the Jordan frame:
\begin{equation}\label{eq:profile}
\mathbf{E}({\bf x},\tau) =
\frac{Q}{4\pi\epsilon(\tau)}
\left[
\frac{(1-\beta^2)({\bf n} - \boldsymbol\beta)}{ X^2[1 - {\bf n} \cdot {\boldsymbol \beta}]^3} +
\frac{{\bf n} \times [({\bf n} - \boldsymbol\beta) \times \dot{\boldsymbol\beta}]}{c_sX[1 - {\bf n} \cdot {\boldsymbol \beta}]^3}
\right]_{\mathrm{ret}},
\end{equation}
and, also in the Jordan frame:
\begin{equation}
{\bf B}({\bf x},\tau) = \left[{\bf n}\times\frac{\bf E}{c_s}\right]_{\mathrm{ret}}
\end{equation}
where we have reverted back to the $\tau$ time derivative, `$\dot{~}$', and quantities enclosed in the square brackets, $[...]_{\mathrm{ret}}$, are to be evaluated at the retarded time $\tau'$ given implicitly by equation \eqref{eq:ret} together with the relationship between $\tau$ and $\tilde\tau$
\begin{equation}
\tau = \int\frac{d\tilde{\tau}}{Z(\tilde{\tau})}~,
\end{equation}
which is non-local, and not analytically solvable in general. In the Jordan frame, the Poynting vector as obtained from Poynting's theorem is:
\begin{equation}\label{eq:poyn_exp}
{\bf S} = {\bf E} \times \frac{{\bf B}}{\mu(\tau)}
\end{equation}
and so, for a charged particle on a straight trajectory ($\boldsymbol\beta\times\dot{\boldsymbol\beta}=0$) the comoving power radiated, i.e. the power radiated per unit conformal time, is:
\begin{equation}\label{eq:power}
{\mathcal P} =
\frac{1}{4\pi\epsilon(\tau)}\frac{2Q^2}{3c_s(\tau)}\frac{1}{(1-\beta^2)^3}|\dot{\boldsymbol\beta}|^2,
\end{equation}
obtained from \eqref{eq:poyn_exp}.
The second (radiative) term in equation \eqref{eq:profile} will be non-zero if and only if $|\dot{\boldsymbol\beta}|\neq0$. We can clearly see that, from the definition of $\boldsymbol\beta$ in equation \eqref{eq:defs}, this can be true even when the particle is not accelerating. If the comoving velocity ${\bf V}$ is constant, then $\dot{\boldsymbol\beta} = \boldsymbol\beta ~ \dot{c_s} / c_s$ and electromagnetic radiation will still carry energy outward, away from the particle\footnote{We note that the assumption of constant comoving velocity is non--trivial, due to the fact that we consider motion in an expanding background. Since we just want to consider the effect due to the disformal coupling, we ignore this issue here and refer the reader to \cite{Futamase:1996xh,Nomura:2006ka,Blaga:2014vna}.}. We sum this result up in the following radiation condition: a charged particle on an expanding background in motion -- uniform as seen by a stationary observer on the same background -- will in general radiate if the electromagnetic field couples to a second, distinct geometry, disformally varying with respect to the first: that is if $\dot{c}_s \neq0$.
We see in this setup that if the scalar field $\phi$ evolves in time (with $\dot Z$ non-zero), the particle appears to the electric field as one that is accelerating, even in vacuum; to this phenomenon we attach the name vacuum bremsstrahlung. All this effect requires, really, is that the speed of light vary with time. In fact, though we have demonstrated the presence of vacuum bremsstrahlung in a disformally coupled field setting, it will no doubt be more widely applicable. We expect any theory in which the speed of light is dynamical in this sense to exhibit this phenomenon, and hence to be testable in this way. We shall see in the next section, however, that for our theory the effect is much smaller than Cherenkov radiation.
\subsection{Energy lost from a coupled cosmic ray}
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{Zs.pdf}
\end{center}
\caption{Cosmological evolution of the observed speed of light, $c_s$, with redshift for values within LEP constraints (see Sec. \ref{sec:constraints}). $M$ is the energy scale associated to the disformal coupling -- defined in Eq. \eqref{eq:M} -- between light and a quintessence scalar field with exponential potential: $V(\phi)=V_0\mathrm{exp}[-\phi\kappa^{1/2}]$. \label{fig:Zs}}
\end{figure}
We will consider an ultra-high energy cosmic ray (a ray with energy in excess of about $10^{15}$ eV) in what follows. This means we can safely assume the cosmic ray travels along a straight line geodesic, that is $\boldsymbol\beta\times\dot{\boldsymbol\beta}=0$; intergalactic magnetic fields are far too weak to curve the path of a ray of this energy appreciably. The radiation condition for expanding space is thus: vacuum bremsstrahlung will occur if $\dot{c}_s \neq 0$, even when the comoving velocity -- and hence the physical velocity -- is constant. In this case, we have that $\dot{\boldsymbol\beta} = \boldsymbol\beta \dot{Z} / Z$, and so both the square of the factor $\dot{Z} / Z$ and the sixth power of the Lorentz factor, $(1-\beta^2)^{-3}$, determine the magnitude of energy lost by the particle through this process.
If the scalar field $\phi$ is responsible for the late time accelerated expansion of the universe, then the cosmic ray's bremsstrahlung energy loss will be suppressed by the Hubble scale as measured in the present epoch (within a few units of redshift). Further, our model must obey the LEP constraints imposed on it in Sec. \ref{sec:constraints}, which translate to an energy scale $M$ of roughly an $\mbox{eV}$ or above. Both factors drive the allowed values of $\dot{Z} / Z$ down to very small values indeed for any viable cosmological scenario. We show in Fig. \ref{fig:Zs} several allowed evolution histories of the speed of light for a simple extension of the standard cosmological model, whereby the dark energy field is driven by an exponential potential with mild negative slope: $V(\phi)=V_0\mathrm{exp}[-\phi\kappa^{1/2}]$.
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{EandP.pdf}
\end{center}
\caption{\emph{Left:} Energy radiated per unit cosmic time $P={\mathcal P}a$ at each redshift, where $\mathcal{P}$ is defined in Eq. \eqref{eq:power}. \emph{Right:} Integrated energy loss by the particle for its entire trajectory, beginning at some initial redshift, arriving at earth today. In both plots, each curve corresponds to a cosmic ray with relativistic energy shown in the legend; PeV$=10^{15}$ eV. The disformal energy scale, $M$, is fixed at 2 eV for the left and right panels.\label{fig:PandE}}
\end{figure}
We see from Fig. \ref{fig:Zs} that $\dot{c_s} / c_s$ must be very small -- many orders of magnitude \emph{less} than the Hubble scale at $H_0 \simeq 10^{-42} \mbox{ GeV}$! Observations of ultra high energy cosmic rays tell us we must consider charged particles with energy in excess of, say, a $\mbox{PeV}$, and bounds on $M$ from LEP mean that the expression for radiated power, Eq. (\ref{eq:power}), is valid up to very high velocities, but not for energies above a few $\mbox{PeV}$, where vacuum Cherenkov radiation radically alters the nearby electric field behavior. In these cases the Lorentz factor is huge, and Eq. (\ref{eq:power}) shows that the power radiated by the ray is highly sensitive to the size of the Lorentz factor; yet, as Fig. \ref{fig:PandE} makes clear, this is not enough to beat the Hubble scale suppression.
For most of these cosmic rays a galactic source is highly unlikely. More probably, they were accelerated by jets protruding from the active nuclei of quasars, some of which have been recorded by the Sloan Digital Sky Survey at cosmic distances of redshift up to about $z\simeq6$. However, the right panel of Fig. \ref{fig:PandE} shows that even the integrated energy loss across a distance this large is suppressed by the Hubble scale (as could perhaps be inferred from dimensional analysis).
We conclude this section by remarking that such an effect will never be practically measurable if the disformal coupling is to dark energy. The Hubble scale today is so far from any particle physics scale that a second order effect in a dynamic speed of light theory, like vacuum bremsstrahlung, will be negligible. Any such coupling, or dynamic light speed, during inflation is, however, a different story. The scale of inflation may be large enough that, during or just after reheating, these effects must be taken into account.
\section{Conclusions\label{sec:conc}}
We have shown that disformal couplings allow charged particles to emit Cherenkov radiation and bremsstrahlung in vacuum. The distortion of causal structure by the scalar field, a characteristic consequence of these interactions, can cause the speed of photons to be lower than that of a charged particle -- and even to vary in time -- mimicking a dielectric medium.
To demonstrate this, we have developed a theory of electrodynamics in which a scalar field couples disformally to photons and charged particles, on both flat and expanding backgrounds. Unless the coupling strengths to each species are forced to be equal, two distinct frames appear in the theory, each with a specific role: working out observational quantities, such as the observed speed of light, required use of the frame in which matter -- but, in general, not photons -- is uncoupled from the scalar (i.e. the Jordan frame). Calculations were found to be simplest, however, especially for a time dependent coupling, in the electromagnetic frame, where freely falling photons always follow geodesics.
Working in flat space, we determined the constraints on dark energy models with disformal couplings that arise from the non-observation of vacuum Cherenkov radiation by the LEP collaboration. These parameter-space bounds are complementary to those obtained from spectral distortions of the CMB \cite{vandeBruck:2013yxa}; they both cover different regions and agree across their intersection. Finally, we have shown that the dark energy fine tuning problem is a problem for vacuum bremsstrahlung detection also: suppression of this particle physics interaction by the Hubble scale is unbeatable for any conceivable measurement one could dream of making on the earth's cosmic ray flux.
In this study, we have converted the bounds on maximum attainable velocities of particles obtained by the LEP group into bounds on the scalar field coupling scale $M$ via the Friedmann equation. Explicitly, we (a) assumed our gravity sector was as simple as possible (quintessence with an exponential potential, uncoupled to matter) and (b) produced constraints that depend on the dark energy equation of state measured today. This work should thus be extended along these two lines. How sensitive are these limits to changes in the gravitational sector of the theory? The bound on $M$, eq. (\ref{eq:constraintonM}), will change, and this must be worked out. The interplay between particle physics and cosmology has so far been exceedingly rich, and constraining cosmological models, such as the present one, using results from ground-based particle experiments in this fashion remains a surprisingly fruitful venture.
\vspace{0.5cm}\
\noindent {\bf Acknowledgements} We are grateful to J. Ronayne for a helpful comment. We also thank Robert Blaga for discussions on moving particles in an expanding spacetime. The work of CvdB is supported by the Lancaster-Manchester-Sheffield Consortium for Fundamental Physics under STFC Grant No. ST/L000520/1. CB is supported by a Royal Society University Research Fellowship.
\section*{Introduction}
\label{Intro}
\setcounter{equation}{0}
Let $\Omega$ be a bounded domain in $\hbox{\bbbld C}^n$ with the canonical
K\" ahler form $\beta= dd^c\|z\|^2$, where $d= \partial + \bar{\partial}$,
$d^c=i(\bar{\partial}- \partial)$. For $1\leq m\leq n$, we denote by $\hbox{\bbbld C}_{(1,1)}$
the space of $(1,1)$-forms with constant coefficients. One defines the positive cone
\begin{equation}
\label{in-1}
\Gamma_m=
\{ \eta\in \hbox{\bbbld C}_{(1,1)}: \eta\wedge \beta^{n-1}
\geq 0, ... , \eta^m \wedge\beta^{n-m}\geq 0\}.
\end{equation}
A $C^2$ smooth function $u$ is called $m$-subharmonic in $\Omega$
if at every point $z\in\Omega$ the $(1,1)$-form associated to its complex
Hessian belongs to $\Gamma_m$, i.e.
\begin{equation}
\label{in-2}
\sum_{j,k=1}^n
\frac{\partial^2u(z)}{\partial z_j\partial\bar{z_k}} i dz_j\wedge d\bar{z_k}
\in \Gamma_m.
\end{equation}
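A simple quadratic example, stated here for orientation, illustrates the definition. For $u(z)=\sum_{j=1}^n\lambda_j|z_j|^2$ with constant $\lambda_j\in\hbox{\bbbld R}$, the form \eqref{in-2} has eigenvalues $\lambda=(\lambda_1,...,\lambda_n)$ and lies in $\Gamma_m$ precisely when the elementary symmetric polynomials satisfy $\sigma_1(\lambda)\geq 0, ... , \sigma_m(\lambda)\geq 0$. Thus for $n=2$ the function $u(z)=2|z_1|^2-|z_2|^2$ is $1$-subharmonic, since $\sigma_1(\lambda)=1\geq 0$, but not plurisubharmonic, since $\lambda_2<0$.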
It was observed by B\l ocki (see \cite{Bl}) that one may relax the smoothness
condition in the definition \eqref{in-2} and consider this inequality in the sense
of distributions to obtain a class, denoted by $SH_m(\Omega)$ (see preliminaries).
When functions $u_1, ..., u_k$, $1\leq k\leq m$, are in $SH_m(\Omega)$ and are locally
bounded, one may still define
$dd^cu_1\wedge dd^cu_2\wedge ...\wedge dd^cu_k\wedge \beta^{n-m}$
as a closed positive current of bidegree $(n-m+k,n-m+k)$. In particular
$(dd^cu)^m\wedge \beta^{n-m}$ is a positive measure for $u$ bounded $m$-subharmonic.
Thus, it is possible to study bounded solutions of the Dirichlet problem with positive
Borel measures $\mu$ in $\Omega$ and continuous boundary data
$\varphi\in C(\partial \Omega)$:
\begin{equation}
\label{heq}
\begin{cases}
u \in SH_m(\Omega) \cap L^{\infty}(\Omega), \\
(dd^cu)^m \wedge \beta^{n-m} = d\mu, \\
u(z) = \varphi(z) \;\; \text{ on } \;\; \partial \Omega.
\end{cases}
\end{equation}
The Dirichlet problem for the complex Hessian equation \eqref{heq} in smooth cases was
first considered by S.Y. Li (see \cite{Li}). His main result says that if $\Omega$ is smoothly
bounded and strongly $m$-pseudoconvex (see Definition~\ref{pr-de-4}) then, for a smooth
boundary data and for a smooth positive measure, i.e $d\mu=f\beta^n$ and $f>0$ smooth,
there exists a unique smooth solution of the Dirichlet problem for the Hessian equation.
The weak solutions of the equation \eqref{heq}, when the measure $d\mu$
is possibly degenerate, were first considered by B\l ocki \cite{Bl}; more precisely,
he proved that there exists a unique continuous solution of the homogeneous
Dirichlet problem in the unit ball in $\hbox{\bbbld C}^n$.
Very recently, in \cite{DK} Dinew and Ko\l odziej investigated weak solutions of the complex
Hessian equations \eqref{heq} with the right hand side more general, namely $d\mu = f\beta^n$
where $f\in L^p$, for $p>n/m$. One of their results extended Li's theorem, they proved that
the Dirichlet problem still has a unique continuous solution provided continuous boundary data
and $d\mu$ in $ L^p$ as above.
Their method exploited the new counterpart of pluripotential theory
for $m$-subharmonic functions, after showing a crucial inequality between the usual volume and
$m$-capacity, which is a version of the relative capacity for $m$-subharmonic functions.
In the case $m=n$, the subsolution theorem due to Ko\l odziej \cite{K1}
(see \cite{K2} for a simpler proof)
says that the Dirichlet problem \eqref{heq} in a strongly pseudoconvex domain
is solvable if there is a subsolution. Thus, one may ask the same question when $m<n$.
In this note we show that the subsolution theorem, Theorem~\ref{su-th-1},
for the complex Hessian equation is still true by combining the new results of
Dinew and Ko\l odziej for weak solutions of the complex Hessian equations
and the method used to prove the subsolution theorem in the pluripotential case.
\bigskip
\section*{Acknowledgements} I am indebted to my advisor, professor S\l awomir Ko\l odziej,
for suggesting the problem and for many stimulating discussions.
I also would like to thank the referee whose suggestions and remarks helped to improve the exposition of the
paper. This work is supported by the International Ph.D Program
{\em " Geometry and Topology in Physical Models "}.
\bigskip
\section{Preliminaries}
\label{pr}
\setcounter{equation}{0}
\subsection{$m$-subharmonic functions}
\label{pr1}
We recall basic notions and results which are adapted from pluripotential theory.
The main sources are \cite{BT1,BT2}, \cite{Ce1,Ce2}, \cite{D1, D2}, \cite{K2}
for plurisubharmonic functions and \cite{Bl}, \cite{DK} for $m$-subharmonic functions.
Since a major part of pluripotential theory can be easily adapted to the $m$-subharmonic
case, when a proof is only a copy of the original one with obvious changes of
notation we refer the reader to the above references.
Let $\hbox{\bbbld C}_{(k,k)}$ be the space of $(k,k)$-forms with constant coefficients, and
$$
\Gamma_m=
\{ \eta\in \hbox{\bbbld C}_{(1,1)}: \eta\wedge \beta^{n-1}\geq 0
, ... , \eta^m \wedge\beta^{n-m}\geq 0\} .
$$
We denote by $ \Gamma_m^*$ its dual cone
\begin{equation}
\label{pr1-1}
\Gamma_m^*
= \{ \gamma\in \hbox{\bbbld C}_{(n-1,n-1)}: \gamma\wedge \eta
\geq 0 \;\; \text{ for every } \;\; \eta\in \Gamma_m\}.
\end{equation}
By Proposition 2.1 in \cite{Bl} we know that
$\{\eta_1\wedge...\wedge\eta_{m-1}\wedge\beta^{n-m}; \;\;
\eta_1,...,\eta_{m-1}\in \Gamma_m\}\subset\Gamma_m^*$; moreover, if we consider
$
\Gamma_m^{**}
=\{\eta\in \hbox{\bbbld C}_{(1,1)}: \eta\wedge\gamma\geq 0
\;\; \text{for every} \;\; \gamma\in\Gamma_m^*\}
$
then we have
$$
\Gamma_m=\Gamma_m^{**}
$$
as $ \{\eta_1\wedge...\wedge\eta_{m-1}\wedge\beta^{n-m};
\;\; \eta_1,...,\eta_{m-1}\in \Gamma_m\}^*\subset \Gamma_m$.
Therefore
\begin{equation}
\label{pr1-2}
\Gamma_m^*
=\{\eta_1\wedge...\wedge\eta_{m-1}\wedge\beta^{n-m};
\;\; \eta_1,...,\eta_{m-1}\in \Gamma_m\}.
\end{equation}
Since $\Gamma_n\subset\Gamma_{n-1}\subset ... \subset\Gamma_1$, we thus obtain
$$
\Gamma_n^*\supset \Gamma_{n-1}^*\supset ...\supset \Gamma_1^*
=\{ t\beta^{n-1}; t\geq 0\}.
$$
In particular, if $\eta\in\Gamma_m^*$ has a representation
$$
\sum a^{j\bar k} i^{(n-1)^2}\hat{dz_j} \wedge\hat{d\bar{z_k}}
$$
(this notation means that in the $(n-1,n-1)$-form only $dz_j$ and $d\bar{z_k}$ are omitted from the
complete form $dz\wedge d\bar{z}$, at the $j$-th and $k$-th positions) then
the Hermitian matrix $(a^{j\bar k})$ is
nonnegative definite. In the language of differential forms, a $C^2$ smooth function $u$ is
$m$-subharmonic ($m$-sh for short) if
$$
dd^cu\wedge\beta^{n-1}
\geq 0, ...,(dd^cu)^m\wedge\beta^{n-m}\geq 0
\;\; \text{ at every point in }\;\; \Omega.
$$
\begin{definition}
\label{pr-de-1}
Let $u$ be a subharmonic function on an open subset $\Omega\subset \hbox{\bbbld C}^n$.
Then $u$ is called $m$-subharmonic if for any collection of
$\eta_1,...,\eta_{m-1}$ in $\Gamma_m$, the inequality
$$
dd^cu\wedge \eta_1\wedge ... \wedge\eta_{m-1}\wedge\beta^{n-m}
\geq 0
$$
holds in the sense of currents.
Let $SH_m(\Omega)$ denote the set of all $m$-sh functions in $\Omega$.
\end{definition}
\begin{remark}
\label{pr-re-2}
{\bf (a)} The condition \eqref{pr1-1} is equivalent to
$dd^cu\wedge \eta\geq 0$ for every $\eta\in\Gamma_m^*$ by \eqref{pr1-2}.
Hence, a subharmonic function $u$ is $m$-subharmonic if
\begin{equation}
\label{pr-re}
\int_\Omega u~dd^c\phi\wedge\eta
= \int_\Omega u
\sum_{j,k=1}^na^{j\bar k} \frac{\partial^2\phi}{\partial z_j\partial\bar{z_k}} \beta^n
\geq 0
\end{equation}
for every non-negative test function $0\leq\phi$ in $\Omega$
and for every nonnegative definite Hermitian matrix
$(a^{j\bar{k}})$ of constant coefficients such that
$\eta=
\sum_{j,k=1}^na^{j\bar k} i^{(n-1)^2} dz_1\wedge ...
\wedge\hat{dz_j}\wedge ...\wedge dz_n\wedge d\bar{z_1}\wedge ...
\wedge\hat{d\bar{z_k}}\wedge ...\wedge d\bar{z_n}
$
belongs to $\Gamma_{m}^*$.
This means that $u$ is subharmonic with respect to
a family of elliptic operators with constant coefficients.
{\bf (b)} A $C^2$ function $v$ is $m$-subharmonic
iff $dd^cv(z)$ belongs to $\Gamma_m$ at every $z\in \Omega$. Hence
$$
dd^cu\wedge dd^cv_1\wedge ... \wedge dd^cv_{m-1}\wedge\beta^{n-m}
\geq 0
$$
holds in $\Omega$ in the weak sense of currents,
for every collection $v_1, ... ,v_{m-1}\in SH_m\cap C^2(\Omega)$ and any
$u\in SH_m(\Omega)$.
\end{remark}
\begin{proposition}
\label{pr-pr-3}
Let $\Omega\subset \hbox{\bbbld C}^n$ be a bounded open subset. Then
\begin{enumerate}
\item
\label{pr-pr3-1}
$ PSH(\Omega)
= SH_n (\Omega)\subset SH_{n-1} (\Omega)\subset\cdots \subset SH_1 (\Omega)
= SH(\Omega)$.
\item
\label{pr-pr3-2}
$SH_m (\Omega)$ is a convex cone.
\item
\label{pr-pr3-3}
The limit of a decreasing sequence in $SH_m(\Omega)$ belongs to $SH_m(\Omega)$.
Moreover, the standard regularization $u\ast\rho_{\varepsilon}$
of an $m$-sh function is again an $m$-sh function.
Here $\rho_\varepsilon(z)=\frac{1}{\varepsilon^{2n}}\rho(\frac{z}{\varepsilon})$,
$\rho(z)=\rho(\|z\|^2)$ is a smoothing kernel,
with $\rho: \hbox{\bbbld R}_+\rightarrow \hbox{\bbbld R}_+$ defined by
$$
\rho(t) =
\begin{cases}
\frac{C}{(1-t)^2} \exp(\frac{1}{t-1}) &\ {\rm if}\ 0\leq t\leq 1, \\
0 &\ {\rm if}\ t>1,
\end{cases}
$$
for a constant C such that
$$
\int_{\hbox{\bbbld C}^n} \rho(\|z\|^2)\beta^n=1.
$$
\item
\label{pr-pr3-4}
If $u\in SH_m (\Omega)$\ and $\gamma: \hbox{\bbbld R} \rightarrow \hbox{\bbbld R}$\ is a convex,
nondecreasing function then $\gamma\circ u\in SH_m(\Omega)$.
\item
\label{pr-pr3-5}
If $u, v\in SH_m(\Omega)$ then $\max\{u,v\}\in SH_m(\Omega)$.
\item
\label{pr-pr3-6}
Let $\{u_\alpha\}\subset SH_m(\Omega)$ be locally uniformly bounded from above and
$u= \sup u_\alpha$. Then the upper semi-continuous regularization
$u^*$ is $m$-sh and is equal to $u$ almost everywhere.
\end{enumerate}
\end{proposition}
\begin{proof}
\eqref{pr-pr3-1} and \eqref{pr-pr3-2} and the first part of \eqref{pr-pr3-3}
are obvious from the definition of $m$-sh functions.
From the formula \eqref{pr-re} we have, for $\eta \in \Gamma_m^*$,
$$
\int (u\ast\rho_{\varepsilon})~dd^c\phi\wedge\eta
= \int u~dd^c(\phi\ast\rho_{\varepsilon})\wedge \eta\geq 0,
$$
since $\phi\ast\rho_{\varepsilon}$ is again a nonnegative test function.
Thus \eqref{pr-pr3-3} is proved.
For \eqref{pr-pr3-4}, the smooth function $\gamma\ast\rho_\varepsilon$
(the standard regularization on $\hbox{\bbbld R}$)
is convex and increasing, therefore $(\gamma\ast\rho_\varepsilon)\circ u\in SH_m(\Omega)$.
Since
$(\gamma\ast\rho_\varepsilon)\circ u$ decreases to $\gamma\circ u$
as $\varepsilon\rightarrow 0$, applying
the first part of \eqref{pr-pr3-3} we have $\gamma\circ u\in SH_m(\Omega)$.
In order to prove \eqref{pr-pr3-5},
note that by using \eqref{pr-pr3-3} it is enough to show that
$w=\max \{u_\varepsilon, v_\varepsilon\}$ is $m$-sh,
where $u_\varepsilon:= u*\rho_\varepsilon, v_\varepsilon:= v*\rho_\varepsilon$.
Since $w$ is semi-convex, i.e. there is a constant $C=C_\varepsilon >0$ large enough that
$w+C\|z\|^2=\max \{u_\varepsilon+C\|z\|^2, v_\varepsilon+C\|z\|^2\}$ is a convex function
in $\hbox{\bbbld R}^{2n}$,
it has second derivatives almost everywhere and $dd^c w(x) \in \Gamma_m$ for almost
every $x$ in $\Omega$.
Let $w_\varepsilon$ be a regularization of $w$;
from the convolution formula
$w_\varepsilon(x) = \int_\Omega w(x-\varepsilon y)\rho(y)\beta^n(y)$ we have
$$
dd^c w_\varepsilon(x) = \int_\Omega dd^c w(x-\varepsilon y)\rho(y)\beta^n(y).
$$
Thus, for $\eta \in \Gamma_m^*$
$$
dd^c w_\varepsilon(x)\wedge\eta
= \int_\Omega\left[ dd^c w(x-\varepsilon y)\wedge\eta\right] \rho(y)\beta^n(y)
\geq 0.
$$
\eqref{pr-pr3-6} is a consequence of \eqref{pr-pr3-5} and Choquet's Lemma.
\end{proof}
\subsection{The complex Hessian operator}
\label{pr2}
For $1\leq k\leq m$, $u_1, ..., u_k\in SH_m\cap L^\infty_{loc}(\Omega)$ the operator
$dd^cu_k\wedge dd^cu_{k-1}\wedge ... \wedge dd^cu_1\wedge\beta^{n-m}$
is defined inductively by (see \cite{Bl}, \cite{DK})
$$
dd^cu_k\wedge dd^cu_{k-1}\wedge ... \wedge dd^cu_1\wedge\beta^{n-m}
:= dd^c(u_kdd^cu_{k-1}\wedge ... \wedge dd^cu_1\wedge\beta^{n-m})
\leqno(H_k)
$$
which is a closed positive current of bidegree $(n-m+k,n-m+k)$.
This operator is also continuous under
decreasing sequences and symmetric (see Remark~\ref{pr-reth9}).
In the case $k=m$, $dd^cu_1\wedge dd^cu_2\wedge ... \wedge dd^cu_m\wedge\beta^{n-m}$
is a nonnegative Borel measure; in particular, when $u=u_1=...=u_m$, the measure
$(dd^cu)^m\wedge\beta^{n-m}$ is well-defined for $u\in SH_m\cap L_{loc}^\infty(\Omega)$.
The above definitions essentially follow from the analogous definitions of
Bedford and Taylor (\cite{BT1}, \cite{BT2}) for plurisubharmonic functions.
\begin{proposition}[Chern-Levine-Nirenberg inequalities]
\label{cln}
Let $K\subset\subset U \subset\subset \Omega$, where $K$ is compact, $U$ is open.
Let $u_1,...,u_k\in SH_m\cap L^\infty(\Omega)$, $1\leq k\leq m$,
and $v\in SH_m(\Omega)$. Then there exists a
constant $C=C_{K,U,\Omega}\geq 0$ such that
\begin{enumerate}
\item[(i)]
$\|dd^cu_1\wedge ... \wedge dd^cu_k\wedge\beta^{n-m}\|_{K}
\leq C~\|u_1\|_{ L^\infty(U)} ... \|u_k\|_{ L^\infty(U)},$
\item[(ii)]
$\|dd^cu_1\wedge ... \wedge dd^cu_k\wedge\beta^{n-m}\|_{K}
\leq C~\|u_1\|_{ L^1(\Omega)}.\|u_2\|_{ L^\infty(\Omega)} ... \|u_k\|_{ L^\infty(\Omega)},$
\item[(iii)]
$\|vdd^cu_1\wedge ... \wedge dd^cu_k\wedge\beta^{n-m}\|_{K}
\leq C~\|v\|_{ L^1(\Omega)}.\|u_1\|_{ L^\infty(\Omega)} ... \|u_k\|_{ L^\infty(\Omega)}.$
\end{enumerate}
\end{proposition}
\begin{proof}
{\bf (i)} By induction we only need to prove that
$$
\|dd^cu_1\wedge ... \wedge dd^cu_k\wedge\beta^{n-m}\|_{K}
\leq C~\|u_1\|_{ L^\infty(U)} \|dd^cu_2\wedge ...
\wedge dd^cu_k\wedge\beta^{n-m}\|_U.
$$
In fact, let $\chi\geq 0$ be a test function supported in $U$ and equal to 1 on $K$.
Then an integration by parts yields
$$
\|dd^cu_1\wedge ... \wedge dd^cu_k\wedge\beta^{n-m}\|_{K}
\leq C\int_U\chi dd^cu_1\wedge ... \wedge dd^cu_k\wedge\beta^{n-k}
=C\int_Uu_1dd^c\chi\wedge ... \wedge dd^cu_k\wedge\beta^{n-k}.
$$
Thus,
$$
\|dd^cu_1\wedge ... \wedge dd^cu_k\wedge\beta^{n-m}\|_{K}
\leq C' \|u_1\|_{ L^\infty(U)}\|dd^cu_2\wedge ...
\wedge dd^cu_k\wedge\beta^{n-m}\|_U,
$$
where $C'$ depends only on bounds of coefficients of $dd^c\chi$ and on the set $U$.

{\bf (ii)} This is a simple consequence of (i) and the estimate
$\| dd^c w \wedge\beta^{n-1}\|_K\leq C_{K,U}\|w\|_{ L^1(U)}$,
valid for every $w\in SH_m(\Omega)$ (see \cite{D2}, Remark 3.4).

{\bf (iii)} See \cite{D2}, Proposition 3.11.
\end{proof}
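As a quick consistency check of (i), assume for this remark only the common normalization
$\beta=dd^c\|z\|^2$ and take $u_1=...=u_k=\|z\|^2$. Then the left-hand side is the mass
$$
\|\beta^{n-m+k}\|_K=\int_K\beta^{n-m+k}\wedge\beta^{m-k}=\int_K\beta^n,
$$
while the right-hand side equals $C\,\big(\sup_U\|z\|^2\big)^k$, so the inequality reduces to
an obvious volume bound; in particular it shows that the constant $C$ must indeed depend
on $K$ and $U$.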
\subsection{$m$-pseudoconvex domains}
\label{pr3}
Let $\Omega$ be a bounded domain with $\partial\Omega$ in the class $C^2$.
Let $\rho\in C^2$ in a neighborhood of $\bar{\Omega}$
be a defining function of $\Omega$, i.e. a function such that
$$
\rho<0 \;\; \text{on} \;\; \Omega, \;\;\;\; \rho
= 0 \;\; \text{and} \;\; d\rho\ne 0
\;\; \text{ on } \;\; \partial\Omega.
$$
\begin{definition}
\label{pr-de-4}
A $C^2$ bounded domain is called strongly $m$-pseudoconvex
if there is a defining function $\rho$ and some $\varepsilon>0$
such that $(dd^c\rho)^k\wedge\beta^{n-k}\geq \varepsilon\beta^n$
in $\bar{\Omega}$ for every $1\leq k\leq m$.
\end{definition}
It is obvious that a strongly pseudoconvex domain is a strongly $m$-pseudoconvex domain.
The properties of strongly $m$-pseudoconvex domains are similar
to those of strongly pseudoconvex domains; e.g., it can be
shown that strong $m$-pseudoconvexity is characterized by
a condition on the boundary (see \cite{Li}, Theorem 3.1).
We also have the criterion that if the Levi form of $\Omega$
corresponding to $\rho$ belongs to
the interior of $\Gamma_{m-1}$ then $\Omega$
is strongly $m$-pseudoconvex (see \cite{Li}, Proposition 3.3).
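For example, assuming the normalization $\beta=dd^c\|z\|^2$, the unit ball
$B=\{\|z\|<1\}$ is strongly $m$-pseudoconvex for every $1\leq m\leq n$: the defining
function $\rho(z)=\|z\|^2-1$ satisfies
$$
(dd^c\rho)^k\wedge\beta^{n-k}=\beta^n, \quad 1\leq k\leq m,
$$
so Definition~\ref{pr-de-4} holds with $\varepsilon=1$.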
\subsection{Cegrell's inequalities for the complex Hessian operator}
\label{pr4}
It is sufficient for our purpose in this section to work within the class of
$m$-sh functions which are continuous
up to the boundary and equal to 0 on the boundary.
Let $\Omega$ be a strongly $m$-pseudoconvex domain in $\hbox{\bbbld C}^n$,
we denote
$$
\mathcal{E}_0(m)
=\lbrace u\in SH_m(\Omega) \cap C(\bar\Omega); ~u_{|_{\partial\Omega}}=0,
~ \int_\Omega(dd^cu)^m\wedge\beta^{n-m}<+\infty \rbrace.
$$
For the case $m=n$, this class was introduced by Cegrell in \cite{Ce1}.
It is a convex cone for $1\leq m\leq n$
(see \cite{Ce1}, p.~188). Our goal is to establish inequalities
very similar to the one due to Cegrell (see \cite{Ce2}, Lemma 5.4, Theorem 5.5)
for the Monge-Amp\`ere operator. In order to avoid confusion and
trivial statements, we only consider $2\leq m\leq n-1$.
\begin{proposition}
\label{pr-pr-5}
Let $u,v,h\in\mathcal{E}_0(m)$ and $1\leq p, q\leq m$ with $p+q\leq m$. Set $T=-hS$, where
$S=dd^ch_1\wedge...\wedge dd^ch_{m-p-q}\wedge\beta^{n-m}$
with $h_1,...,h_{m-p-q}$ also in $\mathcal{E}_0(m)$. Then
$$
\int_\Omega (dd^cu)^p\wedge (dd^cv)^q\wedge T
\leq\left[\int_\Omega
(dd^cu)^{p+q}\wedge T\right]^\frac{p}{p+q}
\left[\int_\Omega (dd^cv)^{p+q}\wedge T\right]^\frac{q}{p+q}.
$$
\end{proposition}
\begin{proof}
See Lemma 5.4 in \cite{Ce2}.
We only remark here that both sides of the inequality are finite because of the
convexity of the cone $\mathcal{E}_0(m)$.
\end{proof}
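As a simple illustration (recorded here for orientation), the case $p=q=1$ of
Proposition~\ref{pr-pr-5}, available for $m\geq 2$, is a Cauchy--Schwarz-type inequality:
$$
\int_\Omega dd^cu\wedge dd^cv\wedge T
\leq\left[\int_\Omega (dd^cu)^{2}\wedge T\right]^\frac{1}{2}
\left[\int_\Omega (dd^cv)^{2}\wedge T\right]^\frac{1}{2}.
$$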
\begin{remark}
\label{pr-repr5}
The statement of Proposition~\ref{pr-pr-5} is still true when
$h\in SH_m\cap L^\infty(\Omega)$,
$\lim_{\zeta \rightarrow \partial\Omega} h(\zeta)=0$ and
$\int_\Omega (dd^ch)^m\wedge \beta^{n-m}<+\infty$, since the integration by
parts formula is valid as in the continuous case (see \cite{Ce2}, Corollary 3.4).
\end{remark}
Applying Proposition~\ref{pr-pr-5} to some special cases of
$m$-sh functions in $\mathcal{E}_0(m)$ we obtain
\begin{corollary}
\label{pr-co-6}
Let $u,v,h\in\mathcal{E}_0(m)$ and $1\leq p\leq m-1$. Then
\begin{enumerate}
\item[(i)]
\begin{equation*}
\begin{aligned}
\int_\Omega -h (dd^cu)^p\wedge (dd^cv)^{m-p}\wedge & \beta^{n-m} \\
& \leq\left[\int_\Omega
- h(dd^cu)^m\wedge\beta^{n-m}\right]^\frac{p}{m}
\left[\int_\Omega -h(dd^cv)^m\wedge\beta^{n-m}\right]^\frac{m-p}{m},
\end{aligned}
\end{equation*}
\item[(ii)]
$\int_\Omega(dd^cu)^p\wedge (dd^cv)^{m-p}\wedge\beta^{n-m}
\leq\left[\int_\Omega (dd^cu)^m\wedge\beta^{n-m}\right]^\frac{p}{m}
\left[\int_\Omega(dd^cv)^m\wedge\beta^{n-m}\right]^\frac{m-p}{m}.$
\end{enumerate}
\end{corollary}
\begin{proof}
{\bf (i)} follows from Proposition~\ref{pr-pr-5} applied with $q=m-p$, so that $S=\beta^{n-m}$ and $T=-h\beta^{n-m}$.

{\bf (ii)} comes from the fact that for
$\rho$ a defining function of $\Omega$ we have
$$
\int_\Omega(dd^cu)^p\wedge (dd^cv)^{m-p}\wedge\beta^{n-m}
=\lim_{\varepsilon\rightarrow 0}\int_{\{\rho< -\varepsilon\}}
(dd^cu)^p\wedge (dd^cv)^{m-p}\wedge\beta^{n-m},
$$
and
\begin{align*}
& \int_{U_\varepsilon}(dd^cu)^p\wedge (dd^cv)^{m-p}\wedge\beta^{n-m} \\
&\leq \int_\Omega -h^*_{U_\varepsilon,\Omega}(dd^cu)^p
\wedge (dd^cv)^{m-p}\wedge\beta^{n-m}\\
&\leq\left[\int_\Omega -h^*_{U_\varepsilon,\Omega}(dd^cu)^m
\wedge\beta^{n-m}\right]^\frac{p}{m}
\left[ \int_\Omega -h^*_{U_\varepsilon,\Omega}(dd^cv)^m
\wedge\beta^{n-m}\right]^\frac{m-p}{m}\\
&\leq \left[\int_\Omega (dd^cu)^m\wedge\beta^{n-m}\right]^\frac{p}{m}
\left[\int_\Omega (dd^cv)^m\wedge\beta^{n-m}\right]^\frac{m-p}{m},
\end{align*}
where $U_\varepsilon=\{\rho<-\varepsilon\}$ and
$ h_{U_\varepsilon,\Omega}
= \sup\{u\in SH_m(\Omega);~ u\leq 0;~ u_{|_{U_\varepsilon}}\leq -1\}$.
It is clear that $-1\leq h^*_{U_\varepsilon,\Omega}\leq 0$,
$\lim_{\zeta\rightarrow\partial\Omega}h^*_{U_\varepsilon,\Omega}(\zeta)=0$ and
$\int_\Omega (dd^c h^*_{ U_\varepsilon,\Omega})^m\wedge \beta^{n-m}<+\infty$.
Hence the inequality (i) is still applicable by Remark~\ref{pr-repr5}.
\end{proof}
\subsection{$m$-capacity, convergence theorems, the comparison principle}
\label{pr}
For $E$ a Borel set in $\Omega$ we define
$$
cap_m(E,\Omega)
= \sup \{ \int_E (dd^cu)^m\wedge \beta^{n-m},
\; u\in SH_m(\Omega), \; 0\leq u\leq 1\}.
$$
In view of Proposition~\ref{cln}, it is finite as soon as
$E$ is relatively compact in $\Omega$.
This is the version of the relative capacity in the case of
$m$-subharmonic functions. It is a useful tool for
establishing convergence properties, especially the comparison principle.
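Let us also record some elementary properties which follow immediately from the
definition (the verification is straightforward). For Borel sets $E_1\subset E_2\subset\Omega$,
for domains $\Omega_1\subset\Omega_2$ with $E\subset\Omega_1$, and for Borel sets
$E_j\subset\Omega$,
$$
cap_m(E_1,\Omega)\leq cap_m(E_2,\Omega),\quad
cap_m(E,\Omega_2)\leq cap_m(E,\Omega_1),\quad
cap_m(\cup_j E_j,\Omega)\leq\sum_j cap_m(E_j,\Omega).
$$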
\begin{theorem}[Convergence theorem]
\label{pr-th-9}
Let $\{u_k^j\}_{j=1}^{\infty}$, $k=1,...,m$ be locally uniformly bounded sequences of
$m$-subharmonic functions in $\Omega$,
$u_k^j\rightarrow u_k\in SH_m\cap L^\infty(\Omega)$ in $\mathop{cap}_m$ as $j\rightarrow \infty$.
Then
$$
\lim_{j\rightarrow\infty} dd^cu_1^j\wedge ....\wedge dd^cu_m^j\wedge\beta^{n-m}
= dd^cu_1\wedge ...\wedge dd^c u_m\wedge\beta^{n-m}
$$
in the topology of currents.
\end{theorem}
\begin{proof}
See the proof of Theorem 1.11 in \cite{K2}.
\end{proof}
\begin{remark}
\label{pr-reth9}
One may prove, as in Theorem 2.1 of \cite{BT2}, the following: for $1\leq k\leq m$, let $u_1^j,...,u_k^j$
be decreasing sequences of locally bounded $m$-sh functions such that
$\lim_{j\rightarrow\infty}u^j_l(z)=u_l(z)\in SH_m\cap L^\infty_{loc}(\Omega)$
for all $z\in\Omega$ and $1\leq l\leq k$. Then
$$
\lim_{j\rightarrow\infty} dd^cu_1^j\wedge ....\wedge dd^cu_k^j\wedge\beta^{n-m}
= dd^cu_1\wedge ...\wedge dd^c u_k\wedge\beta^{n-m}
$$
in the sense of currents. Thus, the currents obtained in the inductive definition
$(H_k)$ of the wedge product of currents associated to locally bounded
$m$-sh functions are closed positive currents.
\end{remark}
\begin{proposition}
\label{pr-pr-10}
If $u_j\in SH_m\cap L^\infty(\Omega)$ is a sequence decreasing to a bounded function
$u$ in $\Omega$ then it converges to $u\in SH_m\cap L^\infty(\Omega)$ with respect to
$\mathop{cap}_m$. In particular, Theorem~\ref{pr-th-9} holds in this case.
\end{proposition}
\begin{proof}
See Proposition 1.12 in \cite{K2}.
\end{proof}
\begin{theorem}[Quasi-continuity]
\label{pr-th-11}
For an $m$-subharmonic function $u$ defined in $\Omega$ and for each
$\varepsilon>0$, there is an open subset $U$ such that
$\mathop{cap}_m(U,\Omega) < \varepsilon$ and $u$ is continuous in $\Omega \setminus U$.
\end{theorem}
\begin{proof}
See Theorem 1.13 in \cite{K2}.
\end{proof}
From the quasi-continuity of $m$-subharmonic functions
one can derive several important results.
\begin{theorem}
\label{pr-th-12}
Let $u,v$ be locally bounded $m$-sh functions on $\Omega$.
Then we have an inequality of measures
$$
(dd^c\max\{u,v\})^m\wedge\beta^{n-m}
\geq {\bf 1}_{\{u\geq v\}}(dd^cu)^m\wedge\beta^{n-m}
+{\bf 1}_{\{u<v\}}(dd^cv)^m\wedge\beta^{n-m}.
$$
\end{theorem}
\begin{proof}
See Theorem 6.11 in \cite{D1}.
\end{proof}
\begin{theorem}[Comparison principle]
\label{pr-th-13}
Let $\Omega$ be an open bounded subset of $\hbox{\bbbld C}^n$.
For $u,v\in SH_m\cap L^\infty(\Omega)$ satisfying
$\liminf_{\zeta\rightarrow z}(u-v)(\zeta)\geq 0$ for any $z\in \partial \Omega$, we have
$$
\int_{\{u<v\}} (dd^cv)^m\wedge\beta^{n-m}
\leq \int_{\{u<v\}} (dd^cu)^m\wedge\beta^{n-m}.
$$
\end{theorem}
\begin{proof}
The proof follows the lines of the proof of Theorem 1.16 in \cite{K2}.
First consider $u,v\in C^\infty(\Omega)$,
$E=\{u<v\}\subset\subset\Omega$, and smooth $\partial\Omega$.
In this case, put $u_\varepsilon=\max\{u+\varepsilon,v\}$ and use Stokes' theorem to get
\begin{equation}
\label{pr-th13-0}
\begin{aligned}
& \int_E(dd^cu_\varepsilon)^m\wedge\beta^{n-m}
=\int_{\partial E} d^cu_\varepsilon\wedge (dd^c u_\varepsilon)^{m-1}\wedge\beta^{n-m} \\
&= \int_{\partial E} d^cu\wedge (dd^c u)^{m-1}\wedge\beta^{n-m}
=\int_{ E} (dd^c u)^{m}\wedge\beta^{n-m}
\end{aligned}
\end{equation}
(since $u_\varepsilon=u+\varepsilon$ on a neighborhood of $\partial E$).
By Theorem~\ref{pr-th-9}, $(dd^cu_\varepsilon)^m\wedge\beta^{n-m}$
converges weakly$^*$ to $(dd^cv)^m\wedge\beta^{n-m}$ as
$\varepsilon\rightarrow 0$ on the open set $E$, which implies that
$$
\int_E (dd^cv)^m\wedge\beta^{n-m}
\leq \liminf_{\varepsilon\rightarrow 0}
\int_E(dd^cu_\varepsilon)^m\wedge\beta^{n-m}.
$$
Combining this with \eqref{pr-th13-0} implies the statement.
For the general case, suppose $\|u\|_{L^\infty}, \|v\|_{L^\infty}<1$, and fix $\varepsilon>0$ and $\delta>0$.
From the quasi-continuity, there is an open set $U$ such that
$cap_m(U,\Omega)<\varepsilon$ and $u=\tilde{u}$, $v=\tilde v$ on
$\Omega\setminus U$ for some continuous functions
$\tilde u$, $\tilde v$ in $\Omega$. Let $u_k$, $v_k$
be the standard regularizations of $u$ and $v$.
By Dini's theorem, $u_k$ and $v_k$ converge uniformly (respectively) to
$u$ and $v$ on $\Omega\setminus U$. Then for $k>k_0$ big enough, the subsets
$E(\delta):=\{\tilde u+\delta<\tilde v\}$ and $E_k(\delta):=\{u_k+\delta<v_k\}$ satisfy
\begin{equation}
\label{pr-th13-1}
E(2\delta)\setminus U\subset\subset \bigcap_k E_k(\delta)\setminus U
\;\; \text{ and } \;\; \bigcup_k E_k(\delta)\setminus U
\subset\subset\{\tilde u<\tilde v\}.
\end{equation}
In what follows we shall often use the estimate
$$
\int_U (dd^cw)^m\wedge\beta^{n-m}
\leq cap_m(U,\Omega)<\varepsilon
\;\; \text{ where } \;\; 0\leq w\leq 1,
$$
without mentioning it explicitly each time.
Since $\{u+2\delta<v\}=\{\tilde u+2\delta<\tilde v\}$ on $\Omega\setminus U$,
\begin{equation}
\label{pr-th13-2}
\begin{aligned}
\int_{\{u+2\delta<v\}}(dd^cv)^m \wedge\beta^{n-m}
&\leq\int_{\{\tilde u+2\delta<\tilde v\}\setminus U}
(dd^cv)^m\wedge\beta^{n-m} +\varepsilon \\
& = \int_{E(2\delta)\setminus U}
(dd^cv)^m\wedge\beta^{n-m} +\varepsilon.
\end{aligned}
\end{equation}
Since $(dd^cv_k)^m\wedge\beta^{n-m}$ converges weakly$^*$ to
$(dd^cv)^m\wedge\beta^{n-m}$ and
$E(2\delta)$ is open, using \eqref{pr-th13-1} we get
\begin{equation}
\label{pr-th13-3}
\begin{aligned}
\int_{E(2\delta)}(dd^cv)^m\wedge\beta^{n-m}
& \leq \liminf_{k\rightarrow\infty}\int_{E(2\delta)}
(dd^cv_k)^m\wedge\beta^{n-m} \\
& \leq \liminf_{k\rightarrow\infty}\int_{E_k(\delta)}
(dd^cv_k)^m\wedge\beta^{n-m}+\varepsilon.
\end{aligned}
\end{equation}
Now, from Sard's theorem, we may assume that $E_k(\delta)$ has smooth boundary
(changing $\delta$ if needed), thus using the argument of the smooth case we have
\begin{equation}
\label{pr-th13-4}
\int_{E_k(\delta)}(dd^cv_k)^m\wedge\beta^{n-m}
\leq \int_{E_k(\delta)}(dd^cu_k)^m\wedge\beta^{n-m}.
\end{equation}
Therefore, by \eqref{pr-th13-2}, \eqref{pr-th13-3} and \eqref{pr-th13-4}, we have
\begin{equation}
\label{pr-th13-5}
\int_{\{u+2\delta<v\}}(dd^cv)^m\wedge\beta^{n-m}
\leq \liminf_{k\rightarrow\infty}\int_{E_k(\delta)}
(dd^cu_k)^m\wedge\beta^{n-m}+2\varepsilon.
\end{equation}
Furthermore, using \eqref{pr-th13-1} and the fact that
$(dd^cu_k)^m\wedge\beta^{n-m}$ weakly$^*$ converges to
$(dd^cu)^m\wedge\beta^{n-m}$ we obtain
\begin{equation}
\label{pr-th13-6}
\limsup_{k\rightarrow\infty}\int_{\overline{\cup_k E_k(\delta)\setminus U}}
(dd^cu_k)^m\wedge\beta^{n-m}
\leq \int_{\overline{\cup_k E_k(\delta)\setminus U}}
(dd^cu)^m\wedge\beta^{n-m}.
\end{equation}
Thus, from \eqref{pr-th13-1}, \eqref{pr-th13-5} and \eqref{pr-th13-6} one has
\begin{equation}
\label{pr-th13-7}
\int_{\{u+2\delta<v\}}(dd^cv)^m\wedge\beta^{n-m}
\leq \int_{\{\tilde u<\tilde v\}}(dd^cu)^m\wedge\beta^{n-m}+3\varepsilon
\leq \int_{\{u<v\}}(dd^cu)^m\wedge\beta^{n-m}+4\varepsilon.
\end{equation}
Finally, letting $\delta$ and $\varepsilon$ tend to $0$ in \eqref{pr-th13-7} the statement is proved.
\end{proof}
\begin{corollary}
\label{pr-co-14}
Under the assumption of Theorem~\ref{pr-th-13} we have
\begin{enumerate}
\item[(a)]
If $(dd^cu)^m\wedge\beta^{n-m}\leq (dd^cv)^m\wedge\beta^{n-m}$ then $v\leq u$,
\item[(b)]
If $(dd^cu)^m\wedge\beta^{n-m} = (dd^cv)^m\wedge\beta^{n-m}$
and $\lim_{\zeta\rightarrow z}(u-v)(\zeta)=0$ for $z\in\partial\Omega$ then $u=v$,
\item[(c)]
If $\lim_{\zeta\rightarrow \partial \Omega} u(\zeta)
=\lim_{\zeta\rightarrow \partial \Omega} v(\zeta)=0$
and $u\leq v$ in $\Omega$, then
$$
\int_\Omega (dd^c v)^m\wedge \beta^{n-m}
\leq \int_\Omega (dd^cu)^m\wedge \beta^{n-m}.
$$
\end{enumerate}
\end{corollary}
\begin{proof}
For (a) and (b) see Corollary 1.17 in \cite{K2}. For (c), let $\varepsilon>0$,
applying Theorem~\ref{pr-th-13} we have
$$
\int_\Omega (dd^c v)^m\wedge \beta^{n-m}
\leq (1+\varepsilon)^m\int_\Omega (dd^cu)^m\wedge \beta^{n-m}.
$$
Letting $\varepsilon \rightarrow 0$ then gives the result.
\end{proof}
\bigskip
\section{Subsolution theorem}
\label{su}
\setcounter{equation}{0}
In this section we will prove our main theorem. The method we use here is similar to the
one from the proof of the plurisubharmonic case (see \cite{K2}, Theorem 4.7). We first recall
the theorem due to Dinew and Ko\l odziej about the weak solution of the complex Hessian
equation with the right hand side in $ L^p$ (see \cite{DK}, Theorem 2.10). From now on we
only consider $1<m<n$.
\begin{theorem}[\cite{DK}]
\label{su-th-0}
Let $\Omega$ be a smoothly strongly $m$-pseudoconvex domain. Then for $p>n/m$,
$f\in L^p(\Omega)$ and a continuous function $\varphi$ on $\partial\Omega$ there exists
$u\in SH_m(\Omega)\cap C(\bar\Omega )$ satisfying
$$
(dd^cu)^m\wedge\beta^{n-m}=f\beta^n ,
$$
and $u=\varphi$ on $\partial\Omega$.
\end{theorem}
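As a simple explicit illustration (assuming, for this example only, the normalization
$\beta=dd^c\|z\|^2$): for $\Omega=B$ the unit ball, $\varphi=0$ and $f\equiv\lambda>0$
constant, the function
$$
u(z)=\lambda^{1/m}(\|z\|^2-1)
$$
is smooth, $m$-sh, vanishes on $\partial B$ and satisfies
$(dd^cu)^m\wedge\beta^{n-m}=\lambda\beta^n$.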
Let us now state the subsolution theorem:
\begin{theorem}
\label{su-th-1}
Let $\Omega$ be a smoothly strongly $m$-pseudoconvex domain in $\hbox{\bbbld C}^n$,
and let $\mu$ be a finite positive Borel measure in $\Omega$.
If there is a subsolution $v$, i.e.,
\begin{equation}
\label{su-th1-1}
\begin{cases}
v \in SH_m \cap L^{\infty}(\Omega), \\
(dd^cv)^m \wedge \beta^{n-m} \geq d\mu, \\
\lim_{\zeta\rightarrow z} v(\zeta)=\varphi (z)
\text{ for any } z\in\partial\Omega,
\end{cases}
\end{equation}
then there is a solution $u$ of the following Dirichlet problem
\begin{equation}
\label{su-th1-2}
\begin{cases}
u \in SH_m\cap L^{\infty}(\Omega), \\
(dd^cu)^m \wedge \beta^{n-m} = d\mu, \\
\lim_{\zeta\rightarrow z} u(\zeta)
=\varphi (z) \text{ for any } z\in\partial\Omega.
\end{cases}
\end{equation}
\end{theorem}
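Before giving the proof, let us illustrate the scope of the theorem with a simple example
(again assuming the normalization $\beta=dd^c\|z\|^2$). Let $\Omega=B$ be the unit ball
and, for $-1<c<0$, set $v(z)=\max\{\|z\|^2-1,c\}$. By Proposition~\ref{pr-pr-3}, $v$ is a
bounded $m$-sh function, continuous on $\bar{B}$ and vanishing on $\partial B$. The measure
$$
\mu:=(dd^cv)^m\wedge\beta^{n-m}
$$
vanishes on the ball $\{\|z\|^2<1+c\}$ (where $v$ is constant), has density $1$ with respect
to $\beta^n$ on $\{\|z\|^2>1+c\}$, and (as a mass count using Stokes' theorem shows) carries
a nonzero singular part on the sphere $\{\|z\|^2=1+c\}$. Thus $\mu$ has no $L^p$ density and
Theorem~\ref{su-th-0} does not apply to it, while Theorem~\ref{su-th-1} does, with the
subsolution $v$ itself.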
\begin{proof}
We first prove Theorem~\ref{su-th-1} under two extra assumptions:
1) the measure $\mu$ has compact support in $\Omega$;
2) the function $\varphi$ is in the class $C^2$.
Using the first of those conditions we can modify $v$ so that
$v$ is $m$-subharmonic in a neighborhood of $\bar{\Omega}$ (and is still a subsolution).
To do this take an open subset
$supp~\mu \subset \subset U \subset \subset\Omega$ and consider the envelope
$$
\hat{v}= \sup\lbrace w\in SH_m(\Omega): w\leq 0,~w
\leq v ~\text{ on }~ U \rbrace.
$$
Then from Proposition~\ref{pr-pr-3}-\eqref{pr-pr3-6} $\hat{v}^*$ is a competitor in the definition of
the envelope, hence $\hat{v}=\hat{v}^*\in SH_m(\Omega)$. The balayage procedure implies that
$\hat{v}=v$ on $U$ and $\lim_{\zeta\rightarrow z} \hat{v}(\zeta)=0$ for any
$z\in\partial\Omega$ (the balayage still works
as in the case of plurisubharmonic functions by results in \cite{Bl}, Theorem 1.2, Theorem 3.7).
Thus,
$(dd^c\hat{v})^m\wedge\beta^{n-m}\geq d\mu$ as $supp~ \mu \subset \subset U$.
Next, take $\rho$ a defining function of $\Omega$ which is smooth on a neighborhood
$\Omega_1$ of $\bar{\Omega}$ and $(dd^c\rho)^k\wedge\beta^{n-k}\geq \varepsilon\beta^n$,
$1\leq k\leq m$, in $\bar{\Omega}$ for some
$\varepsilon>0$. Since $\hat{v}$ is bounded we can further choose
$\rho$ satisfying $\rho\leq\hat{v}$ on $\bar{U}$. Put
$$
v_1(z):=
\begin{cases}
\max\{\rho(z),\hat{v}(z)\} &\text{on} \;\; \bar{\Omega}, \\
\rho(z) &\text{on} \;\; \Omega_1\setminus \bar{\Omega}.
\end{cases}
$$
Hence $v_1$ is a subsolution which is defined and $m$-subharmonic
in a neighborhood of $\bar{\Omega }$.
We still write $v$ instead of $v_1$ in what follows. Furthermore,
using the balayage procedure (as above)
one can make the support of $d\nu:=(dd^cv)^m\wedge\beta^{n-m}$ compact in $\Omega$.
Now, we can sketch the rest of the proof of the theorem.
We will approximate $d\mu$ by a sequence of
measures $\mu_j$ for which the Dirichlet problem is solvable
(using Theorem~\ref{su-th-0}) obtaining a
sequence of solutions $\{u_j\}$ corresponding to $\mu_j$.
Then we take a limit point $u$ of $\{u_j\}$ in
$ L^1(\Omega)$. Finally we show that $u_j\rightarrow u$ with respect to $\mathop{cap}_m$ in order to
conclude that $u$ is a solution of \eqref{su-th1-2}.
By the Radon-Nikodym theorem $d\mu=hd\nu$, $0\leq h\leq 1$.
For the subsolution $v$ one can define the
regularizing sequence $w_j\downarrow v$ in a neighborhood of the closure of $\Omega$.
Let us write
$(dd^cw_j)^m\wedge\beta^{n-m}=g_j\beta^n$, $\mu_j:=hg_j\beta^n$.
Then by Proposition~\ref{pr-pr-10}
$\lim_{j\rightarrow\infty} \mu_j=\mu$. As $\mu$ has compact support,
so do the $\mu_j$. In particular,
$hg_j\in L^p(\Omega)$ for every $p>0$.
Therefore, applying Theorem~\ref{su-th-0} we have $u_j$ solving
\begin{equation}
\label{su-pf-1}
\begin{cases}
u_j\in SH_m(\Omega) \cap C(\bar{\Omega}), \\
(dd^cu_j)^m\wedge\beta^{n-m}=\mu_j,\\
u_j (z)=\varphi (z) \text{ for } z\in\partial\Omega.
\end{cases}
\end{equation}
Now we set $u=(\limsup u_j)^*$, and passing to a subsequence we assume that
$u_j$ converges to $u$ in $ L^1(\Omega)$.
From their definition, the $w_j$ are uniformly bounded.
Choosing a uniform constant $C$ such that $w_j-C < \varphi$
on $\partial \Omega$, by Corollary~\ref{pr-co-14}-(a),
$w_j-C\leq u_j\leq\sup_{\bar\Omega}\varphi$. Thus, $\{u_j\} $ is uniformly bounded.
In particular, $u$ is also
bounded and now we shall check that
$\lim_{\Omega\ni\zeta\rightarrow z}u(\zeta)=\varphi(z)$ for every
$z\in\partial\Omega$. For this we only need $\varphi$ to be continuous.
Since $w_j$ converges uniformly to $v$ on $\partial\Omega$ and
$\partial\Omega$ is compact, given $\varepsilon>0$
we have $|w_j-v|<\varepsilon$ on a small neighborhood of $\partial\Omega$
when $j$ is big enough. Since $\varphi$ is continuous on $\partial\Omega$,
there is an approximant $g\in C^2(\overline{\Omega})$ of the continuous extension of
$\varphi$ such that $|g-\varphi|<\varepsilon$ on $\partial\Omega$.
For $A>0$ big enough, $A\rho+g$ is an $m$-sh function.
The comparison principle
then implies that $w_j+A\rho+g-2\varepsilon\leq u_j$ on $\Omega$.
Then $v+A\rho+\varphi-4\varepsilon\leq u_j$ on a small neighborhood of
$\partial\Omega$ for $j$ big enough.
Hence, $v+A\rho+\varphi-4\varepsilon\leq \liminf_{j\rightarrow \infty} u_j\leq u$
on a small neighborhood of $\partial\Omega$.
Because this is true for arbitrary $\varepsilon>0$, we obtain
$\lim_{\zeta\rightarrow z}u(\zeta)=\varphi(z)$ for any $z\in\partial\Omega.$
The difficult part is to show that $u_j$ converges in $\mathop{cap}_m$ to $u$.
\begin{lemma}
\label{su-le-2}
The function $u$ defined above solves the Dirichlet problem \eqref{su-th1-2}
provided that for any $a>0$ and any compact $K\subset\Omega$ we have
\begin{equation}
\label{su-le2-1}
\lim_{j\rightarrow\infty} \int_{K\cap\{u-u_j\geq a\}}
(dd^cu_j)^m\wedge\beta^{n-m}
= \lim_{j\rightarrow\infty}\mu_j(K\cap\{u-u_j\geq a\})
=0.
\end{equation}
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{su-le-2}]
Using Theorem~\ref{pr-th-12} we have
\begin{align*}
(dd^cu_j)^m \wedge \beta^{n-m}
& = 1_{\{u-u_j\geq a\}}(dd^cu_j)^m\wedge\beta^{n-m}
+1_{\{u-u_j<a\}}(dd^cu_j)^m\wedge\beta^{n-m} \\
&\leq 1_{\{u-u_j\geq a\}}\mu_j
+ (dd^c\max\{u,u_j+a\})^m\wedge\beta^{n-m}.
\end{align*}
It follows that
\begin{equation}
\label{su-le2-2}
\mu_j
\leq 1_{\{u-u_j\geq a\}}\mu_j+(dd^c\max\{u-a,u_j\})^m\wedge\beta^{n-m}.
\end{equation}
Now, for any integer $s$ we may choose $j(s)$ such that $\mu_{j(s)}(\{u-u_{j(s)}\geq 1/s\})<1/s$.
From \eqref{su-le2-1} and \eqref{su-le2-2} we infer that
\begin{equation}
\label{su-le2-2'}
\mu
\leq \liminf_{s\rightarrow\infty} (dd^c\rho_s)^m\wedge\beta^{n-m},
\end{equation}
where $\rho_s=\max\{u-1/s, u_{j(s)}\}$; this means that $\mu$ is dominated by
any limit point of the right-hand side.
By the Hartogs lemma, $\rho_s\rightarrow u$ uniformly on any compact $E$
such that $u_{|_E}$ is continuous. So it follows from the quasi-continuity of
$m$-sh functions that $\rho_s$ converges to $u$ in $\mathop{cap}_m$.
Therefore, by Theorem~\ref{pr-th-9}
$(dd^c\rho_s)^m\wedge\beta^{n-m}\rightarrow (dd^cu)^m\wedge\beta^{n-m}$ as measures.
This combined with \eqref{su-le2-2'} implies
\begin{equation}
\label{su-le2-3}
\mu\leq (dd^cu)^m\wedge\beta^{n-m}.
\end{equation}
For the reverse inequality, let
$\Omega_\varepsilon = \{z\in\Omega~; \mathrm{dist}(z,\partial\Omega)>\varepsilon\}$.
We will show that for $\varepsilon>0$
\begin{equation}
\label{su-le2-3'}
\mu(\Omega)\geq \int_{\Omega_\varepsilon} (dd^cu)^m\wedge\beta^{n-m}.
\end{equation}
Indeed, firstly we note that $\rho_s=u_{j(s)}$ on a neighborhood of
$\partial\Omega_\varepsilon$ for $\varepsilon$ small enough
since $u-u_{j(s)}<1/s$ on $\partial\Omega$, $u-u_{j(s)}$ is upper semi-continuous
on $\Omega$ and $\partial\Omega$ is compact.
Hence, by the weak$^*$ convergence $\mu_{j(s)}\rightarrow \mu$ and Stokes' theorem,
\begin{align*}
\mu(\Omega)\geq \mu(\overline{\Omega_\varepsilon})
&\geq \limsup_{j(s)\rightarrow\infty} \mu_{j(s)}(\overline{\Omega_\varepsilon}) \\
&\geq \liminf_{j(s)\rightarrow\infty}\mu_{j(s)}(\Omega_\varepsilon) \\
&= \liminf_{j(s)\rightarrow\infty} \int_{\Omega_\varepsilon}
(dd^cu_{j(s)})^m\wedge\beta^{n-m}
=\liminf_{j(s)\rightarrow\infty} \int_{\Omega_\varepsilon}
(dd^c\rho_s)^m\wedge\beta^{n-m} \\
&\geq \int_{\Omega_\varepsilon} (dd^cu)^m\wedge\beta^{n-m},
\end{align*}
where in the last inequality we use the weak$^*$ convergence
$(dd^c\rho_s)^m\wedge\beta^{n-m}\rightarrow (dd^cu)^m\wedge\beta^{n-m}$.
Therefore, \eqref{su-le2-3'} is proved. Letting $\varepsilon\rightarrow 0$ then gives
$\mu(\Omega)\geq (dd^cu)^m\wedge\beta^{n-m}(\Omega)$.
Thus the measures in \eqref{su-le2-3} are equal. The lemma follows.
\end{proof}
It remains to prove condition \eqref{su-le2-1} in Lemma~\ref{su-le-2} above.
It is a consequence of the following lemma.
\begin{lemma}
\label{su-le-3}
Suppose that there is a subsequence of $\{u_j\}$, still denoted by $\{u_j\}$, such that
\begin{equation*}
\label{su-le3-1}
\int_{\{u-u_j\geq a_0\}} (dd^cu_j)^m\wedge\beta^{n-m}>A_0, \;\; A_0>0, a_0>0.
\end{equation*}
Then, for $0\leq p \leq m$ there exist $a_p$, $A_p$, $k_1>0$ such that
\begin{equation}
\label{su-le3-1'}
\int_{\{u-u_j
\geq a_p\}} (dd^cv_j)^{m-p}\wedge (dd^cv_k)^{p}\wedge\beta^{n-m}>A_p,
\;\; j>k>k_1 ,
\end{equation}
where the $v_j$'s are the solutions (from Theorem~\ref{su-th-0}) of the Dirichlet problem
\begin{equation}
\label{su-le3-2}
\begin{cases}
v_j\in SH_m(\Omega)\cap C(\bar{\Omega}), \\
(dd^cv_j)^m\wedge\beta^{n-m} = \nu_j \; (= g_j\beta^n),\\
v_j(z)= 0 \;\; \text{ on } \;\; \partial\Omega.
\end{cases}
\end{equation}
Note that $\{v_j\}$ is uniformly bounded as a consequence of the uniform boundedness of
$\{w_j\}$ and Corollary~\ref{pr-co-14}-(a).
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{su-le-3}]
We will prove it by induction over $p$. For $p=0$ the statement holds by the hypothesis.
Suppose that \eqref{su-le3-1'} is true for some $p<m$; we shall prove it for $p+1$.
The first observation is that if
$T(r,s):=(dd^cu_r)^q\wedge (dd^cv_s)^{m-q}\wedge\beta^{n-m}$, $0\leq q\leq m$,
then there is a constant $C$ independent of $r,s$ such that
\begin{equation}
\label{su-le3-3}
\int_\Omega T(r,s)\leq C.
\end{equation}
Indeed, fix a $C^2$ extension of $\varphi$ to a neighborhood of the closure of
$\Omega $. If $\rho$ is a defining function of $\Omega$, then there is a constant
$A>0$ such that $A\rho\pm\varphi\in SH_m(\Omega)$.
We shall check that $u_r+A\rho-\varphi$ belongs to $\mathcal{E}_0(m)$.
It is enough to verify
$$
\int_\Omega (dd^c(u_r+A\rho - \varphi))^m\wedge\beta^{n-m}<+\infty.
$$
In fact, from
$(dd^cu_r)^m\wedge \beta^{n-m}
=hg_r \beta^n\leq (dd^c (M_r\rho+\varphi))^m\wedge \beta^{n-m}$
for some $M_r>0$ and Corollary~\ref{pr-co-14}-(a) we have
$u_r\geq M_r\rho +\varphi$ in $\Omega$.
Hence, $u_r+A\rho -\varphi\geq (M_r+A)\rho$ in $\Omega$.
Thus, by Corollary~\ref{pr-co-14}-(c)
$$
\int_\Omega (dd^c(u_r+A\rho-\varphi))^m\wedge \beta^{n-m}
\leq \int_\Omega (dd^c (M_r+A)\rho)^m\wedge \beta^{n-m}<+\infty.
$$
Now, we note that $\mu_r(\Omega)$ and $\nu_s(\Omega)$ are bounded as
$\mu$ and $\nu$ have compact support.
Next, from Cegrell's inequalities, Corollary~\ref{pr-co-6}-(ii), for $1\leq k\leq m-1$, we obtain
\begin{equation*}
\label{su-le3-3.1}
\begin{aligned}
\int_\Omega (dd^c(u_r+A\rho-\varphi))^k
& \wedge(dd^c\rho)^{m-k}\wedge\beta^{n-m} \\
&\leq \left[\int_\Omega (dd^c(u_r+A\rho-\varphi))^m\wedge\beta^{n-m}\right]^\frac{k}{m}
\left[\int_\Omega(dd^c\rho)^m\wedge\beta^{n-m}\right]^\frac{m-k}{m}.
\end{aligned}
\end{equation*}
Hence,
\begin{equation}
\label{su-le3-3.2}
\begin{aligned}
I(r) = &\, \int_\Omega (dd^c(u_r+A\rho-\varphi))^m\wedge\beta^{n-m}\\
\leq &\, \int_\Omega (dd^cu_r)^m\wedge\beta^{n-m}
+\int_\Omega (dd^c(A\rho-\varphi))^m\wedge\beta^{n-m}\\
\;\; &\,+ C(A,\varphi)\sum_{k=1}^{m-1}\int_\Omega (dd^c(u_r
+A\rho -\varphi))^k\wedge(dd^c\rho)^{m-k}\wedge\beta^{n-m} \\
\leq &\, \mu_r(\Omega)+C(A,\rho,\varphi)\\
\;\; &\, +C(A,\varphi)\sum_{k=1}^{m-1}
\left[\int_\Omega(dd^c(u_r + A\rho-\varphi))^m\wedge\beta^{n-m} \right]^\frac{k}{m}
\left[\int_\Omega(dd^c\rho)^m\wedge\beta^{n-m} \right]^\frac{m-k}{m} \\
\leq &\, \mu_r(\Omega)+C(A,\rho,\varphi)
+C'(A,\varphi,\rho) \sum_{k=1}^{m-1}\left[I(r)\right]^\frac{k}{m}.
\end{aligned}
\end{equation}
Consider the two sides of the inequality \eqref{su-le3-3.2} as two positive functions of $r$.
The $\mu_r(\Omega)$ are bounded, and the degree of $I(r)$ on the right-hand side is strictly
less than the degree of $I(r)$ on the left-hand side; therefore $I(r)$ is bounded by
a constant independent of $r$. Again, by Cegrell's inequalities, Corollary~\ref{pr-co-6}-(ii),
as $v_s$ obviously belongs to $\mathcal{E}_0(m)$,
\begin{equation*}
\label{su-le3-3.3}
\begin{aligned}
\int_\Omega T(r,s)
&\leq \int_\Omega (dd^c(u_r+A\rho-\varphi))^q\wedge(dd^cv_s)^{m-q}\wedge\beta^{n-m}\\
&\leq \left[\int_\Omega (dd^c(u_r+A\rho-\varphi))^m\wedge\beta^{n-m}\right]^\frac{q}{m}
\left[ \int_\Omega (dd^cv_s)^m\wedge\beta^{n-m}\right]^\frac{m-q}{m}\\
&\leq \left[I(r)\right]^\frac{q}{m}\left[\nu_s(\Omega)\right]^\frac{m-q}{m}\\
&\leq C''(A,\varphi,\rho),
\end{aligned}
\end{equation*}
because $I(r)$ and $\nu_s(\Omega)$ are bounded. Thus we have proved \eqref{su-le3-3}.
We may assume that $-1<u_j, v_j<0$:
since all functions $u_j,v_j$ are uniformly bounded by a constant independent of $j$,
the estimates in the statement of Lemma~\ref{su-le-3} will only be changed by a uniform
positive constant. To simplify notation we set
$S(j,k):=(dd^cv_j)^{m-p-1}\wedge (dd^cv_k)^p\wedge\beta^{n-m}$.
Fix a positive number $d>0$ (specified later in \eqref{su-le3-6})
and recall that we need a uniform estimate from below for
$\int_{\{u-u_j\geq d\}} dd^cv_k\wedge S(j,k)$.
From the assumption on $u_j, v_j$, we have $u-u_j\leq {\bf 1}_{\{u-u_j \geq d\}}+d$.
It follows that
\begin{equation*}
\label{su-le3-4'}
\begin{aligned}
J(j,k):=
\int_\Omega (u-u_j)(dd^cv_k)\wedge S(j,k)
& \leq \int_\Omega {\bf 1}_{\{u-u_j\geq d\}}dd^cv_k\wedge S(j,k)
+ d\int_\Omega dd^cv_k\wedge S(j,k) \\
& \leq \int_{\{u-u_j\geq d\}}dd^cv_k\wedge S(j,k) + d C,
\end{aligned}
\end{equation*}
where $C$ is from \eqref{su-le3-3}. Therefore
\begin{equation}
\label{su-le3-4}
\int_{\{u-u_j\geq d\}}dd^cv_k\wedge S(j,k)\geq J(j,k)- dC.
\end{equation}
The induction hypothesis says that there exist $a_p, A_p>0$ and $k_1>0$ such that
\begin{equation}
\label{su-le3-5}
\int_{\{u-u_j\geq a_p\}}
(dd^cv_j)^{m-p}\wedge (dd^cv_k)^{p}\wedge\beta^{n-m}
>A_p, \;\; j>k>k_1.
\end{equation}
We fix another small positive constant $\varepsilon>0$ and put
$J'(j,k):=\int_\Omega (u-u_j)dd^cv_j\wedge S(j,k)$.
{\it Claim.}
\begin{enumerate}
\item[(a)] $J'(j,k)- J(j,k) \leq \varepsilon$,
\item[(b)] $J'(j,k)\geq a_pA_p -\varepsilon(1+C)$ for $j>k>k_2$.
\end{enumerate}
\begin{proof}[Proof of Claim]
{\bf (a)} By the quasi-continuity, we can choose an open set $U$ such that functions
$u , v$ are continuous off the set $U$ and $\mathop{cap}_m(U,\Omega)<\varepsilon/2^{m+1}$.
Then
\begin{equation}
\label{su-cl-1}
\int_U (dd^c(v_j+v_k))^m\wedge\beta^{n-m}
<2^m cap_m(U,\Omega)
<\varepsilon/2,
\end{equation}
\begin{equation}
\label{su-cl-2}
\int_U (dd^c(u_j+v_k))^m\wedge\beta^{n-m}<\varepsilon/2.
\end{equation}
Therefore
\begin{equation}
\label{su-cl-3}
\begin{aligned}
J'(j,k)-J(j,k)
& = \int_\Omega (u-u_j)dd^cv_j\wedge S(j,k)
-\int_\Omega (u-u_j)dd^cv_k\wedge S(j,k) \\
& = \int_\Omega v_jdd^c(u-u_j)\wedge S(j,k)
- \int_\Omega v_kdd^c(u-u_j)\wedge S(j,k) \\
& = \int_\Omega (v_j-v_k)dd^c(u-u_j)\wedge S(j,k)\\
& = \int_{\Omega\setminus U} (v_j-v_k)dd^c(u-u_j)\wedge S(j,k)
+ \int_U (v_j-v_k)dd^c(u-u_j)\wedge S(j,k)\\
& \leq \int_{\Omega\setminus U} |v_j-v_k|\,dd^c(u+u_j)\wedge S(j,k)
+ \int_Udd^c(u+u_j)\wedge S(j,k),
\end{aligned}
\end{equation}
where in the second equality we used the integration by parts formula twice with
$u=u_j=\varphi$, $v_j=0$ on the boundary, and in the last estimate we used the fact
$-1<u_j,v_j<0$. Since $v_j$ converges uniformly to $v$ on
$\Omega\setminus U$, one can find $l>k_1$ such that $|v_j-v_k|<\varepsilon/(2C)$ on
$\Omega\setminus U$ for $j>k>l>k_1$. This, combined with \eqref{su-le3-3}, \eqref{su-cl-1}
and \eqref{su-cl-2}, implies that each of the integrals in the last line of \eqref{su-cl-3}
is at most $\varepsilon/2$. The first part of the claim follows.
{\bf (b)} We first observe that from the upper bound of all $u_j$ (resp. $v_j$)
by $\sup\varphi$ (resp. $0$) on the boundary, we have for $k>k_2>l$,
in a neighborhood of $\partial \Omega$
\begin{equation}
\label{su-cl-4}
v_k\leq v+\varepsilon \;\; \text{and} \;\; u_k\leq u+\varepsilon.
\end{equation}
Those inequalities are still valid (after increasing $k_2$) on
$\Omega\setminus U$ thanks to the Hartogs lemma.
Hence, using \eqref{su-le3-3}, \eqref{su-cl-1} and \eqref{su-cl-4} we have for $j>k>k_2$
\begin{align*} J'(j,k)
& = \int_\Omega (u-u_j)dd^cv_j\wedge S(j,k)\\
& \geq a_p\int_{\{ u-u_j\geq a_p\}}dd^cv_j\wedge S(j,k)
+ \int_{\{ u-u_j <a_p\}}(u-u_j)dd^cv_j\wedge S(j,k) \\
& = a_p\int_{\{ u-u_j\geq a_p\}}dd^cv_j\wedge S(j,k)
+ \int_{\{ u-u_j <a_p\}\cap(\Omega\setminus U)}(u-u_j)dd^cv_j\wedge S(j,k)\\
& \;\; + \int_{\{ u-u_j <a_p\}\cap U}(u-u_j)dd^cv_j\wedge S(j,k)\\
& \geq a_p\int_{\{ u-u_j\geq a_p\}}dd^cv_j\wedge S(j,k)
- \varepsilon \int_{\Omega\setminus U}dd^cv_j\wedge S(j,k)
-\int_Udd^cv_j\wedge S(j,k)\\
&\geq a_p A_p-\varepsilon(1+C).
\end{align*}
Thus the proof of the claim is finished.
\end{proof}
From {\it Claim} and \eqref{su-le3-4} we get
$$
\int_{\{u-u_j\geq d\}}dd^cv_k\wedge S(j,k)
\geq J(j,k)- dC\geq J'(j,k)-\varepsilon-dC\geq a_pA_p
-\varepsilon(1+C)-\varepsilon-dC.
$$
If we take
\begin{equation}
\label{su-le3-6}
a_{p+1}:=d=\frac{a_pA_p}{4C}\;\; \text{ and }\;\; \varepsilon\leq \frac{a_pA_p}{2(2+C)},
\end{equation}
then
$$
\int_{\{u-u_j\geq d\}}dd^cv_k\wedge S(j,k)
\geq \frac{a_pA_p}{4}:=A_{p+1} \;\; \text{for} \;\; j>k>k_2,
$$
which finishes the proof of the inductive step and that of Lemma~\ref{su-le-3}.
\end{proof}
{\it End of the proof of Theorem~\ref{su-th-1}.} It is enough to prove condition \eqref{su-le2-1} in Lemma~\ref{su-le-2}.
We argue by contradiction. Suppose that it is not true.
Then the assumptions of Lemma~\ref{su-le-3} are valid, and its statement for
$p=m$ tells us that for a fixed $k>k_1$
$$
\int_{\{u-u_j
\geq a_m\}} (dd^cv_k)^m\wedge\beta^{n-m}
>A_m \;\; \text{ when } \;\; j>k.
$$
Thus
\begin{equation}
\label{su-pf-2}
V(\{u-u_j\geq a_m\})
\geq \frac{1}{M_k}\int_{\{u-u_j\geq a_m\}}(dd^cv_k)^m\wedge\beta^{n-m}
>\frac{A_m}{M_k}\;\; \text{for} \;\; j>k,
\end{equation}
because $(dd^cv_k)^m\wedge\beta^{n-m}=g_k\beta^n\leq M_k\beta^n$ for some $M_k>0$
(here $V$ denotes the volume with respect to $\beta^n$).
But \eqref{su-pf-2} contradicts the fact that $u_j\rightarrow u$ in $ L^1_{loc}$,
i.e., every subsequence of $\{u_j\}$ also converges to $u$ in $ L^1_{loc}$.
Thus, the theorem is proved under two extra assumptions.
{\it General case (we remove the two extra assumptions).} {\bf 1)}
Suppose that $\varphi\in C(\partial\Omega)$ and
the measure $\mu$ has compact support in $\Omega$.
We choose a decreasing sequence
$\varphi_k\in C^2(\partial\Omega)$ converging to $\varphi$.
Then we obtain a sequence of solutions $u_k$ satisfying
$$
\begin{cases}
u_k\in SH_m\cap L^\infty(\Omega) , \\
(dd^cu_k)^m\wedge\beta^{n-m}=\mu ,\\
\lim_{\zeta\rightarrow z} u_k(\zeta)
=\varphi_k (z) \text{ for any } z\in\partial\Omega.
\end{cases}
$$
It follows from the comparison principle, Corollary~\ref{pr-co-14}-(a),
that $u_k$ is decreasing and $u_k\geq v_0$, where $v_0$ is the original
(unmodified) subsolution. Set $u=\lim u_k$.
Then $u\geq v_0$ and $(dd^cu)^m\wedge\beta^{n-m}=\mu$ by Proposition~\ref{pr-pr-10}.
Thus, $u$ is the required solution.
{\bf 2)} Suppose that $\mu$ is a finite positive Borel measure, $\varphi\in C(\partial\Omega)$.
Let $\chi_j$ be a non-decreasing sequence of cut-off functions
$\chi_j\uparrow 1$ on $\Omega$. Since $\chi_j\mu$ have compact support in
$\Omega$, one can find solutions corresponding to $\chi_j\mu$, the solutions will be
bounded from below by the given subsolution $v_0$ (from the comparison principle)
and they will decrease to the solution by the convergence theorem.
Thus we have proved Theorem~\ref{su-th-1}.
\end{proof}
\section{The toggling frame}
\label{supp-mat-sec:SI-rotating-frame}
\noindent We consider a master equation for the joint state $\rho(t)$ of a driven qubit, a quantum environment, a cavity mode, and a quasi-continuum of transmission-line modes coupled to the cavity input and output ports, evolving via a time-dependent Hamiltonian $H(t)$. In addition, we assume the qubit (with Pauli-z operator $\sigma_z$) has a dephasing rate $\gamma_\phi$ independent of the quantum environment, and that the occupation of the cavity mode (with annihilation operator $a$) decays at an extrinsic rate $\kappa_{\mathrm{ext}}$ independent of coupling to the input and output transmission lines [Eq.~\eqref{eq:master-eq} and Fig.~\ref{fig:cavity-schematic} of the main text]:
\begin{equation}
\boxed{\dot{\rho}(t)=-i[H(t),\rho(t)]+\frac{\gamma_\phi}{2}\mathcal{D}[\sigma_z]\rho(t)+\kappa_{\mathrm{ext}}\mathcal{D}[a]\rho(t).}\label{eq-supp-mat:master-eq-lab-frame}
\end{equation}
Here, the damping superoperator acts according to $\mathcal{D}[\mathcal{O}]\rho=\mathcal{O}\rho\mathcal{O}^\dagger-\{\mathcal{O}^\dagger\mathcal{O},\rho\}/2$ for an arbitrary operator $\mathcal{O}$, and the Hamiltonian $H(t)$ can be written in terms of a contribution $H_0(t)$ that excludes the cavity/transmission-line coupling:
\begin{equation}
H(t)=H_0(t)+g\sigma_x(a^\dagger+a)+\sum_{i=1,2}\sum_k(\eta_{k,i}e^{i\omega_k t}r_{k,i}^\dagger a+\mathrm{h.c.}),\label{eq-supp-mat:lab-frame-hamiltonian}
\end{equation}
where the qubit is coupled to the cavity mode with a Rabi coupling of strength $g$, and where the cavity mode is coupled to transmission-line modes with strength $\eta_{k,i}$. The mode of the input ($i=1$) or output ($i=2$) transmission line having frequency $\omega_k$ is associated with an annihilation operator $r_{k,i}$. We further assume that any coupling $V$ between the qubit and the environment is secular: $\commute{V}{\sigma_z}=0$. The Hamiltonian of the qubit, environment, and decoupled cavity can then be written as
\begin{equation}
H_0(t)=\frac{1}{2}\left[\Delta+\Omega(t)\right]\sigma_z+H_{\mathrm{drive}}(t)+\omega_ca^\dagger a,
\end{equation}
where $\Delta$ is the bare qubit splitting, $\omega_c$ is the frequency of the cavity mode, $H_\mathrm{drive}(t)$ is an arbitrary drive acting on the qubit alone (which will generate a sequence of $\pi$-pulses in a dynamical decoupling sequence), and where noise on the qubit can be divided into a classical parameter [$\eta(t)$] and a quantum operator [$h(t)$] acting on the environment alone:
\begin{equation}
\Omega(t)=\eta(t)+h(t).
\end{equation}
The contribution $\eta(t)$ is taken to be generated by stationary Gaussian noise with zero mean, fully described by the noise spectrum
\begin{equation}
S_\eta(\omega) = \int dt e^{-i\omega t}\llangle\eta(t)\eta(0)\rrangle,
\end{equation}
where $\llangle\cdots\rrangle$ indicates an average over realizations of $\eta(t)$. Inhomogeneous broadening [low-frequency fluctuations in $\eta(t)$] will lead to Gaussian free-induction decay of the qubit on a time scale $T_2^*$ given by
\begin{equation}
\frac{2}{\left(T_2^*\right)^2}=\int_{-\infty}^\infty\frac{d\omega}{2\pi}S_\eta(\omega).
\end{equation}
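As a simple example (a minimal illustrative model, not an assumption used below), consider purely quasistatic noise, $S_\eta(\omega)=2\pi\sigma^2\delta(\omega)$ with $\sigma^2=\llangle\eta^2\rrangle$. The relation above then gives $T_2^*=\sqrt{2}/\sigma$, consistent with Gaussian free-induction decay of the qubit coherence:
\begin{equation}
\llangle e^{-i\int_0^t dt'\eta(t')}\rrangle=e^{-\sigma^2t^2/2}=e^{-(t/T_2^*)^2}.
\end{equation}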
We now transform to a toggling frame to account for the effect of the qubit drive $H_{\mathrm{drive}}(t)$. Unitary evolution $U(t)$ under the full Hamiltonian $H(t)$ is related to the toggling-frame unitary $\tilde{U}(t)$ through
\begin{equation}
U(t)=\mathcal{T}e^{-i\int_0^t dt' H(t')}=U_{\mathrm{TF}}(t)\tilde{U}(t),
\end{equation}
where
\begin{equation}
U_{\mathrm{TF}}(t)=U_{\mathrm{drive}}(t)R(t).
\end{equation}
Here, $U_\mathrm{drive}(t)$ eliminates evolution under $H_\mathrm{drive}(t)$,
\begin{equation}
U_{\mathrm{drive}}(t)= \mathcal{T}e^{-i\int_0^tdt'H_{\mathrm{drive}}(t')},
\end{equation}
and $R(t)$ defines the rotating frame subject to $U_\mathrm{drive}(t)$:
\begin{equation}
R(t)=\mathcal{T}e^{-i\Delta\int_0^t dt'[a^\dagger a+U_{\mathrm{drive}}^\dagger (t')\sigma_z U_{\mathrm{drive}}(t')/2]}.
\end{equation}
This transformation allows for a simpler analysis of observables $\tilde{O}(t)$ evolving under the action of the toggling-frame Hamiltonian $\tilde{H}(t)$:
\begin{eqnarray}
\tilde{H}(t) & = & U_{\mathrm{TF}}^\dagger(t) H(t) U_{\mathrm{TF}}(t) -iU_{\mathrm{TF}}^\dagger(t)\dot{U}_{\mathrm{TF}}(t),\\
\tilde{\mathcal{O}}(t) & = & \tilde{U}^\dagger(t)\mathcal{O}\tilde{U}(t),\quad \tilde{U}(t)=\mathcal{T}e^{-i\int_0^t dt'\tilde{H}(t')}.
\end{eqnarray}
The lab-frame expectation value $\braket{\mathcal{O}}_t$ can then be related to $\braket{\tilde{\mathcal{O}}}_t$ through
\begin{equation}
\braket{\mathcal{O}}_t=\llangle\mathrm{Tr}\{U^\dagger(t)\mathcal{O}U(t)\rho(0)\}\rrangle=\llangle\mathrm{Tr}\{\hat{\mathcal{O}}(t)\tilde{\rho}(t)\}\rrangle;\quad \tilde{\rho}(t)=\tilde{U}(t)\rho(0)\tilde{U}^\dagger(t),\label{eq-supp-mat:expectation-val}
\end{equation}
where we have included both the quantum average $\mathrm{Tr}\left\{\cdots \rho(0)\right\}$ and the classical average over noise realizations $\llangle\cdots \rrangle$ in the definition of the expectation value $\braket{\cdots}_t$. Further, we denote by a ``hat'' the analog of an interaction-picture operator:
\begin{equation}
\hat{\mathcal{O}}(t)=U_{\mathrm{TF}}^\dagger(t) \mathcal{O} U_{\mathrm{TF}}(t).\label{eq-supp-mat:toggling-frame-op}
\end{equation}
These definitions give, for example, $\braket{a}_t=\mathrm{Tr}\{\hat{a}(t)\tilde{\rho}(t)\}=e^{-i\Delta t}\mathrm{Tr}\{a\tilde{\rho}(t)\}=e^{-i\Delta t}\braket{\tilde{a}}_t$. The toggling-frame density operator evolves under
\begin{equation}
\dot{\tilde{\rho}}(t)=-i[\tilde{H}(t),\tilde{\rho}(t)]+\frac{\gamma_\phi}{2}\mathcal{D}[\hat{\sigma}_z(t)]\tilde{\rho}(t)+\kappa_{\mathrm{ext}}\mathcal{D}[a]\tilde{\rho}(t),\label{eq-supp-mat:master-eq-rot-frame}
\end{equation}
where, in writing the transformed damping superoperators, we have used $\hat{\sigma}_z^2(t)=\sigma_z^2=1$ and $\hat{a}(t)=e^{i\Delta t}a$. The toggling-frame Hamiltonian is now given by
\begin{equation}
\tilde{H}(t)=\tilde{H}_0(t)+g\hat{\sigma}_x(t)[e^{i\Delta t}a^\dagger+\mathrm{h.c.}]+\sum_{i=1,2}\sum_k(\eta_{k,i}e^{i(\omega_k-\Delta) t}r_{k,i}^\dagger a+\mathrm{h.c.}),\label{eq-supp-mat:hamiltonian}
\end{equation}
and in terms of the qubit-cavity detuning $\delta=\Delta-\omega_c$, the decoupled Hamiltonian is
\begin{equation}
\tilde{H}_0(t)=\frac{1}{2}\Omega(t)\hat{\sigma}_z(t)-\delta a^\dagger a.
\end{equation}
\section{Relating the output field to qubit coherence}
\label{supp-mat-sec:S2-cavity-field-coherence}
\noindent We can recover the well-known input-output relation~\cite{supp:gardiner1985input} by integrating the Heisenberg equation of motion, $\dot{r}_{k,i}(t)=i\commute{H(t)}{r_{k,i}(t)}$, resulting in
\begin{equation}
\braket{r_{k,i}}_t=e^{-i\omega_kt} \braket{r_{k,i}}_0-i\eta_{k,i}\int_0^t dt'\:e^{-i\omega_k(t-t')}e^{-i\Delta t'}\braket{\tilde{a}}_{t'},\quad i=1,2.\label{eq-supp-mat:eom-rk}
\end{equation}
Summing Eq.~\eqref{eq-supp-mat:eom-rk} over a quasi-continuous set of modes $k$ and performing a Markov approximation for wide-bandwidth transmission lines gives the input-output relation
\begin{equation}
r_{\mathrm{out},i}(t) = r_{\mathrm{in},i}(t)-i\sqrt{\kappa_i}e^{-i\Delta t}\braket{\tilde{a}}_t,\label{eq-supp-mat:input-output}
\end{equation}
where
\begin{equation}
r_{\mathrm{out},i}(t)=\sqrt{\frac{c}{L}}\sum_k \braket{r_{k,i}}_t,\quad r_{\mathrm{in},i}(t)=\sqrt{\frac{c}{L}}\sum_ke^{-i\omega_kt}\braket{r_{k,i}}_0, \quad \kappa_i=\frac{L}{c}\lvert\eta_i(\omega_c)\rvert^2
\end{equation}
for $\eta_i(\omega=\omega_k)=\eta_{k,i}$. Here, we have assumed one-dimensional transmission lines of length $L$ supporting linearly-dispersing modes ($\omega_k=c|k|$) with speed of light $c$. In order to relate the transmission-line dynamics more transparently to qubit coherence dynamics, we consider the quantum Langevin equation for the cavity field $\braket{\tilde{a}}_t$, obtained by substituting Eq.~\eqref{eq-supp-mat:master-eq-rot-frame} into $\braket{\dot{\tilde{a}}}_t=\mathrm{Tr}\left[a\dot{\tilde{\rho}}(t)\right]$. Within the same Markov approximation used to obtain Eq.~\eqref{eq-supp-mat:input-output}, this gives
\begin{equation}
\braket{\dot{\tilde{a}}}_t=\left(i\delta-\frac{\kappa}{2}\right)\braket{\tilde{a}}_t-ige^{i\Delta t}\braket{\sigma_x}_t,\label{eq-supp-mat:eom-cavity}
\end{equation}
where $\kappa=\kappa_1+\kappa_2+\kappa_{\mathrm{ext}}.$ Equation \eqref{eq-supp-mat:eom-cavity} provides a useful relation between $\braket{\sigma_x}_t$ (the lab-frame qubit coherence) and the cavity field $\braket{\tilde{a}}_t=e^{i\Delta t}\braket{a}_t$. This relationship is valid for an arbitrary qubit drive and for arbitrarily large qubit-cavity coupling $g$.
The goal is now to understand dynamics of the cavity field evolving under Eq.~\eqref{eq-supp-mat:eom-cavity} due to some specific driven qubit dynamics $\braket{\sigma_x}_t$. We assume the joint state of the qubit, cavity, environment, and transmission line is given, for $t\le 0$, by $\rho(t\le 0)=\ket{g,0,0}\bra{g,0,0}\otimes\bar{\rho}_{\mathrm{E}}$, where $\ket{\sigma,n_c,\nu}$ denotes the state of the qubit ($\sigma=g,e$), the cavity mode containing $n_c$ photons, and the transmission line ($\nu=0$ is the vacuum for all $i,k$). The initial state of the environment, $\rho_{\mathrm{E}}(t\le 0)=\bar{\rho}_{\mathrm{E}}$, is assumed to be stationary for $t<0$. This is true provided (i) the qubit drive is not turned on until $t=0$, $H_{\mathrm{drive}}(t<0)=0$, (ii) the qubit-environment interaction $V$ is secular (i.e. $\commute{V}{\sigma_z}=0$, as assumed above), and (iii) the environment has reached a steady-state in contact with the qubit, $\commute{\bar{\rho}_{\mathrm{E}}}{H_{\mathrm{E}}+\bra{g}V\ket{g}}=0$ (here, $H_\mathrm{E}$ is the free Hamiltonian of the environment). The state $\ket{g,0,0}$ is stationary provided either the qubit-cavity coupling vanishes [$g(t)=0$ for $t<0$], or the qubit and cavity are far detuned until $t=0$, so that $\ket{g,0,0}$ is an eigenstate of $H(t)$ with small corrections. Integrating Eq.~\eqref{eq-supp-mat:eom-cavity} with these assumptions recovers Eq.~\eqref{eq:cavity-output} of the main text,
\begin{equation}
\boxed{\braket{\tilde{a}}_t=-ig\int_{-\infty}^\infty dt'\:\chi_c(t-t')e^{i\Delta t'}\braket{\sigma_x}_{t'},\quad\chi_c(t)=e^{i\delta t-\tfrac{\kappa}{2}t}\Theta(t),\quad\braket{\sigma_x}_t\propto\Theta(t),}\label{eq-supp-mat:cavity-output}
\end{equation}
where $\Theta(t)$ is a Heaviside function. A finite qubit coherence, $\braket{\sigma_x}_t\neq 0$, can be introduced at $t>0$ with, e.g., a rapid $\pi/2$-pulse, after which the cavity field will evolve according to Eq.~\eqref{eq-supp-mat:cavity-output}.
In terms of the Fourier transform, $\braket{\mathcal{O}}_\omega=\int dt\:e^{-i\omega t}\braket{\mathcal{O}}_t$, Eq.~\eqref{eq-supp-mat:cavity-output} reads
\begin{equation}
\braket{\tilde{a}}_\omega=-ig\chi_c(\omega)\braket{\sigma_x}_{\omega-\Delta},\quad\chi_c(\omega)=\frac{1}{i(\omega-\delta)+\kappa/2}.\label{eq-supp-mat:cavity-output-freq}
\end{equation}
In the limit of low $Q=\omega_c/\kappa<1$, we can take $\chi_c(\omega)\sim 2/\kappa$ to be flat on the scale of variation of $\braket{\sigma_x}_{\omega-\Delta}$ for $\Delta\sim\omega_c$. The cavity field consequently mirrors the dynamics of the qubit time-locally: $\braket{a}_t\propto\braket{\sigma_x}_t$ \cite{supp:bertet}. Notably, Eqs.~\eqref{eq-supp-mat:cavity-output} and \eqref{eq-supp-mat:cavity-output-freq} accurately reflect dynamics even in the regime of high $Q$. This is typically the regime of interest for the devices (see, e.g.,~\cite{supp:stockklauser2017strong, supp:mi2017strong, supp:mi2018coherent, supp:samkharadze2018strong,supp:landig2018coherent, supp:cubaynes2019highly, supp:viennot2015coherent}) designed to reach the strong-coupling regime of cavity QED. The high-$Q$ regime also admits a cavity-filter approximation, in which we replace $\braket{\sigma_x}_t\simeq\braket{\sigma_-}_t$ in the convolution:
\begin{equation}
\braket{\tilde{a}}_t\simeq-ig\int_{-\infty}^\infty dt'\:\chi_c(t-t')e^{i\Delta t'}\braket{\sigma_-}_{t'}\quad \left[\mathrm{high-}Q:\;\mathrm{max}\left(|\delta|,\kappa\right)\ll|\Delta|\right].\label{eq-supp-mat:cavity-output-high-Q}
\end{equation}
A direct consequence of this cavity-filter (high-$Q$) approximation is that the cavity field $\braket{\tilde{a}}_t$ [and hence, the output field $r_\mathrm{out,2}(t)$ via Eq.~\eqref{eq-supp-mat:input-output}] will show a unique signature of quantum noise when we consider a dynamical decoupling sequence in Sec.~\ref{supp-mat-sec:S3-qubit-drive}, below.
Equation \eqref{eq-supp-mat:cavity-output-high-Q}, together with the input-output relation, Eq.~\eqref{eq-supp-mat:input-output}, provide a direct link between the dynamics of the output field $r_{\mathrm{out},2}(t)$ and qubit coherence dynamics $\braket{\sigma_-}_t$. No stationarity assumption, weak-coupling approximation, or Markov approximation has been made on $\braket{\sigma_-}_t$ up to this point. Provided an accurate model of non-Markovian dynamics can be found for $\braket{\sigma_-}_t$ (under, say, a dynamical decoupling sequence), this model can be directly tested from a measurement of $r_\mathrm{out,2}(t)$. Alternatively, non-Markovian dynamics in $\braket{\sigma_-}_t$ can be inferred from the transient dynamics of $r_{\mathrm{out},2}(t)$. The qubit dynamics translated to the output field will, however, depend on the effects of the cavity filter and cavity-induced backaction, as we now show.
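As a simple illustration of Eq.~\eqref{eq-supp-mat:cavity-output-high-Q} (a special case, assuming free exponential decay rather than a dynamical decoupling sequence), suppose the qubit coherence decays exponentially after the initial $\pi/2$-pulse, $\braket{\sigma_-}_t=\tfrac{1}{2}e^{-i\Delta t}e^{-t/T_2}\Theta(t)$. Carrying out the convolution gives
\begin{equation}
\braket{\tilde{a}}_t=-i\frac{g}{2}\,\frac{e^{-t/T_2}-e^{(i\delta-\kappa/2)t}}{\kappa/2-i\delta-1/T_2}\,\Theta(t),
\end{equation}
so that after a transient of duration $\sim\kappa^{-1}$, the cavity field [and hence the output field, via Eq.~\eqref{eq-supp-mat:input-output}] simply follows the qubit coherence, $\braket{\tilde{a}}_t\propto e^{-t/T_2}$.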
\section{Cavity-induced backaction}
\label{supp-mat-sec:S3-qubit-drive}
\noindent We assume that a finite qubit coherence is created at $t=0^+>0$ with a rapid $({-}\pi/2)_y$-rotation: $\rho(0^+)=\ketbra{+}{+}\otimes\bar{\rho}_{\mathrm{E}}\otimes\ketbra{0,0}{0,0}$, where $\sigma_x\ket{+}=\ket{+}$, and where $\ket{0,0}$ indicates the simultaneous vacuum state of the cavity and transmission line. The initial $\pi/2$-pulse is followed by a sequence of (effectively instantaneous) dynamical-decoupling $\pi_x$-pulses due to $H_{\mathrm{drive}}(t)$. For such a pulse sequence, $\hat{\sigma}_z(t)=U_{\mathrm{drive}}^\dagger(t)\sigma_zU_{\mathrm{drive}}(t)=s(t)\sigma_z$, where $s(t)=(-1)^{n(t)}$ is a sign function depending on $n(t)$, the number of $\pi$-pulses having taken place up to time $t$. The lab-frame expectation value $\braket{\sigma_-}_t$ in Eq.~\eqref{eq-supp-mat:cavity-output-high-Q} is then related to toggling-frame observables via:
\begin{equation}
\braket{\sigma_-}_t = \begin{cases}
e^{-i\phi(t)}\braket{\tilde{\sigma}_-}_t,\quad &n(t)\text{ even}\\
e^{i\phi(t)}\braket{\tilde{\sigma}_+}_t,\quad &n(t)\text{ odd}
\end{cases};\quad \phi(t)=\int_0^t dt' s(t')\Delta.\label{eq-supp-mat:lab-toggle}
\end{equation}
The phase $\phi(t)$ advances at a rate $\dot{\phi}(t)=+\Delta$ for $n(t)$ even and $\dot{\phi}(t)=-\Delta$ for $n(t)$ odd, so $\braket{\sigma_-}_t\sim e^{-i\Delta t}$ for all times $t$ up to corrections $\sim\braket{\tilde{\sigma}_\pm}_t$. To solve for $\braket{\tilde{\sigma}_\pm}_t$, we evaluate the Heisenberg equations of motion under $\tilde{H}(t)$ [from Eq.~\eqref{eq-supp-mat:hamiltonian}], which we rewrite as:
\begin{equation}
\tilde{H}(t)=\tilde{H}_0(t)+\sum_{i=1,2}\sum_k(\eta_{k,i}e^{i(\omega_k-\Delta) t}r_{k,i}^\dagger a+\mathrm{h.c.})+\begin{cases}
g(e^{i(\phi(t)-\Delta t)}\sigma_+a+\mathrm{h.c.})+\text{counter-rot.},\quad &n(t)\text{ even}\\
g(e^{i(\phi(t)+\Delta t)}\sigma_+a^\dagger+\mathrm{h.c.})+\text{counter-rot.},\quad &n(t)\text{ odd}
\end{cases}.
\end{equation}
The term ``counter-rot.'' indicates counter-rotating terms $\sim e^{\pm i2\Delta t}$ that lead to small corrections for $|g|\ll|\delta\pm\Delta|$. The usual excitation-preserving co-rotating terms ($\sim \sigma_+a$ and $\sim\sigma_-a^\dagger$) for $n(t)$ even are replaced by excitation non-conserving terms ($\sim \sigma_-a$ and $\sigma_+a^\dagger$) for $n(t)$ odd. These will generally lead to cavity heating, making the analysis of qubit-cavity dynamics under a dynamical decoupling sequence more challenging than for the undriven case \cite{supp:beaudoin2017hamiltonian}. These effects can nevertheless be controlled in appropriate limits. Neglecting counter-rotating terms in $\tilde{H}(t)$, the equation of motion for $\tilde{\sigma}_-(t)$ is then given [within the rotating-wave approximation (RWA)] by
\begin{equation}
\dot{\tilde{\sigma}}_-(t)\simeq-\left[is(t)\Omega(t)+\gamma_\phi\right]\tilde{\sigma}_-(t)+\begin{cases}
ig e^{i\left[\phi(t)-\Delta t\right]}\tilde{\sigma}_z(t)\tilde{a}(t),\quad &n(t)\text{ even}\\
ig e^{i\left[\phi(t)+\Delta t\right]}\tilde{\sigma}_z(t)\tilde{a}^\dagger(t),\quad &n(t)\text{ odd}
\end{cases},\quad \left(\mathrm{RWA:}\, g\ll |\delta\pm\Delta|\right).\label{eq-supp-mat:sigma-minus-eom}
\end{equation}
The bilinear terms $\sim \tilde{\sigma}_z(t)\tilde{a}(t)$ and $\sim \tilde{\sigma}_z(t)\tilde{a}^\dagger(t)$ make a general integration of these equations difficult. For free-induction decay [$n(t)=0$ for all $t$], the dynamics are restricted (under the rotating-wave approximation and for an undriven cavity with $\kappa_1=0$) to the subspace spanned by $\left\{\ket{g,0,0},\ket{e,0,0},\ket{g,1,0},\ket{g,0,k}\right\}$, where $\ket{g,0,k}=r_{k,2}^\dagger\ket{g,0,0}$. In this case, the state of the qubit and cavity is restricted to the bottom three rungs of the Jaynes-Cummings ladder. Within this subspace, we have $\left<\tilde{\sigma}_z(t)\tilde{a}(t)\right>=-\braket{\tilde{a}}_t$ for all time, allowing for a direct solution to the coupled equations for $\braket{\tilde{\sigma}_-}_t$ and $\braket{\tilde{a}}_t$. As described above, this exact replacement is no longer possible under a dynamical decoupling sequence. However, we can justify a similar approximate replacement provided (i) that $g\ll\kappa$, so that the cavity contains at most one photon at any time, and (ii) that the minimum time $\tau$ between $\pi$-pulses is long compared to the timescale $\kappa^{-1}$ of cavity transients. Under these conditions, we perform the restricted-subspace approximation:
\begin{equation}
\braket{\tilde{\sigma}_z(t)\tilde{a}(t)}\simeq -\braket{\tilde{a}(t)},\quad n(t)\text{ even};\quad
\braket{\tilde{\sigma}_z(t)\tilde{a}^\dagger(t)}\simeq \braket{\tilde{a}^\dagger(t)},\quad n(t)\text{ odd};\quad (g<\kappa,\,\kappa\tau\gg 1).\label{eq-supp-mat:subspace}
\end{equation}
The first approximate equality follows from the same logic given above for free-induction decay, $n(t)=0$. The second approximation follows provided evolution is approximately restricted to the subspace $\left\{\ket{g,0,0},\ket{e,0,0},\ket{e,1,0},\ket{e,0,k}\right\}$ for most of the time when $n(t)$ is odd. These approximations will be violated due to heating effects on a time scale $\sim \kappa^{-1}$ in the vicinity of $\pi$-pulses, but if the time between subsequent $\pi$-pulses is sufficiently long, the cumulative effect of these transients will amount to a small correction to the qubit coherence dynamics.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{backaction.pdf}
\caption{(a) Solid black line: The echo envelope $\tilde{C}(n\tau)=\left\llangle\braket{\tilde{\psi}(n\tau)\lvert\sigma_-\rvert\tilde{\psi}(n\tau)}\right\rrangle/\braket{\sigma_-}_0$, where $\ket{\tilde{\psi}}$ evolves under $\tilde{H}(t)$ in the restricted subspace described by Eq.~\eqref{eq-supp-mat:basis}. Gray line: $\tilde{C}(n\tau)\simeq\llangle e^{-\Gamma_{\mathrm{P}}(\eta)n\tau/2}\rrangle$. Black dashed line: The approximate form $\tilde{C}(n\tau)\simeq e^{-\sqrt{\gamma_{\mathrm{P}}n\tau}}$. Inset: The qubit coherence $C(t)$, consisting of Gaussian revivals with width $\sim T_2^*$ centered at times $t=n\tau$, with $n$ an integer. (b) The cavity field $\braket{\tilde{a}}_t=\sum_{\sigma=e,g}\left\llangle\alpha_{\sigma 0}^*(t)\alpha_{\sigma 1}(t)\right\rrangle$. (c) Red (blue) dashed line: $\left\llangle|\alpha_{g1}(t)|^2\right\rrangle$ [$\left\llangle|\alpha_{e1}(t)|^2\right\rrangle$], normalized by $\zeta=\sqrt{\pi}g^2T_2^*/\kappa$. Black line: The cavity occupation $\braket{n_c}_t=\sum_{\sigma=e,g}\left\llangle\lvert\alpha_{\sigma 1}(t)\rvert^2\right\rrangle$. We have verified that the inequality $\lvert\braket{\tilde{a}}_t\rvert^2\leq\braket{n_c}_t(1-\braket{n_c}_t)$ is satisfied for all times $t$, as required by positivity of the cavity density matrix in the subspace of $n_c=0,1$. (Though unresolvable on this scale, $\braket{n_c}_t$ also rises initially on a timescale $T_2^*$.) For all figures, we take $g=0.1\kappa$, $\kappa T_2^*=0.1$ (giving $\sqrt{\pi}gT_2^* \sim 10^{-2}$, $\zeta \sim 10^{-3}$), and $\kappa\tau=10$.}
\label{fig:backaction}
\end{figure}
To illustrate the validity of the restricted-subspace approximation, Eq.~\eqref{eq-supp-mat:subspace}, we consider the simplified case of $\gamma_\phi=h(t)=0$ and additionally assume that the low-frequency noise is static [$\eta(t)=\eta$]. We then directly integrate the Schr\"odinger equation [$\partial_{t}\ket{\tilde{\psi}(t)}=-i\tilde{H}(t)\ket{\tilde{\psi}(t)}$] \emph{without} assuming Eq.~\eqref{eq-supp-mat:subspace}, but restricting to a subspace that allows for at most one photon in the cavity or transmission line (this can always be justified for $g<\kappa$ at a sufficiently short time):
\begin{equation}
\ket{\tilde{\psi}(t)}=\alpha_{g0}(t)\ket{g,0,0}+\alpha_{e0}(t)\ket{e,0,0}+\alpha_{g1}(t)\ket{g,1,0}+\alpha_{e1}(t)\ket{e,1,0}+\sum_k\left[\alpha_{gk}(t)\ket{g,0,k}+\alpha_{ek}(t)\ket{e,0,k}\right].\label{eq-supp-mat:basis}
\end{equation}
For a fixed value of the noise $\eta$, the Schr\"odinger equation can be integrated analytically piecewise for each time interval between $\pi$-pulses. After averaging over a Gaussian distribution in $\eta$ values with $\llangle\eta^2\rrangle=2/(T_2^*)^2$, the resulting contributions $\llangle|\alpha_{g1}(t)|^2\rrangle$ and $\llangle|\alpha_{e1}(t)|^2\rrangle$ to the number of cavity photons, $\braket{n_c}_t=\sum_{\sigma={e,g}}\llangle|\alpha_{\sigma 1}(t)|^2\rrangle$, are shown in Fig.~\ref{fig:backaction}(c) for the case of a Carr-Purcell-Meiboom-Gill (CPMG) sequence:
\begin{equation}
\left(\frac{\tau}{2}-\pi_x-\frac{\tau}{2}\right)^N\quad \mathrm{(CPMG)}.\label{eq-supp-mat:CPMG}
\end{equation}
Here, $\theta_\alpha$ indicates an infinitesimal-duration rotation of the qubit by angle $\theta$ about the $\alpha$-axis ($\alpha=x,y$), and $\frac{\tau}{2}$ indicates a delay by a time interval $\tau/2$. Up to small transient corrections on a time scale $\sim 1/\kappa$ around the times of $\pi$-pulses, we have $\alpha_{e1}(t)\simeq 0$ for $n(t)$ even and $\alpha_{g1}(t)\simeq 0$ for $n(t)$ odd, justifying the approximations in Eq.~\eqref{eq-supp-mat:subspace}. In addition, the exact echo envelope for this case, $\tilde{C}(n\tau)=\left\llangle\bra{\tilde{\psi}(n\tau)}\sigma_-\ket{\tilde{\psi}(n\tau)}\right\rrangle/\braket{\sigma_-}_0$, is shown in Fig.~\ref{fig:backaction}(a), and the cavity field $\braket{\tilde{a}}_t=\left\llangle\bra{\tilde{\psi}(t)}a\ket{\tilde{\psi}(t)}\right\rrangle$ is shown in Fig.~\ref{fig:backaction}(b). Even in this case, where there is no external source of pure-dephasing dynamics [$\gamma_\phi=h(t)=0$ and $\eta(t)=\eta$], qubit coherence is still lost due to inhomogeneously broadened Purcell decay, an effect that we now consider in detail.
To separate out fast and slow dynamics, we write $\Omega(t)=\eta+\delta\Omega(t)$, where we assume that $\eta$ is a large static contribution (inhomogeneous broadening) and that $\delta\Omega(t)=\delta \eta(t)+h(t)$ generates pure-dephasing dynamics that are slow on the scale $\kappa^{-1}$. We now eliminate the evolution under the fast term $\sim s(t)\eta$ by defining
\begin{equation}
\dbtilde{\sigma}_-(t)=e^{i\int_0^tdt's(t')\eta}\tilde{\sigma}_-(t).\label{eq-supp-mat:slow-ev}
\end{equation}
In terms of this quantity, and within the restricted subspace approximation [Eq.~\eqref{eq-supp-mat:subspace}], Eq.~\eqref{eq-supp-mat:sigma-minus-eom} can be written as
\begin{equation}
\dot{\dbtilde{\sigma}}_-(t)\simeq -\left[is(t)\delta\Omega(t)+\gamma_\phi\right]\dbtilde{\sigma}_-(t) +ig e^{i\int_0^tdt's(t')\eta}\begin{cases}
{-}e^{i[\phi(t)-\Delta t]}\tilde{a}(t),\quad &n(t)\text{ even}\\
e^{i[\phi(t)+\Delta t]}\tilde{a}^\dagger(t),\quad &n(t)\text{ odd}
\end{cases}.\label{eq-supp-mat:sigma-minus-eom-2}
\end{equation}
In order to rewrite Eq.~\eqref{eq-supp-mat:sigma-minus-eom-2} as a closed equation, we insert the result for $\tilde{a}(t)$ in terms of $\sigma_-(t)$ within the cavity-filter (high-$Q$) approximation [leading to Eq.~\eqref{eq-supp-mat:cavity-output-high-Q} after averaging]. Neglecting contributions $\sim a(0),\,r_{k,i}(0)$ that vanish under the average ($\braket{\tilde{a}}_0=\braket{r_{k,i}}_0=0$), this gives:
\begin{equation}
\dot{\dbtilde{\sigma}}_-(t) \simeq -\left[is(t)\delta\Omega(t)+\gamma_\phi\right]\dbtilde{\sigma}_-(t) -i\int_0^t dt'\Sigma(t,t')\dbtilde{\sigma}_-(t'),\label{eq-supp-mat:sigma-minus-eom-3}
\end{equation}
with a time-nonlocal memory kernel (self-energy) given by
\begin{equation}
\Sigma(t,t') = -ig^2 e^{i\int_{t'}^tdt''s(t'')\eta}\begin{cases}
\chi_c(t-t')e^{i[\phi(t)-\phi(t')-\Delta(t-t')]},\quad &n(t)\text{ even}\\
\chi_c^*(t-t')e^{i[\phi(t)-\phi(t')+\Delta(t-t')]},\quad &n(t)\text{ odd}
\end{cases}.
\end{equation}
The cavity susceptibility $\chi_c(t-t')$ suppresses contributions for which the times $t,t'$ are well separated. Major contributions to the integral thus occur for $t-t' \lesssim \kappa^{-1}\ll \tau$. Except for small intervals of width $\sim 1/\kappa$ around the time of the $\pi$-pulses, we will thus have $n(t)=n(t')$ wherever $\chi_c(t-t')$ has significant weight, giving $\phi(t)-\phi(t')\simeq s(t)\Delta(t-t')$. This justifies the following replacements, with small corrections for $\kappa\tau\gg 1$:
\begin{equation}
e^{i[\phi(t)-\phi(t')-\Delta(t-t')]} \simeq 1,\quad [n(t)\,\mathrm{even}];\quad e^{i[\phi(t)-\phi(t')+\Delta(t-t')]} \simeq 1,\quad [n(t)\,\mathrm{odd}],\quad\quad (\kappa\tau\gg 1).
\end{equation}
With these replacements, the self-energy becomes
\begin{equation}
\Sigma(t,t')\simeq \Sigma_0(t,t-t'),\quad \Sigma_0(t_1,t_2)= -ig^2e^{-\frac{\kappa}{2}t_2}e^{is(t_1)(\eta+\delta)t_2},\quad (\kappa\tau\gg 1).
\end{equation}
If $\dbtilde{\sigma}_-(t)$ evolves slowly on the timescale $\kappa^{-1}$, then we can write the equation of motion for $\dbtilde{\sigma}_-$ in terms of the dispersive shift $\Delta\omega(\eta)$ and Purcell decay rate $\Gamma_P(\eta)$ as
\begin{equation}
\dot{\dbtilde{\sigma}}_-(t)\simeq -\left\{is(t)[\delta\Omega(t)+\Delta\omega(\eta)]+\gamma_\phi+\frac{1}{2}\Gamma_{\mathrm{P}}(\eta)\right\}\dbtilde{\sigma}_-(t),\label{eq-supp-mat:sigma-minus-eom-4}
\end{equation}
where
\begin{equation}
i\int_0^\infty dt'\Sigma_0(t,t')=\frac{g^2\left[\kappa/2+is(t)(\delta+\eta)\right]}{(\delta+\eta)^2+(\kappa/2)^2}=\frac{1}{2}\Gamma_{\mathrm{P}}(\eta)+is(t)\Delta\omega(\eta).
\end{equation}
Integrating Eq.~\eqref{eq-supp-mat:sigma-minus-eom-4} and transforming back to $\tilde{\sigma}_-(t)$ [via Eq.~\eqref{eq-supp-mat:slow-ev}] then gives
\begin{equation}
\braket{\tilde{\sigma}_-}_ t\simeq e^{-\gamma_\phi t}\braket{e^{-\frac{\Gamma_{\mathrm{P}}(\eta)}{2}t}e^{-i\int_0^tdt's(t')\Delta\omega(\eta)}\mathcal{T}e^{-i\int_0^tdt's(t')[\eta+\delta\Omega(t')]}}\braket{\sigma_-}_0.\label{eq-supp-mat:sigma-minus}
\end{equation}
The average $\left<\cdots\right>$ includes both a quantum average and an average over realizations of the noise $\eta(t)$. If we assume that the inhomogeneous broadening $\eta$ is approximately statistically independent of the dynamical contribution $\delta\eta(t)=\eta(t)-\eta$ over the short timescale $\sim T_2^*$ of the coherence revivals, $\llangle\delta\eta(t)\eta\rrangle\simeq 0$, then we can write
\begin{eqnarray}
\braket{\tilde{\sigma}_-}_ t &\simeq& \left\llangle e^{-\frac{\Gamma_{\mathrm{P}}(\eta)}{2}t}e^{-i\int_0^tdt's(t')[\Delta\omega(\eta)+\eta]}\right\rrangle \tilde{C}_0(t)\braket{\sigma_-}_0,\label{eq-supp-mat:sigma-minus-2}\\
\tilde{C}_0(t)&=&e^{-\gamma_\phi t} \braket{e^{-i\int_0^tdt's(t')\delta\Omega(t')}},\label{eq-supp-mat:C0}
\end{eqnarray}
where $\tilde{C}_0(t)$ describes the contribution to the slowly varying envelope of qubit coherence due to the environment and low-frequency noise, in the absence of cavity coupling.
Provided the restricted-subspace approximation [Eq.~\eqref{eq-supp-mat:subspace}] holds, the result given in Eqs.~\eqref{eq-supp-mat:sigma-minus-2} and \eqref{eq-supp-mat:C0} can be used to describe qubit coherence dynamics under an arbitrary dynamical decoupling sequence. For concreteness, we now specialize to an $N$-pulse CPMG sequence with pulse spacing $\tau$ [Eq.~\eqref{eq-supp-mat:CPMG}]. In this case, $\int_0^{n\tau}dts(t)=0$, leading to a perfect cancellation of the inhomogeneous broadening $\eta$ and dispersive shift $\Delta\omega(\eta)$ at revival/echo times $t=n\tau$. These echoes are suppressed by an overall decay envelope set by $\llangle e^{-\Gamma_{\mathrm{P}}(\eta)n\tau/2}\rrangle$, which gives rise to an asymptotic stretched-exponential behavior arising from the inhomogeneously broadened Purcell decay:
\begin{equation}
\llangle e^{-\frac{\Gamma_{\mathrm{P}}(\eta)}{2}n\tau}\rrangle=\int_{-\infty}^\infty d\eta\:\frac{T_2^*}{\sqrt{4\pi}}e^{-\frac{1}{4}\eta^2(T_2^*)^2}e^{-\frac{\Gamma_{\mathrm{P}}(\eta)}{2}n\tau}\sim e^{\left(\frac{\kappa T_2^*}{4}\right)^2}e^{-\sqrt{\gamma_{\mathrm{P}}n\tau}},\quad n\tau\to\infty,\label{eq-supp-mat:gammap}
\end{equation}
where
\begin{equation}
\boxed{\gamma_{\mathrm{P}}\simeq (gT_2^*)^2\frac{\kappa}{2},\quad T_2^*\delta\ll1.}\label{eq-supp-mat:gammap2}
\end{equation}
The results displayed in Eqs.~\eqref{eq-supp-mat:gammap} and \eqref{eq-supp-mat:gammap2} were obtained by noting that the integrand ($\propto \exp[-F(\eta)]$) approaches the sum of two Gaussians centered at $\eta=\pm\eta_0$ in the limit $n\tau\to\infty$, where $\eta_0$ is a stationary point [$F'(\eta_0)=0$].
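As a consistency check on this asymptotic form, the Gaussian average in Eq.~\eqref{eq-supp-mat:gammap} can be evaluated numerically and compared with the stretched exponential. The following Python sketch (purely illustrative and not part of the derivation; the parameter values match those used in Fig.~\ref{fig:backaction}) performs this comparison:
\begin{verbatim}
import numpy as np

# Illustrative parameters (units where kappa = 1)
kappa, g, T2s, tau = 1.0, 0.1, 0.1, 10.0
gamma_p = (g * T2s)**2 * kappa / 2          # Eq. (gammap2), delta = 0

def Gamma_P(eta):
    # Purcell rate for a static frequency offset eta (delta = 0)
    return g**2 * kappa / (eta**2 + (kappa / 2)**2)

eta = np.linspace(-40, 40, 200001) / T2s    # integration grid
d_eta = eta[1] - eta[0]
weight = (T2s / np.sqrt(4 * np.pi)) * np.exp(-0.25 * (eta * T2s)**2)

for n in [10, 100, 1000, 10000]:
    t = n * tau
    exact = np.sum(weight * np.exp(-Gamma_P(eta) * t / 2)) * d_eta
    asymp = np.exp((kappa * T2s / 4)**2) * np.exp(-np.sqrt(gamma_p * t))
    print(n, exact, asymp)   # the ratio approaches 1 as n*tau grows
\end{verbatim}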
Combining Eqs.~\eqref{eq-supp-mat:sigma-minus-2} and \eqref{eq-supp-mat:gammap}, the total echo envelope is thus given at any echo $t=n\tau$ (for $\kappa T_2^*<1$) by
\begin{equation}
\tilde{C}(n\tau)=\braket{\tilde{\sigma}_-}_{n\tau}/\braket{\tilde{\sigma}_-}_0\simeq e^{-\sqrt{\gamma_{\mathrm{P}}n\tau}}\tilde{C}_0(n\tau),\quad (\kappa T_2^*<1).
\end{equation}
If $\tilde{C}_0(t)$ is slowly-varying on the timescale $\sim T_2^*$ of each echo, this gives:
\begin{equation}
\braket{\tilde{\sigma}_-}_t\simeq\braket{\sigma_-}_0\sum_n G_{n}(t-n\tau)\tilde{C}(n\tau),\label{eq-supp-mat:coherence-expansion}
\end{equation}
where
\begin{equation}
G_n(t) = e^{\sqrt{\gamma_{\mathrm{P}}n\tau}}\left\llangle e^{-\Gamma_\mathrm{P}(\eta)n\tau/2}e^{-i\eta t}\right\rrangle.\label{eq-supp-mat:revival-shape}
\end{equation}
The same asymptotic analysis described above can be used to find the shape of $G_n(t)$ at short and long times (for $\kappa T_2^*<1$):
\begin{equation}
G_{n}(t)\simeq\begin{cases}
e^{-\left(\frac{t}{T_2^*}\right)^2},\quad &\gamma_{\mathrm{P}}n\tau<1\\
e^{-\left(\frac{t}{2T_2^*}\right)^2}\cos{\left[\sqrt{2}(\gamma_{\mathrm{P}}n\tau)^{1/4}\frac{t}{T_2^*}\right]},\quad &\gamma_{\mathrm{P}}n\tau>1
\end{cases}.\label{eq-supp-mat:revival-profile}
\end{equation}
Remarkably, the shape of the echoes changes as $n$ increases: For $n\tau<1/\gamma_\mathrm{P}$, the echoes are simply Gaussians broadened by $T_2^*$, but for $n\tau>1/\gamma_\mathrm{P}$, the width of the echo peaks doubles, and moreover, the peaks show a cosine modulation due to a combination of Purcell decay and inhomogeneous broadening [Fig.~\ref{fig:asymptotics}].
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{asymptotics.pdf}\hspace{1.1cm}
\caption{Solid line: Revivals described by $e^{i\Delta t}C(t)=e^{i\Delta t}\braket{\sigma_-}_t/\braket{\sigma_-}_0$ [evaluated from Eq.~\eqref{eq-supp-mat:coherence-expansion} by numerically integrating Eq.~\eqref{eq-supp-mat:revival-shape} for $G_n(t)$, assuming $\Delta\tau=2\pi m$ for some $m\in\mathbb{Z}$ so that $e^{i\Delta t}C(t)$ is real] for three different values of $n$: $n=1, 200, 2000$. We assume as in Fig.~\ref{fig:backaction} that $g=0.1\kappa$, $\kappa T_2^*=0.1$, and $\kappa\tau=10$, giving $\gamma_{\mathrm{P}}^{-1}=2000\tau$. For $\gamma_{\mathrm{P}}n\tau\gtrsim1$, the revivals are suppressed by a stretched exponential. In addition, the Gaussian envelope broadens and becomes modulated by a cosine due to an interplay of inhomogeneous broadening and Purcell decay.}
\label{fig:asymptotics}
\end{figure}
Recalling the relation between the lab-frame expectation value $\braket{\sigma_-}_t$ and the toggling-frame observables $\braket{\tilde{\sigma}_\pm}_t$ [Eq.~\eqref{eq-supp-mat:lab-toggle}], we then find that
\begin{equation}
e^{i\Delta t}\braket{\sigma_-}_t\simeq \braket{\sigma_-}_0\sum_{n=0}^N G_n(t-n\tau)e^{i\Delta n\tau}\begin{cases}
\tilde{C}(n\tau),\quad &n\text{ even}\\
\tilde{C}^*(n\tau),\quad &n\text{ odd}
\end{cases}.\label{eq-supp-mat:sigma-expectation-time}
\end{equation}
Substituting Eq.~\eqref{eq-supp-mat:sigma-expectation-time} into Eq.~\eqref{eq-supp-mat:cavity-output-high-Q} gives
\begin{equation}
\braket{\tilde{a}}_t\simeq\braket{\sigma_-}_0\left[\frac{1}{2}f_0(t)+\sum_{n=1}^Nf_n(t-n\tau)e^{i\Delta n\tau}\mathcal{K}^n\tilde{C}(n\tau)\right],\quad \kappa\tau\gg 1,\label{eq-supp-mat:train-of-sharks}
\end{equation}
where we have introduced the complex conjugation operator $\mathcal{K}$ ($\mathcal{K}z=z^*, z\in\mathbb{C}$), and where the wavepackets $f_n(t)$ are given by
\begin{equation}\label{eq-supp-mat:sharkfin}
f_n(t)=-ig\int_{-\infty}^tdt' \chi_c(t-t')G_n(t').
\end{equation}
The first term, $(1/2)f_0(t)$, in Eq.~\eqref{eq-supp-mat:train-of-sharks} reflects the fact that revivals with $n\geq 1$ have twice as much ``area'' as the initial free-induction decay. For $\kappa T_2^*\ll1$, we can replace $G_n(t)$ in Eq.~\eqref{eq-supp-mat:sharkfin} by a delta function with appropriate weight:
\begin{equation}
G_n(t)\simeq\delta(t)\sqrt{\pi}T_2^*\bar{G}_n;\quad \bar{G}_n=\frac{1}{\sqrt{\pi}T_2^*}\int_{-\infty}^\infty dt \:G_n(t).
\end{equation}
In this case, we have the simplified form
\begin{equation}\label{eq-supp-mat:sharkfin-simplified}
f_n(t)\simeq-i\sqrt{\pi}gT_2^*\bar{G}_n\chi_c(t)=-i\sqrt{\pi}gT_2^*\bar{G}_ne^{i\delta t-\frac{\kappa}{2}t}\Theta(t), \quad \kappa T_2^*\ll 1.
\end{equation}
The amplitude of the $n^\mathrm{th}$ revival in the cavity field due to a CPMG sequence is thus suppressed by the dimensionless factor $\bar{G}_n$ \emph{in addition} to any suppression due to coherence decay. From the asymptotic forms given in Eq.~\eqref{eq-supp-mat:revival-profile}, we have
\begin{equation}
\bar{G}_n\simeq\begin{cases}1,& n\ll 1/(\gamma_\mathrm{P}\tau)\\
2 e^{-2\sqrt{\gamma_\mathrm{P}n\tau}},& n\gg 1/(\gamma_\mathrm{P}\tau)
\end{cases}.
\end{equation}
Even in the absence of external sources of dephasing [$\gamma_\phi=h(t)=0$], Purcell decay in $\tilde{C}(n\tau)\simeq e^{-\sqrt{\gamma_\mathrm{P}n\tau}}$ together with the additional suppression of $\bar{G}_n$ due to the cosine modulation of echoes [Eq.~\eqref{eq-supp-mat:revival-profile}] results in an asymptotic suppression of cavity revivals:
\begin{equation}\label{eq-supp-mat:GnCAsymptotic}
\bar{G}_n\tilde{C}(n\tau)\sim 2e^{-3\sqrt{\gamma_\mathrm{P}n\tau}},\quad \left[\gamma_\phi=h(t)=0, n\tau\gg 1/\gamma_\mathrm{P}\right].
\end{equation}
Taking the Fourier transform of Eq.~\eqref{eq-supp-mat:train-of-sharks} and applying the approximations given above for $\kappa T_2^*\ll 1$ gives
\begin{equation}
\braket{\tilde{a}}_{\omega}\simeq-i\braket{\sigma_-}_0\sqrt{\pi}gT_2^*\chi_c(\omega)\left[\frac{1}{2}+\sum_{n=1}^N e^{-i(\omega-\Delta)n\tau}\bar{G}_n\mathcal{K}^n \tilde{C}(n\tau)\right],\quad T_2^*\kappa\ll1, \label{eq-supp-mat:cavity-field-CPMG}
\end{equation}
where $\chi_c(\omega)=[i(\omega-\delta)+\kappa/2]^{-1}$. Since $\chi_c(\omega)$ is peaked at $\omega=\delta$, we can maximize the signal by considering $\omega=\delta$. This gives Eq.~\eqref{eq:spectrum-peak} of the main text,
\begin{equation}
\boxed{\braket{\tilde{a}}_{\omega=\delta}\simeq-i\braket{\sigma_-}_0\frac{2\sqrt{\pi}gT_2^*}{\kappa}\left[\tilde{C}_{N,\tau}(\delta)-\frac{1}{2}\right],}\label{eq-supp-mat:cavity-spectrum-at-delta}
\end{equation}
where
\begin{equation}
\tilde{C}_{N,\tau}(\omega)\equiv\sum_{n=0}^N e^{-in\omega\tau}[e^{in\delta_\Delta\tau}\bar{G}_n\mathcal{K}^n\tilde{C}(n\tau)] \label{eq-supp-mat:dft}
\end{equation}
is the discrete Fourier transform of $e^{in\delta_\Delta\tau}\bar{G}_n\mathcal{K}^n\tilde{C}(n\tau)$. We have written Eq.~\eqref{eq-supp-mat:dft} in terms of $\delta_\Delta$, where $\delta_\Delta\equiv\Delta$ (mod $2\pi/\tau$), in order to emphasize that, due to the periodicity of $e^{i\Theta}$ ($\Theta\in\mathbb{R}$), the shift by $\Delta$ of the frequency content of the echo envelope is equivalent to a phase shift due to a smaller quantity $\delta_\Delta$ bounded above by $2\pi/\tau$. [In the case where $\tau=2\pi m/\Delta$ for some $m\in\mathbb{Z}$, $\delta_\Delta=0$.] Provided both $\Delta$ and $\tau$ are known, the revival amplitudes $\bar{G}_n\mathcal{K}^n\tilde{C}(n\tau)$ can be recovered by sweeping the detuning over some interval $O(2\pi/\tau)$, inverting the discrete Fourier transform, multiplying by $\bar{G}_n^{-1}$, and, in the case where $\tau\neq 2\pi m/\Delta$, multiplying by the appropriate phase factor ($e^{-i\delta_\Delta n\tau}$).
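The reconstruction just described amounts to a discrete Fourier inversion and is simple to implement. The sketch below (with synthetic revival amplitudes invented for the example, and assuming $\delta_\Delta=0$, i.e., $\tau=2\pi m/\Delta$) recovers the amplitudes $\bar{G}_n\mathcal{K}^n\tilde{C}(n\tau)$ from samples of $\tilde{C}_{N,\tau}(\omega)$ at $N+1$ detunings spanning one period $2\pi/\tau$:
\begin{verbatim}
import numpy as np

N, tau = 32, 10.0                 # pulse number and spacing (arbitrary)
n = np.arange(N + 1)
amps = np.exp(-np.sqrt(5e-5 * n * tau))  # synthetic G_n K^n C(n tau)

# Sample the spectrum at N+1 detunings over one period 2*pi/tau;
# the phase e^{i n delta_Delta tau} is omitted since delta_Delta = 0.
omega = 2 * np.pi * np.arange(N + 1) / ((N + 1) * tau)
spec = np.array([np.sum(amps * np.exp(-1j * n * w * tau)) for w in omega])

# Invert the discrete Fourier transform to recover the amplitudes
rec = np.array([np.mean(spec * np.exp(1j * m * omega * tau)) for m in n])
print(np.max(np.abs(rec - amps)))        # ~ machine precision
\end{verbatim}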
\section{Quantum noise}
\label{supp-mat-sec:S4-quantum-noise}
We can find a generic expression for the echo envelope amplitude $\tilde{C}_0(n\tau)$ [Eq.~\eqref{eq-supp-mat:C0}] within a leading-order Magnus expansion and Gaussian approximation. These approximations are generic and will be satisfied for a broad class of non-Markovian environments. Provided $\braket{h}_0=0$, a Magnus expansion to second order in $h(t)$, followed by a Gaussian approximation~\cite{supp:beaudoin2013enhanced}, gives
\begin{equation}
\tilde{C}_0(n\tau)\simeq e^{-\gamma_\phi n\tau}\mathrm{exp}\bigg\{{-}\int \frac{d\omega}{2\pi}\frac{F(\omega,n\tau)}{\omega^2}S(\omega)\bigg\},
\end{equation}
where $F(\omega,n\tau)$ is the CPMG filter function
\begin{equation}
F(\omega,n\tau) = \frac{\omega^2}{2}\left|\int_0^{n\tau}dt\,s(t)e^{i\omega t}\right|^2,
\end{equation}
and where, in terms of $\Omega(t)= \eta(t)+h(t)$,
\begin{equation}
\boxed{S(\omega)=\mathrm{lim}_{\epsilon\rightarrow 0^+}\int_{-\infty}^{\infty}dt\:e^{-i\omega t-\epsilon\lvert t\rvert}\llangle\braket{\Omega(\lvert t\rvert)\Omega}\rrangle.}\label{eq-supp-mat:spectral-density}
\end{equation}
Equation \eqref{eq-supp-mat:spectral-density} corresponds to Eq.~\eqref{eq:spectral-density} of the main text. In general, there are two contributions to $S(\omega)$: a classical part $S_{\mathrm{c}}(\omega)=\mathrm{Re}\:S(\omega)=S_h(\omega)+S_\eta(\omega)$ that sets the magnitude of $\tilde{C}_0(n\tau)$, and a quantum part $S_{\mathrm{q}}(\omega)=\mathrm{Im}\:S(\omega)$ that gives rise to a phase. The classical and quantum parts $S_h(\omega)$ and $S_{\mathrm{q}}(\omega)$ depend, respectively, on symmetrized and anti-symmetrized correlation functions:
\begin{align}
&S_h(\omega)=\frac{1}{2}\int \frac{dt}{2\pi}e^{-i\omega t}\braket{\{h(t),h\}},\label{eq-supp-mat:specral-density-classical}\\
&S_{\mathrm{q}}(\omega)=\mathrm{lim}_{\epsilon\rightarrow 0^+}\frac{1}{2i}\int \frac{dt}{2\pi}e^{-i\omega t-\epsilon \lvert t\rvert}\braket{[h(\lvert t\rvert),h]}.\label{eq-supp-mat:spectral-density-quantum}
\end{align}
In noise spectroscopy, it is commonly assumed that $S_{\mathrm{q}}(\omega)=0$ and consequently, that $\tilde{C}_0(n\tau)$ is real-valued~\cite{supp:szankowski2017environmental, supp:alvarez2011measuring, supp:yuge2011measurement, supp:bylander2011noise, supp:szankowski2019transition}. However, ignoring the quantum-noise correction when it is significant can lead to erroneous conclusions. In these cases, the simple generalization provided by Eq.~\eqref{eq-supp-mat:spectral-density-quantum} can be used to perform accurate noise spectroscopy, even when the quantum noise is significant.
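For completeness, the filter function $F(\omega,n\tau)$ entering $\tilde{C}_0(n\tau)$ is easily computed from the switching function $s(t)$, which changes sign at each $\pi$-pulse. The sketch below is illustrative only: the $1/f$-type classical spectrum and all parameter values are assumptions made for the example, and the spectrum is taken symmetric in $\omega$ so the frequency integral reduces to $(1/\pi)\int_0^\infty$. It evaluates $F(\omega,n\tau)$ for an $n$-pulse CPMG sequence and estimates the resulting decay of $\tilde{C}_0(n\tau)$ for $\gamma_\phi=0$:
\begin{verbatim}
import numpy as np

def cpmg_filter(omega, n, tau):
    # F(w, n*tau) = (w^2/2)|int_0^{n*tau} dt s(t) e^{iwt}|^2, where
    # s(t) flips sign at the pi-pulse times (k + 1/2)*tau.
    edges = np.concatenate(([0.0], (np.arange(n) + 0.5) * tau, [n * tau]))
    signs = (-1.0) ** np.arange(n + 1)
    I = sum(s * (np.exp(1j * omega * b) - np.exp(1j * omega * a))
            / (1j * omega)
            for s, a, b in zip(signs, edges[:-1], edges[1:]))
    return 0.5 * omega**2 * np.abs(I)**2

omega = np.linspace(1e-3, 10, 20000)
S = 1e-3 / omega                    # assumed classical spectrum S_c(w)
n, tau = 16, 1.0
chi = np.sum(cpmg_filter(omega, n, tau) / omega**2 * S) \
    * (omega[1] - omega[0]) / np.pi
print(np.exp(-chi))                 # estimate of C0(n*tau)
\end{verbatim}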
\section{Modeling the effect of inhomogeneous broadening on the cavity transmission spectrum}
\label{supp-mat-sec:s5-input-output}
\noindent In order to calculate the cavity transmission spectrum $A_\mathrm{T}(\omega)=r_{\mathrm{out},2}(\omega)/r_{\mathrm{in},1}(\omega)$ using input-output theory~\cite{supp:gardiner1985input}, one must consider the coupled quantum Langevin equations of the cavity and (qubit-environment) system. We assume no qubit driving, $H_{\mathrm{drive}}(t)=0$. Within a rotating-wave approximation (RWA) requiring that $g$, $\lvert\delta\rvert\ll\lvert\Delta+\omega_c\rvert$, these equations can be decoupled in linear-response with respect to $g$~\cite{supp:kohler2018dispersive, supp:mielke2021nuclear}, resulting in a qubit susceptibility [Eq.~\eqref{eq-supp-mat:input-output-susceptibility}, below] that depends only on the eigenstates $\ket{\sigma,m}$ of $H_{\sigma}=\bra{\sigma}V\ket{\sigma}+H_{\mathrm{E}}$ for a fixed value of $\sigma\in\{e,g\}$. The effect of inhomogeneous broadening can then be modeled by averaging $A_\mathrm{T}(\omega)=\llangle A_\mathrm{T}(\omega,\eta)\rrangle$ over the distribution $\llangle\dotsm\rrangle$ of noise realizations. For a time-independent $\eta$, this gives
\begin{equation}
\boxed{A_{\mathrm{T}}(\omega)\simeq\bigg\langle\hspace{-0.15cm}\bigg\langle\frac{-\sqrt{\kappa_1\kappa_2}}{i(\omega+\omega_c)+ig^2\chi_\eta(\omega)+\kappa/2}\bigg\rangle\hspace{-0.15cm}{\bigg\rangle},} \label{eq-supp-mat:input-output-transmission}
\end{equation}
where
\begin{equation}
\boxed{\chi_\eta(\omega)=i\sum_{mn}\frac{(p_{en}-p_{gm})\lvert\braket{g,m\rvert e,n}\rvert^2}{i(\omega+\Delta+\eta-(\epsilon_{gm}-\epsilon_{en}))+\gamma_\phi}.}\label{eq-supp-mat:input-output-susceptibility}
\end{equation}
In Eq$.$~(\ref{eq-supp-mat:input-output-susceptibility}), we denote by $\epsilon_{\sigma m}$ the eigenenergies of the Hamiltonian $H_{\sigma}$: $H_{\sigma}\ket{\sigma, m}=\epsilon_{\sigma m}\ket{\sigma, m}$. This result also assumes an initial state $\rho(0)$ that is diagonal in the eigenbasis of $\sum_{\sigma}H_{\sigma}\ket{\sigma}\bra{\sigma}$, so that $\rho(0)=\sum_{\sigma,m} p_{\sigma m}\ket{\sigma}\bra{\sigma}\otimes\ket{\sigma, m}\bra{\sigma, m}$.
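As an illustration of how the average over $\eta$ modifies the transmission spectrum, consider the simplest case of a trivial environment ($H_\mathrm{E}=0$) with the qubit polarized in its ground state, so that $\chi_\eta(\omega)$ reduces to a single Lorentzian term with $p_g-p_e=1$. The following sketch (all parameter values invented for the example, with the qubit taken resonant with the cavity, $\Delta=\omega_c$) performs the average in Eq.~\eqref{eq-supp-mat:input-output-transmission} by Monte Carlo sampling of $\eta$:
\begin{verbatim}
import numpy as np

kappa, kappa1, kappa2 = 1.0, 0.5, 0.5
g, gamma_phi, T2s = 0.2, 0.01, 2.0
omega_c = 5.0
Delta = omega_c               # resonant qubit: delta = Delta - omega_c = 0

def A_T(omega, eta):
    # Trivial environment (H_E = 0), qubit in |g>: chi is one Lorentzian
    chi = -1j / (1j * (omega + Delta + eta) + gamma_phi)
    return -np.sqrt(kappa1 * kappa2) / (
        1j * (omega + omega_c) + 1j * g**2 * chi + kappa / 2)

omega = np.linspace(-omega_c - 3, -omega_c + 3, 1001)
rng = np.random.default_rng(0)
eta = rng.normal(0.0, np.sqrt(2) / T2s, 2000)   # <<eta^2>> = 2/(T2*)^2
A_avg = np.mean([A_T(omega, e) for e in eta], axis=0)
print(np.abs(A_avg).max())    # height of the broadened transmission peak
\end{verbatim}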
\section{Quantifying the signal}
\label{supp-mat-sec:s6-signal}
After each measurement cycle, typically involving an $N$-pulse dynamical decoupling sequence, the transmission line coupled to the output port will have a quantum state $\rho_{\mathrm{TL}}$ that encodes information about the coherence dynamics that occurred throughout the dynamical decoupling sequence. The amount of information that can be gained per measurement cycle will be limited by both the nature of the (generally mixed) state $\rho_{\mathrm{TL}}$ and by the inference procedure used to extract information from it. Here, we characterize a measure of the signal that depends only on $\rho_{\mathrm{TL}}$.
Provided $\rho_\mathrm{TL}$ can be described in the subspace of zero or one photons (this limit can always be reached by taking $\kappa_2/\kappa$ sufficiently small), it can generally be written in the form
\begin{equation}
\rho_{\mathrm{TL}}=(1-S)\rho_{\mathrm{inc}}+S\ket{\Psi}\bra{\Psi},\label{eq-supp-mat:trans-line-dm-2}
\end{equation}
where the incoherent part $\rho_\mathrm{inc}$ satisfies $\mathrm{Tr}\{r_{k,2}\rho_{\mathrm{inc}}\}=0\;\forall k$. In Eq.~\eqref{eq-supp-mat:trans-line-dm-2}, the size of the signal is characterized by $S\in[0,1]$, and the coherence is fully described by the state $\ket{\Psi}$ of an effective two-level system:
\begin{equation}
\ket{\Psi}=\frac{1}{\sqrt{2}}\left(\ket{0}+\ket{1}\right);\quad \ket{1}=\frac{2}{S}\sum_k \braket{r_{k,2}}_t r_{k,2}^\dagger\ket{0};\quad S=2[\sum_k\lvert \braket{r_{k,2}}_t\rvert^2]^{1/2}.\label{eq-supp-mat:signal}
\end{equation}
To interpret the meaning of the signal $S$, it is useful to consider an extreme example. We consider a qubit prepared in an initial state determined by some fixed (but initially undetermined) phase $\phi_0$: $\braket{\sigma_-}_0=e^{-i\phi_0}/2$. That qubit is then coupled to the cavity, but otherwise has no source of dephasing: $\gamma_\phi=h(t)=\eta(t)=0$. We assume for this example that there is no additional dynamics induced through dynamical decoupling [$H_\mathrm{drive}(t)=0$] and furthermore take $\kappa_1=\kappa_\mathrm{ext}=0$. For a time $t\gg \Gamma_{\mathrm{P}}^{-1}$, the state of the qubit will be transferred, via a Wigner-Weisskopf decay process, to a definite pure state $\ket{\Psi}=\ket{\Psi(\phi_0)}$ of the output transmission line, giving $S=1$. The initial phase $\phi_0$ of the qubit can then be inferred (in principle) through a phase estimation procedure by performing measurements on a well defined two-level subspace of transmission-line states, yielding up to one bit per measurement. In general, the state of the transmission line may be correlated with the state of the qubit, environment, and cavity. These correlations, together with the average over realizations of the random noise parameter $\eta(t)$ will lead to a mixed state $\rho_\mathrm{TL}$ with $S<1$. Having a reduced value $S<1$ thus sets a fundamental limitation on the information that can be extracted from the complete state of the transmission line. It is straightforward to characterize the maximum achievable signal $S$ given the coefficients $\braket{r_{k,2}}_t$ arising from a CPMG sequence, fully accounting for correlation with other degrees of freedom and accounting for random noise.
Limits on the signal $S$ can be found in the present context from the sequence of derivations given above. We substitute Eq.~\eqref{eq-supp-mat:sharkfin-simplified} for the approximate form of the wavepackets $f_n(t)$ (for $\kappa T_2^*\ll 1$) into Eq.~\eqref{eq-supp-mat:train-of-sharks} for the cavity field $\braket{\tilde{a}}_t$. This result is then substituted into Eq.~\eqref{eq-supp-mat:eom-rk} for $\braket{r_{k,2}}_t$, which, for $\braket{r_{k,2}}_0=0$, gives
\begin{equation}\label{eq-supp-mat:rk}
\braket{r_{k,2}}_t\simeq -\braket{\sigma_-}_0\sqrt{\pi}g T_2^*\eta_{k,2}\left[\frac{1}{2}X_{0k}(t)+\sum_{n=1}^N \bar{G}_nX_{nk}(t)\mathcal{K}^n\tilde{C}(n\tau)\right],
\end{equation}
where (recalling the relation $\delta = \Delta-\omega_c$),
\begin{equation}
X_{nk}(t)=e^{-i\omega_k(t-n\tau)}\int_0^tdt'e^{i(\omega_k-\omega_c)(t'-n\tau)-\frac{\kappa}{2}(t'-n\tau)}\Theta(t'-n\tau).
\end{equation}
For times $t-n\tau\gg\kappa^{-1}$ long compared to the timescale over which cavity transients die out, this object takes the simple form
\begin{equation}
X_{nk}(t)\simeq \frac{e^{-i\omega_k(t-n\tau)}}{\frac{\kappa}{2}-i(\omega_k-\omega_c)}\Theta(t-n\tau);\quad t-n\tau\gg \kappa^{-1}.
\end{equation}
To evaluate $S$, we substitute the expression for $\braket{r_{k,2}}_t$ [Eq.~\eqref{eq-supp-mat:rk}] into $\sum_k \left|\braket{r_{k,2}}_t\right|^2$. In addition to terms arising from the same echo/revival, proportional to
\begin{equation}
\sum_k |\eta_{k,2}|^2\left|X_{nk}(t)\right|^2\simeq \sum_k \frac{|\eta_{k,2}|^2}{(\omega_k-\omega_c)^2+(\kappa/2)^2}\Theta(t-n\tau)\simeq\frac{\kappa_2}{\kappa},\quad t-n\tau\gg \kappa^{-1},
\end{equation}
there will also be cross terms associated with distinct echoes at times $n\tau$, $m\tau$, with $n\ne m$. These cross-terms will, however, be suppressed exponentially for $\kappa\tau\gg 1$:
\begin{equation}
\sum_k |\eta_{k,2}|^2X_{nk}(t)X_{mk}^*(t)\simeq \frac{\kappa_2}{\kappa} e^{-|n-m|\kappa\tau/2}\Theta(t-n\tau)\Theta(t-m\tau)\simeq 0; \quad (n\ne m, \kappa\tau\gg 1).
\end{equation}
Neglecting these cross terms for $\kappa\tau\gg 1$, we then find that
\begin{equation}
S=2\left[\left|\braket{\sigma_-}_0\right|^2\pi(gT_2^*)^2 \frac{\kappa_2}{\kappa} N_\mathrm{eff}\right]^{1/2},
\end{equation}
where the parameter $N_\mathrm{eff}$ scales with the number of revivals/echoes that can be achieved before coherence is lost:
\begin{equation}
N_\mathrm{eff} = \frac{1}{4}+\sum_{n=1}^N \left|\bar{G}_n\tilde{C}(n\tau)\right|^2,\quad t-N\tau>\kappa^{-1}.
\end{equation}
For a Hahn echo sequence ($N=1$), the maximum signal is achieved for $|\braket{\sigma_-}_0|=1/2$ and $|\bar{G}_1\tilde{C}(\tau)|=1$, giving
\begin{equation}
\boxed{ S \le S_\mathrm{Hahn} = \frac{\sqrt{5\pi}}{2} gT_2^*\sqrt{\frac{\kappa_2}{\kappa}}.}
\end{equation}
In this case, the total recoverable signal per cycle is thus still limited by $gT_2^*\ll 1$. In the case of a CPMG sequence, we expect the product $|\bar{G}_n\tilde{C}(n\tau)|$ to be upper-bounded in the best case by the asymptotic form given in Eq.~\eqref{eq-supp-mat:GnCAsymptotic}, resulting in
\begin{eqnarray}
N_\mathrm{eff} & \le & \frac{1}{4}+\sum_{n=1}^\infty 4e^{-6\sqrt{\gamma_\mathrm{P}n\tau}}\\
& \simeq & \frac{4}{\gamma_\mathrm{P}\tau}\int_0^\infty dx e^{-6\sqrt{x}}= \frac{2}{9\gamma_\mathrm{P}\tau},\quad (\gamma_\mathrm{P}\tau\ll 1).
\end{eqnarray}
Inserting this result into the definition for $S$ and using the relation $\gamma_\mathrm{P}=(gT_2^*)^2\kappa/2$ gives an approximate upper bound on the signal that can be achieved with a CPMG sequence:
\begin{equation}
\boxed{S\lesssim S_\mathrm{CPMG} = \frac{2\sqrt{\pi}}{3}\sqrt{\frac{\kappa_2}{\kappa}\frac{1}{\kappa\tau}}.}\label{eq-supp-mat:SCPMG}
\end{equation}
For the CPMG sequence, the recoverable signal is not limited by $gT_2^*\ll1$, but it is still small in the parameter $1/\kappa\tau\ll 1$.
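The replacement of the sum defining $N_\mathrm{eff}$ by an integral is accurate for $\gamma_\mathrm{P}\tau\ll 1$, as a quick numerical check confirms (illustrative value of $\gamma_\mathrm{P}\tau$ only):
\begin{verbatim}
import numpy as np

gp_tau = 1e-4                 # gamma_P * tau, assumed small
n = np.arange(1, 2_000_000)
exact = 0.25 + np.sum(4 * np.exp(-6 * np.sqrt(gp_tau * n)))
print(exact, 2 / (9 * gp_tau))   # the two agree to leading order
\end{verbatim}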
We can improve on the result given in Eq.~\eqref{eq-supp-mat:SCPMG} by modulating the coupling $g\rightarrow g(t)$ [or the detuning $\delta\rightarrow \delta(t)$] as a function of time so that $g(t)\neq 0$ [$\delta(t)\lesssim (T_2^*)^{-1}$] only for times $\lvert t-n\tau\rvert\leq t_{\mathrm{on}}$, where $t_{\mathrm{on}}<T_2^*$ is short compared to the duration of a revival. This has the effect of reducing cavity-induced backaction on the qubit by eliminating incoherent Purcell decay at a rate $\Gamma_{\mathrm{P}}=g^2\kappa/[(\delta+\eta)^2+(\kappa/2)^2]$ for times when the qubit coherence is already suppressed by inhomogeneous broadening. In this case, the self-energy is given by
\begin{equation}
\Sigma(t,t')\simeq -ig^2\sum_n\Theta_n(t)\Theta_n(t'),
\end{equation}
where $\Theta_n(t)=\Theta\left(t-\left(n\tau-\tfrac{t_{\mathrm{on}}}{2}\right)\right)-\Theta\left(t-\left(n\tau+\tfrac{t_{\mathrm{on}}}{2}\right)\right)$. Substituting this result into the equation of motion for $\dbtilde{\sigma}_-(t)$ [Eq.~\eqref{eq-supp-mat:sigma-minus-eom-3}] then gives
\begin{equation}
\tilde{C}(n\tau)=\left[1-\left(gt_\mathrm{on}\right)^2\right]^n\tilde{C}_0(n\tau),\label{eq-supp-mat:ton-decay}
\end{equation}
leading to wavepackets of the form
\begin{equation}
f_n(t)\simeq -i g t_\mathrm{on}\chi_c(t)=-i g t_\mathrm{on}e^{i\delta t-\frac{\kappa}{2}t}\Theta(t).
\end{equation}
Following the same reasoning that led to the limits $S_\mathrm{Hahn}$ and $S_\mathrm{CPMG}$, we assume the ideal case where $\tilde{C}_0(n\tau)=1$ [in Eq.~\eqref{eq-supp-mat:ton-decay}] and $\left|\braket{\sigma_-}_0\right|=1/2$. This gives an upper bound
\begin{equation}
S\le \left[(g t_\mathrm{on})^2 \frac{\kappa_2}{\kappa} N_\mathrm{eff}\right]^{1/2},
\end{equation}
where
\begin{equation}
N_\mathrm{eff}\le \frac{1}{4}+\sum_{n=1}^\infty \left[1-(g t_\mathrm{on})^2\right]^n\simeq \frac{1}{(g t_\mathrm{on})^2},\quad |g t_\mathrm{on}|\ll 1.
\end{equation}
The signal, being $T_2^*$-independent, is therefore no longer limited by inhomogeneous broadening:
\begin{equation}
\boxed{S\lesssim S_\mathrm{max} = \sqrt{\frac{\kappa_2}{\kappa}}.}
\end{equation}
In this case, for $\kappa_2/\kappa\to 1$, it is possible (at least in principle) to extract one bit of information per cycle, similar to the case of a single-shot readout.
\section{Introduction}
Let $G$ be a group. A subset $S$ of $G$ is \emph{product-free} if
there do not exist $a,b,c \in S$ (not necessarily distinct\footnote{In some
sources, one does require $a \neq b$. For instance, as noted in
\cite{guiduci-hart}, I mistakenly assumed this in
\cite[Theorem~3]{kedlaya-amm}.})
such that $ab=c$.
One can ask about the existence of large product-free subsets for various
groups, such as the groups of integers (see next section), or compact
topological groups (as suggested in \cite{kedlaya-amm}). For the rest of this
paper, however, I will require $G$ to be a finite group of order
$n > 1$. Let $\alpha(G)$ denote the size of the largest product-free subset of
$G$; put $\beta(G) = \alpha(G)/n$, so that $\beta(G)$ is the density of
the largest product-free subset. What can one say about
$\alpha(G)$ or $\beta(G)$ as a function
of $G$, or as a function of $n$?
(Some of our answers will include an unspecified positive constant;
I will always call this constant $c$.)
The purpose of this paper is threefold. I first
review the history of this problem, up to and including
my involvement via Joe Gallian's REU (Research Experience for Undergraduates)
at the University of Minnesota, Duluth,
in 1994;
since I did this once already in \cite{kedlaya-amm}, I will be briefer
here. I then describe some very recent progress made by Gowers
\cite{gowers}. Finally, I speculate on the gap between the lower and upper
bounds, and revisit my 1994 argument to show that this gap cannot be
closed using Gowers's argument as given.
Note the usual convention that multiplication and inversion are
permitted to act on subsets of $G$, i.e., for $A,B \subseteq G$,
\[
AB = \{ab: a \in A, b \in B\}, \qquad A^{-1} = \{a^{-1}: a \in A\}.
\]
\section{Origins: the abelian case}
In the abelian case, product-free subsets are more customarily called
\emph{sum-free} subsets.
The first group in which such subsets were studied is the
group of integers $\mathbb{Z}$; the first reference I could find for
this is Abbott and Moser \cite{abbott-moser},
who expanded upon Schur's theorem that the set
$\{1, \dots, \lfloor n! e \rfloor\}$ cannot be partitioned into
$n$ sum-free sets. This led naturally to considering sum-free
subsets of finite abelian groups, for which the following is easy.
\begin{theorem}
For $G$ abelian, $\beta(G) \geq \frac{2}{7}$.
\end{theorem}
\begin{proof}
For $G = \mathbb{Z}/p\mathbb{Z}$ with $p > 2$,
we have $\alpha(G) \geq \lfloor
\frac{p+1}{3} \rfloor$ by taking
\[
S = \left\{ \left\lfloor \frac{p+1}{3}
\right\rfloor, \dots, 2 \left\lfloor \frac{p+1}{3}
\right\rfloor - 1 \right\}.
\]
Then apply the following lemma.
\end{proof}
\begin{lemma} \label{L:quotient}
For $G$ arbitrary, if $H$ is a quotient of $G$, then
\[
\beta(G) \geq \beta(H).
\]
\end{lemma}
\begin{proof}
Let $S'$ be a product-free subset of $H$ of size $\alpha(H)$.
The preimage of $S'$ in $G$
is product-free of size $\#S' \#G/\#H$, so
$\alpha(G) \geq \alpha(H)\#G/\#H$.
\end{proof}
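One can check the middle-third construction by brute force for small primes; the following Python sketch (a direct verification, not part of any proof) confirms that the interval above is sum-free in $\mathbb{Z}/p\mathbb{Z}$ and has the stated size:
\begin{verbatim}
def middle_third(p):
    k = (p + 1) // 3
    return set(range(k, 2 * k))

for p in [5, 7, 11, 13, 101, 997]:
    S = middle_third(p)
    # a = b is allowed, so check all ordered pairs
    assert all((a + b) % p not in S for a in S for b in S)
    print(p, len(S), (p + 1) // 3)
\end{verbatim}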
In fact, one can prove an exact formula for $\alpha(G)$ showing
that this construction is essentially optimal. Many cases were
established around 1970, but only in 2005 was the proof of the
following result finally
completed by Green and Ruzsa \cite{green-ruzsa}.
\begin{theorem}[Green-Ruzsa]
Suppose that $G$ is abelian.
\begin{enumerate}
\item[(a)] If $n$ is divisible by a prime $p \equiv 2 \pmod{3}$,
then for the least such $p$, $\alpha(G) = \frac{n}{3} + \frac{n}{3p}$.
\item[(b)] Otherwise, if $3 | n$, then $\alpha(G) = \frac{n}{3}$.
\item[(c)] Otherwise, $\alpha(G) = \frac{n}{3} - \frac{n}{3m}$,
for $m$ the exponent (largest order of any element) of $G$.
\end{enumerate}
\end{theorem}
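For very small cyclic groups, the Green-Ruzsa formula can be compared against exhaustive search. The following brute-force sketch (exponential time, so feasible only for tiny $n$) computes $\alpha(\mathbb{Z}/n\mathbb{Z})$ directly; recall that $a=b$ is allowed:
\begin{verbatim}
from itertools import combinations

def alpha_cyclic(n):
    # largest sum-free subset of Z/nZ by exhaustive search
    for size in range(n, 0, -1):
        for T in combinations(range(n), size):
            s = set(T)
            if all((a + b) % n not in s for a in s for b in s):
                return size
    return 0

for n in range(2, 13):
    print(n, alpha_cyclic(n))
# e.g. alpha = n/3 + n/(3p) when the least prime p = 2 (mod 3) divides n
\end{verbatim}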
One possible explanation for the delay
is that it took this long for this subject to migrate into the mathematical
mainstream, as part of the modern subject of \textit{additive combinatorics}
\cite{tao-vu}; see Section~\ref{sec:interlude}.
The first appearance of the problem of computing $\alpha(G)$ for
nonabelian $G$ seems to have been
in a 1985 paper of Babai and S\'os \cite{babai-sos}.
In fact, the problem appears there as an afterthought;
the authors were more interested in \textit{Sidon sets},
in which the equation $ab^{-1} = cd^{-1}$
has no solutions with $a,b,c,d$ taking
at least three distinct values. This construction can be related to
embeddings of graphs as induced subgraphs of Cayley graphs;
product-free subsets arise because they relate to the special case
of embedding stars in Cayley graphs.
Nonetheless, the Babai-S\'os paper is the first
to make a nontrivial assertion about $\alpha(G)$ for general $G$; see
Theorem~\ref{T:babai-sos}.
This circumstance suggests rightly
that the product-free problem is only one of a broad class of
problems about structured subsets of groups; this class can be
considered a nonabelian version of additive combinatorics,
and progress on problems in this class has been driven as much by
the development of the abelian theory as by interest from
applications in theoretical computer science. An example of the latter
is a problem of Cohn and Umans \cite{cohn-umans} (see also
\cite{cksu}): to find groups $G$
admitting large subsets $S_1, S_2, S_3$ such that the
equation $a_1 b_1^{-1} a_2 b_2^{-1} a_3 b_3^{-1} = e$, with
$a_i, b_i \in S_i$, has only solutions with $a_i = b_i$ for all $i$.
A sufficiently good construction would resolve an ancient problem
in computational algebra: to prove that two $n \times n$ matrices can be
multiplied using $O(n^{2+\epsilon})$ ring operations for any $\epsilon > 0$.
\section{Lower bounds: Duluth, 1994}
Upon my arrival at the REU in 1994, Joe gave me the paper of Babai and S\'os,
perhaps hoping I would have some new insight about Sidon sets. Instead,
I took the path less traveled and started thinking about
product-free sets.
The construction of product-free subsets given in \cite{babai-sos}
is quite simple: if $H$ is a proper subgroup of $G$, then any
nontrivial coset of $H$ is product-free. This is trivial to
prove directly, but it occurred to me to formulate it in terms of
permutation actions. Recall that
specifying a transitive permutation action of the group $G$ is the same
as simply identifying a conjugacy class of subgroups: if $H$ is one
of the subgroups, the action is left multiplication on
left cosets of $H$. (Conversely, given an action, the point stabilizers
are conjugate subgroups.) The construction of Babai and S\'os can then
be described as follows.
\begin{theorem}[Babai-S\'os] \label{T:babai-sos}
For $G$ admitting a transitive action on $\{1,\dots,m\}$ with $m>1$,
$\beta(G) \geq m^{-1}$.
\end{theorem}
\begin{proof}
The set of all $g \in G$ such that $g(1) = 2$ is product-free of size $n/m$.
\end{proof}
I next wondered: what if you allow $g$ to carry 1 into a slightly
larger set, say a set $T$ of $k$ elements?
You would still get a product-free set
if you forced each $x \in T$ to map to something not in $T$.
This led to the following argument.
\begin{theorem} \label{T:kedlaya}
For $G$ admitting a transitive action on $\{1,\dots,m\}$ with $m>1$,
$\beta(G) \geq c m^{-1/2}$.
\end{theorem}
\begin{proof}
For a given $k$, we compute a lower bound for the average size of
\[
S = \bigcup_{x \in T} \{g \in G: g(1) = x\}
- \bigcup_{y \in T} \{g \in G: g(1), g(y) \in T\}
\]
for $T$ running over $k$-element subsets of $\{2, \dots, m\}$.
Each set in the first union contains $n/m$ elements, and they are all disjoint,
so the first union contains $kn/m$ elements.
To compute the average of a set in the second union, note that for fixed
$g \in G$ and $y \in \{2,\dots,m\}$,
a $k$-element subset $T$ of $\{1, \dots, m\}$
contains $g(1), y, g(y)$ with probability $\frac{k(k-1)}{m(m-1)}$
if two of the three coincide and $\frac{k(k-1)(k-2)}{m(m-1)(m-2)}$
otherwise. A bit of arithmetic then shows that the average size of $S$
is at least
\[
\frac{kn}{m} - \frac{k^3n}{(m-2)^2}.
\]
Taking $k \sim (m/3)^{1/2}$, we obtain
$\alpha(G) \geq c n/m^{1/2}$. (For any fixed $\epsilon > 0$,
the implied constant can be improved to
$e^{-1} - \epsilon$ for $m$ sufficiently large;
see the proof of Theorem~\ref{T:lower bound}. On the other hand, the
proof as given can be made constructive in case $G$ is doubly transitive,
as then there is no need to average over $T$.)
\end{proof}
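To make the construction concrete, the following sketch carries it out for the natural (doubly transitive) action of $S_m$ on $\{1,\dots,m\}$, where no averaging over $T$ is needed; the value $m=6$ and the particular $T$ are arbitrary choices for illustration:
\begin{verbatim}
from itertools import permutations

m, k = 6, 2                         # k ~ sqrt(m/3)
T = set(range(2, 2 + k))            # a fixed k-subset of {2,...,m}
G = list(permutations(range(1, m + 1)))

def act(g, x):                      # g(x) for g stored as a tuple
    return g[x - 1]

def compose(a, b):                  # (ab)(x) = a(b(x))
    return tuple(act(a, act(b, x)) for x in range(1, m + 1))

# g(1) lands in T, but g maps no element of T back into T
S = [g for g in G
     if act(g, 1) in T and all(act(g, y) not in T for y in T)]
S_set = set(S)
assert all(compose(a, b) not in S_set for a in S for b in S)
print(len(S), len(G))               # 144 product-free elements in S_6
\end{verbatim}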
This gives a lower bound depending on the parameter $m$, which we can
view as the index of the largest proper subgroup of $G$. To state a bound
depending only on $n$, one needs to know something about the dependence
of $m$ on $n$; by
Lemma~\ref{L:quotient}, it suffices to prove a lower bound on $m$ in terms
of $n$ for all \emph{simple} nonabelian
groups. I knew this could be done in
principle using the classification of finite
simple groups (CFSG); after some asking around,
I got hold of a manuscript by Liebeck and Shalev
\cite{liebeck-shalev} that included the bound I wanted, leading to
the following result from \cite{kedlaya}.
\begin{theorem}
Under CFSG, the group $G$ admits a transitive action on a set of size $1 <
m \leq c n^{3/7}$.
Consequently, Theorem~\ref{T:babai-sos} implies
$\alpha(G) \geq cn^{4/7}$, whereas
Theorem~\ref{T:kedlaya} implies
$\alpha(G) \geq c n^{11/14}$.
\end{theorem}
At this point, I was pretty excited to have discovered something interesting
and probably publishable. On the other hand,
I was completely out of ideas! I had no hope of getting any stronger
results, even for specific classes of groups, and it seemed impossible
to derive any nontrivial upper bounds at all. In fact, Babai and S\'os
suggested in their paper that maybe $\beta(G) \geq c$ for all $G$;
I was dubious about this, but I couldn't convince myself that one couldn't
have $\beta(G) \geq c n^{-\epsilon}$ for all $\epsilon > 0$.
So I decided to write this result up by itself, as my first Duluth
paper, and ask Joe for another problem (which naturally he provided).
My paper ended up
appearing as \cite{kedlaya}; I revisited the topic when I was asked to submit
a paper in connection with being named a runner-up for the Morgan Prize
for undergraduate research, the result being \cite{kedlaya-amm}.
I then put this problem in a mental deep freezer,
figuring (hoping?) that my youthful foray into combinatorics
would be ultimately forgotten, once I had made some headway with some
more serious mathematics, like algebraic number theory or algebraic
geometry. I was reassured by the
expectation that the nonabelian product-free problem was both intractable
and of no interest to anyone, certainly not to any serious mathematician.
Ten years passed.\footnote{If you do not recognize this reference, you may
not have
read the excellent novel \textit{The Grasshopper King}, by fellow
Duluth REU alumnus Jordan Ellenberg.}
\section{Interlude: back to the future}
\label{sec:interlude}
Up until several weeks before the Duluth conference, I had been planning to
speak about the latest and greatest in algebraic number
theory (the proof of Serre's conjecture linking modular forms
and mod $p$ Galois representations, recently completed by
Khare and Wintenberger). Then I got an email that suggested that
maybe I should try embracing my past instead of running from it.
A number theorist friend (Michael Schein) reported having attended
an algebra seminar at Hebrew University about product-free
subsets of finite groups, and hearing my name in this
context. My immediate reaction was to wonder what self-respecting mathematician
could possibly be interested in my work on this problem.
The answer was Tim Gowers, who had recently established a nontrivial
upper bound for $\alpha(G)$ using a remarkably simple argument.
It seems that in the
ten years since I had moved on to ostensibly more mainstream
mathematics, additive combinatorics had come into its own, thanks partly
to the efforts of no fewer than three Fields medalists (Tim Gowers, Jean
Bourgain, and Terry Tao); some sources date the start of this boom to
Ruzsa's publication in 1994 of a simplified proof \cite{ruzsa} of a theorem of
Freiman on subsets of $\mathbb{Z}/p\mathbb{Z}$ having few pairwise sums.
In the process, some interest had spilled over
to nonabelian problems.
The introduction to Gowers's paper \cite{gowers}
cites\footnote{Since Joe is fond of noting ``program firsts'', I
should point out that
this appears to be the first citation of a Duluth paper
by a Fields medalist. To my chagrin, I think it
is also the first such citation of any of my papers.}
my Duluth paper as giving the
best known lower bound on $\alpha(G)$ for general $G$.
At this point, it became
clear that I had to abandon my previous plan for the conference in favor
of a return visit to my mathematical roots.
\section{Upper bounds: bipartite Cayley graphs}
In this section, I'll proceed quickly through
Gowers's upper bound construction.
Gowers's paper
\cite{gowers} is exquisitely detailed;
I'll take that fact as license to be
slightly less meticulous here.
The strategy of Gowers is to consider three sets $A,B,C$
for which there is no true equation
$ab=c$ with $a \in A, b \in B, c \in C$, and give an upper bound on
$\#A \#B \#C$.
To do this, he studies a certain \emph{bipartite Cayley graph}
associated to $G$. Consider the bipartite graph
$\Gamma$ with vertex set $V_1 \cup V_2$, where each $V_i$
is a copy of $G$, with an edge from $x \in V_1$ to $y \in V_2$
if and only if $yx^{-1} \in A$.
We are then given that there are no edges
between $B \subseteq V_1$ and $C \subseteq V_2$.
A good reflex at this point would be to consider
the eigenvalues of the adjacency matrix of $\Gamma$. For bipartite graphs,
it is more convenient to do something slightly different using singular values;
although this variant of spectral analysis of graphs is quite natural, I am only
aware of the reference \cite{bollobas-nikiforov} from 2004 (and only
thanks to Gowers for pointing it out).
Let $N$ be the \emph{incidence matrix}, with columns indexed by $V_1$
and rows by $V_2$, with an entry in row $x$ and column $y$ if $xy$
is an edge of $\Gamma$.
\begin{theorem} \label{T:svd}
We can factor $N$ as a product
$U \Sigma V$ of $\#G \times \#G$ matrices over $\mathbb{R}$, with $U,V$ orthogonal
and $\Sigma$ diagonal with nonnegative entries.
(This is called a \emph{singular value decomposition} of $N$.)
\end{theorem}
\begin{proof}
(Compare \cite[Theorem~2.6]{gowers}, or see any textbook on numerical
linear algebra.)
By compactness of the unit ball, there is a greatest
$\lambda$ such that $\|N\mathbf{v}\| = \lambda \|\mathbf{v}\|$ for some nonzero $\mathbf{v} \in
\mathbb{R}^{V_1}$.
If $\mathbf{v} \cdot \mathbf{w} = 0$, then $f(t) = \|N(\mathbf{v} + t \mathbf{w})\|^2$
has a local maximum at $t=0$, so
\[
0 = \frac{d}{dt} \|N(\mathbf{v} + t \mathbf{w})\|^2 = 2 t (N\mathbf{v}) \cdot (N\mathbf{w}).
\]
Apply the same construction to the orthogonal complement of $\mathbb{R} \mathbf{v}$
in $\mathbb{R}^{V_1}$.
Repeating, we obtain an orthonormal basis of $\mathbb{R}^{V_1}$;
the previous calculation shows that the image of this basis in $\mathbb{R}^{V_2}$
is also orthogonal. Using these to construct $V,U$ yields the claim.
\end{proof}
The matrix $M = NN^T$ is symmetric, and has
several convenient properties.
\begin{enumerate}
\item[(a)]
The trace of $M$ equals the number of edges of $\Gamma$.
\item[(b)]
The eigenvalues of $M$ are the squares of the diagonal entries of $\Sigma$.
\item[(c)]
Since $\Gamma$ is regular of degree $\#A$ and connected, the
largest eigenvalue of $M$ is $\#A$,
achieved by the all-ones eigenvector $\mathbf{1}$.
\end{enumerate}
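These properties are easy to verify numerically on a small example; the following sketch (taking $G = \mathbb{Z}/5\mathbb{Z}$ in additive notation, purely for illustration) builds the incidence matrix $N$ and checks (a)-(c):
\begin{verbatim}
import numpy as np

n = 5
A = {1, 2}                          # an arbitrary subset of Z/5Z
N = np.array([[1 if (y - x) % n in A else 0 for x in range(n)]
              for y in range(n)])   # rows: V2 (y), columns: V1 (x)

M = N @ N.T
eigs = np.sort(np.linalg.eigvalsh(M))[::-1]
svals = np.sort(np.linalg.svd(N, compute_uv=False))[::-1]

assert np.isclose(np.trace(M), N.sum())   # (a): trace = number of edges
assert np.allclose(eigs, svals**2)        # (b): eigenvalues of M = sigma^2
assert np.isclose(svals[0], len(A))       # (c): top singular value = #A
\end{verbatim}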
\begin{lemma} \label{L:subspace}
Let $\lambda$ be the second largest diagonal entry of $\Sigma$.
Then the set $W$ of $\mathbf{v} \in \mathbb{R}^{V_1}$ with
$\mathbf{v} \cdot \mathbf{1} = 0$ and $\|N \mathbf{v}\| = \lambda \|\mathbf{v}\|$ is
a nonzero subspace of $\mathbb{R}^{V_1}$.
\end{lemma}
\begin{proof}
(Compare \cite[Lemma~2.7]{gowers}.)
From Theorem~\ref{T:svd}, we obtain an orthogonal basis $\mathbf{v}_1, \dots, \mathbf{v}_n$
of $\mathbb{R}^{V_1}$, with $\mathbf{v}_1 = \mathbf{1}$, such that $N\mathbf{v}_1, \dots, N\mathbf{v}_n$
are orthogonal, and $\|N\mathbf{v}_1\|/\|\mathbf{v}_1\|, \dots, \|N\mathbf{v}_n\|/\|\mathbf{v}_n\|$
are the diagonal entries of $\Sigma$; we may then identify $W$
as the span of the $\mathbf{v}_i$ with $i > 1$ and $\|N\mathbf{v}_i\| = \lambda \|\mathbf{v}_i\|$.
Alternatively, one may note that $W$ is obviously closed under scalar
multiplication, then check that $W$ is closed under addition
as follows.
If $\mathbf{v}_1, \mathbf{v}_2 \in W$, then
$\|N (\mathbf{v}_1 \pm \mathbf{v}_2)\| \leq \lambda \|\mathbf{v}_1 \pm \mathbf{v}_2\|$, but by
the parallelogram law
\begin{align*}
\|N\mathbf{v}_1 + N\mathbf{v}_2\|^2 +
\|N\mathbf{v}_1 - N\mathbf{v}_2\|^2 &= 2 \|N\mathbf{v}_1\|^2 + 2 \|N\mathbf{v}_2\|^2 \\
&= 2 \lambda^2 \|\mathbf{v}_1\|^2 + 2 \lambda^2 \|\mathbf{v}_2\|^2 \\
&= \lambda^2 \|\mathbf{v}_1 + \mathbf{v}_2\|^2 + \lambda^2 \|\mathbf{v}_1 - \mathbf{v}_2\|^2.
\end{align*}
Hence $\|N (\mathbf{v}_1 \pm \mathbf{v}_2)\| = \lambda \|\mathbf{v}_1 \pm \mathbf{v}_2\|$.
\end{proof}
Gowers's upper bound on $\alpha(G)$ involves the parameter $\delta$,
defined as the smallest dimension of a
nontrivial representation\footnote{One could just as well restrict
to real representations, which would increase $\delta$
by a factor of 2 in some cases. For instance,
if $G = \PSL_2(q)$ with $q \equiv 3 \pmod{4}$, this would give
$\delta = q-1$.}
of $G$.
For instance, if $G = \PSL_2(q)$ with $q$ odd,
then $\delta = (q-1)/2$.
\begin{lemma} \label{L:gowers}
If $\mathbf{v} \in \mathbb{R}^{V_1}$ satisfies $\mathbf{v} \cdot \mathbf{1} = 0$, then
$\|N\mathbf{v}\| \leq (n\#A/\delta)^{1/2} \|\mathbf{v}\|$.
\end{lemma}
\begin{proof}
Take $\lambda, W$ as in Lemma~\ref{L:subspace}.
Let $G$ act on $V_1$ and $V_2$ by right multiplication; then $G$
also acts on $\Gamma$. In this manner, $W$ becomes a real representation of $G$
in which no nonzero vector is fixed. In particular, $\dim(W) \geq \delta$.
Now note that the number of edges of $\Gamma$, which is
$n\#A$, equals the trace of $M$, which is at least $\dim(W) \lambda^2 \geq
\delta \lambda^2$.
This gives $\lambda^2 \leq n\#A/\delta$, proving the claim.
\end{proof}
We are now ready to prove Gowers's theorem
\cite[Theorem~3.3]{gowers}.
\begin{theorem}[Gowers] \label{T:gowers}
If $A,B,C$ are subsets of $G$ such that there is no true
equation $ab=c$ with $a \in A, b \in B, c \in C$, then
$\#A \#B \#C \leq n^3/\delta$. Consequently, $\beta(G)
\leq \delta^{-1/3}$.
\end{theorem}
For example, if $G = \PSL_2(q)$ with $q$ odd, then $n \sim c q^3$, so
$\alpha(G) \leq c n^{8/9}$. On the lower bound side, $G$ admits subgroups of
index $m \sim c q$, so $\alpha(G) \geq c n^{5/6}$.
\begin{proof}
Write $\#A = rn, \#B = sn, \#C = tn$. Let $\mathbf{v}$ be the characteristic
function of $B$ viewed as an element of $\mathbb{R}^{V_1}$, and put
$\mathbf{w} = \mathbf{v} - s \mathbf{1}$. Then
\begin{align*}
\mathbf{w} \cdot \mathbf{1} &= 0 \\
\mathbf{w} \cdot \mathbf{w} &= (1-s)^2 \#B + s^2 (n - \#B) = s(1-s)n \leq sn,
\end{align*}
so by Lemma~\ref{L:gowers}, $\|N\mathbf{w}\|^2 \leq rsn^3/\delta$.
Since $ab = c$ has no solutions with $a \in A, b \in B, c \in C$,
each element of $C$ corresponds to a zero entry in
$N\mathbf{v}$. However, $N \mathbf{v} = N \mathbf{w} + r sn \mathbf{1}$, so each zero entry in
$N \mathbf{v}$ corresponds to an entry of $N \mathbf{w}$ equal to $-rsn$. Therefore,
\[
(tn)(rsn)^2 \leq \|N\mathbf{w}\|^2 \leq rsn^3/\delta,
\]
hence $rst \delta \leq 1$ as desired.
\end{proof}
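One can also probe how sharp Lemma~\ref{L:gowers} is on a small example. The sketch below (for $G = A_5$, where $\delta = 3$ because the smallest nontrivial representation of $A_5$ is 3-dimensional; the set $A$ is a random choice) computes the second largest singular value of $N$ and compares it with the bound $(n\#A/\delta)^{1/2}$:
\begin{verbatim}
import numpy as np
from itertools import permutations

def sign(p):
    return (-1) ** sum(p[i] > p[j] for i in range(len(p))
                       for j in range(i + 1, len(p)))

G = [p for p in permutations(range(5)) if sign(p) == 1]   # A_5, n = 60
idx = {g: i for i, g in enumerate(G)}
n, delta = len(G), 3

rng = np.random.default_rng(1)
A = [G[i] for i in rng.choice(n, size=10, replace=False)]

# Edge from x in V1 to y in V2 iff y x^{-1} in A, i.e. y = a o x
N = np.zeros((n, n))
for x in G:
    for a in A:
        y = tuple(a[x[i]] for i in range(5))
        N[idx[y], idx[x]] = 1

svals = np.sort(np.linalg.svd(N, compute_uv=False))[::-1]
print(svals[1], np.sqrt(n * len(A) / delta))   # second s.v. vs bound
\end{verbatim}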
As noted by Nikolov and Pyber \cite{nikolov-pyber}, the
extra strength in Gowers's theorem is useful for other applications in group
theory, largely via the following corollary.
\begin{cor}[Nikolov-Pyber]
If $A,B,C$ are subsets of $G$ such that $ABC \neq G$,
then
$\#A \#B \#C \leq n^3/\delta$.
\end{cor}
\begin{proof}
Suppose that $\#A \#B \#C > n^3/\delta$.
Put $D = G \setminus AB$,
so that $\#D = n - \#(AB)$.
By Theorem~\ref{T:gowers}, we have $\#A \#B \#D \leq n^3/\delta$,
so $\#C > \#D$. Then for any $g \in C$, the sets $AB$ and $gC^{-1}$
have total cardinality more than
$n$, so they must intersect. This yields $ABC = G$.
\end{proof}
Gowers indicates that his motivation for this argument was the
notion of a \emph{quasi-random graph} introduced by
Chung, Graham, and Wilson \cite{cgw}. They show that (in a suitable
quantitative sense) a graph
looks random in the sense of having the right number of short cycles
if and only if it also looks random from the spectral viewpoint, i.e.,
the second largest eigenvalue of its adjacency matrix is not too large.
\section{Coda}
As noted by Nikolov and Pyber \cite{nikolov-pyber},
using CFSG to get a strong quantitative version of Jordan's theorem on finite
linear groups, one can produce upper and lower bounds for $\alpha(G)$ that
look similar. (Keep in mind that the index of a proper subgroup must be
at least $\delta+1$, since any permutation representation of degree $m$
contains a linear representation of dimension $m-1$.)
\begin{theorem}
Under CFSG, the group $G$ has a proper subgroup of index
at most $c \delta^2$. Consequently,
\[
c n/\delta \leq \alpha(G) \leq cn/\delta^{1/3}.
\]
\end{theorem}
Moreover, for many natural examples (e.g., $G = A_m$ or $G = \PSL_2(q)$),
$G$ has a proper subgroup of index at most $c\delta$, in which case one has
\[
c n/\delta^{1/2} \leq \alpha(G) \leq cn/\delta^{1/3}.
\]
Since the gap now appears quite small, one might ask about closing it.
However, one can adapt the argument of \cite{kedlaya}
to show that Gowers's argument alone will not suffice, at least for
families of groups with $m \leq c \delta$.
(Gowers proves some additional results about products taken more than
two at a time \cite[\S 5]{gowers};
I have not attempted to extend this construction to that setting.)
\begin{theorem} \label{T:lower bound}
Given $\epsilon > 0$,
for $G$ admitting a transitive action on $\{1,\dots,m\}$ for $m$
sufficiently large,
there exist $A,B,C \subseteq G$ with $(\#A)(\#B)(\#C) \geq
(e^{-1}-\epsilon)n^3/m$,
such that the equation $ab=c$ has no solutions with $a \in A, b \in B,
c \in C$. Moreover, we can force $B=C$, $C=A$, or $A=B^{-1}$ if desired.
\end{theorem}
\begin{proof}
We first give a quick proof of the lower bound $cn^3/m$.
Let $U,V$ be subsets of $\{1,\dots,m\}$ of respective sizes $u,v$.
Put
\begin{align*}
A &= \{g \in G: g(U) \cap V = \emptyset\} \\
B &= \{g \in G: g(1) \in U\} \\
C &= \{g \in G: g(1) \in V\};
\end{align*}
then clearly the equation $ab=c$ has no solutions with $a \in A, b \in B,
c \in C$. On the other hand,
\[
\#A \geq n - u \frac{vn}{m}, \qquad
\#B = \frac{un}{m},
\qquad
\#C = \frac{vn}{m},
\]
and so
\[
(\#A)(\#B)(\#C) \geq \frac{n^3}{m}
\left( \frac{uv}{m} \right) \left( 1 - \frac{uv}{m} \right).
\]
By taking $u,v = \lfloor \sqrt{m/2} \rfloor$, we obtain
$(\#A)(\#B)(\#C) \geq cn^3/m$.
To optimize the constant, we must average over choices of $U,V$.
Take $u,v = \lfloor \sqrt{m} \rfloor$. By
inclusion-exclusion, for any positive integer $h$,
the average of $\#A$ is bounded below by
\[
\sum_{i=0}^{2h-1} (-1)^i n \frac{u(u-1)\cdots(u-i+1)v(v-1)\cdots(v-i+1)}{i! m(m-1)
\cdots (m-i+1)}.
\]
(The $i$-th term counts occurrences of $i$-element subsets in $g(U) \cap V$.
We find $\binom{v}{i}$ $i$-element sets inside $V$; on average,
each one occurs
inside $g(U)$ for $n \binom{u}{i} / \binom{m}{i}$ choices of $g$.)
Rewrite this as
\[
n \left( \sum_{i=0}^{2h-1} (-1)^i \frac{(m^{1/2})^i
(m^{1/2})^i}{m^i i!} +
o(1) \right),
\]
where $o(1) \to 0$ as $m \to \infty$.
For any $\epsilon > 0$, we have
\[
(\#A)(\#B)(\#C) \geq \frac{n^3}{m}
\left( e^{-1} - \epsilon \right)
\]
for $h$ sufficiently large, and
$m$ sufficiently large depending on $h$.
This gives the desired lower bound.
Finally, note that we may achieve $B=C$ by taking $U = V$.
To achieve the other equalities,
note that if the triplet $A,B,C$ has the desired property, so do
$B^{-1},A^{-1},C^{-1}$ and $C,B^{-1},A$.
\end{proof}
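The quick construction in the first part of the proof is easy to verify directly; the sketch below (for the natural action of $S_5$, with arbitrarily chosen $U$ and $V$; of course $m=5$ is far too small for the asymptotic constant to be visible) checks that $ab=c$ has no solutions and reports $(\#A)(\#B)(\#C)$ against $n^3/m$:
\begin{verbatim}
from itertools import permutations

m, u, v = 5, 2, 2
U, V = set(range(1, 1 + u)), set(range(3, 3 + v))
G = list(permutations(range(1, m + 1)))

def act(g, x):
    return g[x - 1]

def compose(a, b):                  # (ab)(x) = a(b(x))
    return tuple(act(a, act(b, x)) for x in range(1, m + 1))

A = [g for g in G if all(act(g, x) not in V for x in U)]
B = [g for g in G if act(g, 1) in U]
C = set(g for g in G if act(g, 1) in V)

assert all(compose(a, b) not in C for a in A for b in B)
n = len(G)
print(len(A) * len(B) * len(C), n**3 / m)
\end{verbatim}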
I have no idea whether one can sharpen
Theorem~\ref{T:gowers} under the hypothesis $A=B = C$
(or even just $A=B$). It might be enlightening to collect some
numerical evidence using examples generated by Theorem~\ref{T:kedlaya};
with Xuancheng Shao, we have done this for $\PSL_2(q)$ for $q \leq 19$.
I should also mention again that (as suggested in
\cite{kedlaya-amm}) one can also study product-free
subsets of compact topological groups, which are large for Haar measure. Some such
study is implicit in \cite[\S 4]{gowers}, but we do not
know what explicit bounds come out.
\section{Introduction}
Clustering sets out to group subjects based on several
characteristics (variables) when no subgroup labels are available
beyond the observed information. An ideal clustering assigns
memberships such that subjects within a cluster are similar with
respect to the given characteristics (variables). The degree of
similarity or dissimilarity can be defined in many ways, and there
are various methods for grouping subjects, including hierarchical
clustering, k-means, and DBSCAN, to name a few. See e.g.
\citet{berkhin2006survey}, \citet{bouveyron2014model},
\citet{murtagh2012algorithms} for brief literature reviews of
conventional clustering analysis in a multivariate data context.
In many situations, however, only one variable is measured per
subject, but it is measured repeatedly over time. Functional data
clustering is a distinct framework for grouping subjects based on
such data. It differs from conventional clustering in two respects:
data format and time coordinates. First, the data may be collected
at unequally spaced time points, so many ‘missing’ values arise if
an analyst aligns the records into the conventional
‘variable-by-variable’ format. Second, even if all subjects are
observed at the same time points, conventional clustering fails to
take into account the temporal ordering of the variables, under
which adjacent observations from the same subject are expected to
have similar values. Several methods for functional data have been
suggested in the literature, and we review three major categories
in the following: distance-based methods, decomposition-based
methods, and model-based methods.
Distance-based methods, using pointwise distance between pairs of
subjects, are the most straightforward approach (e.g., \citealp{tarpey2003clustering};
\citealp{genolini2010kml}). They often deal with the two issues mentioned
above by certain curve smoothing or imputation techniques; distances
between subjects are subsequently computed, to which conventional
distance-based methods can be applied. Little attention, however,
has been paid to the uncertainty of smoothing or imputation. To the
best of our knowledge, the only two exceptions are (1) the prediction-based
approach of \citet{alonso2006time}, later modified by \citet{vilar2010non},
and (2) the hypothesis-testing-like approach of \citet{maharaj1996significance}.
The former is computationally intensive and the latter is designed
for invertible ARMA processes, which restricts their applicability.
Decomposition-based methods overcome the smoothing and sequential-order
issues by transforming the observed data into a finite
series of common features, and the procedures deal with uncertainty
of smoothing implicitly. For example, \citet{abraham2003unsupervised}
used spline basis functions, \citet{james2000principal} used functional
principal component analysis, and \citet{warren2005clustering} reviewed
more sophisticated ‘feature-extraction’ algorithms. These approaches
define common features for all groups and then assign weights to features
by which groups are identified. Each group has different weights on
those features and each group can be interpreted according to its
lower-dimensional projection on features. Features extracted from
a certain transformation of data are also popular, such as spectral
densities (\citealp{fan2004generalised}), periodogram (\citealp{caiado2006periodogram};
\citealp{de2010classification}), and permutation distribution (\citealp{brandmaier2012permutation}).
Nonetheless, in reality not all groups share the same number of features,
and determining an appropriate number of dimensions is not easy.
In light of the difficulties encountered by the first two methods,
many researchers suggest the third alternative: various model-based
frameworks. They estimate individual underlying curves and cluster
subjects simultaneously, and then statistical inference can be made
based on the working models for the clusters, such as measuring the uncertainty
of cluster assignment and the ‘within-cluster’ variation. Unfortunately,
these approaches encounter other challenges. The purely parametric functional
forms used in traj \citep{jones2007advances} may not be realistic,
and its assumption that subjects share the same `underlying' curve
within a group can be too restrictive. Semi- or non-parametric
methods have to perform some dimension reduction within
each group (e.g., FCM by \citealp{james2003clustering}; funHDDC by
\citealp{bouveyron2011model}; Funclust by \citealp{jacques2013funclust};
and K-centre by \citealp{chiou2007functional}), but this encounters
a similar problem as decomposition-based methods. A pure likelihood-based
framework (without dimension reduction) called longclust is proposed
by \citet{mcnicholas2010model}. This method is limited to short time
series and breaks down easily due to the curse of dimensionality.
Even worse, the notion of a distribution for random functions is not
well-defined, as curves can be infinite-dimensional (see e.g.,
\citealp{delaigle2010defining}).
The aforementioned review describes the strengths and weaknesses
of the existing functional data clustering methods. Moreover, it is
worth mentioning that the curve variability is an important issue.
Clustering curves can be a difficult ‘chicken-and-egg’ problem between
(1) how to determine the within-cluster variations before identifying
subgroups, and (2) how to separate subgroups when within-cluster variations
are unknown. This dilemma is related directly to the smoothing uncertainty
problem in distance-based approaches. Decomposition-based and model-based
approaches estimate such variability by necessity, but the estimation
is often distorted when outliers occur. A two-step strategy exploiting
relative merits of different methods seems reasonable: initially separate
potential outliers based on 'outlier-invariant' pairwise distance,
and then form main clusters with another appropriate clustering method.
For such a strategy, a distance measure concerning the variability
of curve estimation or feature selection is crucial.
In this article, we develop an easily implementable and practically
advantageous method for measuring the distance between subjects. Instead
of treating the best-representing curve for each subject as fixed
during clustering, we propose to measure the dissimilarity between
subjects based on curve estimates that vary pair by pair.
By applying the technique of smoothing splines, the curve smoothing
is completely determined by the chosen smoothing parameter. The intuitions
behind our proposal are that smoothing parameters of smoothing splines
reflect inverse signal-to-noise ratios and that the smoothing results
for two similar subjects are expected to be close if an identical
smoothing parameter is applied. Specifically, if the unobserved true
curves of subjects $i$ and $j$ are similar, the estimates for them
should resemble with each other, no matter whether we use a smoothing
parameter primarily for the $i$-th or the $j$-th subject. Our distance
is then calculated by commuting the smoothing parameters within
a pair.
The rest of the article is organized as follows. Section 2 describes
the proposed dissimilarity and some of its properties. Its effectiveness
is shown through simulations comparing it to other dissimilarity measures
in Section 3. An example of its application to methadone dosage observations
is given in Section 4, where we also identify outliers with a rather
simple method. Finally, Section 5 provides some concluding remarks
and discussion concerning future directions.
\section{The Proposed Distance}
We utilize the smoothing spline as our smoothing method, and so we
briefly introduce the smoothing spline before our proposal. Assume
that the curve of $i$-th subject is observed at distinct finite time
points $\{t_{i,1},\ldots,t_{i,K_{i}}\}$ in an interval $[T_{L},T_{U}]$
with measurement errors according to the model
\begin{equation}
y_{i,k}=f_{i}(t_{i,k})+\epsilon_{i,k},\: k=1,\ldots,K_{i},\: i=1,\ldots,n,\label{eq:observation equation}
\end{equation}
where $\epsilon_{i,k}\distras{i.i.d.}N(0,\sigma^{2})$. A reasonable
estimate of $f_{i}$ minimizes $\frac{1}{K_{i}}\sum_{k}(y_{i,k}-f_{i}(t_{i,k}))^{2}$
while controlling the wiggliness of $f_{i}$, e.g., requiring $\int_{T_{L}}^{T_{U}}(f_{i}^{''}(t))^{2}dt\leq\rho$
for some positive $\rho$. This estimator is equivalent to a smoothing
spline $\hat{f_{i}}(\cdot;\lambda)$ which minimizes
\begin{equation}
\frac{1}{K_{i}}(\mathbf{y}_{i}-\mathbf{f}_{i})^{\prime}(\mathbf{y}_{i}-\mathbf{f}_{i})+\lambda\int_{T_{L}}^{T_{U}}(f_{i}^{''}(t))^{2}dt\label{eq:spline-variational form}
\end{equation}
given a smoothing parameter $\lambda$, where $\mathbf{y}_{i}=(y_{i,1},\ldots,y_{i,K_{i}})^{\prime}$
and $\mathbf{f}_{i}=\left(f_{i}(t_{i,1}),\ldots,f_{i}(t_{i,K_{i}})\right)^{\prime}$
(see e.g. \citealp{wahba1980some[TPS]}; \citealp{green1993nonparametric}).
There are various methods to determine an appropriate $\lambda$ in
(\ref{eq:spline-variational form}), and once $\lambda$ is chosen, $\hat{f}_{i}(t;\lambda)$
for $t\in[T_{L},T_{U}]$ is completely determined. We exploit a mixed-effects
model representation (e.g., \citealp{wang1998smoothing}) of the problem
in (\ref{eq:spline-variational form}) as
\begin{equation}
\mathbf{y}_{i}=\mathbf{X}_{i}\boldsymbol{\beta}_{i}+\mathbf{u}_{i}+\boldsymbol{\epsilon}_{i},\label{eq:spline-mixed form}
\end{equation}
where $\boldsymbol{\beta}_{i}$ is the fixed effect, $\mathbf{X}_{i}$
has two columns being $1$'s and $(t_{i,1},\ldots,t_{i,K_{i}})^{\prime}$,
$\boldsymbol{\epsilon}_{i}=(\epsilon_{i,1},\ldots,\epsilon_{i,K_{i}})^{\prime}\sim N(0,\sigma^{2}\mathbf{I})$,
and $\mathbf{u}_{i}\sim N(\mathbf{0},\sigma_{u}^{2}\mathbf{R})$ with
$\sigma_{u}^{2}=\sigma^{2}/(K_{i}\lambda)$ and the $(k,k^{*})$ element
of $\mathbf{R}$ being
\[
\left(T_{U}-T_{L}\right)^{-2}\int_{T_{L}}^{T_{U}}(t_{i,k}-\tau)_{+}(t_{i,k^{*}}-\tau)_{+}d\tau
\]
with $a_{+}=\max(0,a)$. As a function of the variance of $\mathbf{u}_{i}$
in (\ref{eq:spline-mixed form}), $\lambda$ can be determined by
the restricted maximum likelihood (REML) method, and $K_{i}\lambda$ has
a useful interpretation as an inverse signal-to-noise ratio, since $\sigma_{u}^{2}/\sigma^{2}=1/(K_{i}\lambda)$.
Additionally, it has been shown that the resulting smoothers are
relatively robust even when the correlation structure of $\textrm{var}(\boldsymbol{\epsilon}_{i})$
is mis-specified (e.g., \citealp{wang1998smoothing} and \citealp{krivobokova2007note}).
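To make the selection of $\lambda$ concrete, the following minimal Python sketch (a didactic re-implementation under simplifying assumptions, not the production code behind our experiments) profiles the restricted likelihood of (\ref{eq:spline-mixed form}) over the ratio $\sigma_{u}^{2}/\sigma^{2}$ on a grid. It uses the closed form $\left(T_{U}-T_{L}\right)^{-2}\left\{ a^{3}/3+|t_{i,k}-t_{i,k^{*}}|\,a^{2}/2\right\} $, with $a=\min(t_{i,k},t_{i,k^{*}})-T_{L}$, for the $(k,k^{*})$ entry of $\mathbf{R}$:
\begin{verbatim}
import numpy as np

def R_matrix(t, TL=0.0, TU=1.0):
    a = np.minimum.outer(t, t) - TL      # min(t_k, t_k*) - T_L
    d = np.abs(np.subtract.outer(t, t))  # |t_k - t_k*|
    return (a**3 / 3.0 + d * a**2 / 2.0) / (TU - TL)**2

def reml_criterion(ratio, y, X, R):
    """-2 x restricted log-likelihood (up to constants), sigma^2 profiled out."""
    K, p = X.shape
    W = np.eye(K) + ratio * R            # var(y)/sigma^2 under the mixed model
    Wi = np.linalg.inv(W)
    XtWiX = X.T @ Wi @ X
    P = Wi - Wi @ X @ np.linalg.solve(XtWiX, X.T @ Wi)
    s2 = (y @ P @ y) / (K - p)           # profiled estimate of sigma^2
    return (np.linalg.slogdet(W)[1] + np.linalg.slogdet(XtWiX)[1]
            + (K - p) * np.log(s2))

t = np.linspace(0.0, 1.0, 50)
X = np.column_stack([np.ones_like(t), t])  # fixed effects: intercept and slope
rng = np.random.default_rng(1)
y = np.sin(2 * np.pi * t) + rng.normal(0.0, 0.3, t.size)
R = R_matrix(t)

ratios = 10.0 ** np.linspace(-2, 6, 81)
best = min(ratios, key=lambda r: reml_criterion(r, y, X, R))
lam_hat = 1.0 / (t.size * best)            # lambda = sigma^2 / (K sigma_u^2)
print("selected lambda:", lam_hat)
\end{verbatim}
In practice one would call an established mixed-model or smoothing-spline routine; the grid search is only meant to expose the relation $\hat{\lambda}_{i}=1/(K_{i}\cdot\widehat{\sigma_{u}^{2}/\sigma^{2}})$.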
Our proposal starts with finding $\hat{\lambda}_{i}$ in (\ref{eq:spline-mixed form})
for each subject based on $\mathbf{y}_{i}$. The estimated curve is
denoted by $\hat{f}_{i}(\cdot;\hat{\lambda}_{i})$, which amounts
to obtaining $\hat{f}_{i}(\cdot;\lambda)$ given $\lambda=\hat{\lambda}_{i}$
in (\ref{eq:spline-variational form}) for observations $\mathbf{y}_{i}$.
Fixed on the smoothing parameter $\hat{\lambda}_{i}$, we can obtain
$\hat{f}_{j}(\cdot;\hat{\lambda}_{i})$ based on observations $\mathbf{y}_{j}$.
The roles of the two subjects can be exchanged, and similarly we have
$\hat{f}_{j}(\cdot;\hat{\lambda}_{j})$ and $\hat{f}_{i}(\cdot;\hat{\lambda}_{j})$.
Then the distance between subjects $i$ and $j$ is calculated as
\begin{equation}
d_{i,j}=\frac{1}{2}\left\{ \left[\int_{T_{L}}^{T_{U}}\left(\hat{f}_{i}(t;\hat{\lambda}_{i})-\hat{f}_{j}(t;\hat{\lambda}_{i})\right)^{2}dt\right]^{1/2}+\left[\int_{T_{L}}^{T_{U}}\left(\hat{f}_{i}(t;\hat{\lambda}_{j})-\hat{f}_{j}(t;\hat{\lambda}_{j})\right)^{2}dt\right]^{1/2}\right\} .\label{eq:proposed distance}
\end{equation}
Due to the roles of $\hat{\lambda}_{i}$ and $\hat{\lambda}_{j}$
in (\ref{eq:proposed distance}), we call it a smoothing parameter
commutation based distance, and explain its underlying rationale below.
First, if the `true' $f_{i}$ and $f_{j}$ are similar, it is expected
that $\hat{f}_{i}$ and $\hat{f}_{j}$ from $\mathbf{y}_{i}$ and
$\mathbf{y}_{j}$ should be close, given an identical smoothing parameter.
Second, it takes the variation of smoothing into consideration through
diverse $\lambda$'s for different pairs $(i,j)$. It focuses
on how similar a pair of curves can be, instead of the distance between
(fixed) estimated curves. Third, $d_{i,j}\geq0$, $d_{i,j}=0$ if $i=j$,
and $d_{i,j}=d_{j,i}$, so conventional distance-based clustering
methods can be applied straightforwardly. Fourth, it reduces to the root
integrated squared difference of $f_{i}$ and $f_{j}$ when no missing
values or measurement errors are present.
Our proposal also has several pragmatic advantages. First, missing
values or irregular time points can be handled directly, thanks to
the nature of smoothing splines. Second, the dissimilarity also serves
as a useful tool for outlier detection (see Section \ref{sec:Real-Data-Application}).
Third, the implementation is straightforward, since subroutines for smoothing
splines and numerical integration are widely available. Although the
computing burden for (\ref{eq:proposed distance}) seems heavy at
first glance, it can be organized efficiently over the $n$ subjects.
Given $\lambda$, a fast $O(K_{i})$ algorithm to compute $\hat{f_{i}}(t;\lambda)$
exists (e.g., \citealp{hutchinson1985smoothing}). Thus, one needs
to solve for $\hat{\lambda}_{i}$ in (\ref{eq:spline-mixed form}) only
$n$ times for the $n$ subjects, and then adopts the fast algorithm
for $\left\{ \hat{f}_{j}(t;\hat{\lambda}_{i}):\: i,j=1,\ldots,n\right\} $.
Therefore the computational complexity is proportional to that of
treating $\hat{f_{i}}(t;\hat{\lambda}_{i})$ as fixed and calculating
the distance as the square root of $\int_{T_{L}}^{T_{U}}\left(\hat{f}_{i}(t;\hat{\lambda}_{i})-\hat{f}_{j}(t;\hat{\lambda}_{j})\right)^{2}dt$
(see \citealp{ramsay2005smoothing}; the latter procedure is referred
to as $d_{SS}$ in what follows).
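A minimal sketch of the resulting computation of (\ref{eq:proposed distance}) follows, assuming the $\hat{\lambda}_{i}$'s have already been selected (e.g., by REML, as above). We use SciPy's \texttt{make\_smoothing\_spline} for the fixed-$\lambda$ refits purely for illustration; its penalty parameterization need not match $\lambda$ in (\ref{eq:spline-variational form}) exactly, but the commutation idea is unchanged:
\begin{verbatim}
import numpy as np
from scipy.interpolate import make_smoothing_spline
from scipy.integrate import simpson

def commutation_distance(t_i, y_i, lam_i, t_j, y_j, lam_j, grid):
    """Proposed distance: refit both subjects under each other's lambda."""
    def l2(lam):
        fi = make_smoothing_spline(t_i, y_i, lam=lam)
        fj = make_smoothing_spline(t_j, y_j, lam=lam)
        return np.sqrt(simpson((fi(grid) - fj(grid))**2, x=grid))
    return 0.5 * (l2(lam_i) + l2(lam_j))

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 120)
y1 = np.sin(2 * np.pi * t) + rng.normal(0.0, 0.4, t.size)
y2 = np.sin(2 * np.pi * t) + rng.normal(0.0, 0.4, t.size)
grid = np.linspace(0.0, 1.0, 1001)
print(commutation_distance(t, y1, 1e-4, t, y2, 1e-3, grid))
\end{verbatim}
Note that only the $n$ selections of $\hat{\lambda}_{i}$ are expensive; every refit inside \texttt{commutation\_distance} is a fixed-$\lambda$ fit, which is exactly the cheap $O(K_{i})$ step discussed above.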
\section{Simulation}
We conduct a simulation to investigate whether our proposed measure
is more representative than other dissimilarity measures when observations
are contaminated with (independent or dependent) noise. If an analyst is
interested in the relative shape pattern of curves, regardless of
shift, shrinkage, expansion, or magnitude, then several alignment,
normalization, and warping tools can be applied in preprocessing (e.g., \citet{berndt1994using},
\citet{gaffney2004joint}, and \citet{liu2009simultaneous}). To
avoid losing focus, we do not consider distance measures that engage
with such preprocessing.
We consider the following four random curve models over $t\in[0,1]$
\begin{align*}
f^{(1)}(t;\eta)= & \eta,\\
f^{(2)}(t;\eta)= & \sin(2\pi t)-t+2\eta\cos(4\pi t),\\
f^{(3)}(t;\eta)= & 3t+2\eta t,\\
f^{(4)}(t;\eta)= & 5\eta\left\{ (t-0.5)^{2}-2t(1-t)\right\} ,
\end{align*}
where $\eta\sim N(1,0.3^{2})$. The four functional forms stand for
constant, periodic, linear, and nonlinear (unobserved) true curves,
respectively. The observed data are generated according to (\ref{eq:observation equation})
at 200 time points, $t_{k}\in\{0,1/199,\ldots,198/199,1\}$,
with noise coming from four mechanisms
\begin{align}
\textrm{WN:\qquad\quad\quad} & \epsilon_{k}=\xi_{k},\nonumber \\
\textrm{AR}:\qquad\quad\quad & \epsilon_{k}=0.8\epsilon_{k-1}+\xi_{k},\nonumber \\
\textrm{SARMA:\quad}\:\:\:\: & \epsilon_{k}=0.8\epsilon_{k-10}+0.8\xi_{k-10}+\xi_{k},\nonumber \\
\textrm{BILR:}\qquad\quad\: & \epsilon_{k}=0.8\epsilon_{k-1}+0.2\xi_{k-1}-0.2\epsilon_{k-1}\xi_{k-1}+\xi_{k},\label{eq:noise}
\end{align}
where $\xi_{k}\distras{i.i.d.}N(0,1)$ and $\xi_{k}$ is independent
of $\epsilon_{k^{\prime}}$ for $k^{\prime}\neq k$. That is, we set
$K_{i}\equiv200$, $t_{ik}\equiv(k-1)/199$. The four noise mechanisms
are examples of common noise assumptions: a purely
independent process, a stationary process, a cyclostationary process,
and a nonstationary process. For each combination of $f\in\left\{ f^{(1)},f^{(2)},f^{(3)},f^{(4)}\right\} $
and mechanism of $\epsilon_{k}$, 10 series are generated according
to 10 independent $\eta$'s as well as 10 sets of $\epsilon_{k}$'s,
giving in total 160 series mimicking the longitudinal observations
of 160 subjects.
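For concreteness, a minimal Python sketch of the four true-curve families and the four noise mechanisms in (\ref{eq:noise}) is given below (the burn-in used to initialize the dependent noise recursions is our own choice; the recursions themselves follow the display above):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(200) / 199.0

def true_curve(model, eta):
    if model == 1: return np.full_like(t, eta)
    if model == 2: return np.sin(2*np.pi*t) - t + 2*eta*np.cos(4*np.pi*t)
    if model == 3: return 3*t + 2*eta*t
    return 5*eta*((t - 0.5)**2 - 2*t*(1 - t))

def noise(mech, K=200, burn=100):
    L = K + burn
    xi = rng.normal(0.0, 1.0, L)
    e = np.zeros(L)
    for j in range(L):
        if mech == "WN":
            e[j] = xi[j]
        elif mech == "AR":
            e[j] = (0.8*e[j-1] if j >= 1 else 0.0) + xi[j]
        elif mech == "SARMA":
            e[j] = (0.8*e[j-10] + 0.8*xi[j-10] if j >= 10 else 0.0) + xi[j]
        else:  # BILR: bilinear recursion
            e[j] = ((0.8*e[j-1] + 0.2*xi[j-1] - 0.2*e[j-1]*xi[j-1])
                    if j >= 1 else 0.0) + xi[j]
    return e[burn:]

series = [true_curve(f, rng.normal(1.0, 0.3)) + noise(mech)
          for f in (1, 2, 3, 4)
          for mech in ("WN", "AR", "SARMA", "BILR")
          for _ in range(10)]
print(len(series), series[0].shape)  # 160 series of length 200
\end{verbatim}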
Then several distance measures are calculated based on the simulated
data. Following the notation in \citet{montero2014tsclust}, we compare
10 measures, including our proposal (referred to as $d_{OUR}$) and
the point-wise Euclidean distance $d_{EUCL}=\sqrt{\sum_{k}(y_{ik}-y_{jk})^{2}}$;
the other eight are listed in Table \ref{tab:Distance-measures}.
Two comparison criteria are defined as follows:
\begin{align*}
Q & =\underset{a,b}{\min}\sum_{i}\sum_{j\neq i}\frac{\left(a+b\hat{d}_{i,j}-d_{i,j}\right)^{2}}{d_{i,j}},\\
R & =\sum_{i}\sum_{j\neq i}(\hat{r}_{i,j}-r_{i,j})^{2},
\end{align*}
where $\hat{d}_{i,j}$ is one of the considered distance measures
between the $i$-th and $j$-th subjects, $d_{i,j}=\sqrt{\sum_{k}(f_{i}(t_{ik})-f_{j}(t_{jk}))^{2}}$
is the true distance without noise, and $\hat{r}_{i,j}$ and $r_{i,j}$
are the corresponding rank of $\hat{d}_{i,j}$ and $d_{i,j}$ among
all pairs of $(i,j)$'s, respectively. The quantity $Q$ reflects
the loss, normalized by the true distance scales, for (linear) approximation
to all the pairs of true distances, while $R$ measures the deviation
from monotonicity between $\hat{d}_{i,j}$ and $d_{i,j}$. A good
measure should have a small value of $Q$ or $R$. The averaged $Q$
and $R$ values for the 10 measures over 200 simulation replicates
are given in Table \ref{tab:Q comparison} and Table \ref{tab:R comparison},
respectively.
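Both criteria are straightforward to compute: the minimization over $(a,b)$ in $Q$ is a weighted least-squares fit of $d_{i,j}$ on $\hat{d}_{i,j}$ with weights $1/d_{i,j}$ (assuming all true distances are positive), and $R$ needs only the two rank vectors. A minimal sketch, with synthetic stand-in distances in place of the simulated ones:
\begin{verbatim}
import numpy as np
from scipy.stats import rankdata

def Q_criterion(dhat, d):
    w = 1.0 / np.sqrt(d)                # polyfit minimizes sum (w*(y - p(x)))^2
    b, a = np.polyfit(dhat, d, 1, w=w)  # weighted linear fit d ~ a + b*dhat
    return np.sum((a + b*dhat - d)**2 / d)

def R_criterion(dhat, d):
    return np.sum((rankdata(dhat) - rankdata(d))**2)

rng = np.random.default_rng(4)
d = rng.uniform(1.0, 10.0, 200)              # stand-in 'true' distances
dhat = 2.0*d + rng.normal(0.0, 0.5, d.size)  # noisy affine distortion of d
print(Q_criterion(dhat, d), R_criterion(dhat, d))
\end{verbatim}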
The two comparison criteria are highly coherent in that they almost
always identify the same best and worst measures. As expected, $d_{EUCL}$
is often among the best measures, since there are no missing data and
$d_{EUCL}$ is unbiased in many situations. But it does not perform well
if the signal or the noise is periodic ($f^{(2)}$ and SARMA, respectively).
Our method and $d_{SS}$ always fall within the best 3 measures, either
for the 10 curves within an individual group or for the 160 curves as a whole.
Note that our method and $d_{SS}$ have almost identical results within
a group, because both utilize the mixed-effects model representation
of smoothing splines; the difference is that $d_{SS}$ regards
$\hat{f}_{i}(t;\hat{\lambda}_{i})$ as a fixed estimate of $f_{i}$.
Our method outperforms $d_{SS}$ for between-group distances, which indicates
the advantage of accounting for smoothing variation via smoothing
parameter commutation. In certain cases $d_{PRED,h}$ and $d_{MAH}$,
which also take estimation uncertainty into consideration, are good measures.
\begin{table}
\small
\begin{centering}
\begin{tabular}{lll}
Notation & Description & Literature\tabularnewline
\hline
$d_{MAH}$ & parametric testing of equality of processes & \citet{maharaj1996significance}\tabularnewline
$d_{GLK}$ & nonparametric equality testing of log-spectra & \citet{fan2004generalised}\tabularnewline
$d_{SS}$ & based on spline smoothing curves & \citet{ramsay2005smoothing}\tabularnewline
$d_{CORT}$ & correlation-based modification of $d_{EUCL}$ & \citet{chouakria2007adaptive}\tabularnewline
$d_{IP}$ & based on integrated periodogram & \citet{de2010classification}\tabularnewline
$d_{PRED,h}$ & based on predicted values at future & \citet{vilar2010non}\tabularnewline
$d_{CID}$ & complexity-based modification of $d_{EUCL}$ & \citet{batista2011complexity}\tabularnewline
$d_{PDC}$ & permutation distributions of order patterns & \citet{brandmaier2012permutation}\tabularnewline
\hline
\end{tabular}
\par\end{centering}
\caption{\label{tab:Distance-measures}Distance measures to be compared.}
\end{table}
\begin{table}
\small
\begin{tabular}{crrrrrrrrrr}
\hline
& $d_{EUCL}$ & $d_{OUR}$ & $d_{MAH}$ & $d_{GLK}$ & $d_{SS}$ & $d_{CORT}$ & $d_{IP}$ & $d_{PRED,h}$ & $d_{CID}$ & $d_{PDC}$\tabularnewline
\hline
$f^{(1)}$+W & \textbf{1.59} & \textbf{0.36} & 8.45 & 8.55 & \textbf{0.37} & 3.82 & 9.32 & 1.85 & 2.19 & 8.63\tabularnewline
$f^{(1)}$+A & \textbf{5.61} & \textbf{5.66} & 8.07 & 8.02 & \textbf{5.66} & 6.40 & 9.54 & \textbf{4.61} & 5.82 & 7.96\tabularnewline
$f^{(1)}$+S & 8.07 & \textbf{6.15} & 8.67 & 8.55 & \textbf{6.16} & 8.68 & 10.53 & \textbf{7.32} & 8.19 & 8.78\tabularnewline
$f^{(1)}$+B & \textbf{7.65} & \textbf{7.66} & 8.48 & 8.48 & \textbf{7.65} & 7.89 & 11.88 & \textbf{5.62} & 7.87 & 8.48\tabularnewline
$f^{(2)}$+W & 2.21 & \textbf{0.87} & 3.79 & 3.56 & \textbf{0.83} & 3.54 & \textbf{1.35} & 3.96 & 2.76 & 4.06\tabularnewline
$f^{(2)}$+A & \textbf{3.94} & \textbf{3.94} & \textbf{3.91} & 3.97 & \textbf{3.94} & \textbf{3.94} & 5.69 & 4.01 & \textbf{3.95} & 3.97\tabularnewline
$f^{(2)}$+S & 3.96 & \textbf{3.81} & 3.83 & \textbf{3.83} & \textbf{3.63} & 3.94 & 5.71 & 4.04 & 3.93 & 3.95\tabularnewline
$f^{(2)}$+B & \textbf{3.99} & \textbf{3.99} & 4.05 & 4.06 & \textbf{3.99} & \textbf{3.99} & 12.32 & 4.04 & 4.04 & 4.09\tabularnewline
$f^{(3)}$+W & 1.49 & \textbf{1.05} & 1.40 & 1.40 & \textbf{0.99} & 1.52 & \textbf{1.38} & 1.58 & 1.50 & 1.53\tabularnewline
$f^{(3)}$+A & \textbf{1.49} & \textbf{1.49} & \textbf{1.47} & \textbf{1.49} & \textbf{1.49} & 1.50 & 2.38 & 1.51 & \textbf{1.49} & \textbf{1.49}\tabularnewline
$f^{(3)}$+S & 1.53 & \textbf{1.49} & \textbf{1.49} & \textbf{1.49} & \textbf{1.49} & 1.52 & 4.07 & 1.54 & 1.52 & 1.51\tabularnewline
$f^{(3)}$+B & \textbf{1.56} & \textbf{1.56} & \textbf{1.55} & \textbf{1.56} & \textbf{1.56} & 1.57 & 12.44 & 1.58 & \textbf{1.56} & \textbf{1.56}\tabularnewline
$f^{(4)}$+W & \textbf{2.31} & \textbf{0.79} & 3.17 & 3.19 & \textbf{0.81} & 2.94 & 3.53 & 2.69 & 2.54 & 3.18\tabularnewline
$f^{(4)}$+A & \textbf{3.20} & \textbf{3.20} & 3.21 & 3.27 & \textbf{3.20} & 3.23 & 4.01 & \textbf{3.04} & 3.23 & 3.22\tabularnewline
$f^{(4)}$+S & 3.29 & \textbf{3.19} & 3.23 & \textbf{3.22} & \textbf{3.18} & 3.29 & 4.74 & 3.32 & 3.28 & 3.25\tabularnewline
$f^{(4)}$+B & \textbf{3.35} & \textbf{3.35} & 3.38 & 3.38 & \textbf{3.35} & 3.36 & 9.01 & 3.31 & 3.38 & 3.37\tabularnewline
ALL & \textbf{24.83} & \textbf{23.92} & 29.29 & 29.30 & \textbf{24.45} & 26.13 & 32.38 & 25.63 & 28.86 & 29.28\tabularnewline
\hline
\end{tabular}\caption{\label{tab:Q comparison} Averaged Q values over 200 simulated replicates
among 10 distance measures for each combination of $f$ and $\epsilon_{k}$
(with 10 random curves), and all the 160 curves. W, A, S, and B in
the first column stand for WN, AR, SARMA, and BILR in (\ref{eq:noise}),
respectively. Bold digits are the best 3 within each row.}
\end{table}
\begin{table}
\footnotesize
\begin{tabular}{crrrrrrrrrr}
\hline
& $d_{EUCL}$ & $d_{OUR}$ & $d_{MAH}$ & $d_{GLK}$ & $d_{SS}$ & $d_{CORT}$ & $d_{IP}$ & $d_{PRED,h}$ & $d_{CID}$ & $d_{PDC}$\tabularnewline
\hline
$f^{(1)}$+W & \textbf{0.73} & \textbf{0.24} & 12.26 & 12.15 & \textbf{0.24} & 2.22 & 12.01 & 1.11 & 1.16 & 12.39\tabularnewline
$f^{(1)}$+A & \textbf{4.89} & \textbf{4.89} & 12.11 & 12.02 & \textbf{4.89} & 5.85 & 12.29 & \textbf{3.98} & 5.15 & 12.29\tabularnewline
$f^{(1)}$+S & 7.79 & \textbf{5.21} & 11.82 & 11.94 & \textbf{5.24} & 10.20 & 12.29 & \textbf{7.23} & 8.87 & 12.18\tabularnewline
$f^{(1)}$+B & \textbf{7.73} & \textbf{7.73} & 12.27 & 12.15 & \textbf{7.73} & 8.30 & 12.25 & \textbf{4.70} & 8.20 & 12.43\tabularnewline
$f^{(2)}$+W & 3.01 & \textbf{1.04} & 8.88 & 6.69 & \textbf{1.01} & 6.27 & \textbf{1.29} & 11.69 & 4.21 & 12.27\tabularnewline
$f^{(2)}$+A & \textbf{9.20} & \textbf{9.20} & 10.24 & 10.59 & \textbf{9.19} & 9.80 & 8.15 & 12.66 & 9.45 & 12.05\tabularnewline
$f^{(2)}$+S & 10.99 & \textbf{8.19} & 10.14 & 10.18 & \textbf{7.88} & 11.71 & \textbf{7.85} & 13.45 & 11.33 & 12.35\tabularnewline
$f^{(2)}$+B & \textbf{10.59} & \textbf{10.60} & 11.62 & 11.63 & \textbf{10.60} & 10.89 & 10.77 & 12.66 & 10.75 & 12.10\tabularnewline
$f^{(3)}$+W & 9.18 & \textbf{4.49} & 8.16 & 8.10 & \textbf{4.27} & 10.79 & \textbf{6.78} & 14.54 & 10.09 & 12.35\tabularnewline
$f^{(3)}$+A & \textbf{11.50} & \textbf{11.53} & 11.89 & 11.87 & \textbf{11.53} & 11.69 & 12.06 & 13.88 & \textbf{11.53} & 12.22\tabularnewline
$f^{(3)}$+S & 11.90 & \textbf{11.53} & \textbf{11.72} & 12.17 & \textbf{11.34} & 11.95 & 12.10 & 14.09 & 11.87 & 11.96\tabularnewline
$f^{(3)}$+B & \textbf{11.99} & \textbf{12.02} & 12.12 & 12.05 & \textbf{12.02} & \textbf{11.94} & 12.26 & 13.51 & 12.06 & 12.31\tabularnewline
$f^{(4)}$+W & \textbf{4.63} & \textbf{1.31} & 11.56 & 12.23 & \textbf{1.32} & 7.87 & 12.29 & 7.49 & 5.79 & 12.21\tabularnewline
$f^{(4)}$+A & \textbf{9.89} & \textbf{9.89} & 11.59 & 12.28 & \textbf{9.88} & 10.45 & 12.47 & 10.00 & 10.02 & 12.18\tabularnewline
$f^{(4)}$+S & 11.71 & \textbf{10.23} & \textbf{11.34} & 12.11 & \textbf{10.23} & 11.87 & 12.19 & 13.08 & 11.72 & 12.20\tabularnewline
$f^{(4)}$+B & \textbf{11.24} & \textbf{11.26} & 12.24 & 12.20 & \textbf{11.26} & 11.52 & 12.24 & \textbf{10.83} & 11.41 & 11.68\tabularnewline
ALL & \textbf{1155.6} & \textbf{874.7} & 4160.5 & 4063.7 & \textbf{901.0} & 1315.4 & 3768.6 & 1239.8 & 2693.1 & 4191.6\tabularnewline
\hline
\end{tabular}\caption{\label{tab:R comparison}Averaged R values over 200 simulated replicates
among 10 distance measures for each combination of $f$ and $\epsilon_{k}$
(with 10 random curves), and all the 160 curves. W, A, S, and B in
the first column stand for WN, AR, SARMA, and BILR in (\ref{eq:noise}),
respectively. Bold digits are the best 3 within each row.}
\end{table}
\section{Real Data Application\label{sec:Real-Data-Application}}
We shall apply (\ref{eq:proposed distance}) to the methadone maintenance
therapy data in \citet{lin2015clustering}. Daily methadone dosages
in mg for 314 participants between 01 January 2007 and 31 December
2008 were collected. The (partially) observed dose levels for each
patient from day 1 to day 180 were used for clustering. \citet{lin2015clustering}
categorized the dosages into 7 levels, one of which represents missing
values, and proposed a new dissimilarity measure for clustering ordinal data.
The ordering of the time coordinates, however, was discarded in their
approach. In this example, we use the primary prescription dosage,
and do not recode missing values separately. Smoothing splines take
care of the irregular follow-up time points of patients automatically,
which may not be an easy task for the other measures listed in Table \ref{tab:Distance-measures}.
The clustering procedure consists of three steps: (1) calculating
the distance matrix, (2) detecting and removing outliers, and (3)
forming clusters with the remaining data. We started by obtaining
the pairwise distance matrix based on (\ref{eq:proposed distance}).
Two outliers were then detected simply by calculating the average
distance to each patient's 3 nearest neighbors. The two had average
distances on the order of 500 and 1,010, while those of all the others
fell within $[39,300]$. Cluster identification can be affected significantly
by a few faraway noisy points, which should be removed in order to
make the clustering more reliable. Our method of detecting outliers is similar
to that of \citet{ramaswamy2000efficient}, being based on dissimilarity. Excluding
the two outliers, the remaining 312 dosage curves were
clustered into 6 subgroups via “partitioning around medoids” (PAM),
as shown in Figure \ref{fig:Subgroups}. The mean curves of the
subgroups are also shown in Figure \ref{fig:mean-profile} (a).
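For step (2), the screen needs nothing beyond the distance matrix itself; the following minimal sketch (with a synthetic stand-in for the methadone distance matrix) scores each subject by the average distance to its 3 nearest neighbors:
\begin{verbatim}
import numpy as np

def knn_outlier_scores(D, k=3):
    """Average distance to the k nearest neighbors, per row of D."""
    D = np.asarray(D, dtype=float).copy()
    np.fill_diagonal(D, np.inf)         # exclude each subject's self-distance
    return np.sort(D, axis=1)[:, :k].mean(axis=1)

rng = np.random.default_rng(5)
X = rng.normal(size=(50, 4))
X[0] += 15.0                            # one planted outlier
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
scores = knn_outlier_scores(D)
print(np.argsort(scores)[-2:])          # indices with the two largest scores
\end{verbatim}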
It is obvious that Groups 1 and 2 are more stable, remaining at dose levels
roughly within $[10,40]$ and $[40,80]$, respectively. Group 3
has an upward trend while Group 4 has a downward trend, and from Figure
\ref{fig:mean-profile} the two mean curves cross around day 85. Group
5 goes up quickly and stays at a dose level around 80. Although Group
6 has a trend similar to Group 5, it fluctuates heavily over a larger
range and looks more unstable. Overall, these figures indicate that
a patient with a higher early dosage (roughly above 60 mg at day
45) tends not to reduce the level afterward, and monitoring between
the second and third months can be critical.
Results based on a model-based functional data clustering are also
given for comparison. We used the `funcit' function in the `funcy'
package (\citealp{funcy}) on the Comprehensive R Archive Network
(CRAN; \citealp{CRAN}). The model option of the function is set to
`iterSubspace', i.e., an implementation of the algorithm in \citet{chiou2007functional}.
The theoretical mean profiles of clusters based on participants including
and excluding outliers are shown in Figure \ref{fig:mean-profile}
(c) and Figure \ref{fig:mean-profile} (d), respectively. Profiles
of the two outlier participants are also shown in Figure \ref{fig:mean-profile}
(b).
PAM does not provide theoretical mean profiles, so it cannot be directly
compared to the model-based method; note, however, the resemblance
between Figures \ref{fig:mean-profile} (a) and \ref{fig:mean-profile}
(d). Excluding the two outliers did improve the model-based method,
in that the average distance to the mean profile was reduced by 7.6\%, from 166.7
to 154.9, giving more compact clusters. Inspecting Figure \ref{fig:mean-profile}
(b), we can understand the interlacing of the 2nd, 3rd, and 4th subgroups
in Figure \ref{fig:mean-profile} (c). Clearly, it is hard to assign
the two outlier curves to any of the groups found. Forcing them in
exaggerates the within-group variation, no matter which
groups they are assigned to. The boundaries of the groups then become
blurred, and so does the representativeness of the mean profiles.
Unfortunately, identifying outliers during a model-based clustering
procedure can be tautological, since the unknown `ordinary' within-group
variation depends on telling apart which participants are `abnormal'.
In contrast, the dissimilarity in a distance-based method (including our
proposal) is not affected by whether outliers occur, and can serve
as an outlier detector. The simulations above reveal the stable superiority
of the proposed dissimilarity, and it is usable as a beneficial pre-cleaning
step for model-based clustering.
\begin{figure}
\begin{centering}
\includegraphics[scale=0.5]{subgroup}
\par\end{centering}
\caption{\label{fig:Subgroups}Subgroups from PAM clustering of the 312 patients
in methadone maintenance therapy.}
\end{figure}
\begin{figure}
\noindent \begin{centering}
\centering{
\begin{tabular}{cc}
\includegraphics[scale=0.48]{mean_profile} & \includegraphics[scale=0.48]{outliers}\tabularnewline
(a) & (b)\tabularnewline
\includegraphics[scale=0.48]{mean_314} & \includegraphics[scale=0.48]{mean_312}\tabularnewline
(c) & (d)\tabularnewline
\end{tabular}
\par\end{centering}
\noindent \centering{}\medskip{}
\caption{\label{fig:mean-profile}(a): Mean curves of subgroups in Figure \ref{fig:Subgroups};
(b) Dosage profiles of the two excluded outliers; (c) Mean profiles
of a model-based clustering method including the two outliers; (d)
Mean profiles of the same clustering method with (c) but excluding
the two outliers. }
\end{figure}
\section{Conclusion and Discussion}
We have shown that a distance based on smoothed data is better than
distances based on specific time series assumptions when the underlying
curves change gradually. With smoothing parameter commutation,
the proposed distance measure improves upon the widely
used approach in \citet{ramsay2005smoothing} without introducing
further computational complexity. We also demonstrated a simple method
for outlier detection that helps model-based functional data clustering
form more compact subgroups.
The `funcy' package on CRAN integrates several model-based clustering
methods for functional data, but most of them require regular measurements
and do not fit the methadone dosage example with its many missing values.
The only two methods in the package allowing irregular measurements
are `fitfclust' and `iterSubspace', and we applied the latter simply
because the former consumed more than 20\,GB of memory and spent
6 hours per iteration on this example, which is not yet a practical
choice for general applications.
There are many nonparametric regression methods other than smoothing
splines, e.g., local polynomial regression and wavelet analysis. Different
techniques stand out in different situations. It is of interest to
study whether analogous parameter commutation operations exist,
with similar advantages, when applying other nonparametric regressions.
This direction is left as future work.
\section{Introduction}
\subsection{Degree sum condition for graphs with high connectivity to be Hamiltonian}
\label{intro}
In this paper,
we consider only finite undirected graphs
without loops or multiple edges.
For standard graph-theoretic terminology not explained,
we refer the reader to \cite{Bondybook}.
A \textit{Hamiltonian cycle} of a graph is a cycle containing all the vertices of the graph.
A graph having a Hamiltonian cycle is called a \textit{Hamiltonian graph}.
The Hamiltonian problem has long been fundamental in graph theory.
Since it is NP-complete,
no easily verifiable necessary and sufficient condition seems to exist.
Then instead of that,
many researchers have investigated sufficient conditions
for a graph to be Hamiltonian.
In this paper, we deal with a degree sum type condition,
which is one of the main streams of this study.
We introduce four invariants, including the degree sum,
which play important roles in the existence of a Hamiltonian cycle.
Let $G$ be a graph.
The number of vertices of $G$
is called its \textit{order},
denoted by $n(G)$.
A set $X$ of vertices in $G$ is called \textit{an independent set in $G$}
if no two vertices of $X$ are adjacent in $G$.
The \textit{independence number} of $G$
is defined by
the maximum cardinality of an independent set in $G$,
denoted by $\alpha(G)$.
For two distinct vertices $x,y \in V(G)$,
the \textit{local connectivity} $\kappa_G(x,y)$
is defined to be the maximum number of internally-disjoint paths
connecting $x$ and $y$ in $G$.
A graph $G$ is \textit{$k$-connected}
if
$\kappa_G(x,y) \ge k$
for any two distinct vertices $x, y \in V(G)$.
The \textit{connectivity} $\kappa(G)$ of $G$
is the maximum value of $k$ for which $G$ is $k$-connected.
We denote by $N_{G}(x)$ and $d_{G}(x)$
the neighborhood and the degree of a vertex $x$ in $G$, respectively.
If $\alpha(G) \ge k$,
let
$$
\sigma _{k} (G) =
\min \big\{\sum_{x\in X}\dg{G}{x}
\colon \text{$X$ is an independent set in $G$
with $|X|=k$} \big\};
$$
otherwise
let $\sigma _{k} (G) =
+\infty$.
If the graph $G$ is clear from the context,
we simply
write
$n$,
$\alpha$, $\kappa$ and
$\sigma_k$ instead of
$n(G)$,
$\alpha(G)$,
$\kappa(G)$
and $\sigma_k(G)$, respectively.
\paragraph{}
One of the main streams of the study
of the Hamiltonian problem
is,
as mentioned above,
to consider
degree sum type
sufficient conditions
for graphs to have a Hamiltonian cycle.
We list some of them below.
(Each of the conditions is best possible in some sense.)
\begin{refthm}
\label{degresult}
Let $G$ be a graph of order at least three.
If $G$ satisfies one of the following,
then $G$ is Hamiltonian.
\begin{enumerate}[{\upshape (i)}]
\item
{\upshape (Dirac \cite{Dirac})}
The minimum degree of $G$ is at least $\frac{n}{2}$.
\item
{\upshape (Ore \cite{Ore})}
$\sigma_2 \ge n$.
\item
{\upshape (Chv\'{a}tal and Erd\H{o}s \cite{Chvatal&Erdos})}
$\alpha \le \kappa$.
\item
{\upshape (Bondy \cite{Bondy})}
$G$ is $k$-connected and
$\displaystyle
\sigma_{k+1} > \frac{(k+1)(n-1)}{2}$.
\item
{\upshape (Bauer, Broersma, Veldman and Li \cite{BBVL})}
$G$ is $2$-connected and $\sigma_3 \ge n + \kappa$.
\end{enumerate}
\end{refthm}
To be exact,
Theorem \ref{degresult} (iii) is not
a degree sum type condition,
but it is closely related.
Bondy \cite{Bondyrem} showed that
Theorem \ref{degresult} (iii) implies (ii).
The current research of this area is based on
Theorem \ref{degresult} (iii).
Let us explain how to expand the research from Theorem \ref{degresult} (iii):
Let $G$ be a $k$-connected graph,
and suppose that one wants to consider whether $G$ is Hamiltonian.
If $\alpha \leq k$,
then
it follows from Theorem \ref{degresult} (iii) that
$G$ is Hamiltonian.
Hence
we may assume that $\alpha \geq k+1$,
that is,
$G$ has an independent set of order $k+1$.
Thus,
it is natural to consider a $\sigma_{k+1}$ condition
for a $k$-connected graph.
Bondy \cite{Bondy}
gave a $\sigma_{k+1}$ condition of Theorem \ref{degresult} (iv).
\paragraph{}
In this paper,
we give a much weaker
$\sigma_{k+1}$ condition
than that of Theorem \ref{degresult} (iv).
\begin{thm}\label{main}
Let $k$ be an integer with $k \ge 1$
and
let $G$ be a $k$-connected graph.
If $$\sigma_{k+1} \ge n+\kappa+(k-2)(\alpha-1),$$
then $G$ is Hamiltonian.
\end{thm}
Theorem \ref{main}
was conjectured by
Ozeki and Yamashita \cite{OY},
and
has been proven
for small integers $k$:
The case $k=2$ of Theorem \ref{main}
coincides with Theorem \ref{degresult} (v).
The cases $k=1$ and $k=3$ were shown
by Fraisse and Jung \cite{FJ},
and by Ozeki and Yamashita \cite{OY}, respectively.
\subsection{Best possibility of Theorem \ref{main}}
\label{degbest}
In this section,
we show that
the $\sigma_{k+1}$ condition in Theorem \ref{main} is
best possible in some senses.
We first
discuss the lower bound
of the $\sigma_{k+1}$ condition.
For an integer $l\geq 2$ and $l$ vertex-disjoint graphs $H_{1},\ldots ,H_{l}$,
we define the graph $H_{1}+\cdots +H_{l}$ from the union of $H_{1},\ldots ,H_{l}$
by joining every vertex of $H_{i}$ to every vertex of $H_{i+1}$ for $1\leq i\leq l-1$.
Fix an integer $k\geq 1$.
Let $\kappa $, $m$ and $n$ be integers
with $k\leq \kappa <m$ and $2m+1\leq n\leq 3m-\kappa $.
Let $G_{1}=K_{n-2m}+\overline{K}_{\kappa}+\overline{K}_{m}+\overline{K}_{m-\kappa}$,
where $K_{l}$ denotes a complete graph of order $l$
and
$\overline{K}_{l}$ denotes the complement of $K_{l}$.
Then
$\alpha(G_{1})=m+1$, $\kappa (G_{1})=\kappa $ and
\begin{eqnarray*}
\sigma_{k+1}(G_{1})
&=& (n - 2m -1 + \kappa) + km\\
&=&
n(G_{1}) + \kappa(G_{1}) +(k-2)(\alpha(G_{1})-1)-1.
\end{eqnarray*}
(Note that it follows from condition
``$n\leq 3m-\kappa $''
that $n - 2m -1 + \kappa < m$.)
Since deleting all the vertices in $\overline{K}_{\kappa}$
and those in $\overline{K}_{m -\kappa}$
breaks $G_1$ into $m+1$ components,
we see that $G_{1}$ has no Hamiltonian cycle.
Therefore,
the $\sigma_{k+1}$ condition in Theorem \ref{main} is
best possible.
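The parameters of $G_{1}$ are easy to verify by computer for small admissible values. The following Python sketch (our own check, for the hypothetical instance $\kappa=2$, $m=4$, $n=9$ with $k=2$, using networkx) confirms $\kappa(G_{1})$, $\alpha(G_{1})$, the value of $\sigma_{k+1}(G_{1})$, and the cut witnessing non-Hamiltonicity:
\begin{verbatim}
import networkx as nx
from itertools import combinations

def sequential_join(parts):
    """parts = list of (size, complete?) blocks; consecutive blocks joined."""
    G, blocks, offset = nx.Graph(), [], 0
    for size, complete in parts:
        nodes = list(range(offset, offset + size))
        G.add_nodes_from(nodes)
        if complete:
            G.add_edges_from(combinations(nodes, 2))
        blocks.append(nodes)
        offset += size
    for left, right in zip(blocks, blocks[1:]):
        G.add_edges_from((p, q) for p in left for q in right)
    return G, blocks

kappa, m, n, k = 2, 4, 9, 2      # k <= kappa and 2m+1 <= n <= 3m-kappa
G1, blocks = sequential_join([(n - 2*m, True), (kappa, False),
                              (m, False), (m - kappa, False)])
alpha = max(len(c) for c in nx.find_cliques(nx.complement(G1)))
sigma = min(sum(G1.degree(v) for v in S)
            for S in combinations(G1, k + 1)
            if not any(G1.has_edge(a, b) for a, b in combinations(S, 2)))
print(nx.node_connectivity(G1), alpha)              # kappa = 2, m + 1 = 5
print(sigma, n + kappa + (k - 2)*(alpha - 1) - 1)   # both 10: bound is tight
cut = set(blocks[1]) | set(blocks[3])               # the two empty blocks
rest = G1.subgraph(set(G1) - cut)
print(nx.number_connected_components(rest), len(cut))  # 5 components > 4
\end{verbatim}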
We next
discuss
the relation between the coefficient of $\kappa$
and that of $\alpha - 1$.
By Theorem \ref{degresult} (iii),
we may assume that $\alpha \ge \kappa+1$.
This implies that
$$n+\kappa+(k-2)(\alpha-1)
\ge n+ (1 + \varepsilon) \kappa + (k-2-\varepsilon)(\alpha-1)$$
for arbitrary $\varepsilon > 0$.
Then
one may expect
that
the $\sigma_{k+1}$ condition in Theorem \ref{main}
can be replaced with
``$n+ (1 + \varepsilon) \kappa + (k-2-\varepsilon)(\alpha-1)$''
for some $\varepsilon > 0$.
However, the graph $G_{1}$ defined above shows that this is not the case:
For any $\varepsilon > 0$,
there exist two integers $m$ and $\kappa$
such that
$\varepsilon(m-\kappa) \geq 1$.
If we construct the above graph $G_1$
from such integers $m$ and $\kappa$,
then
we have
\begin{eqnarray*}
\sigma_{k+1}(G_{1})
&=& n + \kappa + (k-2) m - 1\\
&=&
n + (1 + \varepsilon) \kappa
+ (k-2-\varepsilon)m-1
+\varepsilon(m- \kappa)\\
&\ge&
n(G_{1}) + (1 + \varepsilon) \kappa(G_{1})
+ (k-2-\varepsilon)\big(\alpha(G_{1}) -1\big),
\end{eqnarray*}
but $G_{1}$ is not Hamiltonian.
This means that
the coefficient $1$ of $\kappa$
and
the coefficient $k-2$ of $\alpha - 1$
are,
in a sense,
best possible.
\subsection{Comparing Theorem \ref{main} to other results}
\label{compairothers}
In this section,
we
compare
Theorem \ref{main}
to
Theorem \ref{degresult} (iv)
and
Ota's result (Theorem \ref{Ota}).
We first show that
the $\sigma_{k+1}$ condition of Theorem \ref{main} is weaker than that of Theorem \ref{degresult} (iv).
Let $G$ be a $k$-connected graph
satisfying the $\sigma_{k+1}$ condition of Theorem \ref{degresult} (iv).
Assume that
$\alpha \ge (n+1)/2$.
Let $X$ be an independent set of order at least $(n+1)/2$.
Then
$|V(G) \setminus X| \le (n-1)/2$
and
$|V(G) \setminus X| \ge k$
since $V(G) \setminus X$ is a cut set.
Hence $(n+1)/2 \ge k+1$,
and
we can take a subset $Y$ of $X$ with $|Y|=k+1$.
Then
$N_G(y) \subseteq V(G) \setminus X$ for $y\in Y$,
and hence
$\sum_{y \in Y} d_G(y)
\le (k+1)|V(G) \setminus X|
\le
(k+1)(n-1)/2$.
This contradicts
the $\sigma_{k+1}$ condition
of Theorem \ref{degresult} (iv).
Therefore
$n/2 \ge \alpha$.
Moreover,
by Theorem \ref{degresult} (iii),
we may assume that
$\alpha \geq \kappa+1$.
Therefore,
the following inequality holds:
\begin{eqnarray*}
\sigma _{k+1}
&>&\frac{(k+1)(n-1)}{2}\\
&=&n-1+\frac{(k-1)(n-1)}{2}\\
&\ge&n-1+\frac{(k-1)(2\alpha-1)}{2}\\
&\ge&n-1+(k-1)(\alpha-1)\\
&\ge& n+\kappa+(k-2)(\alpha-1)-1.
\end{eqnarray*}
Thus,
the $\sigma_{k+1}$ condition
of Theorem \ref{degresult} (iv)
implies
that
of Theorem \ref{main}.
We next compare
Theorem \ref{main}
to the following result of Ota.
\begin{thm}[Ota \cite{Ota}]\label{Ota}
Let $G$ be a $2$-connected graph.
If $\sigma_{l+1} \ge n+l(l-1)$
for all integers $l$
with $l \ge \kappa$,
then $G$ is Hamiltonian.
\end{thm}
We first mention the reason
for comparing Theorem \ref{main} to Theorem \ref{Ota}.
Li \cite{Hao13}
proved the following theorem,
which was conjectured by
Li, Tian, and Xu \cite{LTX10}.
(Harkat-Benhamadine, Li and Tian \cite{HLT},
and
Li, Tian, and Xu \cite{LTX10}
have already proven the case $k=3$ and the case $k=4$,
respectively.)
\begin{thm}[Li \cite{Hao13}]
\label{coro}
Let $k$ be an integer with $k \ge 1$
and
let $G$ be a $k$-connected graph.
If $\sigma_{k+1} \ge n+(k-1)(\alpha-1)$,
then $G$ is Hamiltonian.
\end{thm}
In fact,
Li showed Theorem \ref{coro}
just as a corollary of Theorem \ref{Ota}.
Note that
Theorem \ref{main} is,
assuming Theorem \ref{degresult} (iii),
an improvement of Theorem \ref{coro}.
Therefore
we should show
that
Theorem \ref{main} cannot be implied
by Theorem \ref{Ota}.
(Ozeki, in his Doctoral Thesis \cite{Ozeki}, compared the relation between
several theorems,
including
Theorem \ref{degresult} (i), (ii), (iii) and (v),
the case $k=3$ of
Theorems
\ref{main} and \ref{coro},
and Theorem \ref{Ota}.)
Let $\kappa, r,k,m$ be integers such that
$4 \le r$,
$3 \le k \le \kappa-2$
and
$m = (k+1)(r-2)+4.$
Let $G_{2}=K_{1} + \overline{K}_{\kappa}+K_{\kappa+m-r}+(\overline{K}_{m}+K_{r})$.
Then
$n(G_{2})=2\kappa + 2m + 1$,
$\kappa(G_{2})=\kappa$
and
$\alpha(G_{2})=\kappa + m$.
Since
\begin{eqnarray*}
\kappa + k(\kappa + m) - (k+1) (\kappa + m -r +1)
&=& (k+1)(r-1) -m \\
&=& (k+1)(r-1) - (k+1)(r-2)-4 \\
&=& k-3 \\
&\ge& 0,
\end{eqnarray*}
it follows that
\begin{eqnarray*}
\sigma_{k+1}(G_2)
&=& \min\big\{\kappa + k(\kappa + m),\ (k+1) (\kappa + m -r+1)\big\} \\
&=& \kappa+k(\kappa+m) -(k-3)\\
&=& (2\kappa + 2m + 1) + \kappa + (k-2)(\kappa + m - 1)\\
&=& n(G_2) + \kappa(G_2) + (k-2)(\alpha(G_2) - 1).
\end{eqnarray*}
Hence the assumption of Theorem \ref{main} holds.
On the other hand,
for $l=\alpha(G_2) - 1 = \kappa+m-1$,
we have
\begin{eqnarray*}
n(G_2) + l(l-1) - \sigma_{l+1}(G_2)
&=& (2\kappa + 2m+1) + (\kappa+m-1)(\kappa + m-2) \\
&& - \big\{\kappa(\kappa+m-r+1)+m(\kappa+m)\big\} \\
&=& \kappa (r - 2) -m +3
\\
&=& \kappa (r - 2)- (k+1)(r-2)-4 +3
\\
&=& (\kappa-k-1)(r - 2)-1
\\
&\ge& (r - 2)-1
\\
&>& 0.
\end{eqnarray*}
Hence the assumption of Theorem \ref{Ota}
does not hold.
These show that
for the graph $G_2$,
we can apply Theorem \ref{main}
but cannot apply Theorem \ref{Ota}.
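As with $G_{1}$, these computations can be checked mechanically on a small instance. The sketch below (for the hypothetical smallest admissible choice $\kappa=5$, $k=3$, $r=4$, so that $m=12$ and $n=35$) verifies $\kappa(G_{2})$, $\alpha(G_{2})$ and $\sigma_{k+1}(G_{2})$ by brute force:
\begin{verbatim}
import networkx as nx
from itertools import combinations

kappa, k, r = 5, 3, 4                  # 4 <= r and 3 <= k <= kappa - 2
m = (k + 1)*(r - 2) + 4                # = 12, so n = 2*kappa + 2*m + 1 = 35
sizes = [1, kappa, kappa + m - r, m, r]
complete = [True, False, True, False, True]
bnd = [sum(sizes[:i]) for i in range(len(sizes) + 1)]
blocks = [list(range(bnd[i], bnd[i + 1])) for i in range(len(sizes))]

G2 = nx.Graph()
for nodes, comp in zip(blocks, complete):
    G2.add_nodes_from(nodes)
    if comp:
        G2.add_edges_from(combinations(nodes, 2))
for left, right in zip(blocks, blocks[1:]):          # consecutive joins
    G2.add_edges_from((p, q) for p in left for q in right)
# K_{kappa+m-r} is joined to all of bar{K}_m + K_r, including the K_r part:
G2.add_edges_from((p, q) for p in blocks[2] for q in blocks[4])

n = G2.number_of_nodes()
alpha = max(len(c) for c in nx.find_cliques(nx.complement(G2)))
sigma = min(sum(G2.degree(v) for v in S)
            for S in combinations(G2, k + 1)
            if not any(G2.has_edge(a, b) for a, b in combinations(S, 2)))
print(n, nx.node_connectivity(G2), alpha)      # 35, kappa = 5, kappa + m = 17
print(sigma, n + kappa + (k - 2)*(alpha - 1))  # both 56
\end{verbatim}
(A brute-force check of $\sigma_{l+1}$ for $l=\alpha(G_{2})-1=16$ is infeasible this way, but $\sigma_{17}(G_{2})=\kappa(\kappa+m-r+1)+m(\kappa+m)=274<275=n+l(l-1)$ can be read off from the degrees, matching the displayed computation.)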
\section{Notation and lemmas}
Let $G$ be a graph
and $H$ be a subgraph of $G$,
and
let $x \in V(G)$ and $X \subseteq V(G)$.
We denote by
$N_G(X)$
the set of vertices in $V(G) \setminus X$
which are adjacent to some vertex in $X$.
We define
$N_H(x) = N_G(x) \cap V(H)$
and
$\dg{H}{x} = |N_H(x)|$.
Furthermore,
we define
$N_H(X) = N_G(X) \cap V(H)$.
If there is no fear of confusion,
we often identify $H$
with its vertex set $V(H)$.
For example,
we often write $G- H$
instead of $G- V(H)$.
For a subgraph $H$,
a path $P$ is called an \textit{$H$-path}
if both end vertices of $P$ are contained in $H$
and all internal vertices are not contained in $H$.
Note that each edge of $H$ is an $H$-path.
Let $C$ be a cycle (or a path) with a fixed orientation in a graph $G$.
For $x,y \in V(C)$,
we denote
by $\IR{C}{x}{y}$
the path from $x$ to $y$
along the orientation of ${C}$.
The reverse sequence
of $\IR{C}{x}{y}$
is denoted by $\IL{C}{y}{x}$.
We denote $\IR{C}{x}{y} - \{x,y\}$,
$\IR{C}{x}{y} - \{x\}$
and $\IR{C}{x}{y} - \{y\}$
by $\ir{C}{x}{y}$, $\iR{C}{x}{y}$ and $\Ir{C}{x}{y}$, respectively.
For $x \in V(C)$,
we denote the successor and the predecessor
of $x$ on $C$ by $x^{+}$ and $x^{-}$, respectively.
For $X \subseteq V(C)$,
we define $X^{+} = \{x^{+} : x \in X\}$
and $X^{-} = \{x^{-} : x \in X \}$.
Throughout this paper, we assume that every cycle
has a fixed orientation.
In this paper, we extend the concept of
\textit{insertible},
introduced by Ainouche \cite{Ainouche92},
which has been used for the proofs of the results on cycles.
Let $G$ be a graph,
and $H$ be a subgraph of $G$.
Let
$X(H)= \{u \in V(G-H) : \mbox{$uv_1, uv_2 \in E(G)$ for some $v_1v_2 \in E(H)$} \}$,
let
$I(x;H) = \{v_1v_2 \in E(H) : xv_1, xv_2 \in E(G)\}$ for $x \in V(G-H)$,
and let
$Y(H)= \{u \in V(G-H) : \text{ $d_{H}(u) \ge \alpha(G)$}\}$.
\begin{lem}
\label{D cup Q is hamilton}
Let $D$ be a cycle of a graph $G$.
Let $k$ be a positive integer
and
let
$Q_{1}, Q_{2},\ldots, Q_{k}$ be paths of $G - D$ with fixed orientations
such that $V(Q_{i}) \cap V(Q_{j}) = \emptyset$ for $1 \le i<j \le k$.
If the following {\rm(I)} and {\rm(II)} hold,
then $G[V(D \cup Q_{1} \cup Q_{2}\cup \cdots \cup Q_{k})]$ is Hamiltonian.
\begin{enumerate}[{\upshape(I)}]
\item
For $1 \le i \le k$ and $a \in V(Q_{i})$,
$a \in X(D)\cup Y(\iR{Q_{i}}{a}{b_{i}} \cup D)$,
where $b_{i}$ is the last vertex of $Q_{i}$.
\item
For $1 \le i<j \le k$, $x \in V(Q_{i})$ and $y \in V(Q_{j})$,
$I(x; D) \cap I(y; D) = \emptyset$.
\end{enumerate}
\end{lem}
\begin{proof}
We can easily see that
$G[V(D \cup Q_{1} \cup Q_{2}\cup \cdots \cup Q_{k})]$ contains a cycle $D^{*}$
such that $V(D) \cup \big( X(D) \cap V(Q_1 \cup Q_2 \cup \cdots \cup Q_{k}) \big) \subseteq V(D^{*})$.
In fact,
we can insert
all vertices of $X(D) \cap V(Q_{1})$ into $D$
by choosing the following
$u_{1}, v_{1} \in V(Q_{1})$ and
$w_{1}w_{1}^{+} \in E(D)$ inductively.
Take the first vertex $u_{1}$ in $X(D) \cap V(Q_{1})$
along the orientation of $Q_{1}$,
and let $v_{1}$ be the last vertex in $X(D) \cap V(Q_{1})$ on $Q_{1}$
such that $I(u_{1}; D) \cap I(v_{1}; D) \neq \emptyset$.
Then we can insert
all vertices of $\IR{Q_{1}}{u_{1}}{v_{1}}$
into $D$.
To be exact,
taking $w_{1}w_{1}^{+} \in I(u_{1}; D) \cap I(v_{1}; D)$,
$D_{1}^{1}:=w_{1}\IR{Q_{1}}{u_{1}}{v_{1}}\IR{D}{w_{1}^{+}}{w_{1}}$
is such a cycle.
By the choice of $u_{1}$ and $v_{1}$,
$w_{1}w_{1}^{+} \notin I(x; D)$ for all $x \in V(Q_{1} - \IR{Q_{1}}{u_{1}}{v_{1}})$,
and
$X(D) \cap V(Q_{1} - \IR{Q_{1}}{u_{1}}{v_{1}})$
is contained in some component of $Q_{1} - \IR{Q_{1}}{u_{1}}{v_{1}}$.
Moreover,
note that $E(D) \setminus \{w_{1}w_{1}^{+}\} \subseteq E(D_{1}^{1})$.
Hence by repeating this argument,
we can obtain a cycle $D_{1}^{*}$ of $G[V(D \cup Q_1)]$
such that $V(D) \cup \big( X(D) \cap V(Q_{1}) \big) \subseteq V(D_{1}^{*})$
and $E(D) \setminus \bigcup_{x \in V(Q_{1})}I(x; D) \subseteq E(D_{1}^{*})$.
Then by (II),
$I(x;D) \subseteq E(D_{1}^{*})$ for all $x \in V(Q_{2} \cup \cdots \cup Q_{k})$.
Therefore
$G[V(D \cup Q_{1} \cup Q_{2}\cup \cdots \cup Q_{k})]$ contains a cycle $D^{*}$
such that
$V(D) \cup \big( X(D) \cap V(Q_1 \cup Q_2 \cup \cdots \cup Q_{k}) \big) \subseteq V(D^{*})$.
We choose a cycle $C$ of $G[V(D \cup Q_{1} \cup Q_{2}\cup \cdots \cup Q_{k})]$
containing all vertices in $V(D) \cup \big( X(D) \cap V(Q_1 \cup Q_2 \cup \cdots \cup Q_{k}) \big)$
so that $|C|$ is as large as possible.
Now, we change the ``base'' cycle from $D$ to $C$,
and use the symbol ${(\cdot)}^{+}$ for the orientation of $C$.
Suppose that
$V(Q_{i} - C) \neq \emptyset$ for some $i$ with $i \in \{1, 2,\ldots,k\}$.
We may assume that $i = 1$.
Let $w$ be the last vertex in $V(Q_{1} - C)$ along $Q_{1}$.
Since $C$ contains all vertices in $X(D) \cap V(Q_{1})$,
it follows from (I) that $w \in Y(Q_{1}(w,b_1] \cup D)$,
that is,
$|N_{G}(w) \cap V(Q_{1}(w,b_1] \cup D)| \ge \alpha(G)$.
By the choice of $w$,
we obtain
$V(Q_{1}(w,b_1] \cup D) \subseteq V(C) $.
Therefore
$|N_{C}(w)^{+} \cup \{w\}| \ge |N_{G}(w) \cap V(Q_{1}(w,b_1] \cup D)| + 1 \ge \alpha(G) + 1$.
This implies that
$N_{C}(w)^{+} \cup \{w\}$ is not an independent set in $G$.
Hence
$wz^{+} \in E(G)$ for some $z \in N_{C}(w)$
or
$z_{1}^{+}z_{2}^{+} \in E(G)$ for some distinct $z_{1}, z_{2} \in N_{C}(w)$.
In the former case, let
$C' = w\IR{C}{z^{+}}{z}w$,
and in the latter case,
let $C' = w\IR{\overleftarrow{C}}{z_{1}}{z_{2}^{+}}\IR{C}{z_{1}^{+}}{z_{2}}w$.
Then $C'$
is a cycle of $G[V(D \cup Q_{1} \cup Q_{2}\cup \cdots \cup Q_{k})]$ such that
$V(C) \cup \{w\} \subseteq V(C')$, which contradicts the choice of $C$.
Thus $V(Q_{1} \cup Q_{2}\cup \cdots \cup Q_{k})$ are contained in $C$,
and hence $C$ is a Hamiltonian cycle of
$G[V(D \cup Q_{1} \cup Q_{2}\cup \cdots \cup Q_{k})]$.
\end{proof}
In the rest of this section,
we fix the following notation.
Let $C$ be a longest cycle in a graph $G$,
and $H_{0}$ be a component of $G - C$.
For $u \in N_{C}(H_{0})$,
let $u' \in N_{C}(H_{0})$ be a vertex
such that $\ir{C}{u}{u'} \cap N_{C}(H_{0})=\emptyset$,
that is,
$u'$ is the successor of $u$ in $N_C(H_{0})$ along the orientation of $C$.
For $u \in N_{C}(H_{0})$,
a vertex $v \in \ir{C}{u}{u'}$
is \textit{insertible}
if $v \in X(\IR{C}{u'}{u}) \cup Y(\iR{C}{v}{u})$.
A vertex in $\ir{C}{u}{u'}$ is said to be \textit{non-insertible}
if it is not insertible.
\begin{lem}
\label{non-insertible}
There exists a non-insertible vertex
in $\ir{C}{u}{u'}$ for $u \in N_{C}(H_{0})$.
\end{lem}
\begin{proof}
Let $u \in N_{C}(H_{0})$,
and suppose that
every vertex in $\ir{C}{u}{u'}$ is insertible.
Let
$P$ be a $C$-path joining $u$ and $u'$ with
$V(P) \cap V(H_{0}) \neq \emptyset$.
Let
$D = \IR{C}{u'}{u}P[u,u']$
and $Q = \ir{C}{u}{u'}$.
Let $v \in V(Q)$.
Since $v$ is insertible,
it follows that
$v \in X(\IR{C}{u'}{u}) \cup Y(C(v,u])$.
Since $\IR{C}{u'}{u}$ is a subpath of $D$,
we have
$v \in X(D) \cup Y(Q(v,u') \cup D)$.
Hence, by Lemma \ref{D cup Q is hamilton},
$G[V(D \cup Q)]$ is Hamiltonian,
which contradicts the maximality of $C$.
\end{proof}
\begin{figure}[h]
\begin{center}
\includegraphics{lemma2.eps}
\caption{Lemma \ref{insertible}}
\label{insertible-pict2}
\end{center}
\end{figure}
\begin{lem}
\label{insertible}
Let $u_{1}, u_{2} \in N_{C}(H_{0})$ with $u_{1} \neq u_{2}$,
and let $x_{i}$ be the first non-insertible vertex along $\ir{C}{u_i}{u_i'}$ for $i \in \{ 1, 2\}$.
Then the following hold (see Figure \ref{insertible-pict2}).
\begin{enumerate}[{\upshape (i)}]
\item\label{crossing}
There exists no $C$-path joining $v_{1} \in \iR{C}{u_{1}}{x_{1}}$ and $v_{2} \in \iR{C}{u_{2}}{x_{2}}$.
In particular,
$x_{1}x_{2} \not\in E(G)$.
\item\label{crossing2}
If there exists a $C$-path joining $v_{1} \in \iR{C}{u_{1}}{x_{1}}$
and $w \in \iR{C}{v_{1}}{u_2}$,
then there exists no $C$-path joining $v_{2} \in \iR{C}{u_{2}}{x_{2}}$ and $w^{-}$.
\item\label{W1}
If there exist a $C$-path joining $v_{1} \in \iR{C}{u_{1}}{x_{1}}$ and $w_{1} \in \ir{C}{v_{1}}{u_{2}}$
and a $C$-path joining $v_{2} \in \iR{C}{u_{2}}{x_{2}}$ and $w_{2} \in \Ir{C}{w_{1}}{u_{2}}$,
then there exists no $C$-path joining $w_{1}^{-}$ and $w_{2}^{+}$.
\item\label{W2}
If for each $i \in \{1, 2\}$,
there exists a $C$-path joining $v_{i} \in \iR{C}{u_{i}}{x_{i}}$ and $w_{i} \in \iR{C}{v_{i}}{u_{3-i}}$,
then there exists no $C$-path joining $w_{1}^{-}$ and $w_{2}^{-}$.
\end{enumerate}
\end{lem}
\begin{proof}
Let $P_{0}$ be a $C$-path connecting $u_1$ and $u_2$ such that $V(P_{0}) \cap V(H_{0}) \neq \emptyset$.
We first show (\ref{crossing}) and (\ref{crossing2}).
Suppose that
the following {\rm(a)} or {\rm(b)} holds
for some $v_{1} \in \iR{C}{u_{1}}{x_{1}}$ and some $v_{2} \in \iR{C}{u_{2}}{x_{2}}$:
(a) There exists a $C$-path $P_{1}$ joining $v_{1}$ and $v_{2}$.
(b) There exist disjoint $C$-paths $P_{2}$ joining $v_{l}$ and $w$, and $P_{3}$ joining $v_{3-l}$ and $w^{-}$
for some $l \in \{1, 2\}$ and some $w \in \iR{C}{v_{l}}{u_{3-l}}$.
We choose such vertices $v_{1}$ and $v_{2}$ so that $|\IR{C}{u_{1}}{v_{1}}| + |\IR{C}{u_{2}}{v_{2}}|$ is as small as possible.
Without loss of generality,
we may assume that $l = 1$ if {\rm(b)} holds.
Since $N_{C}(H_{0})\cap \{v_{1},v_{2}\}=\emptyset$,
$(V(P_{1}) \cup V(P_{2}) \cup V(P_{3})) \cap V(P_{0})= \emptyset$.
Therefore, we can define a cycle
\begin{align*}
D =
\left \{
\begin{array}{ll}
P_{1}[v_{1},v_{2}] \IR{C}{v_{2}}{u_{1}} P_{0}[u_{1},u_{2}]\IR{\overleftarrow{C}}{u_{2}}{v_{1}} & \textup{if \rm{(a)} holds,} \\[1mm]
P_{2}[v_{1},w] \IR{C}{w}{u_{2}} \IL{P_{0}}{u_{2}}{u_{1}}\IR{\overleftarrow{C}}{u_{1}}{v_{2}}P_{3}[v_{2},w^{-}] \IR{\overleftarrow{C}}{w^{-}}{v_{1}}
&\textup{otherwise.}
\end{array}
\right.
\end{align*}
For $i \in \{1, 2\}$,
let $Q_{i} = \ir{C}{u_{i}}{v_{i}}$.
By Lemma \ref{non-insertible},
we can obtain the following statement (1), and
by the choice of $v_{1}$ and $v_{2}$,
we can obtain the following statements (2)--(5):
\noindent
\begin{enumerate}[{\upshape(1)}]
\item
$N_G(x) \cap \ir{P_{0}}{u_{1}}{u_{2}} = \emptyset$
for $x \in V(Q_{1} \cup Q_{2})$.
\item
$N_G(x) \cap (\ir{P_{1}}{v_{1}}{v_{2}} \cup \ir{P_{2}}{v_{1}}{w} \cup \ir{P_{3}}{v_{2}}{w^{-}}) = \emptyset$
for $x \in V(Q_{1} \cup Q_{2})$.
\item
$xy \notin E(G)$ for $x \in V(Q_{1})$ and $y \in V(Q_{2})$.
\item
$I(x;C) \cap I(y;C) = \emptyset$ for $x \in V(Q_{1})$ and $y \in V(Q_{2})$.
\item
If (b) holds, then
$w^{-}w \not \in I(x;C)$ for $x \in V(Q_1 \cup Q_2)$.
\end{enumerate}
Let $a \in V(Q_i)$ for some $i \in \{ 1,2 \}$.
Note that
each vertex of $Q_{i}$ is insertible,
that is,
$a \in X(C[u_{i}',u_{i}]) \cup Y(C(a,u_i])$.
We show that
$a \in X(D) \cup Y(Q_i(a,v_i) \cup D)$.
If $a \in X(C[u_{i}',u_{i}])$,
then
the statements (3) and (5)
yield that $a \in X(D)$.
Suppose that
$a \in Y(C(a,u_i])$.
By (3),
$N_G(a) \cap C(a,u_i] \subseteq N_G(a) \cap \big( Q_i(a,v_i) \cup D \big)$.
This implies that
$a \in Y(\ir{Q_{i}}{a}{v_i} \cup D)$.
By (1), (2) and (4),
$I(x;D) \cap I(y;D) = \emptyset$ for $x \in V(Q_{1})$ and $y \in V(Q_{2})$.
Thus, by Lemma \ref{D cup Q is hamilton},
$G[V(D \cup Q_1 \cup Q_2)]$ is Hamiltonian,
which contradicts the maximality of $C$.
By using a similar argument to the above, we can also show (\ref{W1}) and (\ref{W2}).
We only prove (\ref{W1}).
Suppose that for some $v_{1} \in \iR{C}{u_{1}}{x_{1}}$ and $v_{2} \in \iR{C}{u_{2}}{x_{2}}$,
there exist disjoint $C$-paths
$\IR{P_{1}}{v_{1}}{w_{1}}$, $\IR{P_{2}}{v_{2}}{w_{2}}$ and $\IR{P_{3}}{w_{1}^{-}}{w_{2}^{+}}$
with $w_{1} \in \ir{C}{v_{1}}{u_{2}}$ and $w_{2} \in \Ir{C}{w_{1}}{u_{2}}$.
We choose such $v_{1}$ and $v_{2}$ so that $|\IR{C}{u_{1}}{v_{1}}| + |\IR{C}{u_{2}}{v_{2}}|$
is as small as possible.
Let $Q_{i} = \ir{C}{u_{i}}{v_{i}}$ for $i \in \{1, 2\}$.
Then by Lemma \ref{insertible} (\ref{crossing}),
$xy \notin E(G)$
for $x \in V(Q_{1})$ and $y \in V(Q_{2})$.
By the choice of $v_{1}$ and $v_{2}$ and Lemma \ref{insertible} (\ref{crossing2}),
$w_{1}w_{1}^{-}, w_{2}w_{2}^{+} \notin I(x;\IR{C}{v_1}{u_1}) \cup I(y;\IR{C}{v_2}{u_2})$
for $x \in V(Q_{1})$ and $y \in V(Q_{2})$.
By Lemma \ref{insertible} (\ref{crossing}) and (\ref{crossing2}),
$I(x;\IR{C}{v_1}{u_2} \cup \IR{C}{v_2}{u_1}) \cap I(y;\IR{C}{v_1}{u_2} \cup \IR{C}{v_2}{u_1}) = \emptyset$
for $x \in V(Q_{1})$ and $y \in V(Q_{2})$.
Hence by applying Lemma \ref{D cup Q is hamilton}
as
$$D = \IR{P_{1}}{v_{1}}{w_{1}} \IR{C}{w_{1}}{w_{2}} \IR{\overleftarrow{P_{2}}}{w_{2}}{v_{2}}\IR{C}{v_{2}}{u_{1}}\IR{P_0}{u_{1}}{u_{2}}
\IR{\overleftarrow{C}}{u_{2}}{w_{2}^{+}}\IR{\overleftarrow{P_{3}}}{w_{2}^{+}}{w_{1}^{-}}\IR{\overleftarrow{C}}{w_{1}^{-}}{v_{1}},$$
$Q_{1}$ and $Q_{2}$,
we see that there exists a cycle longer than $C$, a contradiction.
\end{proof}
\section{Proof of Theorem \ref{main}}
\begin{proof}[Proof of Theorem \ref{main}]
The cases $k=1$, $k=2$ and $k=3$ were shown
by Fraisse and Jung \cite{FJ},
by Bauer et al.~\cite{BBVL}
and by Ozeki and Yamashita \cite{OY}, respectively.
Therefore, we may assume that $k \geq 4$.
Let $G$ be a graph
satisfying the assumption of Theorem \ref{main}.
By Theorem \ref{degresult} (iii),
we may assume
$\alpha(G) \geq \kappa(G)+1$.
Let $C$ be a longest cycle in $G$.
If $C$ is a Hamiltonian cycle of $G$,
then there is nothing to prove.
Hence we may assume that $G - V(C)\not=\emptyset$.
Let $H=G - V(C)$ and $x_{0} \in V(H)$.
Choose a longest cycle $C$ and $x_{0}$
so that
\begin{center}
$\dg{C}{x_0}$ is as large as possible.
\end{center}
Let $H_{0}$ be the component of $H$ such that $x_0 \in V(H_{0})$.
Let $$N_{C}(H_{0}) =U= \{u_1,u_{2},\ldots,u_{m}\}.$$
Note that $m \ge \kappa(G) \ge k$.
Let
$$M_{0}=\{0,1,\ldots,m\}\text{ and } M_{1}=\{1,2,\ldots,m\}.$$
Let $u_i'$ be the vertex in $N_{C}(H_{0})$
such that $\ir{C}{u_i}{u_i'} \cap N_{C}(H_{0})=\emptyset$.
By Lemma \ref{non-insertible},
there exists a non-insertible vertex in $\ir{C}{u_{i}}{u_{i}'}$.
Let $x_i \in \ir{C}{u_{i}}{u_{i}'}$
be the first non-insertible vertex along the orientation of $C$
for each $i \in M_{1}$,
and let
$$X=\{x_1,x_2,\ldots,x_m\}.$$
Note that $\dg{C}{x_{0}} \le |U|=|X|$.
Let $$D_i=\ir{C}{u_i}{x_i}
\text{ for each $i \in M_{1}$, and }
D=\bigcup_{i \in M_{1}}D_{i}.$$
\paragraph{}
We check the degree of $x_{i}$ in $C$ and $H$.
Since $x_i$ is non-insertible,
we can see that
\begin{equation} \label{C}
\dg{C}{x_{i}} \le |D_{i}| +\alpha(G)-1 \text{\ \ for $i \in M_{1}$.}
\end{equation}
By the definition of $x_{i}$,
we clearly have
$N_{H_{0}}(x_{i})=\emptyset$ for $i \in M_{1}$.
Moreover, by Lemma \ref{insertible} (i),
$N_{H}(x_i) \cap N_{H}(x_j) =\emptyset$
for $i,j \in M_{1}$ with $i\not=j$.
Thus we obtain
\begin{equation}
\sum_{i\in M_{0}}
\dg{H}{x_i} \le |H|-1, \label{H}
\end{equation}
and
\begin{equation}
\sum_{i\in M_{1}}
\dg{H}{x_i} \le |H|-|H_{0}|. \label{H2}
\end{equation}
We check the degree sum in $C$ of two vertices in $X$.
Let $i$ and $j$ be two distinct integers in $M_{1}$.
In this paragraph,
we
let $C_i = \IR{C}{x_i}{u_{j}}$
and $C_j = \IR{C}{x_j}{u_{i}}$.
By Lemma \ref{insertible} (ii),
we have
$N_{C_i}(x_i)^- \cap N_{C_i}(x_{j})=\emptyset$
and
$N_{C_j}(x_j)^- \cap N_{C_j}(x_{i})=\emptyset$.
By Lemma \ref{insertible} (i),
$N_{C_i}(x_i)^- \cup N_{C_i}(x_{j}) \subseteq C_{i} \setminus D$,
$N_{C_j}(x_j)^- \cup N_{C_j}(x_{i}) \subseteq C_{j} \setminus D$
and
$N_{D_i}(x_j) = N_{D_j}(x_i)=\emptyset$.
Thus, we obtain
\begin{equation}\label{2vertices}
\dg{C}{x_i} +\dg{C}{x_j}
\le |C|-\sum_{h \in M_{1}\setminus \{i,j\}}|D_{h}| \text{\ \ for $i, j \in M_{1}$ with $i \neq j$}.
\end{equation}
\bigskip
By Lemma \ref{insertible} (i) and since $N_{H_{0}}(x_{i}) = \emptyset$ for $i \in M_{1}$,
we obtain the following.
\begin{Claim}\label{Xindep}
$X\cup \{x_{0}\}$ is an independent set,
and hence $|X| \le \alpha(G)-1$.
\end{Claim}
\begin{Claim}\label{k+1}
$|X| \ge \kappa(G)+1$.
\end{Claim}
\begin{proof}
Let $s$ and $t$ be two distinct integers in $M_{1}$.
By the inequality (\ref{2vertices}),
we have
$$\dg{C}{x_s} +\dg{C}{x_t} \le |C|-\sum_{i \in M_{1}\setminus \{s,t\}}|D_{i}|.$$
Let $I$ be a subset of $M_{0}$
such that
$|I|=k+1$ and $\{0,s,t\} \subseteq I$.
By Claim \ref{Xindep},
$\{x_{i} : i \in I\}$
is an independent set.
By the inequality (\ref{C}),
we deduce
\begin{eqnarray*}
\sum_{i \in I\setminus \{0,s,t\}}\dg{C}{x_i}
&\le&
\sum_{i \in I\setminus \{0,s,t\}}|D_{i}|+(k-2)(\alpha(G)-1).
\end{eqnarray*}
By the inequality (\ref{H}) and the definition of $I$,
we obtain
\begin{eqnarray*}
\sum_{i \in I}\dg{H}{x_i}
&\le&
|H| - 1.
\end{eqnarray*}
Thus,
since $I \setminus \{0,s,t\} \subseteq M_{1} \setminus \{s,t\}$ and $|C|+|H| = n$,
it follows from
these three inequalities
that
$$\sum_{i \in I}\dg{G}{x_i}
\leq n+(k-2)(\alpha(G)-1)-1+\dg{C}{x_0}.$$
Since $\{x_{i} : i \in I\}$ is an independent set of order $k+1$
and $\sigma_{k+1}(G) \geq n+\kappa(G)+(k-2)(\alpha(G)-1)$,
we have
$|X| \ge \dg{C}{x_0} \ge \kappa(G)+1$.
\end{proof}
\paragraph{}
Let $S$ be a cut set with $|S|=\kappa(G)$, and
let $V_1,V_2, \ldots , V_p$
be the components of $G - S$.
By Claim \ref{k+1},
we may assume that
\begin{center}
there exists an integer $l$ such that
$C[u_{l}, u_{l}') \subseteq V_{1}.$
\end{center}
By Lemma \ref{insertible} (i),
we obtain
\begin{equation}\label{xl}
\dg{C}{x_l}
\le |C \cap (V_1 \cup S)|-|(\bigcup_{i \in M_{1} \setminus \{l\}}D_{i} \cup X) \cap (V_1 \cup S)|.
\end{equation}
\paragraph{}
By replacing the labels $x_2$ and $x_3$ if necessary,
we may assume that
$x_1$, $x_2$ and $x_3$ appear
in this order along the orientation of $C$.
In this paragraph,
the indices are taken modulo $3$.
From now on, we let $$C_i=\IR{C}{x_i}{u_{i+1}}$$
and
$$W_i :=\{w \in V(C_i) : \text{$w^+ \in N_{C_i}(x_i)$ and $w^- \in N_{C_i}(x_{i+1})$} \}$$
for each $i\in \{1,2,3\}$,
and let $W:=W_1 \cup W_2 \cup W_3$
(see Figure \ref{DefW}).
Note that $W \cap (U \cup \{x_{1}, x_{2}, x_{3}\}) =\emptyset$,
by the definition of $C_{i}$ and $W_{i}$ and by Lemma \ref{insertible} (i).
\begin{figure}[h]
\begin{center}
\includegraphics{DefofW.eps}
\caption{The definition of $W$.}
\label{DefW}
\end{center}
\end{figure}
\begin{Claim}\label{U1}
$D \cup X \cup W \cup H \subseteq V_{1} \cup S$.
In particular,
$x_0 \in V_1 \cup S$.
\end{Claim}
\begin{proof}
We first show that $D \cup X \cup W \subseteq V_{1} \cup S$.
Suppose not.
Without loss of generality,
we may assume that
there exists an integer $h$ in $M_{1} \setminus \{l\}$ such that
$\big(D_{h} \cup \{x_{h}\} \cup (W \cap \ir{C}{x_{h}}{u_{h}'})\big) \cap V_2\not=\emptyset,$
say $v \in
\big(D_{h} \cup \{x_{h}\} \cup (W \cap \ir{C}{x_{h}}{u_{h}'})\big) \cap V_2$.
Since $v \in V_{2}$,
it follows from Lemma \ref{insertible} (i) and (ii)
that
$$
\dg{C}{v}
\le |C \cap (V_2 \cup S)|-|(\bigcup_{i \in M_{1} \setminus \{h\}} D_{i} \cup X) \cap (V_2 \cup S)|.
$$
Let $I$ be a subset of $M_{0}\setminus\{h\}$
such that
$|I|=k$ and $\{0,l\} \subseteq I$.
By Claim \ref{Xindep} and Lemma \ref{insertible} (i) and (ii),
$\{x_i : i \in I\} \cup \{v\}$ is an independent set
of order $k+1$.
By the above inequality
and the inequality (\ref{xl}),
we obtain
\begin{eqnarray*}
\lefteqn{\dg{C}{x_l} +\dg{C}{v}}\\
& \le &
|C \cap (V_1 \cup V_2 \cup S)|
+|C \cap S|
-|(\bigcup_{i \in M_{1} \setminus \{l,h\}} D_{i} \cup X)\cap (V_1 \cup V_{2} \cup S)|\\
&=& |C| + |C \cap S| - |C \cap (\bigcup_{3 \le j \le p}V_j)|
-|(\bigcup_{i \in M_{1} \setminus \{l,h\}} D_{i} \cup X)\cap (V_1 \cup V_{2} \cup S)|\\
&\le& |C| + |C \cap S| - |(\bigcup_{i \in M_{1} \setminus \{l, h\}}D_{i} \cup X) \cap (\bigcup_{3 \le j \le p}V_{j})|\\
&&{}-|(\bigcup_{i \in M_{1} \setminus \{l,h\}} D_{i} \cup X)\cap (V_1 \cup V_{2} \cup S)|\\
&\le& |C| + \kappa(G) -\sum_{i \in M_{1} \setminus \{l,h\}}|D_{i} \cap (\bigcup_{1 \le j \le p}V_{j} \cup S)|
-|X \cap (\bigcup_{1 \le j \le p}V_{j} \cup S)|\\
&\le& |C| + \kappa(G) -\sum_{i\in I \setminus \{0,l\}}|D_{i}|- |X|\\
&\le& |C| + \kappa(G) -\sum_{i\in I \setminus \{0,l\}}|D_{i}|- \dg{C}{x_0}.
\end{eqnarray*}
On the other hand,
the inequality (\ref{C}) yields that
$$\sum_{i\in I \setminus \{0,l\}}\dg{C}{x_i} \le \sum_{i\in I \setminus \{0,l\}}|D_{i}|+(k-2)(\alpha(G)-1).$$
By the above two inequalities,
we deduce
$$\sum_{i \in I}\dg{C}{x_i}+\dg{C}{v}
\le |C| +\kappa(G)+(k-2)(\alpha(G)-1).$$
Recall that
$\{x_i : i \in I\} \cup \{v\}$ is an independent set,
in particular, $x_{0}\not\in \bigcup_{i \in I} N_{H}(x_{i}) \cup N_{H}(v)$.
Since
$ N_{H}(x_{i}) \cap N_{H}(x_{j}) = \emptyset$ for $i, j \in I$ with $i \neq j$
and
$(\bigcup_{i \in I} N_{H}(x_{i})) \cap N_{H}(v)=\emptyset$
by Lemma \ref{insertible} (i) and (ii),
it follows that $\sum_{i \in I}\dg{H}{x_i}+\dg{H}{v} \le |H|-1$.
Combining this inequality with the above inequality,
we get
$\sum_{i \in I}\dg{G}{x_i}+\dg{G}{v}
\le n+\kappa(G)+(k-2)(\alpha(G)-1)-1$,
a contradiction.
\paragraph{}
We next show that
$H - H_{0} \subseteq V_1 \cup S$.
Suppose not.
Without loss of generality,
we may assume that
there exists a vertex $y \in (H - H_{0}) \cap V_2$.
Let $H_{y}$ be a component of $H$ with $y \in V(H_{y})$.
Note that $H_{y}\not=H_{0}$.
Suppose that
$N_C(H_{y}) \cap (D_{h} \cup \{ x_h \}) \neq \emptyset$ for some $h \in M_{1} \setminus \{l\}$.
Then
Lemma \ref{insertible} (i) yields that
$$d_C(y) \le |C \cap (V_2 \cup S)|-|(\bigcup_{i \in M_1 \setminus \{ h \}} D_i \cup X) \cap (V_2 \cup S)|.$$
Hence, by the same argument as above,
we can obtain a contradiction.
Thus
we may assume that
$N_C(H_{y}) \cap (D_{i} \cup \{ x_i \}) = \emptyset$ for all $i \in M_{1} \setminus \{l\}$.
Then,
since $y \in V_{2}$ and $D_{l} \cup \{x_{l}\} \subseteq V_{1}$,
we have
$$d_C(y) \le |C \cap (V_2 \cup S)|-|(\bigcup_{i \in M_1} D_i \cup X) \cap (V_2 \cup S)|.$$
Let $I$ be a subset of $M_{0}$
such that
$|I|=k$ and $\{0,l\} \subseteq I$.
Since
$x_{l} \in V_{1}$,
$y \in V_{2}$,
$H_{y}\not=H_{0}$
and
$N_C(H_{y}) \cap (D_{i} \cup \{ x_i \}) = \emptyset$ for all $i \in M_{1} \setminus \{l\}$,
it follows from
Claim \ref{Xindep}
that $\{x_i : i \in I\} \cup \{y\}$ is an independent set of order $k+1$.
By the above inequality
and the inequality (\ref{xl}),
we obtain
\begin{eqnarray*}
\lefteqn{\dg{C}{x_l} +\dg{C}{y}}\\
& \le &
|C \cap (V_1 \cup V_2 \cup S)|
+|C \cap S|
-|(\bigcup_{i \in M_{1} \setminus \{l\}} D_{i} \cup X)\cap (V_1 \cup V_{2} \cup S)|\\
&\le& |C| +|C \cap S| -\sum_{i\in I \setminus \{0,l\}}|D_{i}|- \dg{C}{x_0}.
\end{eqnarray*}
Therefore,
by the above inequality and the inequality (\ref{C}),
we obtain
$$\sum_{i \in I}\dg{C}{x_i}+\dg{C}{y}
\le |C| +|C \cap S|+(k-2)(\alpha(G)-1).$$
Since
$H_{0}\not=H_{y}$
and
$N_C(H_{y}) \cap (D_{i} \cup \{ x_i \}) = \emptyset$ for all $i \in M_{1}\setminus \{l\}$,
it follows that
$(\bigcup_{i \in I \setminus \{l\}} N_{H}(x_{i})) \cap V(H_{y})=\emptyset$.
Since
$x_{l} \in V_{1}$
and
$y \in V_{2}$,
we have
$N_{H}(x_{l}) \cap N_{H}(y) \subseteq H \cap S$.
Therefore,
we obtain
$$\sum_{i \in I}\dg{H}{x_i}+\dg{H}{y} \le |H|+|H \cap S|-2.$$
Combining the above two inequalities,
$\sum_{i \in I}\dg{G}{x_i}+\dg{G}{y}\le n+\kappa(G)+(k-2)(\alpha(G)-1)-2$,
a contradiction.
\paragraph{}
We finally show that
$H_0 \subseteq V_1 \cup S$.
Suppose not.
Without loss of generality,
we may assume that
there exists a vertex $y_{0} \in H_{0} \cap V_{2}$.
Then
$$\dg{G}{y_{0}} \le |U \cap (V_{2} \cup S)|+ |H_{0}|-1.$$
Since $u_{l} \in V_{1}$,
we have $H_{0} \cap S \not=\emptyset$.
Note that by the above argument,
$X \subseteq V_{1} \cup S$.
Therefore,
by Claim \ref{k+1},
$|X \cap V_{1}| = |X|-|X \cap S|
\ge \kappa(G)+1-(|S|-|H_{0}\cap S|)
\ge \kappa(G)+1-(\kappa(G)-1) = 2$.
Let $x_{s} \in X \cap V_{1}$ with $x_{s}\not=x_{l}$.
Let $I$ be a subset of $M_{1}$
such that
$|I|=k$ and $\{l,s\} \subseteq I$.
Then
$\{x_i : i \in I\} \cup \{y_{0}\}$ is an independent set
of order $k+1$.
By Lemma \ref{insertible} (i),
we have
$N_{C}(x_{l})^- \cap (U \setminus \{u_{l}\}) =\emptyset$
and
$N_{C}(x_{s})^- \cap (U \setminus \{u_{s}\}) =\emptyset$.
Since $x_{l}, x_{s} \in V_{1}$,
it follows that
$(N_{C}(x_{l}) \cup N_{C}(x_{s})) \cap (U \cap V_{2}) =\emptyset$.
Therefore,
we can improve the inequality (\ref{2vertices}) as follows:
\begin{equation*}
\dg{C}{x_l} +\dg{C}{x_s} \le |C|-\sum_{i \in I\setminus \{l,s\}}|D_{i}|-|U \cap V_{2}|.
\end{equation*}
By the inequality (\ref{C}) and the inequality (\ref{H2}),
$$\sum_{i \in I \setminus \{l,s\}}\dg{C}{x_i}
\le \sum_{i \in I\setminus \{l,s\}}|D_{i}|+(k-2)(\alpha(G)-1)
\text{\ \ and\ \ }
\sum_{i \in I }\dg{H}{x_i}
\le |H|-|H_{0}|.$$
Hence, by the above four inequalities,
we deduce
$\dg{G}{y_{0}}+\sum_{i \in I}\dg{G}{x_i}
\le n+\kappa(G)+(k-2)(\alpha(G)-1)-1,$
a contradiction.
\end{proof}
\paragraph{}
By Claim \ref{U1},
\begin{center}
there exists an integer $r$
such that
$\iR{C}{x_{r}}{u_{r}'} \cap \bigcup_{i=2}^{p}V_{i}\not=\emptyset$,
\end{center}
say
$$v_{2} \in \iR{C}{x_{r}}{u_{r}'} \cap \bigcup_{i=2}^{p}V_{i}.$$
Choose $r$ and $v_{2}$ so that
$v_{2}\not= u_{r}'$
if possible.
Without loss of generality,
we may assume that
$v_{2} \in V_{2}.$
Note that
\begin{equation}
\dg{G}{v_{2}} \le |V_{2} \cup S|-1.\label{v2}
\end{equation}
\begin{Claim}\label{dgw}
$\dg{C}{w}\le \dg{C}{x_{0}} \le |X| \le \alpha(G)-1$
for each $w \in W$.
\end{Claim}
\begin{proof}
Let $w \in W$.
Without loss of generality,
we may assume that
$w \in W_1$.
Then by applying Lemma \ref{D cup Q is hamilton}
as $Q_{1} = D_{1}$, $Q_{2} = D_{2}$ and
$$D = x_1\IR{C}{w^{+}}{u_{2}}\IR{P}{u_2}{u_1}\IL{C}{u_1}{x_{2}}\IL{C}{w^{-}}{x_1},$$
where $\IR{P}{u_2}{u_1}$ is a $C$-path
passing through some vertex of $H_{0}$,
we can obtain a cycle $C'$ such that $V(C) \setminus \{w\} \subseteq V(C')$ and $V(C') \cap V(H_{0}) \neq \emptyset$
(note that (I) and (II) of Lemma \ref{D cup Q is hamilton} hold,
by Lemma \ref{insertible} (i) and (ii) and the definition of insertible and $D_{i}$).
Note that by the maximality of $|C|$, $|C'| = |C|$.
Note also that $\dg{C'}{w} \ge \dg{C}{w}$.
By the choice of $C$ and $x_{0}$,
we have
$\dg{C'}{w} \le \dg{C}{x_{0}}$,
and hence
by Claim \ref{Xindep} and the fact that $\dg{C}{x_{0}} \le |X|$,
we obtain
$\dg{C}{w}\le \dg{C}{x_{0}} \le |X| \le \alpha(G)-1$.
\end{proof}
By Lemma \ref{insertible}
and Claim \ref{U1},
we have
\begin{equation}\label{xwH}
\sum_{i \in M_{0}}\dg{H}{x_{i}}+\sum_{w \in W}\dg{H}{w} \le |H|-|\{x_{0}\}| =|H \cap (V_1 \cup S)|-1.
\end{equation}
Moreover,
by Lemma \ref{insertible} and Claim \ref{Xindep},
the following claim holds.
\begin{Claim}\label{XW}
$X \cup W \cup \{x_{0}\}$ is an independent set.
\end{Claim}
We now check the degree sum of the vertices $x_{1},x_{2}$ and $x_{3}$ in $C$.
In this paragraph,
the indices are taken modulo $3$.
By Lemma \ref{insertible} (ii),
$(N_{C_i}(x_i)^- \cup N_{C_i}(x_{i+1})^+) \cap N_{C_i}(x_{i+2})=\emptyset$
for $i\in \{1,2,3\}$.
Clearly,
$N_{C_i}(x_{i})^- \cap N_{C_i}(x_{i+1})^+=W_i$
and
${N_{C_i}(x_i)}^- \cup {N_{C_i}(x_{i+1})}^+
\cup {N_{C_i}(x_{i+2})} \subseteq C_i \cup
\{u_{i+1}^+\}$.
By Lemma \ref{insertible} (i),
$(N_{C_{i}}(x_i)^{-} \cup N_{C_{i}}(x_{i+2})) \cap D_j=\emptyset$
for $i \in \{1,2,3\}$ and $j\in M_{1}$.
For $i \in \{1,2,3\}$,
let
$$L_{i}=\left\{ x_{j} \in X \setminus \{x_{i+1}\} : N_{C_{i}}(x_{i+1})^{+} \cap D_{j}\not=\emptyset \right\}$$
and
let
$L=\bigcup_{i \in \{1,2,3\}}L_{i}$
(see Figure \ref{DefL}).
\begin{figure}[h]
\begin{center}
\includegraphics{DefofL.eps}
\caption{The definition of $L$.}
\label{DefL}
\end{center}
\end{figure}
Note that $L \cap \{x_1,x_2,x_3\}=\emptyset$
and $W \cap L=\emptyset$
by Lemma \ref{insertible} (i).
Therefore
the following inequality holds:
$$\dg{C_i}{x_1}+ \dg{C_i}{x_2}+\dg{C_i}{x_3}
\leq |C_i| + |W_i|+ 1-\sum_{j \in M_{1}}|C_i \cap D_{j}|+ |L_{i}|$$
for $i \in\{1,2,3\}$.
By Lemma \ref{insertible} (i),
we have
$N_{C}(x_{i}) \cap D_{j}=\emptyset$
for $i,j \in M_{1}$ with $i\not=j$,
and hence
$$\dg{D_i}{x_1}+ \dg{D_i}{x_2}+\dg{D_i}{x_3}
\leq |D_i|$$
for $i \in\{1,2,3\}$.
Let $I$ be a subset of $M_{0}$
such that
$I \cap \{1,2,3\}=\emptyset$.
Let $L_{I}=L \cap \{x_{i} : i \in I\}$.
Note that $|L \cap \{x_{i}\}| - |D_i| \le 0$ for each $i \in M_{1} \setminus \{1,2,3\}$.
Thus,
we deduce
\begin{eqnarray}
\dg{C}{x_1}+\dg{C}{x_2}+\dg{C}{x_3}
&\le&
\sum_{i=1}^3(|C_i|+|W_i|+|L_{i}|+1-\sum_{j \in M_{1}}|C_{i} \cap D_{j}|+|D_{i}|)\nonumber\\
&=&
|C|+|W|+|L|-\sum_{i \in M_{1} \setminus \{1,2,3\}}|D_{i}|+3\nonumber\\
&\le&
|C|+|W|+|L_{I}|-\sum_{i \in I\setminus\{0\}}|D_{i}|+3\label{WL}\\
&\le&
|C|+|W|+3.\label{W}
\end{eqnarray}
\begin{Claim}\label{WLk-2}
$|W|+|L| \ge \kappa(G)-2 \ge 1$.
\end{Claim}
\begin{proof}
Let $I$ be a subset of $M_{0}$
such that $|I|=k-2$ and
$I \cap \{1,2,3\}=\emptyset$.
Suppose that
$|W|+|L_{I}|\le \kappa(G)-3$.
By Claim \ref{XW},
$\{x_{i} : i \in I\}\cup \{x_{1},x_{2},x_{3}\}$ is an independent set of order $k+1$.
By the inequality (\ref{WL}),
we obtain
\begin{eqnarray*}
\dg{C}{x_1} +\dg{C}{x_2}+\dg{C}{x_3}
&\leq&
|C|+\kappa(G)-\sum_{i \in I\setminus\{0\}}|D_{i}|.
\end{eqnarray*}
Therefore,
this inequality,
the inequalities (\ref{C}) and (\ref{H})
and Claim \ref{dgw}
yield that
\begin{eqnarray*}
\sum_{ i= 1}^{3}\dg{G}{x_i}+\sum_{ i\in I }\dg{G}{x_i}
&\le&
n+\kappa(G)+(k-2)(\alpha(G)-1)-1,
\end{eqnarray*}
a contradiction.
Therefore
$|W|+|L| \ge |W|+|L_{I}| \ge \kappa(G)-2$.
\end{proof}
\begin{Claim}\label{x0a-1}
$\dg{C}{x_{0}}=|U|=|X|= \alpha(G)-1$.
In particular, $N_{C}(x_{0})=U$.
\end{Claim}
\begin{proof}
Suppose that $\dg{C}{x_{0}} \le \alpha(G)-2$.
In this proof,
we assume $x_l =x_1$
(recall that $l$ is an integer such that $C[u_{l}, u_{l}') \subseteq V_{1}$,
see the paragraph below the proof of Claim \ref{k+1}).
We divide the proof into two cases.
\bigskip
\noindent\textit{Case 1.} $|W| \ge k-3$.
\begin{subclaim}\label{W2k-5}
$|W| \le \kappa(G)+k-5$.
\end{subclaim}
\begin{proof}
Suppose that $|W| \ge \kappa(G)+k-4$.
By Claim \ref{U1},
we obtain
\begin{eqnarray*}
|(W \cup \{x_{0},x_{1},x_{2},x_{3}\}) \cap V_{1}|
&=&|W \cup \{x_{0},x_{1},x_{2},x_{3}\}|-|(W\cup \{x_{0},x_{1},x_{2},x_{3}\}) \cap S|\\
& \ge& (\kappa(G)+k-4+4) -\kappa(G) =k.
\end{eqnarray*}
Let $W'$ be a subset of $(W\cup \{x_{0},x_{1},x_{2},x_{3}\}) \cap V_{1}$
such that $|W'|=k$ and $x_{1} \in W'$.
Since $W' \subseteq V_{1}$
and $v_{2} \in V_{2}$,
it follows from Claim \ref{XW}
that
$W' \cup \{v_{2}\}$
is an independent set of order $k+1$.
By the inequality (\ref{xl}) and Claims \ref{U1} and \ref{dgw},
we obtain
\begin{eqnarray*}
\dg{C}{x_1}
&\le&
|C \cap (V_1 \cup S)|-\sum_{i \in M_{1} \setminus \{1\}}|D_{i} \cap (V_1 \cup S)| - |X \cap (V_1 \cup S)|\\
&\le& |C \cap (V_1 \cup S)|-\sum_{i \in \{2,3\}}|D_{i}|-|X|\\
&\le& |C \cap (V_1 \cup S)|-\sum_{i \in \{2,3\}}|D_{i}|-\dg{C}{w_0},
\end{eqnarray*}
where $w_{0} \in W'\setminus\{x_{1},x_{2},x_{3}\}$
(note that $|W'| = k \ge 4$).
By the inequality (\ref{C}) and Claim \ref{dgw},
$$
\sum_{x \in W' \cap \{x_{2},x_{3}\}}\dg{C}{x}
+
\sum_{w \in W'\setminus \{w_{0},x_{1},x_{2},x_{3}\}}\dg{C}{w}
\le \sum_{i \in \{2,3\}}|D_{i}|+(k-2)(\alpha(G)-1).$$
By the above two inequalities,
we obtain
\begin{equation*}
\sum_{w \in W'}\dg{C}{w}
\le |C \cap (V_1 \cup S)|
+(k-2)(\alpha(G)-1).
\end{equation*}
Therefore,
since
$\sum_{w \in W'}\dg{H}{w} \le |H \cap (V_1 \cup S)|-1$
by the inequality (\ref{xwH}),
it follows that
\begin{equation*}
\sum_{w \in W'}\dg{G}{w}
\le |V_1 \cup S|+(k-2)(\alpha(G)-1)-1.
\end{equation*}
Summing this inequality and the inequality (\ref{v2})
yields that
$\sum_{w \in W'}\dg{G}{w}
+\dg{G}{v_{2}}
\le n+\kappa(G)+(k-2)(\alpha(G)-1)-2,$
a contradiction.
\end{proof}
By the assumption of Case 1,
we can take a subset $W^{*}$ of $W \cup \{x_{0}\}$
such that $|W^{*}|=k-2$.
By Claim \ref{XW},
$W^{*} \cup \{x_{1},x_{2},x_{3}\}$ is independent.
Moreover,
by Claim \ref{dgw} and
the assumption that $\dg{C}{x_{0}}\leq \alpha(G) -2$,
we have
$$\sum_{w \in W^{*}}\dg{C}{w} \le(k-2)(\alpha(G)-2).$$
By Subclaim \ref{W2k-5},
summing
this inequality
and
the inequality (\ref{W})
yields
that
\begin{eqnarray*}
&&\sum_{i=1}^{3}\dg{C}{x_i}
+\sum_{w \in W^{*}}\dg{C}{w}\\
&\le&
|C|+|W|+3+(k-2)(\alpha(G)-2)\\
&\le&
|C|+(\kappa(G)+k-5)+3-(k-2)+(k-2)(\alpha(G)-1)\\
&=&
|C|+\kappa(G)+(k-2)(\alpha(G)-1).
\end{eqnarray*}
Therefore,
since
$\sum_{i=1}^{3}\dg{H}{x_i}+\sum_{w \in W^{*}}\dg{H}{w} \le |H|-1$
by the inequality (\ref{xwH}),
we obtain
$\sum_{i=1}^{3}\dg{G}{x_i}+\sum_{w \in W^{*}}\dg{G}{w}
\le n+\kappa(G)+(k-2)(\alpha(G)-1)-1$,
a contradiction.
\bigskip
\noindent\textit{Case 2.} $|W| \le k-4$.
By Claim \ref{WLk-2},
we can take a subset
$L^{*}$ of $L$
such that
$|L^{*}|=k-3-|W|$.
Let $I=\{ i : x_{i} \in L^{*}\} $.
By Claim \ref{XW},
$W \cup L^{*}\cup \{x_{0}, x_{1},x_{2},x_{3}\}$ is
an independent set of order $k+1$.
By the inequality (\ref{WL}),
we have
\begin{eqnarray*}
\dg{C}{x_1}+\dg{C}{x_2}+\dg{C}{x_3}
&\le&
|C|+|W|+|L^{*}|-\sum_{i \in I}|D_{i}|+3\\
&=&
|C|+k-3-\sum_{i \in I}|D_{i}|+3\\
&\le&
|C|+\kappa(G)-\sum_{i \in I}|D_{i}|.
\end{eqnarray*}
On the other hand,
it follows from Claim \ref{dgw}, the assumption $\dg{C}{x_{0}}\leq \alpha(G) -2$ and the inequality (\ref{C}) that
\begin{eqnarray*}
\sum_{w \in W \cup \{x_{0}\}}\dg{C}{w}+
\sum_{x \in L^{*}}\dg{C}{x}
&\le&
(|W|+1)(\alpha(G)-2)
+\sum_{i \in I }|D_{i}|+|L^{*}|(\alpha(G)-1)\\
&=&
(k-2)(\alpha(G)-1)
-|W|-1
+\sum_{i \in I }|D_{i}|\\
&\le&
(k-2)(\alpha(G)-1)
+\sum_{i \in I }|D_{i}|-1.
\end{eqnarray*}
Thus,
we deduce
$$\sum_{i=1}^{3}\dg{C}{x_i}+
\sum_{w \in W \cup \{x_{0}\}}\dg{C}{w}+
\sum_{x \in L^{*}}\dg{C}{x}
\le
|C|+\kappa(G)+(k-2)(\alpha(G)-1)-1.$$
By the inequality (\ref{xwH}),
we obtain
$$\sum_{i=1}^{3}\dg{H}{x_i}
+\sum_{w \in W \cup \{x_{0}\}}\dg{H}{w}+
\sum_{x \in L^{*}}\dg{H}{x}
\le |H|-1.
$$
Summing the above two inequalities
yields that
$\sum_{i=1}^{3}\dg{G}{x_i}+
\sum_{w \in W \cup \{x_{0}\}}\dg{G}{w}+
\sum_{x \in L^{*}}\dg{G}{x}
\le n+\kappa(G)+(k-2)(\alpha(G)-1)-2,$
a contradiction.
\bigskip
By Cases 1 and 2,
we have
$\dg{C}{x_{0}} \ge \alpha(G)-1$.
Since
$|U|=|X|$,
it follows
from Claim \ref{dgw}
that $\dg{C}{x_{0}} =|U|=|X| = \alpha(G)-1$.
In particular, $N_{C}(x_{0})=U$
because $N_{C}(x_{0}) \subseteq N_{C}(H_{0}) = U$.
This completes the proof of Claim \ref{x0a-1}.
\end{proof}
\begin{Claim}\label{WX}
$W \subseteq X$.
\end{Claim}
\begin{proof}
If $W \setminus X \neq \emptyset$,
then by Claim \ref{XW},
we have $\dg{C}{x_{0}} \le |X| \le \alpha(G)-2$,
which contradicts Claim \ref{x0a-1}.
\end{proof}
\begin{Claim} \label{c}
If there exist distinct two integers $s$ and $t$ in $M_1$
such that
$u_s \in N_C(x_t)$,
then
$N_C(x_s) \cap \IR{C}{u_{t}}{u_{s}} \subseteq U $.
\end{Claim}
\begin{proof}
Suppose that there exists
a vertex $z \in N_C(x_s) \cap C[u_{t},u_{s}]$ such that $z \not \in U$.
We show that
$X \cup \{x_0,z^+ \}$ is an independent set of order $|X|+2$.
By Claim \ref{XW},
it suffices to show that $z^+ \not \in X$
and $z^+ \not\in N_{C}(x_{i})$
for each $x_i \in X \cup \{x_{0}\}$.
Since $z \not \in U$,
it follows from Lemma \ref{insertible} (i)
that $z^+ \not \in X$.
Suppose that $z^+ \in N_C(x_h)$ for some $x_h \in X \cup \{ x_0 \}$.
Since $x_s$ is a non-insertible vertex,
it follows that $x_h \neq x_s$.
Let $z_{s}$ be the vertex in $C(u_{s},x_{s}]$
such that
$z \in N_{G}(z_{s})$
and
$z \not\in N_{G}(v)$ for all $v \in C(u_{s},z_{s})$.
By Lemma \ref{insertible} (ii),
we obtain $x_h \not \in C[u_s',z]$.
Therefore,
$x_h \in C(z,u_s] \cup \{x_{0}\}$.
If $x_h \in C(z,u_s]$,
then
we let $z_{h}$ be the vertex in $C(u_{h},x_{h}]$
such that
$z^{+} \in N_{G}(z_{h})$
and
$z^{+} \not\in N_{G}(v)$ for all $v \in C(u_{h},z_{h})$.
We define the cycle $C^{*}$ as follows (see Figure \ref{Xxzh-fig}):
$$C^{*}
=
\begin{cases}
z_s \overleftarrow{C} [z,x_t] \overleftarrow{C} [u_s,z_h]C[z^+,u_h] x_0 \overleftarrow{C}[u_t,z_s]
&\text{if $x_h \in C(z,u_s]$,}\\
z_s\overleftarrow{C}[z,x_t]\overleftarrow{C}[u_s,z^+]x_h\overleftarrow{C}[u_t,z_s]
&\text{if $x_{h}=x_{0}$.}
\end{cases}
$$
\begin{figure}[h]
\begin{center}
\includegraphics{Xx0z.eps}
\caption{Claim \ref{c}}
\label{Xxzh-fig}
\end{center}
\end{figure}
Then,
by an argument similar to that in the proof of Lemma \ref{insertible},
we can obtain a longer cycle than $C$
by inserting all vertices of $V(C \setminus C^{*})$ into $C^{*}$.
This contradicts the assumption that $C$ is a longest cycle.
Hence $z^+ \not\in N_{C}(x_{h})$
for each $x_h \in X \cup \{x_{0}\}$.
Thus,
by Claim \ref{x0a-1},
$X \cup \{x_0,z^+ \}$ is an independent set of order $|X|+2 = \alpha(G)+1$,
a contradiction.
\end{proof}
\paragraph{}We divide the rest of the proof into two cases.
\bigskip
\noindent\textbf{Case 1. $v_{2} \not\in U$.}
\medskip
Let $Y=N_{G}(v_{2}) \cap X$,
and
let $\gamma=|X|-\kappa(G)-1$.
Note that $|X| = \kappa(G)+\gamma+1 \ge k+\gamma+1$
and
$x_{l}\not\in Y$ since $x_{l}\in V_{1}$.
\begin{Claim} \label{ge3}
$|Y| \ge \gamma+3$.
\end{Claim}
\begin{proof}
Suppose that
$|Y| \le \gamma+2$.
By the assumption of Case 1,
we have
$x_{0}v_{2} \not\in E(G)$.
Since
$|M_{0}| = |X|+1 \ge k+\gamma+2$
and $|Y| \le \gamma+2$,
there exists a subset $I$ of $M_{0} \setminus \{i :x_{i} \in Y\}$
such that
$|I|=k$
and
$\{0,l\} \subseteq I$.
Then
$\{x_{i} : i \in I\} \cup \{v_{2}\}$
is an independent set of order $k+1$.
By the inequality (\ref{xl})
and Claims \ref{U1} and \ref{x0a-1},
we obtain
\begin{eqnarray*}
\dg{C}{x_l}
&\le& |C \cap (V_1 \cup S)|-\sum_{i \in I \setminus \{0,l\}}|D_{i}|-|X|\\
&=& |C \cap (V_1 \cup S)|-\sum_{i \in I \setminus \{0,l\}}|D_{i}|-\dg{C}{x_0}.
\end{eqnarray*}
Therefore
it follows from the inequality (\ref{C}) that
\begin{equation*}
\sum_{i \in I}\dg{C}{x_i}
\le |C \cap (V_1 \cup S)|
+(k-2)(\alpha(G)-1).
\end{equation*}
By the inequality (\ref{xwH}),
$\sum_{i \in I}\dg{H}{x_i} \le |H \cap (V_1 \cup S)| -1$.
Summing these two inequalities
and the inequality (\ref{v2})
yields that
\begin{equation*}
\sum_{i \in I}\dg{G}{x_i} +\dg{G}{v_{2}}
\le n+ \kappa(G) +(k-2)(\alpha(G)-1)-2,
\end{equation*}
a contradiction.
\end{proof}
Recall that $r$ is an integer such that $v_{2} \in \iR{C}{x_{r}}{u_{r}'} \cap V_{2}$
(see the paragraph below the proof of Claim \ref{U1}).
In the rest of Case 1, we assume that $l=1$.
If $u_{r}' \not= u_{1}$, then let $r = 2$ and $u_{3} = u_{2}'$;
otherwise, let $r = 3$ and let $u_{2}$ be the vertex with $u_{2}' = u_{3}$.
By Claim \ref{WX},
we have $W \subseteq X$.
Hence
we obtain $Y \cup W \cup L \subseteq X \setminus \{x_{1}\}$.
Recall that $W \cap L = \emptyset$.
Therefore,
by Claims \ref{WLk-2} and \ref{ge3},
we obtain
\begin{eqnarray*}
|Y \cap (W \cup L)|
&=& |Y|+|W|+|L|-|Y \cup (W \cup L)|\\
&\ge& \gamma+3+\kappa(G)-2-|X\setminus\{x_{1}\}|\\
&=& \gamma+3+\kappa(G)-2-((\kappa(G)+\gamma+1)-1) = 1.
\end{eqnarray*}
Hence there exists a vertex $x_{h} \in Y \cap (W \cup L)$,
that is,
$v_{2} \in N_{C}(x_{h}) \setminus U$.
Since $\ir{C}{x_{2}}{x_{3}} \cap X =\emptyset$
and $\ir{C}{x_{3}}{x_{1}} \cap X =\emptyset$ if $r=3$,
either
$u_{h}\in N_{C}(x_{1})$ and $u_{h}\in C(x_{3},u_{1})$
or
$u_{h}\in N_{C}(x_{2})$ and $u_{h}\in C(x_{1},u_{2})$
holds
(in particular, if $r=3$ then $u_h \in N_C(x_2)$ and $u_h \in C(x_1,u_2)$ holds)
(see Figure \ref{case1}).
\begin{figure}[h]
\begin{center}
\includegraphics{case1.eps}
\caption{The case ${r} = {2}$ and the case ${r} = {3}$.}
\label{case1}
\end{center}
\end{figure}
If ${r} = {2}$ and $u_{h} \in N_{C}(x_{1})$,
then
$v_{2} \in \IR{C}{u_{1}}{u_{h}}$
(see Figure \ref{case1} (i)).
If ${r} = {2}$ and $u_{h} \in N_{C}(x_{2})$,
then
$v_{2} \in \IR{C}{u_{2}}{u_{h}}$
(see Figure \ref{case1} (ii)).
If ${r} = {3}$,
then
$u_{h} \in N_{C}(x_{2})$
and
$v_{2} \in \IR{C}{u_{2}}{u_{h}}$
(see Figure \ref{case1} (iii)).
In each case,
we obtain a contradiction to Claim \ref{c}.
\bigskip
\noindent\textbf{Case 2. $v_{2} \in U$.}
\medskip
We rename $x_i \in X$ for $i \ge 1$ as follows (see Figure \ref{jumping-fig}):
Rename an arbitrary vertex of $X$ as $x_{1}$.
For $i \ge 1$,
we rename
$x_{i+1} \in X$
so that
$u_{i+1} \in N_C(x_{i}) \cap (U \setminus \{ u_{i} \})$
and
$|C[u_{i+1},x_{i})|$ is as small as possible.
(For $x_{i} \in X$,
let $x_i'$ and $x_{i}''$ be the successors of $x_{i}$ and $x_{i}'$ in $X$ along the orientation of $C$, respectively.
Then by applying Claim \ref{WLk-2} as $x_1=x_{i}$, $x_2=x_{i}'$ and $x_3=x_i''$,
it follows that $W \cup L \neq \emptyset$.
By the definition of $x_{i}', x_{i}''$ and Claim \ref{WX},
we have $W_{1} = W_{2} = \emptyset$ (note that $W \cap \{x_{1}, x_{2}, x_{3}\} = \emptyset$).
By the definitions of $x_{i}', x_{i}'', L_{1}$ and $L_{2}$,
we also have $L_{1} = L_{2} = \emptyset$.
Thus $W_{3} \cup L_{3} \neq \emptyset$.
By Lemma \ref{insertible} (i) and since $W \cup L \subseteq X$,
this implies that $N_C(x_{i}) \cap (U \setminus \{ u_{i} \}) \neq \emptyset$.)
Let $h$ be the minimum integer
such that
$x_{h+1} \in C(x_h,x_1]$.
Note that this choice implies $h \geq 2$.
We rename $h$ vertices in $X$
as $\{x_1,x_{2},\ldots,x_{h}\}$ as above,
and
$m-h$ vertices in $X \setminus \{x_{1}, x_{2},\ldots,x_{h}\}$
as $\{x_{h+1}, x_{h+2}, \ldots, x_{m}\}$ arbitrarily.
Let
\begin{align*}
A_1=A_{h+1} = C[x_{1},x_{h})
\textup{ and }
A_i= C[x_{i},x_{i-1})
\textup{ for }
2 \le i \le h.
\end{align*}
Let
$$U_{1}=\{u_i \in U : x_i \in X \cap V_{1}\}.$$
If possible,
choose $x_1$ so that
$A_{2} \cap U_1 = \emptyset$.
\begin{figure}[h]
\begin{center}
\scalebox{1}{\includegraphics{last3.eps}}
\caption{The choice of $\{x_1, \dots, x_{h}\}$.}
\label{jumping-fig}
\end{center}
\end{figure}
\paragraph{}
We divide the proof of Case 2 according to whether $h \le k$ or $h \ge k+1$.
\bigskip
\noindent\textbf{Case 2.1.} $h \le k$.
\medskip
By the choice of
$\{x_1, \dots, x_{h}\}$,
we have
\begin{eqnarray}
\mbox{$N_{A_{i+1}}(x_i) \cap U \subseteq \{ u_i \}$ for $1 \le i \le h$.}\label{tonari}
\end{eqnarray}
By Claim \ref{c} and (\ref{tonari}),
we obtain
\begin{eqnarray}
\mbox{$N_{C \setminus A_{i}}(x_i) \subseteq (U \setminus (A_i \cup A_{i+1})) \cup D_i \cup \{ u_i \}$ for $2 \le i \le h$.}\label{sotogawa}
\end{eqnarray}
By Lemma \ref{insertible} (i) and (ii),
$N_{A_{i}}(x_i)^{-} \cap N_{A_{i}}(x_{1})=\emptyset$
for $2 \le i \le h$.
By Lemma \ref{insertible} (i),
we have
$N_{A_{i}}(x_i)^{-} \cup N_{A_{i}}(x_{1}) \subseteq A_i \setminus D$
for $3 \le i \le h$.
Thus,
it follows from (\ref{sotogawa})
that for $3 \le i \le h$
\begin{eqnarray*}
\mbox{$d_C(x_i) \le (|U|-|(A_{i} \cup A_{i+1}) \cap U| +|D_i|+1) + (|A_{i}|-|A_i \cap D|-\dg{A_{i}}{x_{1}})$.}
\end{eqnarray*}
By Lemma \ref{insertible} (i) and (\ref{tonari}),
we have
$N_{A_{2}}(x_2)^{-} \cup N_{A_{2}}(x_{1}) \subseteq (A_2 \setminus (U \cup D)) \cup D_1 \cup \{ u_1 \}$.
Thus,
by (\ref{sotogawa}),
we have
\begin{eqnarray*}
d_C(x_2) &\le& (|U|- |(A_{2} \cup A_{3}) \cap U|+|D_2|+1 )\\
&&{}+ (|A_2|-|A_2 \cap (U \cup D)|+|D_1|+1-\dg{A_{2}}{x_{1}}).
\end{eqnarray*}
Since $|A_1 \cap X|=|A_1 \cap U|$,
it follows from Lemma \ref{insertible} (i) that
\begin{eqnarray*}
d_{A_{1}}(x_1)
&\le&
|A_{1}|-|A_1 \cap D|-|A_1 \cap X|\\
&=&
|A_{1}|-|A_1 \cap D|-|A_1 \cap U|.
\end{eqnarray*}
By Claim \ref{x0a-1},
$d_{C}(x_0) = |U| = \alpha(G)-1$.
Thus,
since $h \le k$,
we obtain
\begin{eqnarray*}
\sum_{0 \le i \le h}d_C(x_i)
&\le&
\sum_{1 \le i \le h}|A_i|+
h|U|
- 2\sum_{1 \le i \le h}|A_i\cap U| +h
+\sum_{1 \le i \le h}|D_{i}| -\sum_{1 \le i \le h}|A_i \cap D|\\
&=&
|C|+
(h-2)|U|
+h
+\sum_{1 \le i \le h}|D_{i}|-|D|\\
&\le&
|C|+k+(h-2)(\alpha(G)-1)
+\sum_{1 \le i \le h}|D_{i}|-|D|.
\end{eqnarray*}
Let $I$ be a subset of $M_{0}$
such that
$|I|=k+1$
and
$\{0,1,\ldots, h\} \subseteq I$.
By Claim \ref{XW},
$\{x_i : i \in I\}$ is an independent set
of order $k+1$.
By the above inequality and the inequality (\ref{C}),
we have \begin{eqnarray*}
\sum_{i \in I}d_C(x_i)
&\le& |C|+k+(k-2)(\alpha(G)-1).
\end{eqnarray*}
By the inequality (\ref{H}),
$\sum_{i \in I}d_H(x_i) \le |H|-1$.
Hence
$\sum_{ i \in I}d_G(x_i) \le n+\kappa(G)+(k-2)(\alpha(G) -1)-1$,
a contradiction.
\bigskip
\noindent\textbf{Case 2.2.} $h \ge k+1$.
\medskip
By Claims \ref{U1} and \ref{x0a-1}, the assumption of Case 2 and
the choice of $r$ and $v_{2}$,
we have $\bigcup_{i=2}^{p}V_{i} \subseteq U=N_{C}(x_{0})$.
Since $x_{0} \in V_{1} \cup S$ by Claim \ref{U1},
this implies that $x_{0} \in S$.
\begin{Claim} \label{XV1}
$|X \cap V_{1}| \le k-1$.
\end{Claim}
\begin{proof}
Suppose that
$|X \cap V_{1}| \ge k$.
Let $I$ be a subset of $M_{1}$
such that
$|I|=k$
and
$I \subseteq \{i : x_{i} \in X \cap V_{1}\}$.
Then
$\{x_i : i \in I\} \cup \{v_{2}\}$ is an independent set
of order $k+1$.
Let $s$ and $t$ be integers in $I$.
Since
$x_s,x_t \in V_1$,
$D \subseteq V_1 \cup S$
and
$\bigcup_{i=2}^{p}V_{i} \subseteq U$,
an argument similar to that for the inequality (\ref{2vertices})
implies that
\begin{equation*}
\dg{C}{x_s} +\dg{C}{x_t}
\le
|C \cap (V_{1} \cup S)|-\sum_{i \in I\setminus \{s,t\}}|D_{i}|.
\end{equation*}
By the inequalities (\ref{C}) and (\ref{xwH}),
we have
$
\sum_{i \in I \setminus \{s,t\}}\dg{C}{x_i}
\le \sum_{i \in I\setminus \{s,t\}}|D_{i}|+(k-2)(\alpha(G)-1)$
and
$
\sum_{i \in I }\dg{H}{x_i}
\le |H \cap (V_{1} \cup S)|-1$,
respectively.
On the other hand,
we obtain
$\dg{G}{v_{2}} \le |V_{2} \cup S|-1.$
By these four inequalities,
$\sum_{i \in I}\dg{G}{x_i}+\dg{G}{v_{2}}
\le n+\kappa(G)+(k-2)(\alpha(G)-1)-2,$
a contradiction.
Therefore
$|X \cap V_{1}| \le k-1$.
\end{proof}
Recall $U_{1}=\{u_i \in U : x_i \in X \cap V_{1}\}$.
By Claim \ref{XV1},
we have
$|U_1| \le k-1$.
By the assumption of Case 2.2 and the choice of $x_{1}$,
we obtain
$A_2 \cap U_1= \emptyset$,
and hence
we can take
a subset $I$ of $\{2,3,\ldots,h\}$
such that $|I|=k$
and
$\{i : A_{i+1} \cap U_{1}\not=\emptyset\} \subseteq I$.
Let
$$X_{I}=\{x_i : i \in I\}.$$
By Claim \ref{XW},
$X_{I} \cup \{x_{0}\}$ is an independent set
of order $k+1$.
Let
\begin{align*}
B_1 = B_{h+1} = \ir{C}{u_1}{u_{h}}
\text{\ \ and\ \ } B_i = \ir{C}{u_i}{u_{i-1}} \text{\ \ for } 2 \le i \le h.
\end{align*}
Then,
since
$|\Ir{C}{u_i}{u_{i}'}| \ge 2$ for $i \in M_1 \setminus I$,
the following inequality holds:
\begin{eqnarray*}
|C|
& \ge& \sum_{i \in I}|B_i\cup \{u_i\}|+2\Big(|U|-\sum_{i \in I}|(B_i\cup \{u_i\}) \cap U|\Big)\\
& =& \sum_{i \in I}|B_i|+2\Big(|U|-\sum_{i \in I}|B_i \cap U|\Big)-k.
\end{eqnarray*}
If $x_{i} \in X_{I} \cap S$,
then
it follows from Lemma \ref{insertible} (i)
and Claim \ref{c}
that
\begin{eqnarray*}
d_C(x_i)
&\le&
\Big(|U|- |B_{i}\cap U|- |B_{i+1} \cap U_{1}|\Big) + \Big(|B_i|-|\{x_{i}\}|-|(B_i \cap U)^{+}|\Big)\\
&=&
|U|+ |B_i|-2|B_i \cap U|-|B_{i+1} \cap U_{1}|-1.
\end{eqnarray*}
If $x_{i} \in X_{I} \cap V_{1}$,
then,
by Lemma \ref{insertible} (i)
and Claim \ref{c},
\begin{eqnarray*}
d_C(x_i)
&\le&
\Big(|U|- |B_{i}\cap U|- |B_{i+1} \cap U_{1} |-|(U \cap V_{2}) \setminus B_{i}|+ |B_{i+1} \cap U_1 \cap V_2|\Big)\\
&&{}+ \Big(|B_i|-|\{x_{i}\}|-|(B_i \cap U)^{+}|- |U \cap V_{2} \cap B_{i}|\Big)\\
&=&
|U| + |B_i|-2|B_i \cap U|-|B_{i+1} \cap U_{1}|-1- \Big(|U \cap V_{2}| - |B_{i+1} \cap U_1 \cap V_2|\Big).
\end{eqnarray*}
Since $U \cap V_2 \neq \emptyset$, we obtain
$|U \cap V_{2}| - |B_{i+1} \cap U_1 \cap V_2| \ge 1$ for all $i \in I$ except for at most one,
and hence
$$\sum_
{i \in I\,:\,x_i \in X_I \cap V_1}
\Big(|U \cap V_{2}| - |B_{i+1} \cap U_1 \cap V_2|\Big) \ge |X_{I} \cap V_{1}|-1.$$
By the choice of $I$,
we have
$$|U_{1}|=\sum_{i \in I}|A_{i+1} \cap U_{1}|
= \sum_{i \in I}|B_{i+1} \cap U_{1}|+ \big|\{u_{i} : x_{i} \in X_{I} \cap V_{1}\}\big|.$$
On the other hand,
since $x_{0} \in S$,
it follows from Claim \ref{U1}
that
$$
|U_{1}|=|X \cap V_1| = |X \setminus S| \ge |X| -(\kappa(G)-1).$$
Moreover, by Claim \ref{x0a-1},
\begin{align*}
d_{C}(x_{0}) = |U| = |X| = \alpha(G) - 1.
\end{align*}
Thus,
we deduce
\begin{eqnarray*}
\sum_{i \in I \cup \{ 0 \}}d_C(x_i)
&\le&
(k+1)|U|
+\sum_{i \in I}|B_i|
-2\sum_{i \in I}|B_i\cap U|
\\
&&{}
-\sum_{i \in I}|B_{i+1} \cap U_{1}|-k-(|X_{I} \cap V_{1}|-1)
\\
&=&
\Big( \sum_{i \in I}|B_i|+2\big(|U|-\sum_{i \in I}|B_i \cap U|\big)-k \Big)
+
(k-1)|U|\\
&&{}-
\Big( \sum_{i \in I}|B_{i+1} \cap U_{1}|+ \big|\{u_{i} : x_{i} \in X_{I} \cap V_{1}\}\big| \Big) + 1\\
&\le&
|C|
+(k-1)|U|
+\kappa(G)-|X|\\
&=&
|C|
+\kappa(G)
+(k-2)(\alpha(G)-1).
\end{eqnarray*}
By the inequality (\ref{H}),
$\sum_{i \in I \cup \{ 0 \}}d_H(x_i) \le |H|-1$.
Hence
$\sum_{ i \in I \cup \{ 0 \}}d_G(x_i) \le n+\kappa(G)+(k-2)(\alpha(G) -1)-1$, a contradiction.
\end{proof}
\section*{Acknowledgments}
This research was supported in part by the National Science Foundation award IIS-1723943. We thank Brandon Araki and Kiran Vodrahalli for valuable discussions and helpful suggestions. We would also like to thank Kasper Green Larsen, Alexander Mathiasen, and Allan Gronlund for pointing out an error in an earlier formulation of Lemma~\ref{lem:order-statistic-sampling}.
\subsection{Analytical Results for Section~\ref{sec:analysis_empirical} (Empirical Sensitivity)}
\label{app:analysis_empirical}
Recall that the sensitivity $\s[j]$ of an edge $j \in \mathcal{W}$ is defined as the maximum (approximate) relative importance over a subset $\SS \subseteq \PP \stackrel{i.i.d.}{\sim} {\mathcal D}^n$, $\abs{\SS} = n'$ (Definition~\ref{def:empirical-sensitivity}). We now establish a technical result that quantifies the accuracy of our approximations of edge importance.
\subsection{Order Statistic Sampling}
\begin{lemma}
\label{lem:order-statistic-sampling}
Let $C \ge 2$ be a constant and ${\mathcal D}$ be a distribution with CDF $F(\cdot)$ satisfying $F(\nicefrac{M}{C}) \leq \exp(-1/K)$, where $K \in \Reals_+$ is a universal constant and $M = \min \{x \in [0,1] : F(x) = 1\}$. Let $\SS = \{X_1, \ldots, X_n\}$ be a set of $n = |\SS|$ independent and identically distributed (i.i.d.) samples, each drawn from the distribution ${\mathcal D}$, and let $X_{n+1} \sim {\mathcal D}$ be a sample drawn independently of $\SS$. Then,
\begin{align*}
\Pr \left(C \, \max_{X \in \SS} X \leq X_{n+1} \right) \leq \exp(-n/K).
\end{align*}
\end{lemma}
\begin{proof}
Let $X_\mathrm{max} = \max_{X \in \SS} X$ and let ${\mathcal F}$ denote the failure event $C \, X_\mathrm{max} \leq X_{n+1}$. Then,
\begin{align*}
\Pr({\mathcal F}) &= \Pr(C \, X_\mathrm{max} \leq X_{n+1}) \\
&= \int_{0}^M \Pr(X_\mathrm{max} \leq \nicefrac{x}{C} \mid X_{n+1} = x) \, p(x) \, dx \\
&= \int_{0}^M \Pr\left(X \leq \nicefrac{x}{C} \right)^n \, p(x) \, dx &\text{since $X_1, \ldots, X_n$ are i.i.d.} \\
&= \int_{0}^M F(\nicefrac{x}{C})^n \, p(x) \, dx &\text{where $F(\cdot)$ is the CDF of $X \sim {\mathcal D}$} \\
&\leq F(\nicefrac{M}{C})^n \int_{0}^M p(x) \, dx &\text{by monotonicity of $F$} \\
&= F(\nicefrac{M}{C})^n \\
&\leq \exp(-n/K) &\text{by the CDF assumption},
\end{align*}
where $p(\cdot)$ denotes the density of ${\mathcal D}$, assumed to exist for ease of exposition (otherwise, the same chain of inequalities holds with the integrals taken with respect to $dF$),
and this completes the proof.
\end{proof}
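To make the guarantee concrete, the following minimal Monte Carlo sketch estimates the failure probability of Lemma~\ref{lem:order-statistic-sampling} under an assumed stand-in distribution; the choice of a $\mathrm{Beta}(5,1)$ distribution (for which $M = 1$ and $F(x) = x^5$, so the CDF condition holds with $C = 2$ and $K = \nicefrac{1}{5 \ln 2}$) and all constants below are illustrative assumptions rather than quantities prescribed by our analysis.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in for D: Beta(5, 1) on [0, 1], so M = 1 and
# F(x) = x**5.  Then F(M/C) = C**(-5) = exp(-1/K) for C = 2 and
# K = 1 / (5 ln 2), i.e., the CDF condition of the lemma holds.
C = 2.0
K = 1.0 / (5.0 * np.log(2.0))

def failure_rate(n, trials=200_000):
    """Estimate Pr(C * max(S) <= X_{n+1}) with |S| = n i.i.d. samples."""
    S = rng.beta(5.0, 1.0, size=(trials, n))
    x_new = rng.beta(5.0, 1.0, size=trials)
    return float(np.mean(C * S.max(axis=1) <= x_new))

for n in (1, 2, 4, 8):
    print(f"n={n}: empirical={failure_rate(n):.2e}  bound={np.exp(-n / K):.2e}")
\end{verbatim}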
\subsection{Case of Positive Weights}
In this subsection, we establish approximation guarantees under the assumption that the weights are strictly positive. The next subsection will then relax this assumption to conclude that a neuron's value can be approximated well even when the weights are not all positive.
Let $C \ge 9 K$ be a constant. We would like to apply Lemma~\ref{lem:order-statistic-sampling} in conjunction with Assumption~\ref{asm:cdf} to conclude that a subsample $\SS \subseteq \PP$ of size logarithmic in $\nicefrac{1}{\delta}$ and $\eta \, \eta^*$ suffices to obtain $\Pr_{x \sim {\mathcal D}}(C \, \s < \gHat{x}) \leq \delta$. However, Assumption~\ref{asm:cdf} is defined with respect to the CDF of $\g{x}$, denoted by $\cdf{\cdot}$, and \emph{not} that of $\gHat{x}$, denoted by $\cdfHat{\cdot}$. To bridge this gap, we establish the following technical result that relates the CDF of $\gHat{x}$ to that of $\g{x}$, provided that $\hat a^{\ell -1}(x) \in (1 \pm \epsilon) a^{\ell-1}(x)$.
\begin{lemma}
\label{lem:cdf-relationship}
Let $\epsilon \in (0,1/2)$, $\ell \in \br{2,\ldots,L}$. Let $\Input \sim {\mathcal D}$ be a randomly drawn point and assume that $\hat a^{\ell-1}(\Input) \in (1 \pm \epsilon) a^{\ell-1}(\Input)$. Then, for all $j \in \mathcal{W} \subseteq [\eta^{\ell-1}]$ and for any constant $\gamma \in [0,1]$,
\begin{align*}
\cdf{\gamma/3} \leq \cdfHat{\gamma} \leq \cdf{3\gamma}.
\end{align*}
\end{lemma}
\begin{proof}
Let $\Input \sim {\mathcal D}$ be a randomly drawn point and let $j \in \mathcal{W}$ be arbitrary. By definitions of the CDF and $\g{x}$, we have
\begin{align*}
\cdfHat{\gamma} &= \Pr \left( \gHat{x} \leq \gamma \right) \\
&= \Pr \Big( \gHatDef{x} \leq \gamma \Big) \\
&= \Pr \Big( \WWRowCon_j \, \hat a_{j} (\Input) \leq \gamma \, \sum_{k \in \mathcal{W}} \WWRowCon_k \, \hat a_{k}(\Input) \Big) \\
&= \Pr \Big( \WWRowCon_j \, \hat a_{j}(\Input) \leq \gamma \, \sum_{k \in \mathcal{W} \, : \, k \neq j} \WWRowCon_k \, \hat a_{k}(\Input) + \gamma \WWRowCon_j \, \hat a_{j}(\Input) \Big).
\end{align*}
Define $\hat{\Sigma}_{(-j)} = \sum_{k \in \mathcal{W} \, : \, k \neq j} \WWRowCon_k \, \hat a_{k}(\Input)$ and $\Sigma_{(-j)} = \sum_{k \in \mathcal{W} \, : \, k \neq j} \WWRowCon_k \, a_{k}(\Input)$ for notational brevity. Note that since $\hat a^{\ell-1}(\Input) \in (1 \pm \epsilon) a^{\ell-1}(\Input)$, we have
\begin{align*}
\hat{\Sigma}_{(-j)} &= \sum_{k \in \mathcal{W} \, : \, k \neq j} \WWRowCon_k \, \hat a_{k}(\Input) \\
&\leq (1 +\epsilon) \sum_{k \in \mathcal{W} \, : \, k \neq j} \WWRowCon_k \, a_{k}(\Input) \\
&= (1 + \epsilon) \Sigma_{(-j)}.
\end{align*}
Equipped with this inequality, we continue from above by rearranging the expression
\begin{align*}
\cdfHat{\gamma} &= \Pr \Big( (1 - \gamma) \WWRowCon_j \, \hat a_{j}(\Input) \leq \gamma \, \hat{\Sigma}_{(-j)} \Big) \\
&\leq \Pr \Big( (1 - \gamma) \WWRowCon_j \, \hat a_{j}(\Input) \leq \gamma \, (1 + \epsilon) \Sigma_{(-j)} \Big) \\
&\leq \Pr \Big( (1 - \gamma) (1- \epsilon) \WWRowCon_j \, a_{j}(\Input) \leq \gamma \, (1 + \epsilon) \Sigma_{(-j)} \Big),
\end{align*}
where in the last inequality we used the fact that $a_{j} (1-\epsilon) \leq \hat a_{j}$ by assumption of the lemma. Moreover, since $\epsilon \in (0,1/2)$, observe that the ratio $\nicefrac{1 + \epsilon}{1 - \epsilon} \leq 3$. Dividing both sides by $1 - \epsilon$ in the expression above and applying this inequality we obtain
\begin{align*}
\cdfHat{\gamma} &\leq \Pr \Big( (1 - \gamma) \WWRowCon_j \, a_{j}(\Input) \leq 3 \gamma \, \Sigma_{(-j)} \Big) \\
&= \Pr \Big( \WWRowCon_j \, a_{j} \leq 3 \gamma \Sigma_{(-j)} + \gamma \WWRowCon_j \, a_{j}(\Input) \Big) \\
&\leq \Pr \Big( \WWRowCon_j \, a_{j}(\Input) \leq 3 \gamma \left( \Sigma_{(-j)} + \WWRowCon_j \, a_{j}(\Input) \right)\Big) \\
&= \Pr(\g{x} \leq 3 \gamma),
\end{align*}
and this concludes the proof of the upper bound. The argument for the lower bound is symmetric: it instead uses the lower bound $\hat{\Sigma}_{(-j)} \ge (1 - \epsilon) \Sigma_{(-j)}$ and the upper bound $a_{j}(\Input)(1+\epsilon) \ge \hat a_{j}(\Input)$, in conjunction with the fact that $\nicefrac{1-\epsilon}{1 + \epsilon} \ge 1/3$ for $\epsilon \in (0,1/2)$.
\end{proof}
We now combine Lemmas~\ref{lem:order-statistic-sampling} and \ref{lem:cdf-relationship} to establish our main result of the section.
\begin{theorem}[Empirical Sensitivity Approximation]
\label{thm:sensitivity-approximation}
Let $\epsilon \in (0,1/2)$, $\delta \in (0,1)$, and $\ell \in \br{2,\ldots,L}$. Consider a set $\SS = \{\Input_1, \ldots, \Input_n\} \subseteq \PP$ of size $|\SS| = \ceil*{\kPrime \logTerm }$ such that $\hat a^{\ell-1}(\Input') \in (1 \pm \epsilon) a^{\ell-1}(\Input')$ for all $\Input' \in \SS$. Then,
$$
\Pr_{\Input \sim {\mathcal D}} \left(\exists{j \in \mathcal{W}} : C \, \s < \gHat{x} \right) \leq \frac{\delta |\mathcal{W}|} {4 \eta \, \eta^*},
$$
where $C = \Cdef \ge 9 K$ and $\mathcal{W} \subseteq [\eta^{\ell-1}]$.
\end{theorem}
\begin{proof}
Consider an arbitrary $j \in \mathcal{W}$ and the random variable $\g{x'}$ for $x' \in \SS$, with CDF $\cdf{\cdot}$, and recall that $M = \min \{x \in [0,1] : \cdf{x} = 1\}$ as in Assumption~\ref{asm:cdf}. Let $\hat{M} = \min \{x \in [0,1] : \cdfHat{x} = 1\}$ be the analogous bound for the CDF associated with our relative importance approximation $\gHat{x}$.
Invoking Lemma~\ref{lem:cdf-relationship}, we have $
\cdfHat{3 \, M} \geq \cdf{M} = 1$. Thus, $\hat{M} \leq 3 M$ by definition of $\hat{M}$. Now, we have
\begin{align*}
\cdfHat{\nicefrac{\hat{M}}{C}} &\leq \cdfHat{\nicefrac{3 \, M}{C}} &\text{Since $\hat{M} \leq 3 M$} \\
&\leq \cdf{\nicefrac{9 \, M}{C}} &\text{By the upper bound of Lemma~\ref{lem:cdf-relationship}} \\
&\leq \cdf{\nicefrac{M}{K}} &\text{Since $C \ge 9 K$} \\
&\leq \exp(-1/K') &\text{By Assumption~\ref{asm:cdf}}.
\end{align*}
Thus we have shown that the random variables $\gHat{x'}$ for $x' \in \SS$ satisfy the CDF condition required by Lemma~\ref{lem:order-statistic-sampling} (with the universal constant $K'$). Hence, invoking Lemma~\ref{lem:order-statistic-sampling}, we obtain
\begin{align*}
\Pr(C \, \s < \gHat{x} ) &= \Pr \left(C \, \max_{\Input' \in \SS} \gHat{x'} < \gHat{x} \right) \\
&\leq \exp(-|\SS|/K').
\end{align*}
Since our choice of $j \in \mathcal{W}$ was arbitrary, the bound applies for any $j \in \mathcal{W}$. Thus, we have by the union bound
\begin{align*}
\Pr(\exists{j \in \mathcal{W}} \,: C \, \s < \gHat{x}) &\leq \sum_{j \in \mathcal{W}} \Pr(C \, \s < \gHat{x} ) \\
&\leq \abs{\mathcal{W}} \exp(-|\SS|/K') \\
&= \left(\frac{|\mathcal{W}|}{\eta^*} \right) \frac{\delta}{4 \eta},
\end{align*}
and this concludes the proof.
\end{proof}
In practice, the set $\SS$ referenced above is chosen to be a subset of the original data points, i.e., $\SS \subseteq \PP$ (see Alg.~\ref{alg:main}, Line~\ref{lin:s-construction}). Thus, we henceforth assume that the number of input points $|\PP|$ is large enough (or that the specified parameter $\delta \in (0,1)$ is sufficiently large) so that $|\PP| \ge |\SS|$.
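For concreteness, this uniform subsampling step can be sketched as follows, where the point array \texttt{P} and the sample size \texttt{size\_S} are illustrative stand-ins for the quantities used in Alg.~\ref{alg:main}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
P = np.arange(10_000)   # stand-in for (indices of) the input points
size_S = 32             # stand-in for the prescribed subsample size |S|
idx = rng.choice(len(P), size=size_S, replace=False)  # uniform, w/o replacement
S = P[idx]
\end{verbatim}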
\begin{lemma}[Empirical $\Delta_\neuron^{\ell}$ Approximation]
\label{lem:delta-hat-approx}
Let $\delta \in (0,1)$, $\lambda_* = \lambdamax$, and define
$$
\DeltaNeuronHat = \DeltaNeuronHatDef,
$$
where $\kappa = \sqrt{2 \lambda_*} \left(1 + \sqrt{2 \lambda_*} \logTerm \right)$ and $\SS \subseteq \PP$ is as in Alg.~\ref{alg:main}. Then,
$$
\Pr_{\Input \sim {\mathcal D} } \left(\max_{i \in [\eta^\ell]} \DeltaNeuron[\Input] \leq \DeltaNeuronHat \right) \ge 1 - \frac{\delta}{4 \eta}.
$$
\end{lemma}
\begin{proof}
Define the random variables $\mathcal{Y}_{\Input'} = \E[\DeltaNeuron[\Input']] - \DeltaNeuron[\Input']$ for each $\Input' \in \SS$ and consider the sum $$
\mathcal{Y} = \sum_{\Input' \in \SS} \mathcal{Y}_{\Input'} = \sum_{\Input' \in \SS} \left(\E[\DeltaNeuron[\Input']] - \DeltaNeuron[\Input']\right).
$$
We know that each random variable $\mathcal{Y}_{\mathbf{\Input}'}$ satisfies $\E[\mathcal{Y}_{\mathbf{\Input}'}] = 0$ and by Assumption~\ref{asm:subexponential}, is subexponential with parameter $\lambda \leq \lambda_*$. Thus, $\mathcal{Y}$ is a sum of $|\SS|$ independent, zero-mean $\lambda_*$-subexponential random variables, which implies that $\E[\mathcal{Y}] = 0$ and that we can readily apply Bernstein's inequality for subexponential random variables~\cite{vershynin2016high} to obtain for $t \ge 0$
$$
\Pr \left(\frac{1}{|\SS|} \mathcal{Y} \ge t\right) \leq \exp \left(-|\SS| \, \min \left \{\frac{t^2}{4 \, \lambda_*^2}, \frac{t}{2 \, \lambda_*} \right\} \right).
$$
Since $|\SS| = \ceil*{\kPrime \logTerm } \ge 2 \lambda_* \log \left(\logTermInside / \delta \right)$, we have for $t = \sqrt{2 \lambda_*}$,
\begin{align*}
\Pr \left(\E[\DeltaNeuron[\Input]] - \frac{1}{|\SS|} \sum_{\Input' \in \SS} \DeltaNeuron[\Input'] \ge t \right) &= \Pr \left(\frac{1}{|\SS|} \mathcal{Y} \ge t\right) \\
&\leq \exp \left( -|\SS| \frac{t^2}{4 \lambda_*^2} \right) \\
&\leq \exp \left( - \log \left(\logTermInside / \delta \right) \right) \\
&= \frac{\delta}{8 \, \eta \, \eta^* }.
\end{align*}
Moreover, for a single $\Input \sim {\mathcal D}$, by the equivalent definition of a subexponential random variable~\cite{vershynin2016high}, we have for $u \ge 0$
$$
\Pr(\DeltaNeuron[\Input] - \E[\DeltaNeuron[\Input]] \ge u) \leq \exp \left(-\min \left \{\frac{u^2}{4 \, \lambda_*^2}, \frac{u}{2 \, \lambda_*} \right\} \right).
$$
Thus, for $u = 2 \lambda_* \, \log \left(\logTermInside / \delta \right)$ we obtain
$$
\Pr(\DeltaNeuron[\Input] - \E[\DeltaNeuron[\Input]] \ge u) \leq \exp \left( - \log \left(\logTermInside / \delta \right) \right) = \frac{\delta}{ 8 \, \eta \, \eta^* }.
$$
Therefore, by the union bound, we have with probability at least $1 - \frac{\delta}{4 \eta \, \eta^*}$:
\begin{align*}
\DeltaNeuron[\Input] &\leq \E[\DeltaNeuron[\Input]] + u \\
&\leq \left(\frac{1}{|\SS|} \sum_{\mathbf{\Input}' \in \SS} \DeltaNeuron[\Input'] + t \right) + u \\
&= \frac{1}{|\SS|} \sum_{\Input' \in \SS} \DeltaNeuron[\Input'] + \left(\sqrt{2 \lambda_*} + 2 \lambda_* \, \log \left(\logTermInside / \delta \right) \right) \\
&= \frac{1}{|\SS|} \sum_{\Input' \in \SS} \DeltaNeuron[\Input'] + \kappa \\
&\leq \DeltaNeuronHat,
\end{align*}
where the last inequality follows by definition of $\DeltaNeuronHat$.
Thus, by the union bound, we have
\begin{align*}
\Pr_{\Input \sim {\mathcal D} } \left(\max_{i \in [\eta^\ell]} \DeltaNeuron[\Input] > \DeltaNeuronHat \right) &= \Pr \left(\exists{i \in [\eta^\ell]}: \DeltaNeuron[\Input] > \DeltaNeuronHat \right) \\
&\leq \sum_{i \in [\eta^{\ell}]} \Pr \left(\DeltaNeuron[\Input] > \DeltaNeuronHat \right) \\
&\leq \eta^{\ell} \left(\frac{\delta}{4 \eta \, \eta^*} \right) \\
&\leq \frac{\delta}{4 \, \eta},
\end{align*}
where the last line follows since $\eta^* \ge \eta^{\ell}$.
\end{proof}
\subsection{Amplification}
\label{sec:analysis-amplification}
In the context of Lemma~\ref{lem:neuron-approx}, define the relative error of a (randomly generated) row vector $\WWHatRowCon^\ell = \WWHatRowCon^{\ell +} - \WWHatRowCon^{\ell -} \in \Reals^{1 \times \eta^{\ell-1}}$ with respect to a realization $\mathbf{\Input}$ of a point $\Input \sim {\mathcal D}$ as
$$
\err{\WWHatRowCon^\ell} = \left |\frac{\dotp{\WWHatRowCon^\ell}{ \hat{a}^{\ell-1}(\Point)}}{\dotp{\WWRowCon^\ell}{a^{\ell-1}(\Point)}} - 1 \right|.
$$
Consider a set $\TT \subseteq \PP \setminus \SS$ of size $\abs{\TT}$ such that $\TT \stackrel{i.i.d.}{\sim} {\mathcal D}^{|\TT|}$, and let
$$
\err[\TT]{\WWHatRowCon^\ell} = \frac{1}{|\TT|} \sum_{\Input \in \TT} \err[\Input]{\WWHatRow^\ell}.
$$
When the layer is clear from the context, we will refer to $\err[\Input]{\WWHatRowCon^\ell}$ simply as $\err[\Input]{\WWHatRowCon}$.
\begin{restatable}[Expected Error]{lemma}{lemexpectederror}
\label{lem:expected-error}
Let $\epsilon, \delta \in (0,1)$, $\ell \in \br{2,\ldots,L}$, and $i \in [\eta^\ell]$. Conditioned on the event $\mathcal{E}^{\ell-1}$, \textsc{CoreNet} generates a row vector $\WWHatRowCon = \WWHatRowCon^{ +} - \WWHatRowCon^{-} \in \Reals^{1 \times \eta^{\ell-1}}$ such that
$$
\E[\err[\Input]{\WWHatRowCon} \, \mid \, \mathcal{E}^{\ell-1}] \leq %
\epsilonLayer \, \DeltaNeuronHat \left(k + \frac{5 \, (1 + k \epsilonLayer)}{\sqrt{\log(8 \eta/\delta)}} \right) + \frac{\delta \, \left(1 + k \epsilonLayer \right)}{\eta} \, \E_{\Input \sim {\mathcal D}} \left[\DeltaNeuron[\Input] \, \mid \, \DeltaNeuron[\Input] > \DeltaNeuronHat \right],
$$
where $k = 2 \, (\ell -1)$.
\end{restatable}
We now state the advantageous effect of amplification, i.e., constructing multiple approximations for each neuron's incoming edges and then picking the best one, as formalized below.
\begin{restatable}[Amplification]{theorem}{thmamplification}
\label{thm:amplification}
Given $\epsilon, \delta \in (0,1)$ such that $\frac{\delta}{\eta}$ is sufficiently small, let $\ell \in \br{2,\ldots,L}$ and $i \in [\eta^{\ell}]$. Let $\tau = \ceil*{\frac{\log(4 \, \eta / \delta)}{\log(10/9)}}$ and consider the reparameterized variant of Alg.~\ref{alg:main} where we instead have
\begin{enumerate}
\item $\SS \subseteq \PP$ of size $|\SS| \ge \ceil*{\logTermAmplif \kPrime}$,
\item $\DeltaNeuronHat = \DeltaNeuronHatDef$ as before, but $\kappa$ is instead defined as
$$
\kappa = \sqrt{2 \lambda_*} \left(1 + \sqrt{2 \lambda_*} \logTermAmplif \right), \qquad \text{and}
$$
\item $m \ge \SampleComplexityAmplif$ in the sample complexity in \textsc{SparsifyWeights}.
\end{enumerate}
Among $\tau$ approximations $(\WWHatRow^\ell)_1, \ldots, (\WWHatRow^\ell)_\tau$ generated by Alg.~\ref{alg:sparsify-weights}, let
$$
\WWHatRow^* = \argmin_{\WWHatRow^\ell \in \{(\WWHatRow^\ell)_1, \ldots, (\WWHatRow^\ell)_\tau\}} \err[\TT]{\WWHatRow^\ell},
$$
and let $\TT \subseteq (\PP \setminus \SS)$ be a subset of points of size $|\TT| = \ceil*{8 \log \left( 8 \, \tau \, \eta / \, \delta\right) }$. Then,
$$
\Pr_{\WWHatRow^*, \hat{a}^{l-1}(\cdot)} \left( \E_{\Input | \WWHatRow^*} \, [\err[\Input]{\WWHatRow^*} \, \mid \, \WWHatRow^*, \mathcal{E}^{\ell-1}] \leq k \epsilonLayer[\ell + 1] \, \mid \, \mathcal{E}^{\ell-1} \right) \ge 1 - \frac{\delta}{\eta},
$$
where $k = 2 \, (\ell -1)$.
\end{restatable}
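In implementation terms, amplification amounts to drawing $\tau$ independent candidate sparsifications and retaining the one with the smallest mean relative error on the held-out set $\TT$. The following minimal sketch illustrates this selection step; the two callables are illustrative placeholders for the sparsification routine of Alg.~\ref{alg:sparsify-weights} and an oracle for the relative error, not part of the algorithm's stated interface.
\begin{verbatim}
import numpy as np

def amplify(sparsify_once, rel_err, T, tau, rng):
    """Best of tau i.i.d. candidate rows, judged on held-out points T.

    sparsify_once : callable(rng) -> candidate sparse row w_hat
    rel_err       : callable(w_hat, x) -> |<w_hat, a_hat(x)> / <w, a(x)> - 1|
    """
    candidates = [sparsify_once(rng) for _ in range(tau)]
    errs = [np.mean([rel_err(w_hat, x) for x in T]) for w_hat in candidates]
    return candidates[int(np.argmin(errs))]
\end{verbatim}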
\section{Analysis}
\label{sec:analysis}
In this section, we establish the theoretical guarantees of our neural network compression algorithm (Alg.~\ref{alg:main}). The full proofs of all the claims presented in this section can be found in the Appendix.
\subsection{Preliminaries}
\label{sec:analysis_empirical}
Let $\Input \sim {\mathcal D}$ be a randomly drawn input point.
We explicitly refer to the pre-activation and activation values at layer $\ell \in \br{2,\ldots,L}$ with respect to the input $x \in \mathrm{supp}({\mathcal D})$ as $z^{\ell}(\Input)$ and $a^{\ell}(\Input)$, respectively. The values of $z^{\ell}(\Input)$ and $a^{\ell}(\Input)$ at each layer $\ell$ will depend on whether or not we compressed the layers $\ell' \in \{2, \ldots, \ell\}$. To formalize this interdependency, we let $\hat z^{\ell}(x)$ and $\hat a^{\ell}(x)$ denote the respective quantities of layer $\ell$ when we replace the weight matrices $W^2, \ldots, W^{\ell}$ in layers $2, \ldots, \ell$ by $\hat{W}^2, \ldots, \hat{W}^{\ell}$, respectively.
For the remainder of this section (Sec.~\ref{sec:analysis}) we let $\ell \in \br{2,\ldots,L}$ be an arbitrary layer and let $i \in [\eta^\ell]$ be an arbitrary neuron in layer $\ell$. For purposes of clarity and readability, we will omit the variable denoting the layer $\ell \in \br{2,\ldots,L}$, the neuron $i \in [\eta^\ell]$, and the incoming edge index $j \in [\eta^{\ell-1}]$, whenever they are clear from the context. For example, when referring to the intermediate value of a neuron $i \in [\eta^\ell]$ in layer $\ell \in \br{2,\ldots,L}$, $z_i^\ell (\Input) = \dotp{\WWRow^\ell}{ a^{\ell-1}(\Input)} \in \Reals$ with respect to a point $\Input$, we will simply write $z(\Input) = \dotp{\WWRowCon}{a(\Input)} \in \Reals$, where $\WWRowCon := \WWRow^\ell \in \Reals^{1 \times \eta^{\ell -1}}$ and $a(\Input) := a^{\ell-1}(\Input) \in \Reals^{\eta^{\ell-1} \times 1}$. Under this notation, the weight of an incoming edge $j$ is denoted by $\WWRow[j] \in \Reals$.
\subsection{Importance Sampling Bounds for Positive Weights}
\label{sec:analysis_positive}
In this subsection, we establish approximation guarantees under the assumption that the weights are positive. Moreover, we will also assume that the input, i.e., the activation from the previous layer, is non-negative (entry-wise). The subsequent subsection will then relax these assumptions to conclude that a neuron's value can be approximated well even when the weights and activations are not all positive and non-negative, respectively.
Let $\mathcal{W} = \{\edge \in [\eta^{\ell-1}] : \WWRow[\edge] > 0\} \subseteq [\eta^{\ell-1}]$ be the set of indices of incoming edges with strictly positive weights. To sample the incoming edges to a neuron, we quantify the relative importance of each edge as follows.
\begin{definition}[Relative Importance]
The importance of an incoming edge $j \in \mathcal{W}$ with respect to an input $\Input \in \supp$ is given by the function $\g{\Input}$, where
$
\g{\Input} = \gDef{x} \quad \forall{j \in \mathcal{W}}.
$
\end{definition}
Note that $\g{x}$ is a function of the random variable $x \sim {\mathcal D}$.
We now present our first assumption that pertains to the Cumulative Distribution Function (CDF) of the relative importance random variable.
\begin{assumption}
\label{asm:cdf}
There exist universal constants $K, K' > 0 $ such that for all $j \in \mathcal{W}$, the CDF of the random variable $\g{x}$ for $\Input \sim {\mathcal D}$, denoted by $\cdf{\cdot}$, satisfies
$$
\cdf{\nicefrac{M_j}{K}} \leq \exp\left(-\nicefrac{1}{K'}\right),
$$
where $M_j = \min \{y \in [0,1] : \cdf{y} = 1\}$.
\end{assumption}
Assumption~\ref{asm:cdf} is a technical assumption on the ratio of the weighted activations that will enable us to rule out pathological problem instances where the relative importance of each edge cannot be well-approximated using a small number of data points $\SS \subseteq \PP$. Henceforth, we consider a uniformly drawn (without replacement) subsample $\SS \subseteq \PP$ as in Line~\ref{lin:sample-s} of Alg.~\ref{alg:main}, where $\abs{\SS} = \ceil*{\kPrime \logTerm }$, and define the sensitivity of an edge as follows.
\begin{definition}[Empirical Sensitivity]
\label{def:empirical-sensitivity}
Let $\SS \subseteq \PP$ be a subset of distinct points from $\PP \stackrel{i.i.d.}{\sim} {\mathcal D}^{n}$. Then, the sensitivity over positive edges $j \in \mathcal{W}$ directed to
a neuron is defined as
$
\s[j] \, = \, \max_{\Input \in \SS} \g{x}.
$
\end{definition}
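Computationally, the empirical sensitivities of all positive edges directed to a single neuron can be obtained in one pass over the activations cached for $\SS$. The sketch below is a minimal illustration under the positivity assumptions of this subsection; the array names are illustrative and the routine is not the verbatim implementation of Alg.~\ref{alg:main}.
\begin{verbatim}
import numpy as np

def empirical_sensitivities(w, acts):
    """s_j = max_{x' in S} g_j(x') for one neuron.

    w    : (d,) weights restricted to W = {j : w_j > 0}
    acts : (|S|, d) positive activations a_j(x'), one row per x' in S
    """
    contrib = w * acts                                # w_j * a_j(x')
    g = contrib / contrib.sum(axis=1, keepdims=True)  # relative importance
    return g.max(axis=0)                              # max over points in S
\end{verbatim}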
Our first lemma establishes a core result that relates the weighted sum with respect to the sparse row vector $\WWHatRowCon$, $\sum_{k \in \mathcal{W}} \WWHatRowCon_k \, \hat a_{k}(x)$, to the value of the weighted sum with respect to the ground-truth row vector $\WWRowCon$, $\sum_{k \in \mathcal{W}} \WWRowCon_k \, \hat a_{k}(x)$. We remark that there is randomness with respect to the randomly generated row vector $\WWHatRow^\ell$, a randomly drawn input $\Input \sim {\mathcal D}$, and the function $\hat{a}(\cdot) = \hat{a}^{\ell-1}(\cdot)$ defined by the randomly generated matrices $\hat{W}^2, \ldots, \hat{W}^{\ell-1}$ in the previous layers.
Unless otherwise stated, we will henceforth use the shorthand notation $\Pr(\cdot)$ to denote $\Pr_{\WWHatRowCon^\ell, \, \Input, \, \hat{a}^{\ell-1}} (\cdot)$. Moreover, for ease of presentation, we will first condition on the event $\mathcal{E}_{\nicefrac{1}{2}}$ that
$
\hat{a}(\Input) \in (1 \pm \nicefrac{1}{2}) a(\Input)
$ holds. This conditioning will simplify the preliminary analysis and will be removed in our subsequent results.
\begin{restatable}[Positive-Weights Sparsification]{lemma}{lemposweightsapprox}
\label{lem:pos-weights-approx}
Let $\epsilon, \delta \in (0,1)$, and $\Input \sim {\mathcal D}$.
\textsc{Sparsify}$(\mathcal{W}, \WWRowCon, \epsilon, \delta, \SS, a(\cdot))$ generates a row vector $\WWHatRowCon$ such that
\begin{align*}
\Pr \left(\sum_{k \in \mathcal{W}} \WWHatRowCon_k \, \hat a_{k}(x) \notin (1 \pm \epsilon) \sum_{k \in \mathcal{W}} \WWRowCon_k \, \hat a_{k}(x) \, \mid \, \mathcal{E}_{\nicefrac{1}{2}} \right) &\leq \frac{3 \delta}{8 \eta}
\end{align*}
where
$
\nnz{\WWHatRowCon} \leq \SampleComplexity[\epsilon],
$
and $S = \sum_{j \in \mathcal{W}} \s[j]$.
\end{restatable}
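For intuition, the sampling scheme analyzed in Lemma~\ref{lem:pos-weights-approx} can be sketched as importance sampling with replacement proportional to the sensitivities, followed by inverse-probability reweighting. The sketch below is a simplified illustration of this step (the sample size \texttt{m} is passed in rather than derived from the bound of the lemma), not the verbatim \textsc{Sparsify} routine.
\begin{verbatim}
import numpy as np

def sparsify_row(w, s, m, rng):
    """Sample m edges with prob. q_j = s_j / S and reweight by 1/(m q_j).

    Unbiased for any fixed activation vector a:
    E[w_hat . a] = sum_j m q_j * (w_j / (m q_j)) * a_j = w . a.
    """
    q = s / s.sum()
    idx = rng.choice(len(w), size=m, p=q)          # sample with replacement
    w_hat = np.zeros_like(w, dtype=float)
    np.add.at(w_hat, idx, w[idx] / (m * q[idx]))   # accumulate reweighted picks
    return w_hat
\end{verbatim}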
\subsection{Importance Sampling Bounds} \label{sec:analysis_sampling}
We now relax the requirement that the weights are strictly positive and instead consider the following index sets that partition the weighted edges: $\mathcal{W}_+ = \{\edge \in [\eta^{\ell-1}] : \WWRow[\edge] > 0\}$ and $\mathcal{W}_- = \{\edge \in [\eta^{\ell-1}]: \WWRow[\edge] < 0 \}$. We still assume that the incoming activations from the previous layers are positive (this assumption can be relaxed as discussed in Appendix~\ref{app:negative}).
We define $\DeltaNeuron[\Input]$ for a point $\Input \sim {\mathcal D}$ and neuron $i \in [\eta^\ell]$ as
$
\DeltaNeuron[\Input] = \DeltaNeuronDef[\Input].
$
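Although $\DeltaNeuron[\Input]$ is specified by a macro here, the identities used in the proofs below (e.g., $\abs{z} \cdot \DeltaNeuron[\Input] = z^+(\Input) + z^-(\Input)$ in the proof of Lemma~\ref{lem:expected-error}) characterize it as the ratio between the total unsigned weighted activation mass and the magnitude of the neuron's pre-activation value. Under this reading, a minimal sketch for a single neuron and input:
\begin{verbatim}
import numpy as np

def delta_neuron(w_row, a):
    """Delta_i(x) = (z^+ + z^-) / |z|, where z^+ (resp. z^-) sums the
    positively (resp. negatively) weighted activations, z = z^+ - z^-."""
    pos, neg = w_row > 0, w_row < 0
    z_plus = w_row[pos] @ a[pos]
    z_minus = -(w_row[neg] @ a[neg])
    return (z_plus + z_minus) / abs(z_plus - z_minus)   # always >= 1
\end{verbatim}
Intuitively, $\DeltaNeuron[\Input]$ is large precisely when the positive and negative contributions nearly cancel, which is the regime in which relative approximation is hardest.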
The following assumption serves a similar purpose as does Assumption~\ref{asm:cdf} in that it enables us to approximate the random variable $\DeltaNeuron[\Input]$ via an empirical estimate over a small-sized sample of data points $\SS \subseteq \PP$.
\begin{assumption}[Subexponentiality of ${\DeltaNeuron[\Input]}$]
\label{asm:subexponential}
There exists a universal constant $\lambda$ with $0 < \lambda < \nicefrac{K'}{2}$\footnote{Here, $K'$ is as defined in Assumption~\ref{asm:cdf}.} such that for any layer $\ell \in \br{2,\ldots,L}$ and neuron $i \in [\eta^\ell]$, the centered random variable $\Delta = \DeltaNeuron[\Input] - \E_{\Input \sim {\mathcal D}}[\DeltaNeuron[\Input]]$ is subexponential~\citep{vershynin2016high} with parameter $\lambda$, i.e., $ \E[\exp \left(s \Delta \right)] \leq \exp(s^2 \lambda^2) \quad \forall{|s| \leq \frac{1}{\lambda}}$.
\end{assumption}
For $\epsilon \in (0,1)$ and $\ell \in \br{2,\ldots,L}$, we let $\epsilon' = \frac{\epsilon}{\epsilonDenomContant \, (L-1)}$ and define
$
\epsilonLayer[\ell] = \epsilonLayerDef = \epsilonLayerDefWordy,
$
where $\DeltaNeuronHat = \DeltaNeuronHatDef$. To formalize the interlayer dependencies, for each $i \in [\eta^\ell]$ we let $\mathcal{E}^\ell_i$ denote the (desirable) event that $\hat{z}_i^\ell (\Input) \in \left(1 \pm 2 \, (\ell - 1) \, \epsilonLayer[\ell + 1] \right) z^{\ell}_i (\Input)$
holds, and let $\mathcal{E}^\ell = \cap_{i \in [\eta^\ell]} \, \mathcal{E}_{i}^\ell$ be the intersection over the events corresponding to each neuron in layer $\ell$.
\begin{restatable}[Conditional Neuron Value Approximation]{lemma}{lemneuronapprox}
\label{lem:neuron-approx}
Let $\epsilon, \delta \in (0,1)$, $\ell \in \br{2,\ldots,L}$, $i \in [\eta^\ell]$, and $\Input \sim {\mathcal D}$. \textsc{CoreNet} generates a row vector $\WWHatRow^\ell = \WWHatRow^{\ell +} - \WWHatRow^{\ell -} \in \Reals^{1 \times \eta^{\ell-1}}$ such that
\begin{align}
\label{eq:neuronapprox}
\Pr \big(\, \mathcal{E}_i^\ell\, \, \mid \, \mathcal{E}^{\ell -1}\big) = %
\Pr \left( \hat{z}_i^\ell(\Input) \in \left(1 \pm 2 \, (\ell - 1) \, \epsilonLayer[\ell + 1] \right) z_i^\ell(\Input) \, \mid \, \mathcal{E}^{\ell -1} \right) %
\ge 1 - \nicefrac{\delta}{\eta},
\end{align}
where $\epsilonLayer = \epsilonLayerDef$ and
$
\nnz{\WWHatRow^\ell} \leq \SampleComplexity + 1,
$
where $S = \sum_{j \in \Wplus} \s[j] + \sum_{j \in \Wminus} \s[j]$.
\end{restatable}
The following core result establishes unconditional layer-wise approximation guarantees and culminates in our main compression theorem.
\begin{restatable}[Layer-wise Approximation]{lemma}{lemlayer}
\label{lem:layer}
Let $\epsilon, \delta \in (0,1)$, $\ell \in \br{2,\ldots,L}$, and $\Input \sim {\mathcal D}$. \textsc{CoreNet} generates a sparse weight matrix $\hat{W}^\ell \in {\REAL}^{\eta^\ell \times \eta^{\ell-1}}$ such that, for $\hat{z}^\ell(\Input) = \hat{W}^\ell \hat a^{\ell-1}(\Input)$,
$$
\Pr_{(\hat{W}^2, \ldots, \hat{W}^\ell), \, \Input } (\mathcal{E}^{\ell}) %
= \Pr_{(\hat{W}^2, \ldots, \hat{W}^\ell), \, \Input } \left(\hat z^{\ell}(\Input) \in \left(1 \pm 2 \, (\ell - 1) \, \epsilonLayer[\ell + 1] \right) z^\ell (\Input) \right)
\geq 1 - \frac{\delta \, \sum_{\ell' = 2}^{\ell} \eta^{\ell'}}{\eta}.
$$
\end{restatable}
\begin{restatable}[Network Compression]{theorem}{thmmain}
\label{thm:main}
For $\epsilon, \delta \in (0, 1)$, Algorithm~\ref{alg:main} generates a set of parameters $\hat{\theta} = (\hat{W}^2, \ldots, \hat{W}^L)$ of size
\begin{align*}
\size{\hat{\theta}} &\leq \sum_{\ell = 2}^{L} \sum_{i=1}^{\eta^\ell} \left( \ceil*{\frac{32 \, (L-1)^2 \, (\DeltaNeuronHatLayers)^2 \, S_\neuron^\ell \, \kmax \, \log (8 \, \eta / \delta) }{\epsilon^2}} + 1\right)
\end{align*}
in $\Bigo \left( \eta \, \, \eta^* \, \log \big(\eta \, \eta^*/ \delta \big) \right)$ time such that $\Pr_{\hat{\theta}, \, \Input \sim {\mathcal D}} \left(f_{\paramHat}(x) \in (1 \pm \epsilon) f_\param(x) \right) \ge 1 - \delta$.
\end{restatable}
We note that we can obtain a guarantee for a set of $n$ randomly drawn points by invoking Theorem~\ref{thm:main} with $\delta' = \delta / n$ and union-bounding over the failure probabilities, while only increasing the sampling complexity logarithmically, as formalized in Corollary~\ref{cor:generalized-compression}, Appendix~\ref{app:analysis_sampling}.
\ificlr
\else
\input{analysis_amplification}
\fi
\subsection{Generalization Bounds}
As a corollary to our main results, we obtain novel generalization bounds for neural networks in terms of empirical sensitivity.
Following the terminology of~\cite{arora2018stronger}, the expected margin loss of a classifier $f_\param:\Reals^d \to \Reals^k$ parameterized by $\theta$ with respect to a desired margin $\gamma > 0$ and distribution ${\mathcal D}$ is defined by $
L_\gamma(f_\param) = \Pr_{(x,y) \sim {\mathcal D}_{\mathcal{X},\mathcal{Y}}} \left(f_\param(x)_y \leq \gamma + \max_{i \neq y} f_\param(x)_i\right)$.
We let $\hat{L}_\gamma$ denote the empirical estimate of the margin loss. The following corollary follows directly from the argument presented in~\cite{arora2018stronger} and Theorem~\ref{thm:main}.
\begin{corollary}[Generalization Bounds]
\label{cor:generalization-bounds}
For any $\delta \in (0,1)$ and margin $\gamma > 0$, Alg.~\ref{alg:main} generates weights $\hat{\theta}$ such that with probability at least $1 - \delta$, the expected error $L_0(f_{\paramHat})$ with respect to the points in $\PP \subseteq \mathcal{X}$, $|\PP| = n$, is bounded by
\begin{align*}
L_0(f_{\paramHat}) &\leq \hat{L}_\gamma(f_\param) + \widetilde{\Bigo} \left(\sqrt{\frac{\max_{\Input \in \PP} \norm{f_\param (x)}_2^2 \, L^2 \, \sum_{\ell = 2}^{L} (\DeltaNeuronHatLayers)^2 \, \sum_{i=1}^{\eta^\ell} S_\neuron^\ell }{\gamma^2 \, n}} \right).
\end{align*}
\end{corollary}
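For completeness, the empirical margin loss $\hat{L}_\gamma$ appearing above can be computed by the following sketch, given a matrix of network outputs; the variable names are illustrative.
\begin{verbatim}
import numpy as np

def empirical_margin_loss(outputs, labels, gamma):
    """Fraction of points with f(x)_y <= gamma + max_{i != y} f(x)_i."""
    n = outputs.shape[0]
    true_scores = outputs[np.arange(n), labels]
    rest = outputs.copy()
    rest[np.arange(n), labels] = -np.inf       # exclude the true class
    runner_up = rest.max(axis=1)               # max over i != y of f(x)_i
    return float(np.mean(true_scores <= gamma + runner_up))
\end{verbatim}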
\subsection{Analytical Results for Section~\ref{sec:analysis-amplification} (Theorem~\ref{thm:amplification}, Amplification)}
\label{app:analysis-amplification}
In the context of the notation introduced in Section~\ref{sec:analysis-amplification}, recall that the relative error of a (randomly generated) row vector $\WWHatRowCon := \WWHatRowCon^{\ell} = \WWHatRowCon^{+} - \WWHatRowCon^{-} \in \Reals^{1 \times \eta^{\ell-1}}$ with respect to a point $\Input \in \supp$ is defined as
\begin{align*}
\err[\Input]{\WWHatRowCon} &= \errDef = \abs{\frac{\dotp{\WWHatRowCon^\ell}{ \hat{a}^{\ell-1}(\Point)}}{\dotp{\WWRowCon^\ell}{a^{\ell-1}(\Point)}} - 1},
\end{align*}
where $\WWHatRowCon$ is shorthand for $\WWHatRow^\ell$ as before.
Similarly define
\begin{align*}
\errPlus &= \errPlusDef = \abs{\errRatioPlus - 1} \quad \text{and} \\
\errMinus &= \errMinusDef = \abs{\errRatioMinus - 1}.
\end{align*}
The following lemma establishes the expected performance of a randomly constructed coreset with respect to the distribution of points ${\mathcal D}$ conditioned on coreset constructions for previous layers $(\hat{W}^2, \ldots, \hat{W}^{\ell-1})$ that define the realization $\mathbf{\hat a}(\cdot)$ of $\hat{a}^{\ell-1}(\cdot)$.
\lemexpectederror*
\begin{proof}
For clarity of exposition we will omit explicit references to the layer $\ell$ and neuron $i$ as they are assumed to be arbitrary. In the context of this definition, let $\WWHatRowCon = \WWHatRowCon^{+} - \WWHatRowCon^{-} \in \Reals^{1 \times \eta^{\ell-1}}$ and let $\Input \sim {\mathcal D}$. The proof outline is to bound the overall error term $\err{\WWHatRowCon}$ by bounding $\err{\WWHatRowCon^{+}}$ and $\err{\WWHatRowCon^{-}}$.
Let $\mathcal{E}_\Delta(\Input)$ denote the event that the inequality
$$
\DeltaNeuron[\Input] \leq \DeltaNeuronHat
$$
holds and recall that we condition on the event $\mathcal{E}^{\ell-1}$ occurring as in the premise of the lemma.
Let $k = 2 \, (\ell - 2)$ and let $\xi = k \, \epsilonLayer[\ell]$. We begin by observing that conditioned on the events $\mathcal{E}_\Delta$ and $\mathcal{E}^{\ell-1}$, for any constant $u \ge 0$, the following inequality
$$
\max \{\errPlus, \errMinus\} \leq \frac{u \, \xi}{1 + \xi } =: \epsilon_*
$$
implies that
$$
\err{\WWHatRowCon} \leq k \, \epsilonLayer[\ell + 1] \left(u + 1 \right).
$$
Henceforth, we will at times omit the variable $\Input$ when referring to the point-specific variables for clarity of exposition, with the understanding that the results hold for any arbitrary $\Input$.
To see the previous implication explicitly,
observe that conditioning on $\mathcal{E}^{\ell-1}$ implies that we have $\hat a^{\ell-1}(\Input) \in \left(1 \pm 2 \, (\ell - 2) \, \epsilonLayer[\ell] \right) a^{\ell-1} (\Input) = (1 \pm \xi) a^{\ell-1} (\Input)$, which yields by the triangle inequality
\begin{align}
\abs{\tilde z - z} &\leq \abs{\tilde z^+ - z^+} + \abs{\tilde z^- - z^-} \nonumber = \abs{ \sum_{k \in \Wplus} \WWRow[k] \, (\hat a_k - a_k)} + \abs{ \sum_{k \in \Wminus} (-\WWRow[k]) \, (\hat a_k - a_k)} \nonumber \\
&\leq \sum_{k \in \Wplus} \WWRow[k] \, \abs{\hat a_k - a_k} + \sum_{k \in \Wminus} (-\WWRow[k]) \, \abs{ \hat a_k - a_k} \nonumber \\
&\leq \sum_{k \in \Wplus} \WWRow[k] \, \xi \, a_k + \sum_{k \in \Wminus} (-\WWRow[k]) \, \xi \, a_k \nonumber \\
&=\xi \, (z^+ + z^-). \label{eqn:z-tilde-ineq}
\end{align}
Moreover, via a similar triangle-inequality type argument and the premise $\max \{\errPlus, \errMinus\} \leq \epsilon_*$ we obtain
\begin{align}
\abs{\hat z - \tilde z} &\leq \abs{\hat z^+ - \tilde z^+} + \abs{\hat z^- - \tilde z^-} \nonumber \\
&\leq \epsilon_* \left( \tilde z^+ + \tilde z^- \right) \nonumber \\
&\leq \epsilon_* ((1 + \xi)z^+ + (1 + \xi)z^-) &\text{By event $\mathcal{E}^{\ell-1}$} \nonumber \\
&= \epsilon_* \, (1 + \xi) (z^+ + z^-). \label{eqn:z-bar-ineq}
\end{align}
Combining the inequalities~\eqref{eqn:z-tilde-ineq} and \eqref{eqn:z-bar-ineq}, we obtain
\begin{align*}
\abs{\hat z - z} &\leq \abs{\hat z - \tilde z} + \abs{\tilde z - z} \\
&\leq \epsilon_* (1 + \xi) \, (z^+ + z^-) + \xi (z^+ + z^-) \\
&= |z| \, \DeltaNeuron[\Input] \left( \epsilon_* \, ( 1 + \xi) + \xi \right) \\
&= |z| \, \DeltaNeuron[\Input] \left(u \xi + \xi \right) \\
&\leq |z| \, \DeltaNeuronHat \xi \left(u + 1 \right) &\text{By event $\mathcal{E}_\Delta$} \\
&= |z| \, \DeltaNeuronHat k \, \epsilonLayer \, \left(u + 1 \right) &\text{By definition of $\xi = k \epsilonLayer$} \\
&= |z| \, k \, \epsilonLayer[\ell + 1] \left(u + 1 \right), &\text{By definition of $\epsilonLayer[\ell + 1] = \DeltaNeuronHat \, \epsilonLayer$}
\end{align*}
and dividing both sides by $|z|$ yields the bound on $\err{\WWHatRowCon}$.
Let ${\mathcal Z} \subseteq \supp$ denote the set of \emph{well-behaved} points, i.e., the set of points that satisfy the sensitivity inequality with respect to edges in both $\mathcal{W}_+$ and $\mathcal{W}_-$, i.e.,
$$
{\mathcal Z} = \left\{x' \in \supp \, : \, \gHat{x'} \leq C \s \quad \forall{j \in \mathcal{W}_+ \cup \mathcal{W}_-} \right \},
$$
and let $\mathcal{E}_{{\mathcal Z}(\Input)}$ denote the event $\Input \in {\mathcal Z}$. Let $\mathcal{E}_{\GG(\Input)} = \mathcal{E}_{{\mathcal Z}(\Input)} \cap \mathcal{E}_{\Delta(\Input)}$ denote the \emph{good} event that both of the events $\mathcal{E}_{{\mathcal Z}(\Input)}$ and $\mathcal{E}_{\Delta(\Input)}$ occur.
Note that since $\err[\Input]{\WWHatRowCon} \ge 0$ for all $\Input$ and $\WWHatRowCon$, we obtain by the equivalent formulation of expectation of non-negative random variables
\begin{align}
\E[\err[\Input]{\WWHatRowCon} \given \condAmplif \cap \mathcal{E}^{\ell-1} ] &= \int_0^\infty \Pr \left(\err[\Input]{\WWHatRowCon} \ge v \given \condAmplif \cap \mathcal{E}^{\ell-1} \right) \, dv \nonumber \\
&\leq k \, \epsilonLayer[\ell + 1] + \int_{k \, \epsilonLayer[\ell + 1]}^\infty \Pr \left(\err[\Input]{\WWHatRowCon} \ge v \given \condAmplif \cap \mathcal{E}^{\ell-1} \right) \, dv \nonumber \\
&= k \, \epsilonLayer[\ell + 1] \left( 1 + \int_0^\infty \Pr \left(\err[\Input]{\WWHatRowCon} \ge k \epsilonLayer[\ell + 1] (u + 1) \given \condAmplif \cap \mathcal{E}^{\ell-1} \right) \, du \right) \label{eqn:err-integral}
\end{align}
where the last equality follows by the change of variable $v = k \, \epsilonLayer[\ell + 1] (u + 1)$.
Recall that by the argument presented in the proof of Lemma~\ref{lem:pos-weights-approx}, we have by Bernstein's inequality for $t \ge 0$
\begin{align*}
\Pr \left (\errPlus \ge t \given \condAmplif \cap \mathcal{E}^{\ell-1} \right) &\leq 2 \exp \left(-\frac{3 t^2 m}{ S \, C \left(6 + 2 t \right)} \right) \\
&\leq 2 \exp \left(- \frac{8 \, \log(8 \eta / \delta) }{6 + 2 t} \cdot \left(\frac{t}{\epsilonLayer}\right)^2 \right) \\
&= 2 \exp \left(- \frac{a \, t^2}{6 + 2 t} \right)
\end{align*}
where
$$
a := \frac{8 \log(8 \eta / \delta)}{\epsilonLayer^2},
$$
in the last equality; the second inequality follows by the definition of $m = \SampleComplexity$ and the fact that $C = 3 \, \kmax$. Via the same reasoning, we have for $\errMinus$:
$$
\Pr \left (\errMinus \ge t \given \condAmplif \cap \mathcal{E}^{\ell-1} \right) \leq 2 \exp \left(- \frac{a \, t^2}{6 + 2 t} \right).
$$
Hence, combining the implication established at the beginning of the proof with the bounds established above, we invoke the union bound to obtain
\begin{align*}
\Pr \left(\err[\Input]{\WWHatRowCon} \ge k \epsilonLayer[\ell + 1] (u + 1) \given \condAmplif \cap \mathcal{E}^{\ell-1} \right) &\leq \Pr \left(\max \{\errPlus, \errMinus\} > \frac{u \, \xi}{1 + \xi } \given \condAmplif \cap \mathcal{E}^{\ell-1} \right) \\
&\leq \min \left \{ 4 \exp \left(- \frac{a \, t^2}{6 + 2 t} \right), 1 \right\},
\end{align*}
where $t = \frac{u \, \xi}{1 + \xi}$ and as before, $a = \frac{8 \log(8 \eta / \delta)}{\epsilonLayer^2}$. From the expression above, we see that for a value of $t$ satisfying
$$
t \ge \frac{2 \sqrt{a \log 8 + \log^2 2} + \log 4}{a},
$$
we have $ 4 \exp \left(- \frac{a \, t^2}{6 + 2 t} \right) \leq 1$. Bounding the expression above via elementary computations, we have
\begin{align*}
\frac{2 \sqrt{a \log 8 + \log^2 2} + \log 4}{a} &\leq \frac{2 \sqrt{2 a \log 8} + \log 4}{a} \\
&\leq \frac{3 \sqrt{2 a \log 8}}{a} \\
&\leq \frac{7}{\sqrt{a}} \\
&=: t^*.
\end{align*}
Now note that for $t \ge t^*$, we have
\begin{align*}
\exp \left(- \frac{a \, t^2}{6 + 2 t} \right) &\leq \exp \left(- \frac{a \, t^2}{6 \, (t / t^*)+ 2 t} \right) \\
&= \exp \left(- \frac{a \, t \, t^*}{6 + 2 t^*} \right).
\end{align*}
Let
$$
b = \frac{\xi}{1 + \xi}
$$
and recall that $t = \frac{u \, \xi}{1 + \xi} = ub$. Letting
$$
u^* = \frac{t^*}{b} = \frac{7}{b \sqrt{a}},
$$
we reformulate the bound above in terms of $u$ and $u^*$,
\begin{align*}
\exp \left(- \frac{a \, t \, t^*}{6 + 2 t^*} \right) &= \exp \left(- \frac{a b^2 u^* \, u}{6 + 2 u^* b} \right) \\
&= \exp \left(- \left(\frac{a b^2 u^* }{6 + 2 u^* b}\right) u \right) \\
&= \exp(-c \, u),
\end{align*}
where
$$
c = \frac{a b^2 u^* }{6 + 2 u^* b}.
$$
This implies that for $u \ge u^*$, we have
$$
\Pr \left(\err[\Input]{\WWHatRowCon} \ge k \epsilonLayer[\ell + 1] (u + 1) \given \condAmplif \cap \mathcal{E}^{\ell-1} \right) \leq 4 \exp(-c u),
$$
and for $u \in [0, u^*]$, we trivially have
$$
\Pr \left(\err[\Input]{\WWHatRowCon} \ge k \epsilonLayer[\ell + 1] (u + 1) \given \condAmplif \cap \mathcal{E}^{\ell-1} \right) \leq 1.
$$
Putting it all together, we bound the integral from \eqref{eqn:err-integral} as follows
\begin{align*}
\int_0^\infty \Pr \left(\err[\Input]{\WWHatRowCon} \ge k \epsilonLayer[\ell + 1] (u + 1) \given \condAmplif \cap \mathcal{E}^{\ell-1} \right) \, du &\leq \int_0^{u^*} 1 \, du + 4\, \int_{u^*}^\infty \exp(- c u) \, du \\
&= u^* + \frac{4 \exp(-c u^*)}{c} \\
&\leq u^* + \frac{4 \exp(-2)}{c} \\
&\leq u^* + \frac{80 \exp(-2)}{49} \, u^* \\
&\leq 2 u^*,
\end{align*}
where the first inequality follows from the definitions of $u^*$ and $c$, which imply that $u^* b= 7 / \sqrt{a}$ and so by straightforward simplification,
\begin{align*}
c \, u^* &= \left(\frac{ab (u^* b)}{6 + 2 (u^*b)}\right) \, u^* = \left(\frac{7 a b}{6 \sqrt{a} + 14}\right) \, u^* \\
&= \frac{49 \sqrt{a}}{6 \sqrt{a}+ 14} \\
&\ge \frac{49 \sqrt{a}}{6 \sqrt{a}+ 14\sqrt{a}} = \frac{49}{20} > 2,
\end{align*}
where we used the inequality $a = \frac{8 \log(8 \eta / \delta)}{\epsilonLayer^2} \ge 1$. This implies that $\exp(-cu^*) \leq \exp(-2)$. Similarly, the second inequality follows from the calculations above and the definition of $u^*$
\begin{align*}
\frac{1}{c} &\leq \frac{20}{7 b \, \sqrt{a}} = \frac{20}{49} \, u^*.
\end{align*}
Plugging this bound on the integral back into our bound on the conditional expectation \eqref{eqn:err-integral}, we establish
\begin{align*}
\E[\err[\Input]{\WWHatRowCon} \given \condAmplif \cap \mathcal{E}^{\ell-1} ] &\leq k \, \epsilonLayer[\ell + 1] \left( 1 + 2 \, u^*\right).
\end{align*}
To bound the conditional expectation given the event $ (\mathcal{E}_{\GG(\Input)})^\mathsf{c}$, we first observe that since $\WWHatRowCon^+$ and $\WWHatRowCon^-$ are unbiased estimators, we have
$$
\E[\dotp{\WWHatRowCon^+}{\cdot} \given \condAmplif^\compl \cap \mathcal{E}^{\ell-1} , \mathbf{\Input}] = \dotp{\WWRowCon^+}{\cdot} \quad \text{and} \quad \E[\dotp{\WWHatRowCon^-}{\cdot} \given \condAmplif^\compl \cap \mathcal{E}^{\ell-1} , \mathbf{\Input}] = \dotp{\WWRowCon^-}{\cdot}.
$$
Moreover, note that conditioning on event $\mathcal{E}^{\ell-1}$ implies that for any $\Input \in \supp$
\begin{align*}
\abs{\hat{z}(\Input)} &\leq \abs{\tilde z(\Input)} + \xi \left(\tilde z^+ (\Input) + \tilde z^- (\Input) \right),
\end{align*}
where $\xi = k \, \epsilonLayer[\ell]$ as before. Thus, invoking the triangle inequality and applying the definition of $\DeltaNeuron[\Input]$, we bound $\err[\Input]{\WWHatRowCon}$ as
\begin{align*}
\err[\Input]{\WWHatRowCon} &= \abs{\frac{\hat z(\Input)}{z(\Input)} - 1} \leq \abs{\frac{\hat z(\Input)}{z(\Input)}} + 1 \\
&\leq \DeltaNeuron[\Input] \, \frac{\abs{\tilde z(\Input)} + \xi \left(\tilde z^+ (\Input) + \tilde z^- (\Input) \right)}{z^+(\Input) + z^-(\Input)} + 1 \\
&\leq \DeltaNeuron[\Input] \, \frac{\tilde z^+(\Input) + \tilde z^-(\Input) + \xi \left(\tilde z^+ (\Input) + \tilde z^- (\Input) \right)}{z^+(\Input) + z^-(\Input)} + 1 \\
&= \left(1 + \xi \right) \DeltaNeuron[\Input] \, \frac{\tilde z^+ (\Input) + \tilde z^- (\Input)}{z^+(\Input) + z^-(\Input)} + 1.
\end{align*}
Since the above bound holds for any arbitrary $\Input$, we obtain by monotonicity of expectation, law of iterated expectation, and the unbiasedness of our estimators
\begin{align*}
&\E[\err[\Input]{\WWHatRowCon} \given \condAmplif^\compl \cap \mathcal{E}^{\ell-1} ] \\
&\quad \leq \left(1 + \xi \right) \, \E \left[ \DeltaNeuron[\Input] \, \frac{\tilde z^+ (\Input) + \tilde z^- (\Input)}{z^+(\Input) + z^-(\Input)} \given \condAmplif^\compl \cap \mathcal{E}^{\ell-1} \right] + 1 \\
&\quad = \left(1 + \xi \right) \, \E_{\Input} \left[\frac{\DeltaNeuron[\Input]}{z^+(\Input) + z^-(\Input)} \, \E \left[\tilde z^+ (\Input) + \tilde z^- (\Input) \given \condAmplif^\compl \cap \mathcal{E}^{\ell-1} , \Input \right] \given \condAmplif^\compl \cap \mathcal{E}^{\ell-1} \right] + 1 \\
&\quad = \left(1 + \xi \right) \, \E_{\Input} \left[\frac{\DeltaNeuron[\Input]}{z^+(\Input) + z^-(\Input)} \, \left(z^+(\Input) + z^-(\Input) \right) \given \condAmplif^\compl \cap \mathcal{E}^{\ell-1} \right] + 1 \\
&\quad = \left(1 + \xi \right) \, \E_{\Input} \left[\DeltaNeuron[\Input] \given \condAmplif^\compl \cap \mathcal{E}^{\ell-1} \right] + 1 \\
&\quad = \left(1 + \xi \right) \, \E_{\Input} \left[\DeltaNeuron[\Input] \, \mid \, \mathcal{E}_{\GG(\Input)}^\mathsf{c} \right] + 1 \\
&\quad = \left(1 + \xi \right) \, \E_{\Input} \left[\DeltaNeuron[\Input] \, \mid \, \DeltaNeuron[\Input] > \DeltaNeuronHat \right] + 1 \\
&\quad \leq 2 \left(1 + \xi \right) \, \E_{\Input} \left[\DeltaNeuron[\Input] \, \mid \, \DeltaNeuron[\Input] > \DeltaNeuronHat \right].
\end{align*}
By the law of total expectation, we obtain
\begin{align*}
\E[\err[\Input]{\WWHatRowCon} \, \mid \, \mathcal{E}^{\ell-1}] &= \underbrace{\E[\err[\Input]{\WWHatRowCon} \given \condAmplif \cap \mathcal{E}^{\ell-1} ]}_{=A} \Pr(\mathcal{E}_{\GG(\Input)}) + \underbrace{\E[\err[\Input]{\WWHatRowCon} \given \condAmplif^\compl \cap \mathcal{E}^{\ell-1} ]}_{=B} \Pr(\mathcal{E}_{\GG(\Input)}^\mathsf{c}) \\
&= A \, \left(1 - \Pr(\mathcal{E}_{\GG(\Input)}^\mathsf{c}) \right) + B \, \Pr(\mathcal{E}_{\GG(\Input)}^\mathsf{c}) \\
&\leq A_\mathrm{max} \, \left(1 - \Pr(\mathcal{E}_{\GG(\Input)}^\mathsf{c}) \right) + B_\mathrm{max} \, \Pr(\mathcal{E}_{\GG(\Input)}^\mathsf{c}),
\end{align*}
where $A_\mathrm{max}$ and $B_\mathrm{max}$ are the upper bounds on the conditional expectations as established above:
$$
A_\mathrm{max} = k \, \epsilonLayer[\ell + 1] \left( 1 + 2 \, u^*\right) \qquad \text{and} \qquad B_\mathrm{max} = 2 \left(1 + \xi \right) \, \E_{\Input} \left[\DeltaNeuron[\Input] \, \mid \, \DeltaNeuron[\Input] > \DeltaNeuronHat \right].
$$
We now bound $\Pr(\mathcal{E}_{\GG(\Input)}^\mathsf{c})$ by the union bound and applications of Lemma~\ref{lem:sensitivity-approximation} (invoked twice, once for the positive and once for the negative weights) and Lemma~\ref{lem:delta-hat-approx}:
\begin{align*}
\Pr(\mathcal{E}_{\GG(\Input)}^\mathsf{c}) &\leq \Pr(\mathcal{E}_{{\mathcal Z}(\Input)}^\mathsf{c}) + \Pr(\mathcal{E}_{\Delta(\Input)}^\mathsf{c}) \\
&\leq \left( \frac{\delta}{8 \eta} + \frac{\delta}{8 \eta} \right) + \frac{\delta}{4 \eta} \\
&= \frac{\delta}{2 \eta}.
\end{align*}
Moreover, by definition of $a$, $\xi$, $u^*$ we have
\begin{align*}
A_\mathrm{max} &= \xi \DeltaNeuronHat \left(1 + 2 u^*\right) \\
&= \DeltaNeuronHat \left(\xi + \frac{14 (1 + \xi)}{\sqrt{a}} \right) \\
&\leq \DeltaNeuronHat \left(\xi + \frac{5 \, \epsilonLayer \, (1 + \xi)}{\sqrt{\log(8 \eta/\delta)}} \right) \\
&= \epsilonLayer \, \DeltaNeuronHat \left(k + \frac{5 \, (1 + \xi)}{\sqrt{\log(8 \eta/\delta)}} \right).
\end{align*}
Putting it all together, we establish
\begin{align*}
\E[\err[\Input]{\WWHatRowCon} \, \mid \, \mathcal{E}^{\ell-1}] &\leq \left(1 - \frac{\delta}{2 \eta}\right) A_\mathrm{max} + \frac{\delta}{2\eta} B_\mathrm{max} \\
&\leq A_\mathrm{max} + \frac{\delta}{2\eta} B_\mathrm{max} \\
&\leq \epsilonLayer \, \DeltaNeuronHat \left(k + \frac{5 \, (1 + \xi)}{\sqrt{\log(8 \eta/\delta)}} \right) + \frac{\delta \, \left(1 + \xi \right)}{\eta} \, \E_{\Input } \left[\DeltaNeuron[\Input] \, \mid \, \DeltaNeuron[\Input] > \DeltaNeuronHat \right] \\
&= \epsilonLayer \, \DeltaNeuronHat \left(k + \frac{5 \, (1 + k \epsilonLayer)}{\sqrt{\log(8 \eta/\delta)}} \right) + \frac{\delta \, \left(1 + k \epsilonLayer \right)}{\eta} \, \E_{\Input } \left[\DeltaNeuron[\Input] \, \mid \, \DeltaNeuron[\Input] > \DeltaNeuronHat \right]
\end{align*}
and this concludes the proof.
\end{proof}
Next, consider $\tau \in {\mathbb N}_+$ coreset constructions corresponding to the approximations $\{(\WWHatRowCon^\ell)_1, \ldots, (\WWHatRowCon^\ell)_\tau\}$ generated as in Alg.~\ref{alg:sparsify-weights} for layer $\ell \in \br{2,\ldots,L}$ and neuron $i \in [\eta^{\ell}]$. We overload the $\mathrm{err}_\mathcal C(\cdot)$ function so that the error with respect to the set $\TT$ is defined as the mean error, i.e.,
\begin{equation}
\err[\TT]{\WWHatRowCon^\ell} = \frac{1}{|\TT|} \sum_{\Input \in \TT} \err[\Input]{\WWHatRowCon^\ell}.
\end{equation}
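Operationally, the selection rule analyzed by Theorem~\ref{thm:amplification} amounts to keeping, out of the $\tau$ candidate sparsifications, the one with the smallest mean relative error on the held-out set $\TT$. A minimal sketch follows; for readability it evaluates both the sparse and the dense row on the same activations, whereas the analysis measures the sparse row on the compressed activations $\hat a^{\ell-1}(\cdot)$ against the dense row on the original ones.
\begin{verbatim}
import numpy as np

def pick_best(candidates, A_T, w_row):
    """argmin over the candidate sparse rows of err_T, the mean
    relative error over the held-out activations A_T (|T| rows)."""
    z_ref = A_T @ w_row
    def err_T(w_hat):
        return np.mean(np.abs((A_T @ w_hat) / z_ref - 1.0))
    return min(candidates, key=err_T)
\end{verbatim}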
Equipped with this definition, we proceed to prove Theorem~\ref{thm:amplification}.
\thmamplification*
\begin{proof}
Let
$$
\xi = k \epsilonLayer[\ell+1].
$$
We observe that the reparameterization above enables us to invoke Lemma~\ref{lem:neuron-approx} with $\delta' = \frac{\delta}{4 |\PP| \tau}$ to obtain
\begin{align*}
\Pr(\err[\Input]{\WWHatRowCon} \ge \xi \, \mid \, \mathcal{E}^{\ell-1}) \leq \frac{\delta}{4 \, |\PP| \, \tau \, \eta}.
\end{align*}
Now let $\BB$ denote the event that the inequality
$$
\max_{\WWHatRowCon \in \{(\WWHatRowCon)_1, \ldots, (\WWHatRowCon)_\tau\}} \, \max_{\mathbf{\Input} \in \TT } \, \err{\WWHatRowCon} < \xi
$$
holds, where $\TT \subseteq (\PP \setminus \SS)$ is a set of size $\ceil*{8 \log \left( 8 \, \tau \, \eta / \, \delta\right) }$.
By the probabilistic inequality established above, we have by the union bound
\begin{align*}
\Pr(\BB^\mathsf{c} \, \mid \, \mathcal{E}^{\ell-1}) &=
\Pr \left(\max_{\WWHatRowCon \in \{(\WWHatRowCon)_1, \ldots, (\WWHatRowCon)_\tau\}} \, \max_{\mathbf{\Input} \in \TT } \, \err{\WWHatRowCon} \ge \xi \, \mid \, \mathcal{E}^{\ell-1} \right) \\
& \leq \sum_{\WWHatRowCon \in \{(\WWHatRowCon)_1, \ldots, (\WWHatRowCon)_\tau\}} \sum_{\mathbf{\Input} \in \TT} \Pr \left(\err{\WWHatRowCon^\ell} \ge \xi \, \mid \, \mathcal{E}^{\ell-1} \right) \\
&\leq \frac{\tau \, |\TT| \, \delta }{4 \, |\PP| \, \tau \, \eta} \\
&\leq \frac{\delta}{4 \, \eta},
\end{align*}
where the last inequality follows from the fact that $|\TT| \leq |\PP|$.
Conditioning on $\BB$ enables us to reason about the bounded random variables $\err{\WWHatRowCon^\ell}$ for each $\Input \in \TT$ via Hoeffding's inequality to establish that for any $\WWHatRowCon^\ell \in \{(\WWHatRowCon)_1, \ldots, (\WWHatRowCon)_\tau\}$
\begin{align*}
\Pr \left(|\err[\TT]{\WWHatRowCon^\ell} - \E_{\Input | \WWHatRowCon^\ell} \, [\err[\Input]{\WWHatRowCon^\ell} \, \mid \, \WWHatRowCon^\ell, \BB \cap \mathcal{E}^{\ell-1}]| \ge \frac{\xi}{4} \, \mid \, \BB \cap \mathcal{E}^{\ell-1} \right) &\leq 2 \, \exp \left( - \frac{(\xi \, |\TT|)^2}{ 8 \, (\xi)^2 |\TT|} \right) \\
&= 2 \, \exp \left( - \frac{|\TT|}{ 8 } \right)
\end{align*}
where, as stated earlier, we implicitly condition on the realization $\mathbf{\hat a}(\cdot)$ of $\hat{a}^{\ell-1}(\cdot)$ in the expression above and in the subsequent parts of the proof since it can be marginalized out and does not affect our bounds.
Applying the union bound, we further obtain
\begin{align}
&\Pr \left(\max_{\WWHatRowCon^\ell \in \{(\WWHatRowCon^\ell)_1, \ldots, (\WWHatRowCon^\ell)_\tau\}} \, \, |\err[\TT]{\WWHatRowCon^\ell} -\E_{\Input | \WWHatRowCon^\ell} \, [\err[\Input]{\WWHatRowCon^\ell} \, \mid \, \WWHatRowCon^\ell, \BB \cap \mathcal{E}^{\ell-1}]| \ge \frac{\xi}{4} \, \mid \, \BB \cap \mathcal{E}^{\ell-1} \right) \nonumber \\
&\qquad \leq 2 \, \tau \exp \left( - \frac{|\TT|}{ 8 } \right) \leq \frac{\delta}{4 \, \eta}, \label{eqn:hoeffding-bound}
\end{align}
where the last inequality follows by our choice of $|\TT|$:
$$
|\TT| = \ceil*{8 \log \left( 8 \, \tau \, \eta / \, \delta\right) }.
$$
Let $\LLL$ denote the event that
$$
\max_{\WWHatRowCon^\ell \in \{(\WWHatRowCon^\ell)_1, \ldots, (\WWHatRowCon^\ell)_\tau\}} \, \, |\err[\TT]{\WWHatRowCon^\ell} -\E_{\Input | \WWHatRowCon^\ell} \, [\err[\Input]{\WWHatRowCon^\ell} \, \mid \, \WWHatRowCon^\ell, \BB \cap \mathcal{E}^{\ell-1}]| \leq \frac{\xi}{4}.
$$
By law of total probability, we have
\begin{align*}
\Pr( \LLL^\mathsf{c} \, \mid \, \mathcal{E}^{\ell-1}) &= \Pr( \LLL^\mathsf{c} \, | \, \BB, \mathcal{E}^{\ell-1}) \Pr( \BB \, \mid \, \mathcal{E}^{\ell-1}) + \Pr( \LLL^\mathsf{c} \, | \, \BB^\mathsf{c}, \mathcal{E}^{\ell-1}) \Pr( \BB^\mathsf{c} \, \mid \, \mathcal{E}^{\ell-1}) \\
&\leq \frac{\delta}{4 \, \eta} \left(1 - \Pr(\BB^\mathsf{c} \, \mid \, \mathcal{E}^{\ell-1}) \right) + \Pr(\BB^\mathsf{c} \, \mid \, \mathcal{E}^{\ell-1}) \\
&= \frac{\delta}{4 \, \eta} + \Pr(\BB^\mathsf{c} \, \mid \, \mathcal{E}^{\ell-1}) \left(1 - \frac{\delta}{4 \, \eta} \right) \\
&\leq \frac{\delta}{4 \, \eta} + \frac{\delta}{4 \, \eta}\\
&= \frac{\delta}{2 \, \eta}.
\end{align*}
Now let $\WWHatRowCon^{\dagger}$ denote the \emph{true} minimizer of $\E_{\Input | \WWHatRowCon^\ell} \, [\err[\Input]{\WWHatRowCon^\ell} \, \mid \, \WWHatRowCon^\ell, \mathcal{E}^{\ell-1}]$ among $\WWHatRowCon^\ell \in \{(\WWHatRowCon^\ell)_1, \ldots, (\WWHatRowCon^\ell)_\tau\}$ (note that it is not necessarily the case that $\WWHatRowCon^{\dagger} = \WWHatRowCon^*$), i.e.,
$$
\WWHatRowCon^{\dagger} = \argmin_{\WWHatRowCon \in \{(\WWHatRowCon)_1, \ldots, (\WWHatRowCon)_\tau\}} \E_{\Input | \WWHatRowCon} \, [\err[\Input]{\WWHatRowCon} \, \mid \, \WWHatRowCon, \mathcal{E}^{\ell-1}].
$$
For each constructed $(\WWHatRowCon)_t, \, t \in [\tau]$, invoking Markov's inequality and the result of Lemma~\ref{lem:expected-error} corresponding to the adjusted size of $\SS$ and sample complexity $m$ yields
\begin{align*}
&\Pr \left(\E_{\Input | (\WWHatRowCon)_t} \, [\err[\Input]{(\WWHatRowCon)_t} \, \mid \, (\WWHatRowCon^\ell)_t, \mathcal{E}^{\ell-1}] \ge \frac{\xi}{4} \, \mid \, \mathcal{E}^{\ell-1} \right) \\
&\qquad \qquad \leq \frac{4 \, \E [\mathrm{err}_{(\WWHatRowCon^\ell)_t}(\Input) \, \mid \, \mathcal{E}^{\ell-1}]}{\xi} \\
&\qquad \qquad \leq \frac{4 \epsilonLayer \, \DeltaNeuronHat}{\xi} \left(k + \frac{5 \, (1 + k \epsilonLayer)}{\sqrt{\log(8 \eta/\delta)}} \right) + \frac{4 \, \delta \, \left(1 + k \epsilonLayer \right)}{\xi \, \eta} \, \E_{\Input \sim {\mathcal D}} \left[\DeltaNeuron[\Input] \, \mid \, \DeltaNeuron[\Input] > \DeltaNeuronHat \right] \\
&\qquad \qquad \leq \frac{9}{10},
\end{align*}
where the last inequality follows for $\frac{\delta}{\eta}$ small enough. Thus, the event $\E[\mathrm{err}_{\WWHatRowCon^{\dagger}}(\Input) \, \mid \, \WWHatRowCon^{\dagger}, \mathcal{E}^{\ell-1}] \ge \xi/4$ occurs if and only if we fail (i.e., exceed $\xi/4$ expected error) in \emph{all} $\tau$ trials. This implies that
\begin{align*}
\Pr \left(\E_{\Input | \WWHatRowCon^{\dagger}}[\mathrm{err}_{\WWHatRowCon^{\dagger}}(\Input) \, | \, \WWHatRowCon^{\dagger}, \mathcal{E}^{\ell-1}] \ge \frac{\xi}{4} \, \mid \, \mathcal{E}^{\ell-1}\right) &= \Pr \left(\forall{t \in [\tau]} : \, \E_{\Input | (\WWHatRowCon^\ell)_t} \, [\err[\Input]{(\WWHatRowCon^\ell)_t} \, \mid \, (\WWHatRowCon^\ell)_t, \mathcal{E}^{\ell-1}] \ge \frac{\xi}{4} \, \mid \, \mathcal{E}^{\ell-1} \right) \\
&\leq \left(\frac{9}{10}\right)^\tau \\
&\leq \frac{\delta}{4 \, \eta},
\end{align*}
where the last inequality follows by our choice of $\tau$:
$$
\tau = \ceil*{\frac{\log(4 \, \eta / \delta)}{\log(10/9)}}.
$$
Let $\GG$ denote the event that $\E_{\Input | \WWHatRowCon^{\dagger}}[\mathrm{err}_{\WWHatRowCon^{\dagger}}(\Input) \, | \, \WWHatRowCon^{\dagger}, \mathcal{E}^{\ell-1}] \leq \frac{\xi}{4}$ and recall that
$$
\WWHatRowCon^* = \argmin_{\WWHatRowCon^\ell \in \{(\WWHatRowCon^\ell)_1, \ldots, (\WWHatRowCon^\ell)_\tau\}} \err[\TT]{\WWHatRowCon^\ell}.
$$
On the event $\BB \cap \LLL \cap \GG$, i.e., when the events $\BB$, $\LLL$, and $\GG$ all occur, we obtain
\begin{align*}
&\E_{\Input | \WWHatRowCon^*} \, [\mathrm{err}_{\WWHatRowCon^*}(\Input) \, \mid \, \WWHatRowCon^*, \mathcal{E}^{\ell-1}] \\
&\quad = \E_{\TT | \WWHatRowCon^*}[\err[\TT]{\WWHatRowCon^*} \, \mid \, \WWHatRowCon^*, \mathcal{E}^{\ell-1}] & \\
&\quad= \E[\err[\TT]{\WWHatRowCon^*} \, \mid \, \WWHatRowCon^*, \BB, \mathcal{E}^{\ell-1}] \Pr(\BB \, \mid \, \mathcal{E}^{\ell-1}) \\
&\quad\quad\quad\quad\quad \quad + \E [\err[\TT]{\WWHatRowCon^*} \, \mid \, \WWHatRowCon^*, \BB^\mathsf{c}, \mathcal{E}^{\ell-1}] \Pr(\BB^\mathsf{c} \, \mid \, \mathcal{E}^{\ell-1}) \\
&\quad \leq \E [\mathrm{err}_{\WWHatRowCon^*}(\TT) \, \mid \, \WWHatRowCon^*, \BB, \mathcal{E}^{\ell-1}] + \E \, [\mathrm{err}_{\WWHatRowCon^*}(\Input) \, \mid \, \WWHatRowCon^*, \BB^\mathsf{c}, \mathcal{E}^{\ell-1}] \, \left(\frac{\delta}{4 \, \eta} \right) & \\
&\quad \leq \E \, [\mathrm{err}_{\WWHatRowCon^*}(\TT) \, \mid \, \WWHatRowCon^*, \BB, \mathcal{E}^{\ell-1}] + \frac{\xi}{4} &\text{for $\frac{\delta}{\eta}$ small enough} \,\, \\
&\quad \leq \mathrm{err}_{\WWHatRowCon^*}(\TT) + \frac{\xi}{2} &\text{On the event $\BB \cap \LLL$} \\
&\quad \leq \mathrm{err}_{\WWHatRowCon^{\dagger}}(\TT) + \frac{\xi}{2} &\text{By definition of $\WWHatRowCon^*$} \\
&\quad \leq \E_{\TT | \WWHatRowCon^\dagger} \, [\mathrm{err}_{\WWHatRowCon^{\dagger}}(\TT) \, | \, \WWHatRowCon^{\dagger}, \, \BB, \mathcal{E}^{\ell-1}] + \frac{3 \, \xi}{4} &\text{On the event $\BB \cap \LLL$} \\
&\quad \leq \E_{\Input \, \mid \, \WWHatRowCon^{\dagger}} [\mathrm{err}_{\WWHatRowCon^{\dagger}}(\Input) \, | \, \WWHatRowCon^{\dagger}, \mathcal{E}^{\ell-1}] + \frac{3 \, \xi}{4} & \\
&\quad \leq \xi &\text{On the event $\GG$},
\end{align*}
where in the second to last inequality, we used the fact that conditioning on $\BB$ leads to a decrease in the expected value relative to the unconditional expectation.
By the union bound over the failure events, we have that the sequence of inequalities above holds with probability at least $1- \delta/ \eta$:
\begin{align*}
\Pr \left( \E_{\Input | \WWHatRowCon^*} \, [\err[\Input]{\WWHatRowCon^*} \, \mid \, \WWHatRowCon^*, \mathcal{E}^{\ell-1}] \leq \xi \, \mid \, \mathcal{E}^{\ell-1} \right) &\ge \Pr( \BB \cap \LLL \cap \GG \, \mid \, \mathcal{E}^{\ell-1}) \\
&= 1 - \Pr \left( (\BB \cap \LLL \cap \GG)^\mathsf{c} \, \mid \, \mathcal{E}^{\ell-1} \right) \\
&\geq 1 - \left( \Pr(\BB^\mathsf{c} \, \mid \, \mathcal{E}^{\ell-1}) + \Pr(\LLL^\mathsf{c} \, \mid \, \mathcal{E}^{\ell-1}) + \Pr(\GG^\mathsf{c} \, \mid \, \mathcal{E}^{\ell-1}) \right) \\
&\geq 1 - \left( \frac{\delta}{4 \, \eta} + \frac{\delta}{2 \, \eta} + \frac{\delta}{4 \, \eta} \right) \\
&= 1 - \frac{\delta}{\eta},
\end{align*}
and this establishes the theorem.
\end{proof}
\section{Proofs of the Analytical Results in Section~\ref{sec:analysis}}
\label{sec:appendix}
This section includes the full proofs of the technical results given in Sec.~\ref{sec:analysis}.
\input{appendix_empirical}
\input{appendix_sampling}
\ificlr
\else
\input{appendix_amplification}
\fi
\subsection{Analytical Results for Section~\ref{sec:analysis_positive} (Importance Sampling Bounds for Positive Weights)}
\label{app:analysis_empirical}
\subsubsection{Order Statistic Sampling}
We now establish two technical results that quantify the accuracy of our approximations of edge importance (i.e., sensitivity).
\begin{lemma}
\label{lem:order-statistic-sampling}
Let $K, K' > 0$ be universal constants and let ${\mathcal D}$ be a distribution with CDF $F(\cdot)$ satisfying $F(\nicefrac{M}{K}) \leq \exp(-1/K')$, where $M = \min \{x \in [0,1] : F(x) = 1\}$. Let $\PP = \{X_1, \ldots, X_n\}$ be a set of $n = |\PP|$ i.i.d. samples each drawn from the distribution ${\mathcal D}$. Let $X_{n+1} \sim {\mathcal D}$ be an i.i.d. sample. Then,
\begin{align*}
\Pr \left(K \, \max_{X \in \PP} X < X_{n+1} \right) \leq \exp(-n/K').
\end{align*}
\end{lemma}
\begin{proof}
Let $X_\mathrm{max} = \max_{X \in \PP} X$; then,
\begin{align*}
\Pr(K \, X_\mathrm{max} < X_{n+1}) &= \int_{0}^M \Pr(X_\mathrm{max} < \nicefrac{x}{K} | X_{n+1} = x) \, d \Pr(x) \\
&= \int_{0}^M \Pr\left(X < \nicefrac{x}{K} \right)^n \, d \Pr(x) &\text{since $X_1, \ldots, X_n$ are i.i.d.} \\
&\leq \int_{0}^M F(\nicefrac{x}{K})^n \, d \Pr(x) &\text{where $F(\cdot)$ is the CDF of $X \sim {\mathcal D}$} \\
&\leq F(\nicefrac{M}{K})^n \int_{0}^M \, d \Pr(x) &\text{by monotonicity of $F$} \\
&= F(\nicefrac{M}{K})^n \\
&\leq \exp(-n/K') &\text{CDF Assumption},
\end{align*}
and this completes the proof.
\end{proof}
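As a sanity check, the bound can be verified by Monte Carlo simulation for a concrete distribution. For instance, the uniform distribution on $[0,1]$ satisfies the CDF condition with $K = K' = 2$, since then $M = 1$ and $F(\nicefrac{1}{2}) = \nicefrac{1}{2} \leq \exp(-\nicefrac{1}{2})$; the simulation below is illustrative only.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
K, K_prime, n, trials = 2.0, 2.0, 10, 200_000

X = rng.random((trials, n))                   # n i.i.d. Uniform[0,1] samples
X_new = rng.random(trials)                    # the (n+1)-st sample
empirical = np.mean(K * X.max(axis=1) < X_new)
print(empirical, "<=", np.exp(-n / K_prime))  # ~9e-5 <= ~6.7e-3
\end{verbatim}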
We now proceed to establish that the notion of empirical sensitivity is a good approximation for the relative importance. For this purpose, let the relative importance $\gHat{x}$ of an edge $j$ after the previous layers have already been compressed be
$$
\gHat{x} = \gHatDef{x}.
$$
\begin{lemma}[Empirical Sensitivity Approximation]
\label{lem:sensitivity-approximation}
Let $\epsilon \in (0,1/2)$, $\delta \in (0,1)$, $\ell \in \br{2,\ldots,L}$, and consider a set $\SS = \{\Input_1, \ldots, \Input_n\} \subseteq \PP$ of size $|\SS| \ge \ceil*{\kPrime \logTerm }$. Then, conditioned on the event $\mathcal{E}_{\nicefrac{1}{2}}$ occurring, i.e., $\hat{a}(\Input) \in (1 \pm \nicefrac{1}{2}) a(\Input)$,
$$
\Pr_{\Input \sim {\mathcal D}} \left(\exists{j \in \mathcal{W}} : C \, \s < \gHat{x} \, \mid \, \mathcal{E}_{\nicefrac{1}{2}} \right) \leq \frac{\delta} {8 \, \eta},
$$
where $C = 3 \, \kmax$ and $\mathcal{W} \subseteq [\eta^{\ell-1}]$.
\end{lemma}
\begin{proof}
Consider an arbitrary $j \in \mathcal{W}$ and $x' \in \SS$ corresponding to $\g{x'}$ with CDF $\cdf{\cdot}$ and recall that $M = \min \{x \in [0,1] : \cdf{x} = 1\}$ as in Assumption~\ref{asm:cdf}. Note that by Assumption~\ref{asm:cdf}, we have
$$
F(\nicefrac{M}{K}) \leq \exp(-1/K'),
$$
and so the random variables $\g{x'}$ for $x' \in \SS$ satisfy the CDF condition required by Lemma~\ref{lem:order-statistic-sampling}. Now let $\mathcal{E}$ be the event that $K \, \s < \g{x}$ holds. Applying Lemma~\ref{lem:order-statistic-sampling}, we obtain
\begin{align*}
\Pr( \mathcal{E}) &= \Pr(K \, \s < \g{x} ) = \Pr \left(K \, \max_{\Input' \in \SS} \g{x'} < \g{x} \right) \leq \exp(-|\SS|/K').
\end{align*}
Now let $\hat{\mathcal{E}}$ denote the event that the inequality $C \s < \gHat{x} = \gHatDef{x}$ holds and note that the right side of the inequality is defined with respect to $\gHat{x}$ and not $\g{x}$. Observe that since we conditioned on the event $\mathcal{E}_{\nicefrac{1}{2}}$, we have
that
$
\hat{a}(\Input) \in (1 \pm \nicefrac{1}{2}) a(\Input).
$
Now assume that event $\hat{\mathcal{E}}$ holds and note that by the implication above, we have
\begin{align*}
C \, \s < \gHat{x} &= \gHatDef{x} \leq \frac{ (1 + \nicefrac{1}{2}) \WWRowCon_j \, a_{j}(x)}{(1 - \nicefrac{1}{2}) \sum_{k \in \mathcal{W}} \WWRowCon_k \, a_{k}(x) } \\
&\leq 3 \cdot \gDef{x} = 3 \, \g{x}.
\end{align*}
where the second inequality follows from the fact that $\nicefrac{1 + 1/2}{1 - 1/2} = 3$. Moreover, since we know that $C \ge 3 K$, we conclude that if event $\hat{\mathcal{E}}$ occurs, we obtain the inequality
$$
3 \, K \, \s < 3 \, \g{x} \Leftrightarrow K \, \s < \g{x},
$$
which is precisely the definition of event $\mathcal{E}$. Thus, we have shown that, conditioned on $\mathcal{E}_{\nicefrac{1}{2}}$, the event $\hat{\mathcal{E}}$ implies the event $\mathcal{E}$, which yields
\begin{align*}
\Pr(\hat{\mathcal{E}} \, \mid \, \mathcal{E}_{\nicefrac{1}{2}}) &= \Pr(C \, \s < \gHat{x} \, \mid \, \mathcal{E}_{\nicefrac{1}{2}}) \leq \Pr(\mathcal{E}) \\
&\leq \exp(-|\SS|/K').
\end{align*}
Since our choice of $j \in \mathcal{W}$ was arbitrary, the bound applies for any $j \in \mathcal{W}$. Thus, we have by the union bound
\begin{align*}
\Pr(\exists{j \in \mathcal{W}} \,: C \, \s < \gHat{x} \, \mid \, \mathcal{E}_{\nicefrac{1}{2}}) &\leq \sum_{j \in \mathcal{W}} \Pr(C \, \s < \gHat{x} \, \mid \, \mathcal{E}_{\nicefrac{1}{2}}) \leq \abs{\mathcal{W}} \exp(-|\SS|/K') \\
&= \left(\frac{|\mathcal{W}|}{\eta^*} \right) \frac{\delta}{8 \eta} \leq \frac{\delta}{8 \eta}.
\end{align*}
\end{proof}
In practice, the set $\SS$ referenced above is chosen to be a subset of the original data points, i.e., $\SS \subseteq \PP$ (see Alg.~\ref{alg:main}, Line~\ref{lin:s-construction}). Thus, we henceforth assume that the size of the input points $|\PP|$ is large enough (or the specified parameter $\delta \in (0,1)$ is sufficiently large) so that $|\PP| \ge |\SS|$.
\subsubsection{Proof of Lemma~\ref{lem:pos-weights-approx}}
We now state the proof of Lemma~\ref{lem:pos-weights-approx}. In this subsection, we establish approximation guarantees under the assumption that the weights are strictly positive. The next subsection will then relax this assumption to conclude that a neuron's value can be approximated well even when the weights are not all positive.
\lemposweightsapprox*
\begin{proof}
Let $\epsilon, \delta \in (0,1)$ be arbitrary. Moreover, let $\mathcal C$ be the coreset with respect to the weight indices $\mathcal{W} \subseteq [\eta^{\ell-1}]$ used to construct $\WWHatRowCon$. Note that as in \textsc{Sparsify}, $\mathcal C$ is a multiset sampled from $\mathcal{W}$ of size
$
m = \SampleComplexity[\epsilon],
$
where $S = \sum_{j \in \mathcal{W}} \s $ and $\mathcal C$ is sampled according to the probability distribution $q$ defined by
$$
\qPM{j} = \frac{\s}{S} \qquad \forall{j \in \mathcal{W}}.
$$
Let $\mathbf{\hat a}(\cdot)$ be an arbitrary realization of the random variable $\hat{a}^{\ell-1}(\cdot)$, let $\mathbf{\Input}$ be a realization of $\Input \sim {\mathcal D}$, and let
$$
\hat{z} = \sum_{k \in \mathcal{W}} \WWHatRow[ k] \, \mathbf{\hat a}_k(\mathbf{\Input})
$$
be the approximate intermediate value corresponding to the sparsified matrix $\WWHatRowCon$ and let
$$
\tilde z = \sum_{k \in \mathcal{W}} \WWRow[ k] \, \mathbf{\hat a}_k(\mathbf{\Input}).
$$
Now define $\mathcal{E}$ to be the (favorable) event that $\hat z$ $\epsilon$-approximates $\tilde z$, i.e., $\hat z \in (1 \pm \epsilon) \tilde z$. We will now show that the complement of this event, $\mathcal{E}^\mathsf{c}$, occurs with sufficiently small probability. Let ${\mathcal Z} \subseteq \supp$ be the set of \emph{well-behaved} points, defined implicitly with respect to neuron $i \in [\eta^\ell]$ and realization $\mathbf{\hat a}$ as follows:
$$
{\mathcal Z} = \left\{x' \in \supp \, : \, \gHat{x'} \leq C \s \quad \forall{j \in \mathcal{W}} \right \},
$$
where $C = 3 \, \kmax$. Let $\mathcal{E}_{{\mathcal Z}}$ denote the event that $\mathbf{\Input} \in {\mathcal Z}$ where $\mathbf{\Input}$ is a realization of $\Input \sim {\mathcal D}$.
\paragraph{Conditioned on $\mathcal{E}_{\mathcal Z}$, event $\mathcal{E}^\mathsf{c}$ occurs with probability $\leq \frac{\delta}{4 \eta}$:}
Let $\mathbf{\Input}$ be a realization of $\Input \sim {\mathcal D}$ such that $\mathbf{\Input} \in {\mathcal Z}$ and let $\mathcal C = \{c_1, \ldots, c_{m}\}$ be $m$ samples from $\mathcal{W}$ with respect to distribution $\qPM{}$ as before. Define $m$ random variables $\T[c_1], \ldots, \T[c_m]$ such that for all $j \in \mathcal C$
\begin{align}
\label{eqn:tplu-defn}
\T[j] &= \frac{\WWRow[j] \, \mathbf{\hat a}_{j} (\mathbf{\Input}) }{m \, \qPM{j}}= \frac{S \, \WWRow[ j] \, \mathbf{\hat a}_{j} (\mathbf{\Input}) }{m \, \s[j]}.
\end{align}
For any $j \in \mathcal C$, we have for the conditional expectation of $\T[j]$:
\begin{align*}
\E [\T[j] \given \mathbf{\hat a}(\cdot), \mathbf{\Input}, \mathcal{E}_{\mathcal Z}, \mathcal{E}_{\nicefrac{1}{2}}] &= \sum_{k \in \mathcal{W}} \frac{\WWRow[k] \, \mathbf{\hat a}_{k} (\mathbf{\Input})}{m \, \qPM{k}} \cdot \qPM{k} \\
&= \sum_{k \in \mathcal{W}} \frac{\WWRow[k] \, \mathbf{\hat a}_k (\mathbf{\Input})}{m} \\
&= \frac{\tilde z}{m},
\end{align*}
where we use the expectation notation $\E[\cdot]$ with the understanding that it denotes the conditional expectation $\E \nolimits_{\CC \given \hat a^{l-1}(\cdot), \, \Point}\,[\cdot]$. We also note that conditioning on the event $\mathcal{E}_{\mathcal Z}$ (i.e., the event that $\mathbf{\Input} \in {\mathcal Z}$) does not affect the expectation of $\T[j]$.
Let $\T = \sum_{j \in \mathcal C} \T[j] = \hat z$ denote our approximation and note that by linearity of expectation,
$$
\E[\T \given \mathbf{\hat a}(\cdot), \mathbf{\Input}, \mathcal{E}_{\mathcal Z}, \mathcal{E}_{\nicefrac{1}{2}} ] = \sum_{j \in \mathcal C} \E [\T[j] \given \mathbf{\hat a}(\cdot), \mathbf{\Input}, \mathcal{E}_{\mathcal Z}, \mathcal{E}_{\nicefrac{1}{2}} ] = \tilde z.
$$
Thus, $\hat z = \T$ is an unbiased estimator of $\tilde z$ for any realization $\mathbf{\hat a}(\cdot)$ and $\mathbf{\Input}$; thus, we will henceforth refer to $\E[\T \, \mid \, \mathbf{\hat a}(\cdot), \, \mathbf{\Input} ]$ as simply $\tilde z$ for brevity.
For the remainder of the proof we will assume that $\tilde z > 0$; otherwise, $\tilde z = 0$ implies that $\T[j] = 0$ for all $j \in \mathcal C$ almost surely, which follows by the fact that $\T[j] \ge 0$ for all $j \in \mathcal C$ by definition of $\mathcal{W}$ and the non-negativity of the ReLU activation. Therefore, in the case that $\tilde z = 0$, it follows that
$$
\Pr (|\hat{z} - \tilde z| > \epsilon \tilde z \given \mathbf{\hat a}(\cdot), \mathbf{\Input}) = \Pr(\hat{z} > 0 \given \mathbf{\hat a}(\cdot), \mathbf{\Input}) = 0,
$$
which trivially yields the statement of the lemma,
where in the above expression, $\Pr(\cdot)$ is short-hand for the conditional probability $\Pr_{\WWHatRowCon \, \mid \, \hat a^{l-1}(\cdot), \, \Input}(\cdot)$.
We now proceed with the case where $\tilde z > 0$ and leverage the fact that $\mathbf{\Input} \in {\mathcal Z}$\footnote{Since we conditioned on the event $\mathcal{E}_{\mathcal Z}$.} to establish that for all $j \in \mathcal{W}$:
\begin{align}
C \s &\ge \gHat{\mathbf{\Input}} = \frac{\WWRow[j] \, \mathbf{\hat a}_{j}(\mathbf{\Input})}{\sum_{k \in \mathcal{W}} \WWRow[ k] \, \mathbf{\hat a}_k(\mathbf{\Input})} = \frac{\WWRow[j] \, \mathbf{\hat a}_{j}(\mathbf{\Input})}{\tilde z} \nonumber \\
\Leftrightarrow \quad \frac{\WWRow[j] \, \mathbf{\hat a}_{j}(\mathbf{\Input})}{\s[j]} &\leq C \, \tilde z. \label{eqn:sens-inequality}
\end{align}
Utilizing the inequality established above, we bound the conditional variance of each $\T[j], \, j \in \mathcal C$ as follows
\begin{align*}
\Var(\T[j] \given \mathbf{\hat a}(\cdot), \mathbf{\Input}, \mathcal{E}_{\mathcal Z}, \mathcal{E}_{\nicefrac{1}{2}}) &\leq \E[(\T[j])^2 \given \mathbf{\hat a}(\cdot), \mathbf{\Input}, \mathcal{E}_{\mathcal Z}, \mathcal{E}_{\nicefrac{1}{2}}] \\
&= \sum_{k \in \mathcal{W}} \frac{(\WWRow[k] \, \mathbf{\hat a}_{k}(\mathbf{\Input}))^2}{(m \, \qPM{k})^2} \cdot \qPM{k} \\
&= \frac{S}{m^2} \, \sum_{k \in \mathcal{W}} \frac{(\WWRow[k] \, \mathbf{\hat a}_{k}(\mathbf{\Input}))^2}{\s[k]} \\
&\leq \frac{S}{m^2} \, \left(\sum_{k \in \mathcal{W}}\WWRow[k] \, \mathbf{\hat a}_{k}(\mathbf{\Input}) \right) C\, \tilde z \\
&= \frac{S \, C \, \tilde z^2}{m^2},
\end{align*}
where $\Var(\cdot)$ is short-hand for $\Var \nolimits_{\CC \given \hat a^{l-1}(\cdot), \, \Point}\, (\cdot)$.
Since $\T$ is a sum of (conditionally) independent random variables, we obtain
\begin{align}
\label{eqn:varplu-bound}
\Var(\T \given \mathbf{\hat a}(\cdot), \mathbf{\Input}, \mathcal{E}_{\mathcal Z}, \mathcal{E}_{\nicefrac{1}{2}}) &= m \Var(\T[j] \given \mathbf{\hat a}(\cdot), \mathbf{\Input}, \mathcal{E}_{\mathcal Z}, \mathcal{E}_{\nicefrac{1}{2}}) \\
&\leq \frac{S \, C \, \tilde z^2}{m}. \nonumber
\end{align}
Now, for each $j \in \mathcal C$ let
$$
\TTilde[j] = \T[j] - \E [\T[j] \given \mathbf{\hat a}(\cdot), \mathbf{\Input}, \mathcal{E}_{\mathcal Z}, \mathcal{E}_{\nicefrac{1}{2}}] = \T[j] - \frac{\tilde z}{m},
$$
and let $\TTilde = \sum_{j \in \mathcal C} \TTilde[j]$. Note that by the fact that we conditioned on the realization $\mathbf{\Input}$ of $\Input$ such that $\mathbf{\Input} \in {\mathcal Z}$ (event $\mathcal{E}_{\mathcal Z}$), we obtain by definition of $\T[j]$ in \eqref{eqn:tplu-defn} and the inequality \eqref{eqn:sens-inequality}:
\begin{equation}
\label{eqn:tplu-bound}
\T[j] = \frac{S \, \WWRow[j] \, \mathbf{\hat a}_{j} (\mathbf{\Input}) }{m \, \s[j]} \leq \frac{S \, C \, \tilde z}{m}.
\end{equation}
We also have that $S \ge 1$ by definition. More specifically, using the fact that the maximum over a set is at least the average and rearranging sums, we obtain
\begin{align*}
S &= \sum_{j \in \mathcal{W}} \s = \sum_{j \in \mathcal{W}} \max_{\mathbf{\Input}' \in \SS} \,\, \g{\mathbf{\Input}'} \\
&\ge \frac{1}{|\SS|} \sum_{j \in \mathcal{W}} \sum_{\mathbf{\Input}' \in \SS} \g{\mathbf{\Input}'} = \frac{1}{|\SS|} \sum_{\mathbf{\Input}' \in \SS} \sum_{j \in \mathcal{W}} \g{\mathbf{\Input}'} \\
&= \frac{1}{|\SS|} \sum_{\mathbf{\Input}' \in \SS} 1 = 1.
\end{align*}
Thus, combining the inequality established in \eqref{eqn:tplu-bound} with the fact that $S \ge 1$, we obtain an upper bound on the absolute value of the centered random variables:
\begin{equation}
\label{eqn:tplutilde-bound}
|\TTilde[j]| = \left|\T[j] - \frac{\tilde z}{m}\right| \leq \frac{S \, C \, \tilde z}{m} = M,
\end{equation}
which follows from the fact that:
\paragraph{if $\T[j] \ge \frac{\tilde z}{m}$:} Then, by our bound in \eqref{eqn:tplu-bound} and the fact that $\frac{\tilde z}{m} \ge 0$, it follows that
\begin{align*}
\abs{\TTilde[j]} &= \T[j] - \frac{\tilde z}{m} \leq \frac{S \, C \, \tilde z}{m} - \frac{\tilde z}{m} \leq \frac{S \, C \, \tilde z}{m}.
\end{align*}
\paragraph{if $\T[j] < \frac{\tilde z}{m}$:} Then, using the fact that $\T[j] \ge 0$ and $S \ge 1$, we obtain
\begin{align*}
\abs{\TTilde[j]} &= \frac{\tilde z}{m} - \T[j] \leq \frac{\tilde z}{m} \leq \frac{S \, C \, \tilde z}{m}.
\end{align*}
Applying Bernstein's inequality to both $\TTilde$ and $-\TTilde$, we have by symmetry and the union bound,
\begin{align*}
\Pr (\mathcal{E}^\mathsf{c} \given \mathbf{\hat a}(\cdot), \mathbf{\Input}, \mathcal{E}_{\mathcal Z}, \mathcal{E}_{\nicefrac{1}{2}}) &= \Pr \left(\abs{\T - \tilde z} \ge \epsilon \tilde z \given \mathbf{\hat a}(\cdot), \mathbf{\Input}, \mathcal{E}_{\mathcal Z}, \mathcal{E}_{\nicefrac{1}{2}} \right) \\
&\leq 2 \exp \left(-\frac{\epsilon^2 \tilde z^2}{2 \Var(\T \given \mathbf{\hat a}(\cdot), \mathbf{\Input}) + \frac{2 \, \epsilon \, \tilde z M}{3}}\right) \\
&\leq 2 \exp \left(-\frac{\epsilon^2 \tilde z^2}{ \frac{2 S C \, \tilde z^2}{m} + \frac{2 S \, C \, \tilde z^2}{3 m}} \right) \\
&= 2 \exp \left(-\frac{3 \, \epsilon^2 \, m}{8 S \, C } \right) \\
&\leq \frac{\delta}{4 \eta },
\end{align*}
where the second inequality follows by our upper bounds on $\Var(\T \given \mathbf{\hat a}(\cdot), \mathbf{\Input})$ and $\abs{\TTilde[j]}$ and the fact that $\epsilon \in (0,1)$, and the last inequality follows by our choice of $m = \SampleComplexity[\epsilon]$. This establishes that for any realization $\mathbf{\hat a}(\cdot)$ of $\hat a^{l-1}(\cdot)$ and a realization $\mathbf{\Input}$ of $\Input$ satisfying $\mathbf{\Input} \in {\mathcal Z}$, the event $\mathcal{E}^\mathsf{c}$ occurs with probability at most $\frac{\delta}{4 \eta}$.
\paragraph{Removing the conditioning on $\mathcal{E}_{\mathcal Z}$:}
We have by law of total probability
\begin{align*}
\Pr (\mathcal{E} \, \mid \, \mathbf{\hat a}(\cdot), \, \mathcal{E}_{\nicefrac{1}{2}})
&\ge \int_{\mathbf{\Input} \in {\mathcal Z}}
\Pr(\mathcal{E} \given \mathbf{\hat a}(\cdot), \mathbf{\Input}, \mathcal{E}_{\mathcal Z}, \mathcal{E}_{\nicefrac{1}{2}}) \Pr_{\Input \sim {\mathcal D}} (\Input = \mathbf{\Input} \, \mid \, \mathbf{\hat a}(\cdot), \, \mathcal{E}_{\nicefrac{1}{2}}) \, d \mathbf{\Input} \\
&\ge \left(1 - \frac{\delta}{4 \eta }\right) \int_{\mathbf{\Input} \in {\mathcal Z}} \Pr_{\Input \sim {\mathcal D}} (\Input = \mathbf{\Input} \, \mid \, \mathbf{\hat a}(\cdot), \, \mathcal{E}_{\nicefrac{1}{2}}) \, d \mathbf{\Input} \\
&= \left(1 - \frac{\delta}{4 \eta }\right) \Pr_{\Input \sim {\mathcal D}} (\mathcal{E}_{\mathcal Z} \, \mid \, \mathbf{\hat a}(\cdot), \, \mathcal{E}_{\nicefrac{1}{2}}) \\
&\ge \left(1 - \frac{\delta}{4 \eta }\right) \left(1 - \frac{\delta } {8 \eta }\right) \\
&\ge 1 - \frac{3 \delta}{8 \eta},
\end{align*}
where the second-to-last inequality follows from the fact that $\Pr (\mathcal{E}^\mathsf{c} \given \mathbf{\hat a}(\cdot), \mathbf{\Input}, \mathcal{E}_{\mathcal Z}, \mathcal{E}_{\nicefrac{1}{2}}) \leq \frac{\delta}{4 \eta }$ as was established above and the last inequality follows by Lemma~\ref{lem:sensitivity-approximation}.
\paragraph{Putting it all together:}
Finally, we marginalize out the random variable $\hat a^{\ell -1}(\cdot)$ to establish
\begin{align*}
\Pr(\mathcal{E} \, \mid \, \mathcal{E}_{\nicefrac{1}{2}} ) &= \int_{\mathbf{\hat a}(\cdot)} \Pr (\mathcal{E} \, \mid \, \mathbf{\hat a}(\cdot), \, \mathcal{E}_{\nicefrac{1}{2}} ) \Pr(\mathbf{\hat a}(\cdot) \, \mid \, \mathcal{E}_{\nicefrac{1}{2}} ) \, d \mathbf{\hat a}(\cdot) \\
&\ge \left(1 - \frac{3 \delta}{8 \eta}\right) \int_{\mathbf{\hat a}(\cdot)} \Pr(\mathbf{\hat a}(\cdot) \, \mid \, \mathcal{E}_{\nicefrac{1}{2}} ) \, d \mathbf{\hat a}(\cdot) \\
&= 1 - \frac{3 \delta}{8 \eta}.
\end{align*}
Consequently,
\begin{align*}
\Pr(\mathcal{E}^\mathsf{c} \, \mid \, \mathcal{E}_{\nicefrac{1}{2}} ) &\leq 1 - \left(1 - \frac{3 \delta}{8 \eta}\right) = \frac{3 \delta}{8 \eta},
\end{align*}
and this concludes the proof.
\end{proof}
\section{Additional Results}
\label{app:results}
In this section, we give more details on the evaluation of our compression algorithm on popular benchmark data sets and varying fully-connected neural network configurations. In the experiments, we compare the effectiveness of our sampling scheme in reducing the number of non-zero parameters of a network to that of uniform sampling and the singular value decomposition (SVD).
All algorithms were implemented in Python using the PyTorch library~\citep{paszke2017automatic} and simulations were conducted on a computer with a 2.60 GHz Intel i9-7980XE processor (18 cores total) and 128 GB RAM.
For training and evaluating the algorithms considered in this section, we used the following off-the-shelf data sets:
\begin{itemize}
\setlength\itemsep{0.25em}
\item \textit{MNIST}~\citep{lecun1998gradient} --- $70,000$ images of handwritten digits between 0 and 9 in the form of $28 \times 28$ pixels per image.
\item \textit{CIFAR-10}~\citep{krizhevsky2009learning} --- $60,000$ $32 \times 32$ color images, each depicting an object from one of 10 classes, e.g., airplanes.
\item \textit{FashionMNIST}~\citep{xiao2017} --- A recently proposed drop-in replacement for the MNIST data set that, like MNIST, contains $70,000$ $28 \times 28$ grayscale images, each associated with a label from 10 different categories.
\end{itemize}
We considered a diverse set of network configurations for each of the data sets. We varied the number of hidden layers between 2 and 5 and used either a constant width across all hidden layers between 200 and 1000 or a linearly decreasing width (denoted by ``Pyramid'' in the figures). Training was performed for 30 epochs on the normalized data sets using an Adam optimizer with a learning rate of 0.001 and a batch size of 300. The test accuracies were roughly 98\% (MNIST), 45\% (CIFAR10), and 96\% (FashionMNIST), depending on the network architecture. To account for the randomness in the training procedure, for each data set and neural network configuration, we averaged our results across 4 trained neural networks.
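For reference, the training setup described above corresponds to the following PyTorch sketch; the particular widths shown are one of the configurations we considered, and data loading is elided.
\begin{verbatim}
import torch
import torch.nn as nn

def make_net(d_in, hidden_widths, n_classes):
    """Fully connected ReLU network with the given hidden widths."""
    layers, prev = [], d_in
    for w in hidden_widths:                    # e.g. [500, 500] or a "Pyramid"
        layers += [nn.Linear(prev, w), nn.ReLU()]
        prev = w
    layers.append(nn.Linear(prev, n_classes))
    return nn.Sequential(*layers)

net = make_net(784, [500, 500, 500], 10)       # one MNIST configuration
opt = torch.optim.Adam(net.parameters(), lr=0.001)
loss_fn = nn.CrossEntropyLoss()
# Training loop: 30 epochs over the normalized data with batch size 300.
\end{verbatim}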
\subsection{Details on the Compression Algorithms}
We evaluated and compared the performance of the following algorithms on the aforementioned data sets.
\begin{enumerate}
\setlength\itemsep{0.25em}
\item \textit{Uniform (Edge) Sampling} --- A uniform distribution is used, rather than our sensitivity-based importance sampling distribution, to sample the incoming edges to each neuron in the network. Note that like our sampling scheme, uniformly sampling edges generates an unbiased estimator of the neuron value. However, unlike our approach, which explicitly seeks to minimize estimator variance using the bounds provided by empirical sensitivity, uniform sampling is prone to exhibiting large estimator variance.
\item \textit{Singular Value Decomposition} (SVD) --- The (truncated) SVD decomposition is used to generate a low-rank (rank-$r$) approximation for each of the weight matrices $(\hat{W}^2, \ldots, \hat{W}^L)$ to obtain the corresponding parameters $\hat{\theta} = (\hat{W}^2_r, \ldots, \hat{W}^L_r)$ for various values of $r \in {\mathbb N}_+$. Unlike the compared sampling-based methods, SVD does not sparsify the weight matrices. Thus, to achieve fair comparisons of compression rates, we compute the size of the rank-$r$ matrices constituting $\hat{\theta}$ as,
$$
\size{\hat{\theta}} = \sum_{\ell = 2}^L \sum_{i = 1}^r \left( \nnz{u_i^\ell} + \nnz{v_i^\ell} \right),
$$
where $W^\ell = U^\ell \Sigma^\ell (V^\ell)^\top$ for each $\ell \in \br{2,\ldots,L}$, with singular values $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_{\eta^{\ell-1}}$, and $u_i^\ell$ and $v_i^\ell$ denote the $i$th columns of $U^\ell$ and $V^\ell$, respectively.
\item \textit{$\ell_1$ Sampling~\citep{achlioptas2013matrix}} --- An entry-wise sampling distribution based on the ratio between the absolute value of a single entry and the (entry-wise) $\ell_1$-norm of the weight matrix is computed, and the weight matrix is subsequently sparsified by sampling accordingly. In particular, entry $w_{ij}$ of some weight matrix $W$ is sampled with probability
$$
p_{ij} = \frac{\abs{w_{ij}}}{\norm{W}_{\ell_1}},
$$
and reweighted to ensure the unbiasedness of the resulting estimator.
\item \textit{$\ell_2$ Sampling~\citep{drineas2011note}} --- The entries $(i,j)$ of each weight matrix $W$ are sampled with distribution
\[
p_{ij} = \frac{w_{ij}^2}{\norm{W}_F^2},
\]
where $\norm{\cdot}_F$ is the Frobenius norm of $W$, and reweighted accordingly.
\item \textit{$\frac{\ell_1 + \ell_2}{2}$ Sampling~\citep{kundu2014note}} --- The entries $(i,j)$ of each weight matrix $W$ are sampled with distribution
\[
p_{ij} = \frac{1}{2} \left(\frac{w_{ij}^2}{\norm{W}_F^2} + \frac{|w_{ij}|}{\norm{W}_{\ell_1}} \right),
\]
where $\norm{\cdot}_F$ is the Frobenius norm of $W$, and reweighted accordingly. We note that~\cite{kundu2014note} constitutes the current state-of-the-art in data-oblivious matrix sparsification algorithms; a sketch of these entry-wise schemes (items 3--5) is given after this list.
\item \textit{CoreNet} (Edge Sampling) --- Our core algorithm for edge sampling shown in Alg.~\ref{alg:sparsify-weights}, but without the neuron pruning procedure.
\item \textit{CoreNet}\verb!+! (CoreNet \& Neuron Pruning) --- Our algorithm shown in Alg.~\ref{alg:main} that includes the neuron pruning step.
\item \textit{CoreNet}\verb!++! (CoreNet\verb!+! \& Amplification) --- In addition to the features of \textit{CoreNet}\verb!+!, multiple coresets $\mathcal C_1, \ldots, \mathcal C_\tau$ are constructed over $\tau \in {\mathbb N}_+$ trials, and the best one is picked by evaluating the empirical error on a subset $\TT \subseteq \PP \setminus \SS$ (see Sec.~\ref{sec:method} for details).
\end{enumerate}
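To make the sampling baselines concrete, the following is a minimal NumPy sketch of the entry-wise schemes of items 3--5; the function name \texttt{sparsify\_matrix} and the convention of drawing $m$ entries with replacement are our assumptions, and the cited works differ in such implementation details.
\begin{verbatim}
import numpy as np

def sparsify_matrix(W, m, scheme="l1", seed=0):
    # Sample m entries of W with replacement from an entry-wise
    # distribution and reweight the kept entries so that the
    # sparsified matrix is an unbiased estimator of W.
    rng = np.random.default_rng(seed)
    p_l1 = np.abs(W) / np.abs(W).sum()
    p_l2 = W**2 / (W**2).sum()
    p = {"l1": p_l1, "l2": p_l2, "mix": 0.5 * (p_l1 + p_l2)}[scheme]
    q = p.ravel()
    idx = rng.choice(W.size, size=m, p=q)
    counts = np.bincount(idx, minlength=W.size)
    with np.errstate(divide="ignore", invalid="ignore"):
        W_hat = np.where(counts > 0, counts * W.ravel() / (m * q), 0.0)
    return W_hat.reshape(W.shape)
\end{verbatim}
For instance, \texttt{scheme="mix"} corresponds to the $\frac{\ell_1 + \ell_2}{2}$ distribution of item 5.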
\subsection{Preserving the Output of a Neural Network}
We evaluated the accuracy of our approximation by comparing the output of the compressed network with that of the original one and computing the $\ell_1$-norm of the relative error vector. We computed the error metric for both the uniform sampling scheme and our compression algorithm (Alg.~\ref{alg:main}). Our results were averaged over 50 trials, where for each trial, the relative approximation error was averaged over the entire test set. In particular, for a test set $\PP_\mathrm{test} \subseteq \Reals^d$ consisting of $d$-dimensional points, the average relative error with respect to the $f_{\paramHat}$ generated by each compression algorithm was computed as
$$
\mathrm{error}_{\PP_\mathrm{test}}(f_{\paramHat}) = \frac{1}{|\PP_\mathrm{test}|} \sum_{\Input \in \PP_\mathrm{test}} \norm{f_{\paramHat}(\Input) - f_\param(\Input)}_1.
$$
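A sketch of this error metric, assuming \texttt{f} and \texttt{f\_hat} are callables mapping a batch of test points to the corresponding network outputs (the names are illustrative):
\begin{verbatim}
import numpy as np

def avg_error(f, f_hat, X_test):
    # (1 / |P_test|) * sum over x of ||f_hat(x) - f(x)||_1
    return np.abs(f_hat(X_test) - f(X_test)).sum(axis=1).mean()
\end{verbatim}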
Figures~\ref{fig:mnist_error},~\ref{fig:cifar_error}, and~\ref{fig:fashion_error} depict the average performance of the compared algorithms for various network architectures trained on MNIST, CIFAR-10, and FashionMNIST, respectively. Our algorithm is able to compress networks trained on MNIST and FashionMNIST to about 10\% of their original size without significant loss of accuracy. On CIFAR-10, a compression rate of 50\% yields classification results comparable to that of uncompressed networks. The shaded region corresponding to each curve represents the values within one standard deviation of the mean.
\subsection{Preserving the Classification Performance}
We also evaluated the accuracy of our approximation by computing the loss of prediction accuracy on a test data set, $\PP_{\mathrm{test}}$. In particular, let $\mathrm{acc}_{\PP_{\mathrm{test}}}(f_\param)$ be the average accuracy of the neural network $f_\param$, i.e.,
$$
\mathrm{acc}_{\PP_{\mathrm{test}}}(f_\param) = \frac{1}{|\PP_\mathrm{test}|} \sum_{\Input \in \PP_\mathrm{test}} \1 \left( \argmax_{i \in [\eta^L]} f_\param(\Input)_i = y(\Input) \right),
$$
where $y(\Input)$ denotes the (true) label associated with $\Input$. Then the drop in accuracy is computed as
$$
\mathrm{acc}_{\PP_{\mathrm{test}}}(f_\param) - \mathrm{acc}_{\PP_{\mathrm{test}}}(f_{\paramHat}).
$$
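Correspondingly, a sketch of the accuracy-drop metric under the same assumptions:
\begin{verbatim}
import numpy as np

def accuracy(f, X_test, y_test):
    # Fraction of test points whose argmax prediction matches the label.
    return np.mean(f(X_test).argmax(axis=1) == y_test)

def accuracy_drop(f, f_hat, X_test, y_test):
    return accuracy(f, X_test, y_test) - accuracy(f_hat, X_test, y_test)
\end{verbatim}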
Figures~\ref{fig:mnist_acc},~\ref{fig:cifar_acc}, and~\ref{fig:fashion_acc} depict the average performance of the compared algorithms for various network architectures trained on MNIST, CIFAR-10, and FashionMNIST respectively. The shaded region corresponding to each curve represents the values within one standard deviation of the mean.
\subsection{Preliminary Results with Retraining}
We compared the performance of our approach with that of the popular weight thresholding heuristic -- henceforth denoted by WT -- of~\cite{Han15} when retraining was allowed after the compression, i.e., pruning, procedure.
Our comparisons with retraining for the networks and data sets mentioned in Sec.~\ref{sec:results} are as follows. For MNIST, WT required 5.8\% of the number of parameters to obtain the classification accuracy of the original model (i.e., 0\% drop in accuracy), whereas for the same percentage (5.8\%) of the parameters retained, CoreNet++ incurred a classification accuracy drop of 1\%. For CIFAR, the approach of~\cite{Han15} matched the original model's accuracy using approximately 3\% of the parameters, whereas CoreNet++ reported an accuracy drop of 9.5\% for 3\% of the parameters retained. Finally, for FashionMNIST, the corresponding numbers were 4.1\% of the parameters to achieve 0\% loss for WT, and a loss of 4.7\% in accuracy for CoreNet++ with the same percentage of parameters retained.
\subsection{Discussion}
As indicated in Sec.~\ref{sec:results}, the simulation results presented here validate our theoretical results and suggest that empirical sensitivity can lead to effective, more informed sampling compared to other methods. Moreover, we are able to outperform networks that are compressed via state-of-the-art matrix sparsification algorithms. We also note that there is a notable difference in the performance of our algorithm between different data sets. In particular, the difference in performance of our algorithm compared to the other methods for networks trained on FashionMNIST and MNIST is much more significant than for networks trained on CIFAR. We conjecture that this is partially due to considering only fully-connected networks: these networks perform fairly poorly on CIFAR (around 45\% classification accuracy), and thus the edges have more uniformly distributed sensitivity, as the information content in the network is limited. We envision that extending our guarantees to convolutional neural networks may enable us to further reason about the performance on data sets such as CIFAR.
\input{appendix_figures}
\subsection{Analytical Results for Section~\ref{sec:analysis_sampling} (Importance Sampling Bounds)}
\label{app:analysis_sampling}
We begin by establishing an auxiliary result that we will need for the subsequent lemmas.
\subsubsection{Empirical $\Delta_\neuron^{\ell}$ Approximation}
\begin{lemma}[Empirical $\Delta_\neuron^{\ell}$ Approximation]
\label{lem:delta-hat-approx}
Let $\delta \in (0,1)$, let $\lambda_* = K'/2 \ge \lambda$, where $K'$ is from Asm.~\ref{asm:cdf}, and define
$$
\DeltaNeuronHat = \DeltaNeuronHatDef,
$$
where $\kappa = \sqrt{2 \lambda_*} \left(1 + \sqrt{2 \lambda_*} \logTerm \right)$ and $\SS \subseteq \PP$ is as in Alg.~\ref{alg:main}. Then,
$$
\Pr_{\Input \sim {\mathcal D} } \left(\max_{i \in [\eta^\ell]} \DeltaNeuron[\Input] \leq \DeltaNeuronHat \right) \ge 1 - \frac{\delta}{4 \eta}.
$$
\end{lemma}
\begin{proof}
Define the random variables $\mathcal{Y}_{\Input'} = \E[\DeltaNeuron[\Input']] - \DeltaNeuron[\Input']$ for each $\Input' \in \SS$ and consider the sum $$
\mathcal{Y} = \sum_{\Input' \in \SS} \mathcal{Y}_{\Input'} = \sum_{\Input' \in \SS} \left(\E[\DeltaNeuron[\Input']] - \DeltaNeuron[\Input']\right).
$$
We know that each random variable $\mathcal{Y}_{\mathbf{\Input}'}$ satisfies $\E[\mathcal{Y}_{\mathbf{\Input}'}] = 0$ and by Assumption~\ref{asm:subexponential}, is subexponential with parameter $\lambda \leq \lambda_*$. Thus, $\mathcal{Y}$ is a sum of $|\SS|$ independent, zero-mean $\lambda_*$-subexponential random variables, which implies that $\E[\mathcal{Y}] = 0$ and that we can readily apply Bernstein's inequality for subexponential random variables~\citep{vershynin2016high} to obtain for $t \ge 0$
$$
\Pr \left(\frac{1}{|\SS|} \mathcal{Y} \ge t\right) \leq \exp \left(-|\SS| \, \min \left \{\frac{t^2}{4 \, \lambda_*^2}, \frac{t}{2 \, \lambda_*} \right\} \right).
$$
Since $|\SS| = \ceil*{\kPrime \logTerm } \ge 2 \lambda_* \, \log \left(\logTermInside / \delta \right)$, we have for $t = \sqrt{2 \lambda_*}$,
\begin{align*}
\Pr \left(\E[\DeltaNeuron[\Input]] - \frac{1}{|\SS|} \sum_{\Input' \in \SS} \DeltaNeuron[\Input'] \ge t \right) &= \Pr \left(\frac{1}{|\SS|} \mathcal{Y} \ge t\right) \\
&\leq \exp \left( -|\SS| \frac{t^2}{4 \lambda_*^2} \right) \\
&\leq \exp \left( - \log \left(\logTermInside / \delta \right) \right) \\
&= \frac{\delta}{8 \, \eta \, \eta^* }.
\end{align*}
Moreover, for a single $\mathcal{Y}_\Input$, we have by the equivalent definition of a subexponential random variable~\citep{vershynin2016high} that for $u \ge 0$
$$
\Pr(\DeltaNeuron[\Input] - \E[\DeltaNeuron[\Input]] \ge u) \leq \exp \left(-\min \left \{\frac{u^2}{4 \, \lambda_*^2}, \frac{u}{2 \, \lambda_*} \right\} \right).
$$
Thus, for $u = 2 \lambda_* \, \log \left(\logTermInside / \delta \right)$ we obtain
$$
\Pr(\DeltaNeuron[\Input] - \E[\DeltaNeuron[\Input]] \ge u) \leq \exp \left( - \log \left(\logTermInside / \delta \right) \right) = \frac{\delta}{ 8 \, \eta \, \eta^* }.
$$
Therefore, by the union bound, we have with probability at least $1 - \frac{\delta}{4 \eta \, \eta^*}$:
\begin{align*}
\DeltaNeuron[\Input] &\leq \E[\DeltaNeuron[\Input]] + u \\
&\leq \left(\frac{1}{|\SS|} \sum_{\mathbf{\Input}' \in \SS} \DeltaNeuron[\Input'] + t \right) + u \\
&= \frac{1}{|\SS|} \sum_{\Input' \in \SS} \DeltaNeuron[\Input'] + \left(\sqrt{2 \lambda_*} + 2 \lambda_* \, \log \left(\logTermInside / \delta \right) \right) \\
&= \frac{1}{|\SS|} \sum_{\Input' \in \SS} \DeltaNeuron[\Input'] + \kappa \\
&\leq \DeltaNeuronHat,
\end{align*}
where the last inequality follows by definition of $\DeltaNeuronHat$.
Thus, by the union bound, we have
\begin{align*}
\Pr_{\Input \sim {\mathcal D} } \left(\max_{i \in [\eta^\ell]} \DeltaNeuron[\Input] > \DeltaNeuronHat \right) &= \Pr \left(\exists{i \in [\eta^\ell]}: \DeltaNeuron[\Input] > \DeltaNeuronHat \right) \\
&\leq \sum_{i \in [\eta^{\ell}]} \Pr \left(\DeltaNeuron[\Input] > \DeltaNeuronHat \right) \\
&\leq \eta^{\ell} \left(\frac{\delta}{4 \eta \, \eta^*} \right) \\
&\leq \frac{\delta}{4 \, \eta},
\end{align*}
where the last line follows by definition of $\eta^* \ge \eta^{\ell}$.
\end{proof}
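In effect, the bound of Lemma~\ref{lem:delta-hat-approx} is a sample mean plus the slack term $\kappa$; the following sketch makes this concrete, with \texttt{delta\_vals} holding $\DeltaNeuron[\Input']$ for $\Input' \in \SS$ and \texttt{log\_term} standing in for the logarithmic factor (both left abstract here):
\begin{verbatim}
import numpy as np

def delta_hat(delta_vals, lam_star, log_term):
    # Empirical Delta-hat: sample mean over S plus the slack kappa,
    # kappa = sqrt(2*lam_star) * (1 + sqrt(2*lam_star) * log_term).
    kappa = np.sqrt(2 * lam_star) * (1 + np.sqrt(2 * lam_star) * log_term)
    return np.mean(delta_vals) + kappa
\end{verbatim}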
\subsubsection{Notation for the Subsequent Analysis}
Let $\WWHatRow^{\ell +}$ and $\WWHatRow^{\ell -}$ denote the sparsified row vectors generated when \textsc{Sparsify} is invoked with the first two arguments corresponding to $(\Wplus, \WWRow^\ell)$ and $(\Wminus, -\WWRow^\ell)$, respectively (Alg.~\ref{alg:main}, Line~\ref{lin:pos-sparsify-weights}). We will at times omit the variables for the neuron $i$ and layer $\ell$ in the proofs for clarity of exposition, and, for example, refer to $\WWHatRow^{\ell +}$ and $\WWHatRow^{\ell -}$ simply as $\WWHatRowCon^+$ and $\WWHatRowCon^-$, respectively.
Let $\Input \sim {\mathcal D}$ and define
$$
\hat{z}^{+}(\Input) = \sum_{k \in \Wplus} \WWHatRow[k]^+ \, \hat a_k(\Input) \ge 0 \qquad \text{and} \qquad \hat{z}^{-}(\Input) = \sum_{k \in \Wminus} (-\WWHatRow[k]^-) \, \hat a_k(\Input) \ge 0
$$
be the approximate intermediate values corresponding to the sparsified row vectors $\WWHatRowCon^{+}$ and $\WWHatRowCon^{-}$; let
$$
\tilde z^{+}(\Input) = \sum_{k \in \Wplus} \WWRow[k] \, \hat a_k(\Input) \ge 0 \qquad \text{and} \qquad \tilde z^{-}(\Input) = \sum_{k \in \Wminus} (-\WWRow[k]) \, \hat a_k(\Input) \ge 0
$$
be the corresponding intermediate values with respect to the original row vector $\WWRowCon$; and finally, let
$$
z^{+}(\Input) = \sum_{k \in \Wplus} \WWRow[k] \, a_k(\Input) \ge 0 \qquad \text{and} \qquad z^{-}(\Input) = \sum_{k \in \Wminus} (-\WWRow[k]) \, a_k(\Input) \ge 0
$$
be the true intermediate values corresponding to the positive and negative valued weights.
Note that in this context, we have by definition
\begin{align*}
\hat{z}_i^\ell (\Input) &= \dotp{\WWHatRowCon}{ \hat a(\Input)} = \hat{z}^{+}(\Input) - \hat{z}^{-}(\Input), \\
\tilde{z}_i^{\ell}(\Input) &= \dotp{\WWRowCon}{\hat a(\Input)} = \tilde z^+(\Input) - \tilde z^{-}(\Input), \quad \text{and} \\
z_i^{\ell}(\Input) &= \dotp{\WWRowCon}{a(\Input)} = z^+(\Input) - z^{-}(\Input),
\end{align*}
where we used the fact that $\WWHatRowCon = \WWHatRowCon^{+} - \WWHatRowCon^{-} \in \Reals^{1 \times \eta^{\ell-1}}$.
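As a concrete illustration of this notation, the following sketch computes the decomposition for a single neuron, with \texttt{w} the neuron's weight row and \texttt{a} the (entry-wise nonnegative) activation vector:
\begin{verbatim}
import numpy as np

def split_preactivation(w, a):
    # z = <w, a> decomposed as z_plus - z_minus, where both parts
    # are sums of nonnegative terms (a >= 0 entry-wise).
    pos, neg = w > 0, w < 0
    z_plus = np.dot(w[pos], a[pos])      # sum over W_plus
    z_minus = np.dot(-w[neg], a[neg])    # sum over W_minus
    assert np.isclose(np.dot(w, a), z_plus - z_minus)
    return z_plus, z_minus
\end{verbatim}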
\subsubsection{Proof of Lemma~\ref{lem:neuron-approx}}
\lemneuronapprox*
\begin{proof}
Let $\epsilon, \delta \in (0,1)$ be arbitrary and let $\Wplus = \{\edge \in [\eta^{\ell-1}] : \WWRow[\edge] > 0\}$ and $\Wminus = \{\edge \in [\eta^{\ell-1}]: \WWRow[\edge] < 0 \}$ as in Alg.~\ref{alg:main}. Let $\epsilonLayer$ be defined as before,
$
\epsilonLayer = \epsilonLayerDef ,
$
where $\DeltaNeuronHatLayers = \DeltaNeuronHatLayersDef$ and $\DeltaNeuronHat = \DeltaNeuronHatDef$.
Observe that $\WWRow[j] > 0 \, \, \forall{j \in \Wplus}$ and similarly $(-\WWRow[j]) > 0 \, \, \forall{j \in \Wminus}$. That is, each of the index sets $\Wplus$ and $\Wminus$ corresponds to strictly positive entries in the arguments $\WWRow^\ell$ and $-\WWRow^\ell$, respectively, passed into \textsc{Sparsify}.
Observe that since we conditioned on the event $\mathcal{E}^{\ell-1}$, we have
\begin{align*}
2 \, (\ell - 2) \, \epsilonLayer[\ell] &\leq 2 \, (\ell - 2) \, \epsilonLayerDefWordy[\ell] \\
&\leq \frac{\epsilon}{\DeltaNeuronHatLayersDef} \\
&\leq \frac{\epsilon}{2^{L - \ell + 1}} &\text{Since $\DeltaNeuronHat[k] \ge 2 \quad \forall{k \in \{\ell, \ldots, L\}}$} \\
&\leq \frac{\epsilon}{2},
\end{align*}
where the inequality $\DeltaNeuronHat[k] \ge 2$ follows from the fact that
\begin{align*}
\DeltaNeuronHat[k] &= \DeltaNeuronHatDef[\mathbf{\Input}'] \\
&\ge 1 + \kappa &\text{Since $\DeltaNeuron[\mathbf{\Input}'] \ge 1 \quad \forall{\mathbf{\Input}' \in \supp}$ by definition} \\
&\ge 2.
\end{align*}
We thus obtain that $\hat{a}(\Input) \in (1 \pm \nicefrac{\epsilon}{2}) a(\Input)$, where, as before, $\hat{a}$ and $a$ are shorthand notations for $\hat{a}^{\ell-1} \in \Reals^{\eta^{\ell -1} \times 1}$ and $a^{\ell-1} \in \Reals^{\eta^{\ell -1} \times 1}$, respectively.
This implies that $\mathcal{E}^{\ell - 1} \Rightarrow \mathcal{E}_{\nicefrac{1}{2}}$ and since $m = \SampleComplexity[\epsilon]$ in Alg.~\ref{alg:sparsify-weights} we can invoke Lemma~\ref{lem:pos-weights-approx} with $\epsilon
= \epsilonLayer$ on each of the \textsc{Sparsify} invocations to conclude that
$$
\Pr \left(\hat{z}^+(\Input) \notin (1 \pm \epsilonLayer) \tilde z^+(\Input) \, \mid \, \mathcal{E}^{\ell-1} \right) \leq \Pr \left(\hat{z}^+(\Input) \notin (1 \pm \epsilonLayer) \tilde z^+(\Input) \, \mid \, \mathcal{E}_{\nicefrac{1}{2}} \right) \leq \frac{3 \delta}{8 \eta},
$$
and
$$
\Pr \left(\hat{z}^-(\Input) \notin (1 \pm \epsilonLayer) \tilde z^-(\Input) \, \mid \, \mathcal{E}^{\ell-1} \right) \leq \frac{3 \delta}{8 \eta}.
$$
Therefore, by the union bound, we have
\begin{align*}
\Pr \left(\hat{z}^+(\Input) \notin (1 \pm \epsilonLayer) \tilde z^+(\Input) \text{ or } \hat{z}^-(\Input) \notin (1 \pm \epsilonLayer) \tilde z^-(\Input) \, \mid \, \mathcal{E}^{\ell-1} \right)
&\leq \frac{3 \delta}{8 \eta} + \frac{3 \delta}{8 \eta} = \frac{3 \delta}{4 \eta}.
\end{align*}
Moreover, by Lemma~\ref{lem:delta-hat-approx}, we have with probability at most $\frac{\delta}{4 \eta }$ that
$$
\DeltaNeuron[\Input] > \DeltaNeuronHat.
$$
Thus, by the union bound over the failure events, we have with probability at least $1 - \left(\nicefrac{3 \delta}{4 \eta} + \nicefrac{\delta}{4 \eta }\right) = 1 - \nicefrac{\delta}{\eta}$ that \textbf{both} of the following events occur:
\begin{fleqn}
\begin{align}
\hspace{\parindent} & \text{1.} \quad \hat{z}^+(\Input) \in (1 \pm \epsilonLayer) \tilde z^+(\Input) \ \ \text{and} \ \ \hat{z}^-(\Input) \in (1 \pm \epsilonLayer) \tilde z^-(\Input) \label{eqn:event1} \\
\hspace{\parindent} & \text{2.} \quad \DeltaNeuron[\Input] \leq \DeltaNeuronHat \label{eqn:event2}
\end{align}
\end{fleqn}
Recall that $\epsilon' = \frac{\epsilon}{\epsilonDenomContant \, (L-1)}$, $ \epsilonLayer[\ell] = \epsilonLayerDef$, and that event $\mathcal{E}^\ell_i$ denotes the (desirable) event that
$$
\hat{z}_i^\ell (\Input) \in \left(1 \pm 2 \, (\ell - 1) \, \epsilonLayer[\ell + 1] \right) z^{\ell}_i (\Input)
$$
holds, and similarly, $\mathcal{E}^\ell = \cap_{i \in [\eta^\ell]} \, \mathcal{E}_{i}^\ell$ denotes the vector-wise analogue where
$$
\hat{z}^\ell (\Input) \in \left(1 \pm 2 \, (\ell - 1) \, \epsilonLayer[\ell + 1] \right) z^{\ell}(\Input).
$$
Let $k = 2 \, (\ell - 2)$ and note that, conditioned on the event $\mathcal{E}^{\ell-1}$, we have by definition
\begin{align*}
\hat{a}^{\ell-1}(\Input) &\in (1 \pm 2 \, (\ell - 2) \epsilonLayer[\ell]) a^{\ell-1}(\Input) = (1 \pm k \, \epsilonLayer[\ell]) a^{\ell-1}(\Input),
\end{align*}
which follows by definition of the ReLU function.
Recall that our overarching goal is to establish that
$$
\hat{z}_i^{\ell}(\Input) \in \left(1 \pm 2 \, (\ell - 1) \epsilonLayer[\ell + 1]\right) z_i^\ell(\Input),
$$
which would immediately imply by definition of the ReLU function that
$$
\hat{a}_i^{\ell}(\Input) \in \left(1 \pm 2 \, (\ell - 1) \epsilonLayer[\ell + 1]\right) a_i^\ell(\Input).
$$
Having clarified the conditioning and our objective, we will once again drop the index $i$ from the expressions moving forward.
Proceeding from above, we have with probability at least $1 - \nicefrac{\delta}{\eta}$
\begin{align*}
\hat{z} (\Input) &= \hat{z}^+(\Input) - \hat{z}^-(\Input) \\
&\leq (1 + \epsilonLayer) \, \tilde z^+(\Input) - (1 - \epsilonLayer) \, \tilde z^-(\Input) &\text{By Event~\eqref{eqn:event1} above}\\
&\leq (1 + \epsilonLayer) (1 + k \, \epsilonLayer[\ell]) \, z^+(\Input) - (1 - \epsilonLayer) (1 - k \, \epsilonLayer[\ell]) \, z^-(\Input) &\text{Conditioning on event $\mathcal{E}^{\ell-1}$} \\
&=\left(1 + \epsilonLayer (k + 1) + k \epsilonLayer^2\right) z^+(\Input) + \left(-1 + (k+1) \epsilonLayer - k \epsilonLayer^2 \right) z^-(\Input) \\
&= \left(1 + k \, \epsilonLayer^2\right) z(\Input) + (k+1) \, \epsilonLayer \left(z^+(\Input) + z^-(\Input)\right) \\
&= \left(1 + k \, \epsilonLayer^2\right) z(\Input) + \frac{(k+1) \, \epsilon'}{ \DeltaNeuronHatLayersDef} \, \left(z^+(\Input) + z^-(\Input)\right) \\
&\leq \left(1 + k \, \epsilonLayer^2\right) z(\Input) + \frac{(k+1) \, \epsilon'}{\DeltaNeuron[\Input] \, \DeltaNeuronHatLayersDef[\ell+1]} \, \left(z^+(\Input) + z^-(\Input)\right) &\text{By Event~\eqref{eqn:event2} above} \\
&= \left(1 + k \, \epsilonLayer^2\right) z(\Input) + \frac{(k+1) \,\epsilon'}{ \DeltaNeuronHatLayersDef[\ell+1]} \, \left|z(\Input)\right| &\text{By $\DeltaNeuron[\Input] = \frac{z^+(\Input) + z^-(\Input)}{|z(\Input)|}$} \\
&= \left(1 + k \, \epsilonLayer^2\right) z(\Input) + (k+1) \, \epsilonLayer[\ell + 1] \, |z(\Input)|.
\end{align*}
To upper bound the last expression above, we begin by observing that $k \epsilonLayer^2 \leq \epsilonLayer$, which follows from the fact that $\epsilonLayer \leq \frac{1}{2 \, (L-1)} \leq \frac{1}{k}$ by definition. Moreover, we also note that $\epsilonLayer[\ell] \leq \epsilonLayer[\ell + 1]$, since $\DeltaNeuronHat \ge 1$ by definition.
Now, we consider two cases.
\paragraph{Case of $z(\Input) \ge 0$:} In this case, we have
\begin{align*}
\hat{z}(\Input) &\leq \left(1 + k \, \epsilonLayer^2\right) z(\Input) + (k+1) \, \epsilonLayer[\ell + 1] \, |z(\Input)| \\
&\leq (1 + \epsilonLayer) z(\Input) + (k + 1) \epsilonLayer[\ell + 1] z(\Input) \\
&\leq (1 + \epsilonLayer[\ell + 1]) z(\Input) + (k + 1) \epsilonLayer[\ell + 1] z(\Input) \\
&= \left(1 + (k+2) \, \epsilonLayer[\ell + 1]\right) z(\Input) \\
&= \left(1 + 2 \, (\ell - 1) \epsilonLayer[\ell+1]\right) z(\Input),
\end{align*}
where the last line follows by definition of $k = 2 \, (\ell - 2)$, which implies that $k + 2 = 2( \ell - 1)$. Thus, this establishes the desired upper bound in the case that $z(\Input) \ge 0$.
\paragraph{Case of $z(\Input) < 0$:} Since $z(\Input)$ is negative, we have that $\left(1 + k \, \epsilonLayer^2\right) z(\Input) \leq z(\Input)$ and $|z(\Input)| = -z(\Input)$ and thus
\begin{align*}
\hat{z}(\Input) &\leq \left(1 + k \, \epsilonLayer^2\right) z(\Input) + (k+1) \, \epsilonLayer[\ell + 1] \, |z(\Input)| \\
&\leq z(\Input) - (k + 1) \epsilonLayer[\ell + 1] z(\Input) \\
&\leq \left(1 - (k + 1) \epsilonLayer[\ell + 1] \right) z(\Input) \\
&\leq \left(1 - (k + 2) \epsilonLayer[\ell + 1] \right) z(\Input) \\
&= \left(1 - 2 \, (\ell - 1) \epsilonLayer[\ell+1]\right) z(\Input),
\end{align*}
and this establishes the upper bound for the case of $z(\Input)$ being negative.
Putting the results of the case by case analysis together, we have the upper bound $\hat{z}(\Input) \leq z(\Input) + 2 \, (\ell - 1) \epsilonLayer[\ell+1] |z(\Input)|$. The proof for establishing the corresponding lower bound on $\hat{z}(\Input)$ is analogous to that given above, and yields $\hat{z}(\Input) \ge z(\Input) - 2 \, (\ell - 1) \epsilonLayer[\ell+1] |z(\Input)|$. Putting both the upper and lower bound together, we have that with probability at least $1 - \frac{\delta}{\eta}$:
$$
\hat{z}(\Input) \in \left(1 \pm 2 \, (\ell - 1) \epsilonLayer[\ell+1] \right) z(\Input),
$$
and this completes the proof.
\end{proof}
\subsubsection{Remarks on Negative Activations}
\label{app:negative}
We note that up to now we assumed that the input $a(x)$, i.e., the activations from the previous layer, is entry-wise nonnegative.
For layers $\ell \in \{3, \ldots, L\}$, this is indeed true due to the nonnegativity of the ReLU activation function. For layer $2$, the input is $a(x) = x$, which can be decomposed into $a(x) = a_\mathrm{pos}(x) - a_\mathrm{neg}(x)$, where $a_\mathrm{pos}(x), a_\mathrm{neg}(x) \in \Reals^{\eta^{\ell - 1}}$ are both entry-wise nonnegative.
Furthermore, we can define the sensitivity over the set of points $\{a_\mathrm{pos}(x),\, a_\mathrm{neg}(x) \, \mid \, x \in \SS\}$ (instead of $\{a(x) \, \mid \, x \in \SS\}$), and thus maintain the required nonnegativity of the sensitivities. Then, in the terminology of Lemma~\ref{lem:neuron-approx}, we let
$$
z_\mathrm{pos}^{+}(\Input) = \sum_{k \in \Wplus} \WWRow[k] \, a_{\mathrm{pos}, k}(\Input) \ge 0 \qquad \text{and} \qquad z_\mathrm{neg}^{-}(\Input) = \sum_{k \in \Wminus} (-\WWRow[k]) \, a_{\mathrm{neg}, k}(\Input) \ge 0
$$
be the corresponding positive parts, and
$$
z_\mathrm{neg}^{+}(\Input) = \sum_{k \in \Wplus} \WWRow[k] \, a_{\mathrm{neg}, k}(\Input) \ge 0 \qquad \text{and} \qquad z_\mathrm{pos}^{-}(\Input) = \sum_{k \in \Wminus} (-\WWRow[k]) \, a_{\mathrm{pos}, k}(\Input) \ge 0
$$
be the corresponding negative parts of the preactivation of the considered layer, such that
$$
z^{+}(\Input) = z_\mathrm{pos}^{+}(\Input) + z_\mathrm{neg}^{-}(\Input) \qquad \text{and} \qquad z^{-}(\Input) = z_\mathrm{neg}^{+}(\Input) + z_\mathrm{pos}^{-}(\Input).
$$
We also let
$$
\DeltaNeuron[\Input] = \frac{z^+(\Input) + z^-(\Input)}{|z(\Input)|}
$$
be as before, with $z^+(\Input)$ and $z^-(\Input)$ defined as above. Equipped with the above definitions, we can rederive Lemma~\ref{lem:neuron-approx} analogously in the more general setting, i.e., with potentially negative activations. We also note that we require a slightly larger sample size now since we have to take a union bound over the failure probabilities of all four approximations (i.e., $\hat z_\mathrm{pos}^{+}(\Input)$, $\hat z_\mathrm{neg}^{-}(\Input)$, $\hat z_\mathrm{neg}^{+}(\Input)$, and $\hat z_\mathrm{pos}^{-}(\Input)$) to obtain the desired overall failure probability of $\nicefrac{\delta}{\eta}$.
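A sketch of this decomposition for a single neuron with a possibly negative input (an illustrative helper, not part of Alg.~\ref{alg:main}):
\begin{verbatim}
import numpy as np

def z_parts_negative_input(w, x):
    # Split x into nonnegative parts and form the four cross terms,
    # so that z_plus - z_minus = <w, x> as in the identities above.
    a_pos, a_neg = np.maximum(x, 0.0), np.maximum(-x, 0.0)
    Wp, Wm = w > 0, w < 0
    z_plus = np.dot(w[Wp], a_pos[Wp]) + np.dot(-w[Wm], a_neg[Wm])
    z_minus = np.dot(w[Wp], a_neg[Wp]) + np.dot(-w[Wm], a_pos[Wm])
    assert np.isclose(np.dot(w, x), z_plus - z_minus)
    return z_plus, z_minus
\end{verbatim}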
\subsubsection{Proof of Theorem~\ref{thm:main}}
The following corollary immediately follows from Lemma~\ref{lem:neuron-approx} and establishes a layer-wise approximation guarantee.
\begin{restatable}[Conditional Layer-wise Approximation]{corollary}{corlayerwise}
\label{cor:approx-layer}
Let $\epsilon, \delta \in (0,1)$, $\ell \in \br{2,\ldots,L}$, and $\Input \sim {\mathcal D}$. \textsc{CoreNet} generates a sparse weight matrix $\hat{W}^\ell = \big(\WWHatRow[1]^\ell, \ldots, \WWHatRow[\eta^\ell]^\ell \big)^\top \in {\REAL}^{\eta^\ell \times \eta^{\ell-1}}$ such that
\begin{equation}
\label{eqn:coreset-property-neuron}
\Pr (\mathcal{E}^\ell \, \mid \, \mathcal{E}^{\ell-1}) = %
\Pr \left( \hat z^\ell (\Input) \in \left(1 \pm 2 \, (\ell - 1) \, \epsilonLayer[\ell + 1] \right) z^{\ell} (\Input) \, \mid \, \mathcal{E}^{\ell-1} \right) \ge %
1 - \frac{\delta \, \eta^\ell }{\eta},
\end{equation}
where $\epsilonLayer = \epsilonLayerDef$, $\hat{z}^\ell(\Input) = \hat{W}^\ell \hat a^\ell(\Input)$, and $z^\ell(\Input) = W^\ell a^\ell(\Input)$.
\end{restatable}
\begin{proof}
Since~\eqref{eq:neuronapprox} established by Lemma~\ref{lem:neuron-approx} holds for any neuron $i \in [\eta^\ell]$ in layer $\ell$ and since $(\mathcal{E}^\ell)^\mathsf{c} = \cup_{i \in [\eta^\ell]} (\mathcal{E}_i^\ell)^\mathsf{c}$, it follows by the union bound over the failure events $(\mathcal{E}_i^\ell)^\mathsf{c}$ for all $i \in [\eta^\ell]$ that with probability at least $1 - \frac{\eta^\ell \delta}{\eta}$
\begin{align*}
\hat z^\ell(\Input) &= \hat{W}^\ell \hat a^{\ell-1}(\Input) \in \left(1 \pm 2 \, (\ell - 1) \, \epsilonLayer[\ell + 1] \right) W^\ell a^{\ell-1}(\Input) = \left(1 \pm 2 \, (\ell - 1) \, \epsilonLayer[\ell + 1] \right) z^\ell (\Input).
\end{align*}
\end{proof}
The following lemma removes the conditioning on $\mathcal{E}^{\ell-1}$ and explicitly considers the (compounding) error incurred by generating coresets $\hat{W}^2, \ldots, \hat{W}^\ell$ for multiple layers.
\lemlayer*
\begin{proof}
Invoking Corollary~\ref{cor:approx-layer}, we know that for any layer $\ell' \in \br{2,\ldots,L}$,
\begin{align}
\Pr_{\hat{W}^{\ell'}, \, \Input, \, \hat{a}^{\ell'-1}(\cdot)} (\mathcal{E}^{\ell'} \, \mid \, \mathcal{E}^{\ell'-1}) \ge 1 - \frac{\delta \, \eta^{\ell'}}{\eta}. \label{eqn:cor-ineq}
\end{align}
We also have by the law of total probability that
\begin{align}
\Pr(\mathcal{E}^{\ell'}) &= \Pr(\mathcal{E}^{\ell'} \, \mid \, \mathcal{E}^{\ell' - 1}) \Pr(\mathcal{E}^{\ell' - 1}) + \Pr(\mathcal{E}^{\ell'} \, \mid \, (\mathcal{E}^{\ell' - 1})^\mathsf{c}) \Pr ((\mathcal{E}^{\ell' - 1})^\mathsf{c} ) \nonumber \\
&\ge \Pr(\mathcal{E}^{\ell'} \, \mid \, \mathcal{E}^{\ell' - 1}) \Pr(\mathcal{E}^{\ell' - 1}) \label{eqn:repeated-invocation}
\end{align}
Repeated applications of \eqref{eqn:cor-ineq} and \eqref{eqn:repeated-invocation} in conjunction with the observation that $\Pr(\mathcal{E}^1) = 1$\footnote{Since we do not compress the input layer.} yield
\begin{align*}
\Pr(\mathcal{E}^\ell) &\ge \Pr(\mathcal{E}^{\ell'} \, \mid \, \mathcal{E}^{\ell' - 1}) \Pr(\mathcal{E}^{\ell' - 1}) \\
&\,\,\, \vdots & \text{Repeated applications of \eqref{eqn:repeated-invocation}} \\
&\ge \prod_{\ell'=2}^\ell \Pr(\mathcal{E}^{\ell'} \, \mid \, \mathcal{E}^{\ell' -1}) \\
&\ge \prod_{\ell'=2}^\ell \left(1 - \frac{\delta \, \eta^{\ell'}}{\eta}\right) &\text{By \eqref{eqn:cor-ineq}} \\
&\ge 1 - \frac{\delta}{\eta} \sum_{\ell'=2}^\ell \eta^{\ell'} &\text{By the Weierstrass Product Inequality},
\end{align*}
where the last inequality follows by the Weierstrass Product Inequality\footnote{The Weierstrass Product Inequality~\citep{doerr2018probabilistic} states that for $p_1, \ldots, p_n \in [0,1]$, $$\prod_{i=1}^n (1 - p_i) \ge 1 - \sum_{i=1}^n p_i.$$} and this establishes the lemma.
\end{proof}
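As a quick numerical sanity check of the Weierstrass Product Inequality used in the last step:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
p = rng.uniform(0.0, 0.1, size=20)      # failure probabilities p_i
assert np.prod(1 - p) >= 1 - p.sum()    # prod(1 - p_i) >= 1 - sum(p_i)
\end{verbatim}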
Appropriately invoking Lemma~\ref{lem:layer}, we can now establish the approximation guarantee for the entire neural network. This is stated in Theorem~\ref{thm:main} and the proof can be found below.
\thmmain*
\begin{proof}
Invoking Lemma~\ref{lem:layer} with $\ell = L$, we have that for $\hat{\theta} = (\hat{W}^2, \ldots, \hat{W}^L)$,
\begin{align*}
\Pr_{\hat{\theta}, \, \Input} \left(f_{\paramHat}(x) \in \left(1 \pm 2 \, (L - 1) \, \epsilonLayer[L + 1]\right) f_\param(x) \right) &= \Pr_{\hat{\theta}, \, \Input } \left(\hat z^{L}(\Input) \in \left(1 \pm 2 \, (L - 1) \, \epsilonLayer[L + 1]\right) z^L (\Input)\right) \\
&= \Pr(\mathcal{E}^{L}) \\
&\ge 1 - \frac{\delta \, \sum_{\ell' = 2}^{L} \eta^{\ell'}}{\eta} \\
&= 1 - \delta,
\end{align*}
where the last equality follows by definition of $\eta = \sum_{\ell = 2}^L \eta^\ell$.
Note that by definition,
\begin{align*}
\epsilonLayer[L+1] &= \epsilonLayerDefWordy[L+1] \\
&= \frac{\epsilon}{\epsilonDenomContant \, (L-1)},
\end{align*}
where the last equality follows by the fact that the empty product $\DeltaNeuronHatLayersDef[L+1]$ is equal to 1.
Thus, we have
\begin{align*}
2 \, (L-1) \epsilonLayer[L+1] &= \epsilon,
\end{align*}
and so we conclude
$$
\Pr_{\hat{\theta}, \, \Input} \left(f_{\paramHat}(x) \in (1 \pm \epsilon) f_\param(x) \right) \ge 1 - \delta,
$$
which, along with the sampling complexity of Alg.~\ref{alg:sparsify-weights} (Line~\ref{lin:beg-sampling}), establishes the approximation guarantee provided by the theorem.
For the computational time complexity, we observe that
the most time-consuming operation per iteration of the loop on Lines~\ref{lin:beg-main-loop}-\ref{lin:end-main-loop} is the weight sparsification procedure. The asymptotic time complexity of each $\textsc{Sparsify}$ invocation for each neuron $i \in [\eta^\ell]$ in layers $\ell \in \br{2,\ldots,L}$ (Alg.~\ref{alg:main}, Line~\ref{lin:pos-sparsify-weights}) is dominated by the relative importance computation for incoming edges (Alg.~\ref{alg:sparsify-weights}, Lines~\ref{lin:beg-sensitivity}-\ref{lin:end-sensitivity}). This can be done by evaluating $\WWRow[ik]^\ell a_{k}^{\ell-1}(x)$ for all $k \in \mathcal{W}$ and $x \in \SS$, for a total computation time that is bounded above by $\Bigo\left(|\SS| \, \eta^{\ell-1} \right)$ since $|\mathcal{W}| \leq \eta^{\ell-1}$ for each $i \in [\eta^\ell]$. Thus, $\textsc{Sparsify}$ takes $\Bigo\left(\abs{\SS}\, \eta^{\ell-1} \right)$ time. Summing the computation time over all layers and neurons in each layer, we obtain an asymptotic time complexity of $\Bigo \big(\abs{\SS} \, \sum_{\ell = 2}^L \eta^{\ell-1} \eta^{\ell}\big) \subseteq \Bigo \left(\abs{\SS} \, \eta^* \, \eta \right)$. Since $\abs{\SS} \in \Bigo(\log (\eta \, \eta^* / \delta))$, we conclude that the computational complexity of our neural network compression algorithm is
\begin{equation}
\label{eqn:computation-time}
\Bigo \left( \eta \, \, \eta^* \, \log \big(\eta \, \eta^*/ \delta \big) \right).
\end{equation}
\end{proof}
\subsubsection{Proof of Theorem~\ref{thm:instance-independent-main}}
In order to ensure that the established sampling bounds are non-vacuous in terms of the sensitivity, i.e., not linear in the number of incoming edges, we show that the sum of sensitivities per neuron, $S$, is small. The following lemma establishes that the sum of sensitivities can be bounded in an \emph{instance-independent} manner, by a term that is logarithmic in roughly the total number of edges ($\eta \cdot \eta^*$).
\begin{lemma}[Sensitivity Bound]
\label{lem:sens-bound}
For any $\ell \in \br{2,\ldots,L}$ and $i \in [\eta^{\ell}]$, the sum of sensitivities $S = S_+ + S_-$ is bounded by
$$
S \leq 2 \, |\SS| = 2 \, \ceil*{\kPrime \logTerm }.
$$
\end{lemma}
\begin{proof}
Consider $S_+$ for an arbitrary $\ell \in \{2, \ldots, L\}$ and $i \in [\eta^{\ell}]$. For any $j \in \mathcal{W}$, we have the following bound on the sensitivity of the single edge $j$,
\begin{align*}
\s &= \max_{\Input \in \SS} \,\, \g{x} \leq \sum_{\Input \in \SS} \,\, \g{x} = \sum_{\Input \in \SS} \, \, \gDef{x},
\end{align*}
where the inequality follows from the fact that we can upper bound the max by a summation over $\Input \in \SS$ since $\g{x} \ge 0$, $\forall j \in \mathcal{W}$. Thus,
\begin{align*}
S_+ &= \sum_{j \in \mathcal{W}} \s \leq \sum_{j \in \mathcal{W}} \sum_{\Input \in \SS} \, \, \g{x} \\
&=\sum_{\Input \in \SS} \frac{\sum_{j \in \mathcal{W}} \WWRow[j] \, a_{j}(\Input)}{\sum_{k \in \mathcal{W}}\WWRow[ k]\, a_{k}(\Input) } = |\SS|,
\end{align*}
where we used the fact that the sum of sensitivities is finite to swap the order of summation.
Using the same argument as above, we obtain $S_- = \sum_{j \in \mathcal{W}_-} \s \leq |\SS|$, which establishes the lemma.
\end{proof}
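The bound is easy to verify numerically; the following sketch uses synthetic positive weights and cached activations (all names are illustrative):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
w = rng.uniform(0.1, 1.0, size=50)          # positive weights of one neuron
A = rng.uniform(0.0, 1.0, size=(20, 50))    # A[t, j] = a_j(x_t), x_t in S
g = (w * A) / (w * A).sum(axis=1, keepdims=True)  # relative contributions
s = g.max(axis=0)                           # per-edge sensitivities
assert s.sum() <= A.shape[0]                # S_+ <= |S|
\end{verbatim}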
Note that the sampling complexities established above have a linear dependence on the sum of sensitivities, $\sum_{\ell = 2}^{L} \sum_{i=1}^{\eta^\ell} S_\neuron^\ell$, which is instance-dependent, i.e., depends on the sampled $\SS \subseteq \PP$ and the actual weights of the trained neural network.
By applying Lemma~\ref{lem:sens-bound}, we obtain a bound on the size of the compressed network that is independent of the sensitivity.
\begin{restatable}[Sensitivity-Independent Network Compression]{theorem}{thminstanceindependentmain}
\label{thm:instance-independent-main}
For any given $\epsilon, \delta \in (0, 1)$ our sampling scheme (Alg.~\ref{alg:main}) generates a set of parameters $\hat{\theta}$ of size
\begin{align*}
\size{\hat{\theta}} \in
\Bigo \left( \frac{ \log(\eta / \delta) \, \log ( \eta \, \eta^* / \delta) \log^2(\kmaxInsideLog) \, \eta \, L^2}{ \epsilon^2} \, \sum_{\ell = 2}^{L} (\DeltaNeuronHatLayers)^2 \, \right),
\end{align*}
in $\Bigo \left( \eta \, \, \eta^* \, \log \big(\eta \, \eta^*/ \delta \big) \right)$ time, such that $\Pr_{\hat{\theta}, \, \Input \sim {\mathcal D}} \left(f_{\paramHat}(x) \in (1 \pm \epsilon) f_\param(x) \right) \ge 1 - \delta$.
\end{restatable}
\begin{proof}
Combining Lemma~\ref{lem:sens-bound} and Theorem~\ref{thm:main} establishes the theorem.
\end{proof}
\subsubsection{Generalized Network Compression}
Theorem~\ref{thm:main} gives us an approximation guarantee with respect to one randomly drawn point $\Input \sim {\mathcal D}$. The following corollary extends this approximation guarantee to any set of $n$ randomly drawn points using a union bound argument, which enables approximation guarantees for, e.g., a test data set composed of $n$ i.i.d. points drawn from the distribution. We note that the sampling complexity only increases by roughly a logarithmic term in $n$.
\begin{corollary}[Generalized Network Compression]
\label{cor:generalized-compression}
For any $\epsilon, \delta \in (0,1)$ and a set of i.i.d. input points $\PP'$ of cardinality $|\PP'| \in \mathbb{N}_+$, i.e., $\PP' \stackrel{i.i.d.}{\sim} {\mathcal D}^{|\PP'|}$, consider the reparameterized version of Alg.~\ref{alg:main} with
\begin{enumerate}
\item $\SS \subseteq \PP$ of size $|\SS| \ge \ceil*{\logTermGeneral \kPrime}$,
\item $\DeltaNeuronHat = \DeltaNeuronHatDef$ as before, but $\kappa$ is instead defined as
$$
\kappa = \sqrt{2 \lambda_*} \left(1 + \sqrt{2 \lambda_*} \logTermGeneral \right), \qquad \text{and}
$$
\item $m \ge \SampleComplexityGeneralConcise$ in the sample complexity in \textsc{SparsifyWeights}.
\end{enumerate}
Then, Alg.~\ref{alg:main} generates a set of neural network parameters $\hat{\theta}$ of size at most
\begin{align*}
\size{\hat{\theta}} &\leq \sum_{\ell = 2}^{L} \sum_{i=1}^{\eta^\ell} \left( \ceil*{\frac{32 \, (L-1)^2 \, (\DeltaNeuronHatLayers)^2 \, S_\neuron^\ell \, \kmax \, \log (16 \, |\PP'| \, \eta / \delta) }{\epsilon^2}} + 1\right) \\
&\in \Bigo \left( \frac{ K \, \log ( \eta \, |\PP'| / \delta) \, L^2}{ \epsilon^2} \, \sum_{\ell = 2}^{L} (\DeltaNeuronHatLayers)^2 \, \sum_{i=1}^{\eta^\ell} S_\neuron^\ell \, \right),
\end{align*}
in $\Bigo \left( \eta \, \, \eta^* \, \log \big(\eta \, \eta^* \, |\PP'| / \delta \big) \right)$ time such that
$$
\Pr_{\hat{\theta}, \, \Input} \left(\forall{\Input \in \PP'}: f_{\paramHat}(x) \in (1 \pm \epsilon) f_\param(x) \right) \ge 1 - \frac{\delta}{2}.
$$
\end{corollary}
\begin{proof}
The reparameterization enables us to invoke Theorem~\ref{thm:main} with $\delta' = \nicefrac{\delta}{2 \, |\PP'|}$; applying the union bound over all $|\PP'|$ i.i.d. samples in $\PP'$ establishes the corollary.
\end{proof}
\section{Conclusion}
\label{sec:conclusion}
We presented a coreset-based neural network compression algorithm for compressing the parameters of a trained fully-connected neural network in a manner that approximately preserves the network's output. Our method and analysis extend traditional coreset constructions to the application of compressing parameters, which may be of independent interest. Our work distinguishes itself from prior approaches in that it establishes theoretical guarantees on the approximation accuracy and size of the generated compressed network.
As a corollary to our analysis, we obtain generalization bounds for neural networks, which may provide novel insights on the generalization properties of neural networks. We empirically demonstrated the practical effectiveness of our compression algorithm on a variety of neural network configurations and real-world data sets. In future work, we plan to extend our algorithm and analysis to compress Convolutional Neural Networks (CNNs) and other network architectures. We conjecture that our compression algorithm can be used to reduce storage requirements of neural network models and enable fast inference in practical settings.
\section{Introduction}
\label{sec:introduction}
Within the past decade, large-scale neural networks have demonstrated unprecedented empirical success in high-impact applications such as object classification, speech recognition, computer vision, and natural language processing.
However, with the ever-increasing size of state-of-the-art neural networks, the resulting storage requirements and performance of these models are becoming increasingly prohibitive in terms of both time and space. Recently proposed architectures for neural networks, such as those in~\cite{Alex2012,Long15,SegNet15}, contain millions of parameters, rendering them prohibitive to deploy on platforms that are resource-constrained, e.g., embedded devices, mobile phones, or small scale robotic platforms.
In this work, we consider the problem of sparsifying the parameters of a trained fully-connected neural network in a principled way so that the output of the compressed neural network is approximately preserved. We introduce a neural network compression approach based on identifying and removing weighted edges with low relative importance via coresets, small weighted subsets of the original set that approximate the pertinent cost function. Our compression algorithm hinges on extensions of the traditional sensitivity-based coresets framework~\citep{langberg2010universal,braverman2016new}, and to the best of our knowledge, is the first to apply coresets to parameter downsizing. In this regard, our work aims to simultaneously introduce a practical algorithm for compressing neural network parameters with provable guarantees and close the research gap in prior coresets work, which has predominantly focused on compressing input data points.
In particular, this paper contributes the following:
\begin{enumerate}
\item A coreset approach to compressing problem-specific parameters based on a novel, empirical notion of sensitivity that extends state-of-the-art coreset constructions.
\item An efficient neural network compression algorithm, CoreNet, based on our extended coreset approach that sparsifies the parameters via importance sampling of weighted edges.
\item Extensions of the CoreNet method, CoreNet+ and CoreNet++, that improve upon the edge sampling approach by additionally performing neuron pruning and amplification.
\item Analytical results establishing guarantees on the approximation accuracy, size, and generalization of the compressed neural network.
\item Evaluations on real-world data sets that demonstrate the practical effectiveness of our algorithm in compressing neural network parameters and validate our theoretical results.
\end{enumerate}
\section{Method}
\label{sec:method}
In this section, we introduce our neural network compression algorithm as depicted in Alg.~\ref{alg:main}. Our method is based on an importance sampling scheme that extends traditional sensitivity-based coreset constructions to the application of compressing parameters.
\subsection{CoreNet}
Our method (Alg.~\ref{alg:main}) hinges on the insight that a validation set of data points $\PP \stackrel{i.i.d.}{\sim} {\mathcal D}^n$ can be used to approximate the relative importance, i.e., sensitivity, of each weighted edge with respect to the input data distribution ${\mathcal D}$. For this purpose, we first pick a subsample of the data points $\SS \subseteq \PP$ of appropriate size (see Sec.~\ref{sec:analysis} for details), cache each neuron's activation, and compute a neuron-specific constant that determines the required edge sampling complexity (Lines~\ref{lin:sample-s}-\ref{lin:cache-activations}).
\input{pseudocode}
Subsequently, we apply our core sampling scheme to sparsify the set of incoming weighted edges to each neuron in all layers (Lines~\ref{lin:beg-main-loop}-\ref{lin:end-main-loop}).
For technical reasons (see Sec.~\ref{sec:analysis}), we perform the sparsification on the positive and negative weighted edges separately and then consolidate the results (Lines~\ref{lin:weight-sets}-\ref{lin:consolidate}).
By repeating this procedure for all neurons in every layer, we obtain a set $\hat{\theta} = (\hat{W}^2, \ldots, \hat{W}^L)$ of sparse weight matrices such that the output of each layer and the entire network is approximately preserved, i.e., $\hat{W}^{\ell} \hat a^{\ell-1}(\Input) \approx W^\ell a^{\ell-1}(\Input)$ and $f_{\paramHat}(\Input) \approx f_\param(\Input)$, respectively\footnote{$\hat a^{\ell -1}(\Input)$ denotes the approximation from previous layers for an input $\Input \sim {\mathcal D}$; see Sec.~\ref{sec:analysis} for details.}.
\subsection{Sparsifying Weights}
The crux of our compression scheme lies in Alg.~\ref{alg:sparsify-weights} (invoked twice on Line~\ref{lin:pos-sparsify-weights}, Alg.~\ref{alg:main}) and in particular, in the importance sampling scheme used to select a small subset of edges of high importance. The cached activations are used to compute the \emph{sensitivity}, i.e., relative importance, of each considered incoming edge $j \in \mathcal{W}$ to neuron $i \in [\eta^\ell]$, $\ell \in \br{2,\ldots,L}$ (Alg.~\ref{alg:sparsify-weights}, Lines~\ref{lin:beg-sensitivity}-\ref{lin:end-sensitivity}). The relative importance of each edge $j$ is computed as the maximum (over $\Input \in \SS$) ratio of the edge's contribution to the sum of contributions of all edges. In other words, the sensitivity $\sPM$ of an edge $j$ captures the highest (relative) impact $j$ had on the output of neuron $i \in [\eta^\ell]$ in layer $\ell$ across all $\Input \in \SS$.
The sensitivities are then used to compute an importance sampling distribution over the incoming weighted edges (Lines~\ref{lin:beg-sampling-distribution}-\ref{lin:end-sampling-distribution}). The intuition behind the importance sampling distribution is that if $\sPM$ is high, then edge $j$ is likely to have a high impact on the output of neuron $i$, and we should therefore keep edge $j$ with a higher probability. Then, $m$ edges are sampled with replacement (Lines~\ref{lin:beg-sampling}-\ref{lin:end-sampling}) and the sampled weights are reweighted to ensure the unbiasedness of our estimator (Lines~\ref{lin:beg-reweigh}-\ref{lin:end-reweigh}).
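A condensed sketch of this edge-sampling step for a single neuron and one sign class of weights is given below; the choice of the sample size $m$ and the remaining bookkeeping follow the pseudocode and are abbreviated here.
\begin{verbatim}
import numpy as np

def sparsify_neuron(w, A, m, seed=0):
    # w: strictly positive weights of the considered incoming edges;
    # A: cached activations, A[t, j] = a_j(x_t) for x_t in S.
    rng = np.random.default_rng(seed)
    g = (w * A) / (w * A).sum(axis=1, keepdims=True)  # relative impact
    s = g.max(axis=0)                                 # sensitivities
    q = s / s.sum()                                   # sampling distribution
    idx = rng.choice(w.size, size=m, p=q)             # sample w/ replacement
    counts = np.bincount(idx, minlength=w.size)
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(counts > 0, counts * w / (m * q), 0.0)  # reweight
\end{verbatim}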
\subsection{Extensions: Neuron Pruning and Amplification}
In this subsection we outline two improvements to our algorithm that do not violate any of our theoretical properties and may improve compression rates in practical settings.
\textbf{Neuron pruning (CoreNet+)}
Similar to removing redundant edges, we can use the empirical activations to gauge the importance of each neuron.
In particular, if the maximum activation (over all evaluations $\Input \in \SS$) of a neuron is equal to 0, then the neuron -- along with all of the incoming and outgoing edges -- can be pruned without significantly affecting the output with reasonable probability.
This intuition can be made rigorous under the assumptions outlined in Sec.~\ref{sec:analysis}.
\textbf{Amplification (CoreNet++)}
Coresets that provide stronger approximation guarantees can be constructed via \emph{amplification} -- the procedure of constructing multiple approximations (coresets) $(\WWHatRow^\ell)_1, \ldots, (\WWHatRow^\ell)_\tau$ over $\tau$ trials, and picking the best one. To evaluate the quality of each approximation, a different subset $\TT \subseteq \PP \setminus \SS$ can be used to infer performance. In practice, amplification would entail constructing multiple approximations by executing Line~\ref{lin:pos-sparsify-weights} of Alg.~\ref{alg:main} and picking the one that achieves the lowest relative error on $\TT$.
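In a schematic form, amplification reduces to the following sketch, where \texttt{construct\_coreset} wraps the sparsification step of Alg.~\ref{alg:main} and \texttt{eval\_error} computes the empirical relative error on $\TT$ (both are hypothetical helpers):
\begin{verbatim}
def amplify(construct_coreset, eval_error, tau):
    # Build tau independent coresets and keep the one with the
    # lowest empirical relative error on the held-out subset T.
    candidates = [construct_coreset() for _ in range(tau)]
    return min(candidates, key=eval_error)
\end{verbatim}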
\section{Problem Definition}
\label{sec:problem-definition}
\subsection{Fully-Connected Neural Networks}
A feedforward fully-connected neural network with $L \in \mathbb{N}_+$ layers and parameters $\theta$ defines a mapping $f_\param: \mathcal{X} \to \mathcal{Y}$ for a given input $x \in \mathcal{X} \subseteq \Reals^d$ to an output $y \in \mathcal{Y} \subseteq \Reals^k$ as follows. Let $\eta^\ell \in \mathbb{N}_+$ denote the number of neurons in layer $\ell \in [L]$, where $[L] = \{1, \ldots, L \}$ denotes the index set, and where $\eta^1 = d$ and $\eta^{L} = k$. Further, let $\eta = \sum_{\ell = 2}^L \eta^\ell$ and $\eta^* = \max_{\ell \in \br{2,\ldots,L}} \eta^\ell$. For layers $\ell \in \br{2,\ldots,L}$, let $W^\ell \in \Reals^{\eta^\ell \times \eta^{\ell-1}}$ be the weight matrix for layer $\ell$ with entries denoted by $\WWRow[ij]^\ell$, rows denoted by $\WWRow^\ell \in \Reals^{1 \times \eta^{\ell -1}}$, and $\theta = (W^2,\ldots,W^{L})$.
For notational simplicity, we assume that the bias is embedded in the weight matrix.
Then for an input vector $x \in \Reals^d$, let $a^1 = x$ and $z^{\ell} = W^{\ell} a^{\ell-1} \in \Reals^{\eta^{\ell}}$, $\forall \ell \in \br{2,\ldots,L}$, where $a^{\ell-1} = \relu{z^{\ell-1}} \in \Reals^{\eta^{\ell-1}}$ denotes the activation.
We consider the activation function to be the Rectified Linear Unit (ReLU) function, i.e., $\relu{\cdot} = \max \{\cdot\,, 0\}$ (entry-wise, if the input is a vector).
The output of the network for an input $x$ is $f_\param(x) = z^L$, and in particular, for classification tasks the prediction is
$
\argmax_{i \in [k]} f_\param(x)_i = \argmax_{i \in [k]} z^L_i.
$
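As a sketch, the forward pass defined above can be written as follows; the embedding of the bias (appending a constant entry to each activation) is omitted for brevity.
\begin{verbatim}
import numpy as np

def forward(theta, x):
    # theta = (W2, ..., WL); ReLU between layers, identity output.
    a = x
    for W in theta[:-1]:
        a = np.maximum(W @ a, 0.0)   # a^l = relu(W^l a^{l-1})
    return theta[-1] @ a             # f_theta(x) = z^L
\end{verbatim}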
\subsection{Neural Network Coreset Problem}
Consider the setting where a neural network $f_\param(\cdot)$ has been trained on a training set of independent and identically distributed (i.i.d.)\ samples from a joint distribution on $\mathcal{X} \times \mathcal{Y}$,
yielding parameters $\theta = (W^2,\ldots,W^{L})$. We further denote the input points of a validation data set as $\PP = \br{x_i}_{i=1}^n \subseteq \mathcal{X}$ and the marginal distribution over the input space $\mathcal{X}$ as ${\mathcal D}$. We define the size of the parameter tuple $\theta$, $\size{\theta}$, to be the sum of the number of non-zero entries in the weight matrices $W^2,\ldots,W^{L}$.
For any given $\epsilon, \delta \in (0,1)$, our overarching goal is to generate a reparameterization $\hat{\theta}$, yielding the neural network $f_{\paramHat}(\cdot)$, using a randomized algorithm, such that $\size{\hat{\theta}} \ll \size{\theta}$, and the neural network output $f_\param(x)$, $\Input \sim {\mathcal D}$ can be approximated up to $1 \pm \eps$ multiplicative error with probability greater than $1- \delta$. We define the $1 \pm \epsilon$ multiplicative error between two $k$-dimensional vectors $a, b \in \Reals^k$ as the following entry-wise bound:
$
a \in (1 \pm \epsilon)b \, \Leftrightarrow \, a_i \in (1 \pm \epsilon) b_i \, \forall{i \in [k]},
$
and formalize the definition of an $(\epsilon, \delta)$-coreset as follows.
\begin{definition}[$(\epsilon, \delta)$-coreset]
Given user-specified $\eps, \delta \in (0,1)$, a set of parameters $\hat{\theta} = (\hat{W}^2, \ldots, \hat{W}^L)$ is an $(\eps, \delta)$-coreset for the network parameterized by $\theta$ if for $x \sim {\mathcal D}$, it holds that
$$
\Pr_{\hat{\theta}, \Input} (f_{\paramHat} (x) \in (1 \pm \eps) f_\param(x)) \ge 1 - \delta,
$$
where $\Pr_{\hat{\theta}, \Input}$ denotes a probability measure with respect to a random data point $\Input$ and the output $\hat{\theta}$ generated by a randomized compression scheme.
\end{definition}
\section{Related Work}
\label{sec:related-work}
Our work builds upon the following prior work in coresets and compression approaches.
\textbf{Coresets}
Coreset constructions were originally introduced in the context of computational geometry \citep{agarwal2005geometric} and subsequently generalized for applications to other problems via an importance sampling-based, \emph{sensitivity} framework~\citep{langberg2010universal,braverman2016new}. Coresets have been used successfully to accelerate various machine learning algorithms such as $k$-means clustering~\citep{feldman2011unified,braverman2016new}, graphical model training~\citep{molina2018core}, and logistic regression~\citep{huggins2016coresets} (see the surveys of~\cite{bachem2017practical} and \cite{munteanu2018coresets} for a complete list). In contrast to prior work, we generate coresets for reducing the number of parameters -- rather than data points -- via a novel construction scheme based on an efficiently-computable notion of sensitivity.
\textbf{Low-rank Approximations and Weight-sharing}
\citet{Denil2013} were among the first to empirically demonstrate the existence of significant parameter redundancy in deep neural networks. A predominant class of compression approaches consists of using low-rank matrix decompositions, such as Singular Value Decomposition (SVD)~\citep{Denton14}, to approximate the weight matrices with their low-rank counterparts. Similar works entail the use of low-rank tensor decomposition approaches applicable both during and after training~\citep{jaderberg2014speeding, kim2015compression, tai2015convolutional, ioannou2015training, alvarez2017compression, yu2017compressing}. Another class of approaches uses feature hashing and weight sharing~\citep{Weinberger09, shi2009hash, Chen15Hash, Chen15Fresh, ullrich2017soft}. Building upon the idea of weight-sharing, quantization~\citep{Gong2014, Wu2016, Zhou2017} or regular structure of weight matrices was used to reduce the effective number of parameters~\citep{Zhao17, sindhwani2015structured, cheng2015exploration, choromanska2016binary, wen2016learning}. Despite their practical effectiveness in compressing neural networks, these works generally lack performance guarantees on the quality of their approximations and/or the size of the resulting compressed network.
\textbf{Weight Pruning}
Similar to our proposed method, weight pruning~\citep{lecun1990optimal} hinges on the idea that only a few dominant weights within a layer are required to approximately preserve the output. Approaches of this flavor have been investigated by~\cite{lebedev2016fast,dong2017learning}, e.g., by embedding sparsity as a constraint~\citep{iandola2016squeezenet, aghasi2017net, lin2017runtime}. Another related approach is that of~\cite{Han15}, which considers a combination of weight pruning and weight sharing methods. %
Nevertheless, prior work in weight pruning lacks rigorous theoretical analysis of the effect that the discarded weights can have on the compressed network. To the best of our knowledge, our work is the first to introduce a practical, sampling-based weight pruning algorithm with provable guarantees.
\textbf{Generalization}
The generalization properties of neural networks have been extensively investigated in various contexts~\citep{dziugaite2017computing, neyshabur2017pac, bartlett2017spectrally}. However, as was pointed out by~\cite{neyshabur2017exploring}, current approaches to obtaining non-vacuous generalization bounds do not fully or accurately capture the empirical success of state-of-the-art neural network architectures.
Recently, \cite{arora2018stronger} and \cite{zhou2018compressibility} highlighted the close connection between compressibility and generalization of neural networks. \cite{arora2018stronger} presented a compression method based on the Johnson-Lindenstrauss (JL) Lemma~\citep{johnson1984extensions} and proved generalization bounds based on succinct reparameterizations of the original neural network. Building upon the work of~\cite{arora2018stronger}, we extend our theoretical compression results to establish novel generalization bounds for fully-connected neural networks. Unlike the method of~\cite{arora2018stronger}, which exhibits guarantees of the compressed network's performance only on the set of training points, our method's guarantees hold (probabilistically) for any random point drawn from the distribution. In addition, we establish that our method can $\epsilon$-approximate the neural network output neuron-wise, i.e., preserve each output entry up to a multiplicative $(1 \pm \epsilon)$ factor, which is stronger than the norm-based guarantee of \cite{arora2018stronger}.
In contrast to prior work, this paper addresses the problem of compressing a fully-connected neural network while \emph{provably} preserving the network's output. Unlike previous theoretically-grounded compression approaches -- which provide guarantees in terms of the normed difference --, our method provides the stronger entry-wise approximation guarantee, even for points outside of the available data set. As our empirical results show, ensuring that the output of the compressed network entry-wise approximates that of the original network is critical to retaining high classification accuracy. Overall, our compression approach remedies the shortcomings of prior approaches in that it (i) exhibits favorable theoretical properties, (ii) is computationally efficient, e.g., does not require retraining of the neural network, (iii) is easy to implement, and (iv) can be used in conjunction with other compression approaches -- such as quantization or Huffman coding -- to obtain further improved compression rates.
\section{Results}
\label{sec:results}
In this section, we evaluate the practical effectiveness of our compression algorithm on popular benchmark data sets (\textit{MNIST}~\citep{lecun1998gradient}, \textit{FashionMNIST}~\citep{xiao2017}, and \textit{CIFAR-10}~\citep{krizhevsky2009learning}) and varying configurations of trained fully-connected neural networks: 2 to 5 hidden layers, 100 to 1000 hidden units, and either fixed or decreasing hidden sizes (the latter denoted by \emph{pyramid} in the figures). We further compare the effectiveness of our sampling scheme in reducing the number of non-zero parameters of a network, i.e., in sparsifying the weight matrices, to that of uniform sampling, Singular Value Decomposition (SVD), and current state-of-the-art sampling schemes for matrix sparsification~\citep{drineas2011note,achlioptas2013matrix,kundu2014note}, which are based on the matrix norms $\ell_1$ and $\ell_2$ (Frobenius). The details of the experimental setup and results of additional evaluations may be found in Appendix~\ref{app:results}.
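Roughly speaking, these norm-based schemes retain an entry $w_{ij}$ of a weight matrix $W$ with probability proportional to its norm contribution, e.g., $$p_{ij} = \frac{|w_{ij}|}{\sum_{k,l} |w_{kl}|} \quad (\ell_1) \qquad \text{or} \qquad p_{ij} = \frac{w_{ij}^2}{\|W\|_F^2} \quad (\ell_2),$$ and rescale each retained entry by the inverse of its sampling probability so that the sparsified matrix remains an unbiased estimator of $W$; we refer to the cited works for the precise estimators and their variants.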
\paragraph{Experiment Setup}
We evaluate three variations of our compression algorithm: (i) edge sampling alone (CoreNet), (ii) edge sampling with neuron pruning (CoreNet+), and (iii) edge sampling with neuron pruning and amplification (CoreNet++). For each method, we report the average relative error of the output ($\ell_1$-norm) and the average drop in classification accuracy relative to the accuracy of the uncompressed network; both metrics are evaluated on a previously unseen test set.
\paragraph{Results}
Results for varying architectures and datasets are depicted in Figures~\ref{fig:classification} and~\ref{fig:error} for the average drop in classification accuracy and relative error ($\ell_1$-norm), respectively. As apparent from Figure~\ref{fig:classification}, we are able to compress networks to about 15\% of their original size without significant loss of accuracy for networks trained on \textit{MNIST} and \textit{FashionMNIST}, and to about 50\% of their original size for \textit{CIFAR}.
\begin{figure}[htb!]
\centering
\includegraphics[width=0.325\textwidth]{figures/acc/MNIST_l3_h1000_pyramid}
\includegraphics[width=0.325\textwidth]{figures/acc/CIFAR_l3_h1000_pyramid}
\includegraphics[width=0.325\textwidth]{figures/acc/FashionMNIST_l3_h1000_pyramid}%
\caption{Evaluation of drop in classification accuracy after compression against the \textit{MNIST}, \textit{CIFAR}, and \textit{FashionMNIST} datasets with varying number of hidden layers ($L$) and number of neurons per hidden layer ($\eta^*$). Shaded region corresponds to values within one standard deviation of the mean.}
\label{fig:classification}
\end{figure}%
\begin{figure}[htb!]
\centering
\includegraphics[width=0.325\textwidth]{figures/error/MNIST_l3_h1000_pyramid}
\includegraphics[width=0.325\textwidth]{figures/error/CIFAR_l3_h1000_pyramid}
\includegraphics[width=0.325\textwidth]{figures/error/FashionMNIST_l3_h1000_pyramid}%
\caption{Evaluation of relative error after compression against the \textit{MNIST}, \textit{CIFAR}, and \textit{FashionMNIST} datasets with varying number of hidden layers ($L$) and number of neurons per hidden layer ($\eta^*$).
}
\label{fig:error}
\end{figure}%
\paragraph{Discussion}
The simulation results presented in this section validate the theoretical results established in Sec.~\ref{sec:analysis}. In particular, our empirical results indicate that the networks compressed by our method outperform those compressed via competing matrix sparsification methods across all considered experiments and trials. The results further suggest that empirical sensitivity can effectively capture the relative importance of neural network parameters, leading to a more informed importance sampling scheme. Moreover, the relative performance of our algorithm tends to increase as we consider deeper architectures. These findings suggest that our algorithm may also be effective in compressing modern convolutional architectures, which tend to be very deep.
\FloatBarrier
\section{Introduction}
Let $R$ be a ring. If $R$ has a unit element $1_R$, then the number $$\ch R := \min\lbrace
n\in\mathbb N \mid n1_R = 0\rbrace = \min\lbrace
n\in\mathbb N \mid na = 0 \text{ for all } a\in R\rbrace$$ is called the \textit{characteristic} of $R$.
(As usual, if $n1_R \ne 0$ for all $n\in\mathbb N$, then $\ch R := 0$.)
Let $\mathbb Z\langle X \rangle$ be the free associative ring without $1$ on the countable set
$X=\lbrace x_1, x_2, \dots\rbrace$, i.e., the ring of polynomials in non-commuting variables
from $X$ without a constant term.
Let $I$ be an ideal of a ring $R$. We say that $I$ is a \textit{T-ideal} of $R$ if $\varphi(I)\subseteq I$
for all $\varphi \in \End(R)$. We say that $f \in \mathbb Z\langle X \rangle$
is a \textit{polynomial identity} of $R$ \textit{with integer coefficients} if $f(a_1, \dots, a_n)=0$ for all $a_i \in R$.
In other words, $f$ is a polynomial identity if $\psi(f)=0$ for all $\psi \in\Hom(\mathbb Z\langle X \rangle, R)$. Note that the set $\Id(R, \mathbb Z)$ of polynomial identities of $R$ with integer coefficients is a $T$-ideal of $\mathbb Z\langle X \rangle$.
Let $P_n(\mathbb Z)$ be the additive subgroup of $\mathbb Z\langle X \rangle$ generated by $x_{\sigma(1)} x_{\sigma(2)} \dots x_{\sigma(n)}$, $\sigma \in S_n$. (Here $S_n$ is the $n$th symmetric group, $n\in\mathbb N$.)
Then $\frac{P_n(\mathbb Z)}{P_n(\mathbb Z) \cap \Id(R, \mathbb Z)}$ is a finitely generated Abelian group which is the direct sum of free
and primary cyclic groups: $$\frac{P_n(\mathbb Z)}{P_n(\mathbb Z) \cap \Id(R, \mathbb Z)} \cong
\underbrace{\mathbb Z \oplus \dots \oplus \mathbb Z}_{c_n(R, 0)} \oplus \bigoplus_{\substack{p\text{ is a prime}\\ \text{number}}}
\ \bigoplus_{k\in\mathbb N} \Bigl(\underbrace{\mathbb Z_{p^k}\oplus \dots \oplus \mathbb Z_{p^k}}_{c_n(R, p^k)}\Bigr).$$
We call the numbers $c_n(R, q)$ the \textit{codimensions} of polynomial identities of $R$
with integer coefficients.
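For instance, let us compute these numbers for $R=\mathbb Z$ itself (a routine check). Since $\mathbb Z$ is commutative, every monomial is congruent to $x_1 x_2 \dots x_n$ modulo $P_n(\mathbb Z) \cap \Id(\mathbb Z, \mathbb Z)$, while $m x_1 x_2 \dots x_n \notin \Id(\mathbb Z, \mathbb Z)$ for all $m\in\mathbb N$ (substitute $x_1=\dots=x_n=1$). Hence $$\frac{P_n(\mathbb Z)}{P_n(\mathbb Z) \cap \Id(\mathbb Z, \mathbb Z)} \cong \mathbb Z, \qquad c_n(\mathbb Z, 0)=1, \qquad c_n(\mathbb Z, q)=0 \text{ for } q \ne 0.$$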
Note that the symmetric group $S_n$ is acting on $\frac{P_n(\mathbb Z)}{P_n(\mathbb Z) \cap \Id(R, \mathbb Z)}$ by permutations
of variables, i.e., $\frac{P_n(\mathbb Z)}{P_n(\mathbb Z) \cap \Id(R, \mathbb Z)}$ is a $\mathbb ZS_n$-module. We refer to $\frac{P_n(\mathbb Z)}{P_n(\mathbb Z) \cap \Id(R, \mathbb Z)}$
as the {\itshape $\mathbb ZS_n$-module of ordinary multilinear polynomial functions} on $R$.
Denote by $\Gamma_n(\mathbb Z)$ the subgroup of $P_n(\mathbb Z)$ that consists of {\itshape proper polynomials}, i.e., linear combinations of products of long commutators. (All long commutators in this article are left-normed, e.g., $[x,y,z,t]:=[[[x,y],z],t]$.)
Then $\Gamma_n(\mathbb Z)$ is a $\mathbb ZS_n$-submodule of $P_n(\mathbb Z)$. Obviously, $\Gamma_1(\mathbb Z) = 0$.
Analogously, we define the \textit{codimensions} $\gamma_n(R, q)$ of proper polynomial identities of $R$:
$$\frac{\Gamma_n(\mathbb Z)}{\Gamma_n(\mathbb Z) \cap \Id(R, \mathbb Z)} \cong
\underbrace{\mathbb Z \oplus \dots \oplus \mathbb Z}_{\gamma_n(R, 0)} \oplus \bigoplus_{\substack{p\text{ is a prime}\\ \text{number}}}
\ \bigoplus_{k\in\mathbb N} \Bigl(\underbrace{\mathbb Z_{p^k}\oplus \dots \oplus \mathbb Z_{p^k}}_{\gamma_n(R, p^k)}\Bigr).$$
If $R$ has a unit element $1_R$, then, by the definition, $\gamma_0(R, q)$ is the number of $\mathbb Z_q$ in the decomposition of the cyclic additive subgroup of $R$ generated by $1_R$.
We refer to $\frac{\Gamma_n(\mathbb Z)}{\Gamma_n(\mathbb Z) \cap \Id(R, \mathbb Z)}$
as the {\itshape $\mathbb ZS_n$-module of proper multilinear polynomial functions} on $R$.
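For example, $\Gamma_2(\mathbb Z) = \langle [x_1,x_2] \rangle_{\mathbb Z}$ and, by the Jacobi identity, $\Gamma_3(\mathbb Z) = \langle [x_1,x_2,x_3],\ [x_1,x_3,x_2] \rangle_{\mathbb Z}$ since $$[x_2,x_3,x_1] = [x_1,x_3,x_2] - [x_1,x_2,x_3].$$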
If $A$ is an algebra over a field $F$, then we can consider codimensions $c_n(A, F) := \dim \frac{P_n(F)}{P_n(F) \cap \Id(A, F)}$ of polynomial
identities of $A$ with coefficients from $F$. (See~\cite[Definition 4.1.1]{ZaiGia}.)
Here $\Id(A, F) \subset F\langle X\rangle$
is the set of polynomial identities of $A$ with coefficients from $F$, and $P_n(F)$ is the subspace of
$F\langle X\rangle$ generated by $x_{\sigma(1)} x_{\sigma(2)} \dots x_{\sigma(n)}$, $\sigma \in S_n$.
The subspace of $P_n(F)$ consisting of proper polynomials, is denoted by $\Gamma_n(F)$.
We say that $\lambda=(\lambda_1, \dots, \lambda_s)$
is a \textit{(proper or ordered) partition} of $n$ and write $\lambda\vdash n$ if $\lambda_1 \geqslant \lambda_2 \geqslant \dots \geqslant \lambda_s > 0$,
$\lambda_i\in\mathbb N$, and $\sum_{i=1}^s \lambda_i = n$. For our convenience, we assume $\lambda_i=0$
for all $i > s$.
We say that $\mu=(\mu_1, \dots, \mu_s)$
is an \textit{unordered partition} of $n$ if $\mu_i\in\mathbb N$ and $\sum_{i=1}^s \mu_i = n$.
In this case we write $\mu \vDash n$. Again, for our convenience, we assume $\mu_i=0$
for all $i > s$.
For every ordered or unordered partition $\lambda$ one can assign the \textit{Young diagram} $D_\lambda$
which contains $\lambda_k$ boxes in the $k$th row. If $\lambda$ is unordered, then $D_\lambda$
is called \textit{generalized}. A Young diagram filled with numbers is called a \textit{Young tableau}.
A tableau corresponding to $\lambda$ is denoted by $T_\lambda$.
In the representation theory of symmetric groups, partitions and their
Young diagrams are widely used. (See~\cite{DrenKurs, ZaiGia} for applications to PI-algebras.)
Let $a_{T_{\lambda}} = \sum_{\pi \in R_{T_\lambda}} \pi$
and $b_{T_{\lambda}} = \sum_{\sigma \in C_{T_\lambda}}
(\sign \sigma) \sigma$
be the symmetrizers corresponding to a Young tableau~$T_\lambda$, $\lambda \vdash n$, where $R_{T_\lambda}$ and $C_{T_\lambda}$ are, respectively, the row and the column stabilizers of $T_\lambda$.
Then $S(\lambda) := (\mathbb Z S_n) b_{T_\lambda} a_{T_\lambda}$
is the corresponding Specht module. Moreover, the modules $S(\lambda)$
that correspond to different tableaux $T_\lambda$ of the same shape $\lambda$ are isomorphic.
(The proof is analogous to the case of fields.)
Though the modules $S(\lambda)$ are not irreducible over $\mathbb Z$ and even contain no irreducible $\mathbb Z S_n$-submodules (it is sufficient to consider the submodule $0 \ne 2M \subsetneqq M$ for any nonzero submodule $M \subseteq S(\lambda)$), we will use
them in order to describe the structure of $\frac{P_n(\mathbb Z)}{P_n(\mathbb Z) \cap \Id(R, \mathbb Z)}$.
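For example, for $\lambda = (2,1) \vdash 3$ the Specht module $S(2,1)$ is a free Abelian group whose rank equals $\dim_{\mathbb Q} S^{\mathbb Q}(2,1)$, and the hook length formula gives $$\dim_{\mathbb Q} S^{\mathbb Q}(2,1) = \frac{3!}{3\cdot 1 \cdot 1} = 2.$$ We use this value in the example below.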
\section{Codimensions of algebras over fields}
Every algebra over a field can be treated as a ring. Therefore, we have to deal with two different types of codimensions. Here we establish a relation between them.
\begin{proposition}\label{PropositionIntAbsence} Let $A$ be an algebra over a field $F$.
Then $c_n(A, q)=0$ for all $n\in\mathbb N$ and $q \ne \ch F$.
\end{proposition}
\begin{proof}
Note that $(\ch F) f \in \Id(R, \mathbb Z)$ for all $f\in \mathbb Z\langle X \rangle$.
Hence $\ch F > 0$ implies $c_n(A, q)=0$ for all $n\in\mathbb N$ and $q \ne \ch F$.
If $\ch F = 0$, then every $q=p^k \ne 0$ is invertible and $c_n(A, q)=0$ for all $n\in\mathbb N$ and $q \ne 0$ too.
\end{proof}
\begin{proposition}\label{PropositionIntPIoverQ}
Let $A$ be an algebra over
a field $F$, $\ch F = 0$.
Then $c_n(A, F)
\leqslant c_n(A, 0)$ for all $n\in\mathbb N$.
Moreover, $c_n(A, \mathbb Q)
= c_n(A, 0)$ for all $n\in\mathbb N$.
\end{proposition}
\begin{proof}
By Proposition~\ref{PropositionIntAbsence}, $\frac{P_n(\mathbb Z)}{P_n(\mathbb Z) \cap \Id(A, \mathbb Z)}$ is
a free Abelian group. Let $f_1, \dots, f_s$ be the preimages of its free generators in $P_n(\mathbb Z)$.
Note that $P_n(\mathbb Z) \subset P_n(\mathbb Q) \subseteq P_n(F)$ and for every $\sigma \in S_n$ the monomial $x_{\sigma(1)}x_{\sigma(2)}\dots x_{\sigma(n)}$ can be expressed as a linear combination with integer coefficients of
$f_1, \dots, f_s$ and an element of $P_n(\mathbb Z) \cap \Id(A, \mathbb Z)$.
Hence the images of $f_1, \dots, f_s$ generate $\frac{P_n(F)}{P_n(F) \cap \Id(A, F)}$
and $c_n(A, F) \leqslant c_n(A, 0)=s$.
Suppose $f_1, \dots, f_s$ are linearly dependent modulo $\Id(A, \mathbb Q)$. In this case $\frac{r_1}{q_1}f_1+\dots +\frac{r_s}{q_s}f_s \in \Id(A, \mathbb Q)$ for some $q_i \in\mathbb N$, $r_i \in \mathbb Z$.
Thus $$f:=r_1\left(\prod_{i=2}^s q_i\right)f_1 + r_2 q_1\left(\prod_{i=3}^s q_i\right)f_2+\dots
+ r_s \left(\prod_{i=1}^{s-1} q_i\right)f_s \in \Id(A, \mathbb Q).$$
However, $f\in \mathbb Z\langle X\rangle$. Hence $f \in \Id(A, \mathbb Z)$
and all $r_i = 0$ since $f_i$ are linearly independent modulo $\Id(A, \mathbb Z)$.
Therefore, the images of $f_1, \dots, f_s$ form a basis of $\frac{P_n(\mathbb Q)}{P_n(\mathbb Q) \cap \Id(A, \mathbb Q)}$ and $c_n(A, \mathbb Q)=c_n(A, 0)=s$.
\end{proof}
The next example shows that in the case $F\supsetneqq \mathbb Q$ we
could have $c_n(A, F) < c_n(A, \mathbb Q) = c_n(A, 0)$.
\begin{example}
Note that $P_3(\mathbb Q)\cong \mathbb QS_3 \cong S^{\mathbb Q}(3) \oplus S^{\mathbb Q}(2,1) \oplus S^{\mathbb Q}(2,1)\oplus S^{\mathbb Q}(1^3)$.
Let $a\in \mathbb QS_3$ such that $S^{\mathbb Q}(2,1)=\mathbb QS_3 a$. Denote by $f_1$ and $f_2$ the polynomials
that correspond to $a$ in the copies of $S^{\mathbb Q}(2,1)$ in $P_3(\mathbb Q)$.
Let $F=\mathbb Q(\sqrt 2)$.
Consider the $T$-ideal $I$ of $ F\langle X \rangle$ generated by $(f_1 + {\sqrt 2} f_2)$.
We claim that $c_3(F\langle X \rangle/I, F) = 4 < c_3(F\langle X \rangle/I, \mathbb Q)=6$.
\end{example}
\begin{proof}
First we notice that $P_3(F)\cap \Id(F\langle X \rangle/I, F)
= F S_3\cdot (f_1 + {\sqrt 2} f_2) \cong S^F(2,1)$.
Hence by the hook formula, $c_3(F\langle X \rangle/I, F) = 6-2 = 4$.
However, $P_3(\mathbb Q)\cap \Id(F\langle X \rangle/I, \mathbb Q)
= P_3(\mathbb Q)\cap F S_3 (f_1 + {\sqrt 2} f_2)=0$.
Indeed, suppose $f = b (f_1 + {\sqrt 2} f_2) \in P_3(\mathbb Q)$
for some $b\in F S_3$. Note that $b=b_1 +\sqrt 2 b_2$ where $b_1, b_2\in \mathbb QS_3$.
Therefore, $f = (b_1 +\sqrt 2 b_2) (f_1 + {\sqrt 2} f_2) = (b_1f_1+2 b_2 f_2) + \sqrt 2 (b_1 f_2 + b_2 f_1)$ and $f \in P_3(\mathbb Q)$ implies $b_1 f_2 + b_2 f_1 = 0$.
Recall that $\mathbb Q S_3 f_1 \oplus \mathbb Q S_3 f_2$ is the direct sum of $\mathbb Q S_3$-submodules.
Hence $b_1f_2 = b_2 f_1 = 0$. However, $\mathbb Q S_3 f_1 \cong \mathbb Q S_3 f_2$.
Thus $b_1f_1 = b_2 f_2 = 0$ too, $f=0$,
$P_3(\mathbb Q)\cap \Id(F\langle X \rangle/I, \mathbb Q) = 0$
and $c_3(F\langle X \rangle/I, \mathbb Q)=6$.
\end{proof}
The result, analogous to Proposition~\ref{PropositionIntPIoverQ}, holds in a positive characteristic.
\begin{proposition}\label{PropositionIntPIoverZp}
Let $A$ be an algebra over
a field $F$, $\ch F = p$.
Then $c_n(A, F)
\leqslant c_n(A, p)$ for all $n\in\mathbb N$.
Moreover, $c_n(A, \mathbb Z_p)
= c_n(A, p)$ for all $n\in\mathbb N$.
\end{proposition}
\begin{proof}
By Proposition~\ref{PropositionIntAbsence}, $\frac{P_n(\mathbb Z)}{P_n(\mathbb Z) \cap \Id(A, \mathbb Z)}$ is
the direct sum of copies of $\mathbb Z_p$. Let $f_1, \dots, f_s$ be the preimages of their standard generators in $P_n(\mathbb Z)$.
Note that $P_n(\mathbb Z_p)$ is an image of $P_n(\mathbb Z)$
under the natural homomorphism, $P_n(\mathbb Z_p) \subseteq P_n(F)$ and for every $\sigma \in S_n$ the monomial $x_{\sigma(1)}x_{\sigma(2)}\dots x_{\sigma(n)}$ can be expressed as a linear combination with integer coefficients of
$f_1, \dots, f_s$ and an element of $P_n(\mathbb Z) \cap \Id(A, \mathbb Z)$.
Hence the images of $f_1, \dots, f_s$ generate $\frac{P_n(F)}{P_n(F) \cap \Id(A, F)}$
and $c_n(A, F) \leqslant c_n(A, p)$.
Suppose $f_1, \dots, f_s$ are linearly dependent modulo $\Id(A, \mathbb Z_p)$. In this case $ \bar m_1 f_1+\dots +\bar m_s f_s \in \Id(A, \mathbb Z_p)$ for some $m_i \in \mathbb Z$.
Thus $m_1 f_1+\dots +m_s f_s \in \Id(A, \mathbb Z)$
and all $m_i \in p\mathbb Z$ since $f_i$ generate modulo $\Id(A, \mathbb Z)$
the direct sum of copies of $\mathbb Z_p$.
Therefore, the images of $f_1, \dots, f_s$ form a basis of $\frac{P_n(\mathbb Z_p)}{P_n(\mathbb Z_p) \cap \Id(A, \mathbb Z_p)}$ and $c_n(A, \mathbb Z_p)=c_n(A, p)=s$.
\end{proof}
The next result is concerned with the extension of a ring to an algebra over a field.
\begin{theorem}\label{TheoremFCodimofaRing}
Let $R$ be a ring and let $F$ be a field.
Then $$c_n(R \mathbin{\otimes_{\mathbb Z}} F, F)
= \left\lbrace
\begin{array}{ccc} c_n(R/{\Tor R}, 0) & if & \ch F = 0, \\
c_n(R/p R, p)& if & \ch F = p \end{array}\right.$$
where $\Tor R := \lbrace r \in R \mid mr=0 \text{ for some } m\in\mathbb N\rbrace$ is the torsion of $R$.
\end{theorem}
First, we prove the following lemma:
\begin{lemma}\label{LemmaTensorWithAField}
Let $R$ be a ring and let $F$ be a field.
Then $$R \otimes 1_F \cong \left\lbrace
\begin{array}{ccc} R/{\Tor R} & if & F = \mathbb Q, \\
R/p R & if & F = \mathbb Z_p \end{array}\right.$$
where $R \otimes 1_F \subseteq R \mathbin{\otimes_{\mathbb Z}} F$ is a subring.
\end{lemma}
\begin{proof}
Consider the natural homomorphism $\varphi \colon R \to R \otimes 1_F$
where $\varphi(a)=a\otimes 1_F$, $a\in R$.
Suppose $F = \mathbb Q$.
If $ma = 0$ for some $m\in\mathbb N$ and $a\in R$,
then $\varphi(a)=a \otimes 1_{\mathbb Q} = ma \otimes \frac{1_{\mathbb Q}}{m} = 0$.
Hence $\Tor R \subseteq \ker \varphi$.
We claim that $\ker \varphi = \Tor R$.
Let $a \in \ker \varphi$, i.e., $a \otimes 1_{\mathbb Q} = 0$. By one of the definitions of the tensor product, \begin{equation*}\begin{split}(a, 1_{\mathbb Q})=\sum_i \ell_i((a_i+b_i, q_i)-(a_i,q_i)-(b_i,q_i))+\\ \sum_i m_i((c_i, s_i+t_i)-(c_i,s_i)-(c_i,t_i)) +
\sum_i n_i((k_i d_i, u_i) - (d_i, k_i u_i))\end{split}\end{equation*} holds for some $a_i,b_i,c_i,d_i\in R$, $k_i, \ell_i,m_i,n_i\in\mathbb Z$,
and $q_i, s_i, t_i, u_i \in \mathbb Q$ in the free $\mathbb Z$-module $H_{R\times \mathbb Q}$
with the basis $R\times \mathbb Q$.
We can find $m \in\mathbb N$ such that all $mq_i,ms_i,mt_i, mu_i \in \mathbb Z$.
Then \begin{equation*}\begin{split}(a, m)=\sum_i \ell_i((a_i+b_i, mq_i)-(a_i,mq_i)-(b_i,mq_i))+\\ \sum_i m_i((c_i, ms_i+mt_i)-(c_i,ms_i)-(c_i,mt_i)) +
\sum_i n_i((k_i d_i, mu_i) - (d_i, k_i mu_i))\end{split}\end{equation*} holds in the free $\mathbb Z$-module $H_{R\times \mathbb Z}$
with the basis $R\times \mathbb Z$. Note that the right-hand side of the latter equality
is a relation in $R \mathbin{\otimes_{\mathbb Z}} \mathbb Z$.
Hence $a\otimes m = 0$ in $R \mathbin{\otimes_{\mathbb Z}} \mathbb Z \cong R$
and $ma=0$. Thus $a\in \Tor R$. Therefore, $\ker \varphi = \Tor R$ and $R \otimes 1_{\mathbb Q} \cong R/{\Tor R}$.
Suppose $F = \mathbb Z_p$. Then $\varphi(pR)= R\otimes p1_{\mathbb Z_p} = 0$
and $pR \subseteq \ker \varphi$.
Let $a\in\ker\varphi$, i.e., $a \otimes 1_{\mathbb Z_p} = 0$.
Then \begin{equation*}\begin{split}(a, 1_{\mathbb Z_p})=\sum_i q_i((a_i+b_i, \bar\ell_i)-(a_i,\bar\ell_i)-(b_i,\bar\ell_i))+\\ \sum_i s_i((c_i, \bar m_i+ \bar n_i)-(c_i, \bar m_i)-(c_i, \bar n_i)) + \sum_i t_i((k_i d_i, \bar u_i) - (d_i, k_i \bar u_i)) \end{split}\end{equation*} holds for some $a_i,b_i,c_i, d_i\in R$
and $k_i, \ell_i,m_i,n_i, q_i, s_i, t_i, u_i \in \mathbb Z$ in the free $\mathbb Z$-module $H_{R\times \mathbb Z_p}$
with the basis $R\times \mathbb Z_p$.
Note that $H_{R\times \mathbb Z_p}$ is the factor module of $H_{R\times \mathbb Z}$
by the subgroup $\langle (a, m)-(a, m+p) \mid a \in R,\ m\in\mathbb Z\rangle_{\mathbb Z}$.
Hence \begin{equation*}\begin{split}(a, 1_\mathbb Z)=\sum_i q_i((a_i+b_i, \ell_i)-(a_i,\ell_i)-(b_i,\ell_i))+
\sum_i s_i((c_i, m_i+n_i)-(c_i,m_i)-(c_i,n_i))+\\ \sum_i t_i((k_i d_i, u_i) - (d_i, k_i u_i))+ \sum_i \alpha_i((r_i, \beta_i)-(r_i, \beta_i+p))\end{split}\end{equation*}
holds in $H_{R\times \mathbb Z}$ for some $r_i \in R$ and $\alpha_i, \beta_i\in\mathbb Z$.
Thus $a\otimes 1_\mathbb Z = \sum_i \alpha_i r_i \otimes p$.
Now we use the isomorphism $R \mathbin{\otimes_{\mathbb Z}} \mathbb Z \cong R$
and get $a = \sum_i \alpha_i r_i p \in pR$. Therefore, $\ker \varphi = pR$
and
$R \otimes 1_{\mathbb Z_p} \cong R/{p R}$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{TheoremFCodimofaRing}.]
Recall that $R \mathbin \otimes 1_F$ is a subring of $R \mathbin{\otimes_{\mathbb Z}} F$.
Hence $P_n(\mathbb Z) \cap \Id(R \mathbin{\otimes_{\mathbb Z}} F, \mathbb Z)
\subseteq P_n(\mathbb Z) \cap \Id(R \otimes 1_F, \mathbb Z)$.
Conversely, $P_n(\mathbb Z) \cap \Id(R \mathbin{\otimes_{\mathbb Z}} F, \mathbb Z)
\supseteq P_n(\mathbb Z) \cap \Id(R \otimes 1_F, \mathbb Z)$ since $R \mathbin\otimes 1_F$ generates $R \mathbin{\otimes_{\mathbb Z}} F$
as an $F$-vector space. Therefore, $c_n(R \otimes 1_F, \ch F) = c_n(R \mathbin{\otimes_{\mathbb Z}} F, \ch F)$ and we get Theorem~\ref{TheoremFCodimofaRing} for $F=\mathbb Q$ and $F=\mathbb Z_p$ from Lemma~\ref{LemmaTensorWithAField} and Propositions~\ref{PropositionIntPIoverQ}, \ref{PropositionIntPIoverZp}.
The general case follows from the fact that $(R \mathbin{\otimes_{\mathbb Z}} F) \mathbin{\otimes_{F}} K \cong R \mathbin{\otimes_{\mathbb Z}} K$ (as a $K$-algebra) for any field
extension $K \supseteq F$ and, by~\cite[Theorem~4.1.9]{ZaiGia}, $$c_n(R \mathbin{\otimes_{\mathbb Z}} K, K)=c_n((R \mathbin{\otimes_{\mathbb Z}} F) \mathbin{\otimes_F} K, K)= c_n(R \mathbin{\otimes_{\mathbb Z}} F, F).$$
\end{proof}
\begin{corollary}
Let $R$ be a torsion-free ring satisfying a non-trivial polynomial identity.
Then
\begin{enumerate} \item either $c_n(R, 0) = 0$ for all $n\geqslant n_0$, $n_0\in\mathbb N$,
or there exist $d\in\mathbb N$, $C_1, C_2 > 0$, $q_1, q_2 \in\mathbb R$
such that $C_1 n^{q_1} d^n \leqslant c_n(R, 0) \leqslant C_2 n^{q_2} d^n$
for all $n\in\mathbb N$;
in particular, polynomial identities of $R$ satisfy the analog of \textit{Amitsur's
conjecture}, i.e., there exists $\lim_{n\to\infty} \sqrt[n]{c_n(R, 0)} \in\mathbb Z_+$;
\item
if $R$ contains $1$, then there exist $C>0$ and $q\in\mathbb Z$ such that
$c_n(R, 0) \sim C n^{\frac{q}{2}} d^n$ as $n\to\infty$, i.e., the analog of \textit{Regev's
conjecture} holds in $R$. (We write $f \sim g$ if $\lim \frac f g = 1$.)
\end{enumerate}
\end{corollary}
\begin{proof}
By Theorem~\ref{TheoremFCodimofaRing}, $c_n(R, 0) = c_n(R \mathbin{\otimes_{\mathbb Z}} \mathbb Q, \mathbb Q)$. Now we apply \cite[Theorem~6.5.2]{ZaiGia} and~\cite[Theorem~4.2.2]{BereleInfDim}.
\end{proof}
\begin{remark}
If $R$ is a torsion-free ring, then $c_n(R, q)=0$ for all $q\ne 0$
since $f \in \Id(R, \mathbb Z)$
for all $f\in\mathbb Z\langle X \rangle$
such that $mf \in \Id(R, \mathbb Z)$ for some $m\in\mathbb N$.
\end{remark}
We conclude the section with an example.
\begin{example}
Let $R = \bigoplus_{k=1}^\infty \mathbb Z_{2^k}$.
Then $c_n(R, 0) = 1$ and $c_n(R, q) = 0$
for all $q\ne 0$ and $n\in\mathbb N$.
Although $mR \ne 0$ for all $m\in \mathbb N$, $R \mathbin{\otimes_{\mathbb Z}} \mathbb Q = 0$ and $c_n(R \mathbin{\otimes_{\mathbb Z}} \mathbb Q, \mathbb Q)=0$
for all $n\in\mathbb N$.
\end{example}
\begin{proof}
The ring $R$ is commutative. Hence all monomials from $P_n(\mathbb Z)$
are proportional to $x_1 x_2 \dots x_n$ modulo $\Id(R, \mathbb Z)$.
However, $m x_1 x_2 \dots x_n \notin \Id(R, \mathbb Z)$
for all $m\in\mathbb N$. (It is sufficient to substitute
$x_1 = x_2 = \dots = x_n = \bar 1_{\mathbb Z_{2^k}}$ for
$2^k > m$.) Thus $\frac{P_n(\mathbb Z)}{P_n(\mathbb Z)\cap \Id(R, \mathbb Z)}
\cong \mathbb Z$ and $c_n(R, 0) = 1$ and $c_n(R, q) = 0$
for all $q\ne 0$ and $n\in\mathbb N$.
However $a \otimes q = 2^k a \otimes \frac{q}{2^k}$
for all $a\in R$, $q\in\mathbb Q$, and $k\in \mathbb N$.
Choosing $k$ sufficiently large, we get $a \otimes q = 2^k a \otimes \frac{q}{2^k} = 0$. Thus $R \mathbin{\otimes_{\mathbb Z}} \mathbb Q = 0$ and $c_n(R \mathbin{\otimes_{\mathbb Z}} \mathbb Q, \mathbb Q)=0$
for all $n\in\mathbb N$.
\end{proof}
\section{Relation between $\mathbb ZS_n$-modules of proper and ordinary polynomial functions}
First, we describe the relation between proper and ordinary codimensions.
\begin{theorem}\label{TheoremCodimProperAndOrdinary}
Let $R$ be a unitary ring. Then
$c_n(R, q) = \sum_{j=0}^n \tbinom{n}{j}\gamma_j(R, q)$
for every $n\in \mathbb N$ and $q \in \lbrace p^k \mid p,k\in\mathbb N,\ p \text{ is prime } \rbrace \cup \lbrace 0\rbrace$.
\end{theorem}
\begin{proof}
First, we notice that \begin{equation}\label{EqPnDecomp}P_n(\mathbb Z) = \bigoplus_{k=0}^n \bigoplus_{1\leqslant i_1 <
i_2 < \dots < i_k \leqslant n} x_{i_1} x_{i_2}\dots x_{i_k} \, \sigma_{i_1, \dots, i_k}
\Gamma_{n-k}(\mathbb Z) \text{ (direct sum of $\mathbb Z$-modules) }\end{equation}
where $\Gamma_0(\mathbb Z) := \mathbb Z$ and $\sigma_{i_1, \dots, i_k} \in S_n$ is any permutation such that $\sigma_{i_1, \dots, i_k}((n-k)+j)=i_j$ for all $1\leqslant j\leqslant k$.
One way to prove~(\ref{EqPnDecomp}) is to use the Poincar\'e~--- Birkhoff~--- Witt theorem
for Lie algebras over rings~\cite[Theorem 2.5.3]{Bahturin}.
Another way is to show this explicitly in the spirit of Specht~\cite{Specht}. Using the equalities $yx = [y,x]+xy$ and $[\dots,\dots]x=x[\dots,\dots]+[[\dots, \dots],x]$,
we can present every polynomial from $P_n(\mathbb Z)$ as a linear combination of
polynomials $x_{i_1}x_{i_2}\dots x_{i_k}\, f$ where $1\leqslant i_1 <
i_2 < \dots < i_k \leqslant n$ and $f$ is a proper multilinear
polynomial of degree $(n-k)$ in the variables from the set $\lbrace x_1, x_2, \dots, x_n \rbrace \backslash
\lbrace x_{i_1}, x_{i_2}, \dots, x_{i_k} \rbrace$.
In other words, $f \in \sigma_{i_1, \dots, i_k} \Gamma_{n-k}(\mathbb Z)$.
In order to check that the sum in~(\ref{EqPnDecomp}) is direct, we consider
a linear combination of $x_{i_1} x_{i_2}\dots x_{i_k}
\sigma_{i_1, \dots, i_k} f $ where $f\in \Gamma_{n-k}(\mathbb Z)$,
for different $k$ and $i_j$ and choose the term $g := x_{i_1} x_{i_2}\dots x_{i_k} \sigma_{i_1, \dots, i_k} f $ with the greatest $k$
among the terms with a nonzero coefficient.
Then we substitute $x_{i_1}=x_{i_2}=
\dots = x_{i_k}=1$ and $x_j = x_j$ for the rest of the variables.
(We assume that we are working in the free ring with $1$
on the set $X=\lbrace x_1, x_2, \dots \rbrace$.)
All the other terms vanish and we get $f=0$.
Therefore, the sum is direct and~(\ref{EqPnDecomp}) holds.
Substituting $x_{i_1}=x_{i_2}=
\dots = x_{i_k}=1_R$
and arbitrary elements of $R$
for the other $x_j$, we obtain
\begin{equation}\begin{split}\label{EqPnIdDecomp}P_n(\mathbb Z) \cap \Id(R,\mathbb Z)=
(\ch R) \mathbb Z x_1 x_2 \dots x_n\
\oplus \\ \bigoplus_{k=0}^{n-2} \bigoplus_{1\leqslant i_1 <
i_2 < \dots < i_k \leqslant n} \ x_{i_1} x_{i_2}\dots x_{i_k} \, \sigma_{i_1, \dots, i_k}
\bigl(\Id(R,\mathbb Z) \cap \Gamma_{n-k}(\mathbb Z)\bigr).\end{split}\end{equation}
Combining~(\ref{EqPnDecomp}) and~(\ref{EqPnIdDecomp}), we get
$$\frac{P_n(\mathbb Z)}{P_n(\mathbb Z) \cap \Id(R, \mathbb Z)} \cong \bigoplus_{k=0}^n \bigoplus_{1\leqslant i_1 <
i_2 < \dots < i_k \leqslant n}
\frac{\Gamma_{n-k}(\mathbb Z)}{\Gamma_{n-k}(\mathbb Z) \cap \Id(R, \mathbb Z)}
$$
(direct sum of $\mathbb Z$-modules)
for an arbitrary ring $R$ with the unit $1_R$.
(We define $\frac{\Gamma_0(\mathbb Z)}{\Gamma_0(\mathbb Z) \cap \Id(R,\mathbb Z)} := \langle 1_R \rangle_{\mathbb Z} \subseteq R$.)
Calculating the number of the components, we obtain Theorem~\ref{TheoremCodimProperAndOrdinary}.
\end{proof}
\begin{corollary}
Let $R$ be a unitary ring.
Then all multilinear polynomial identities
of $R$ are consequences of proper multilinear polynomial identities of $R$
and the identity $(\ch R) x \equiv 0$.
\end{corollary}
\begin{proof}
This follows from~(\ref{EqPnIdDecomp}).
\end{proof}
\begin{corollary}
Let $R$ be a unitary ring and let the sequence $\bigl(c_n(R, q)\bigr)_{n=1}^\infty$
be polynomially bounded for some $q$.
Then $c_n(R, q)$ is a polynomial in $n\in\mathbb N$.
\end{corollary}
\begin{proof}
If the sequence $\bigl(c_n(R, q)\bigr)_{n=1}^\infty$
is polynomially bounded, then by Theorem~\ref{TheoremCodimProperAndOrdinary} there exists $j_0 \in\mathbb N$
such that $\gamma_j(R, q)=0$ for all $j\geqslant j_0$: otherwise $c_n(R, q) \geqslant \binom{n}{j}\gamma_j(R, q) \geqslant \binom{n}{j}$ for arbitrarily large $j$, which contradicts the polynomial bound. Now we apply Theorem~\ref{TheoremCodimProperAndOrdinary} once again.
\end{proof}
If $H$ is a subgroup of a group $G$
and $M$ is a left $\mathbb ZH$-module, then $M \uparrow G := \mathbb ZG \mathbin{\otimes_{\mathbb ZH}}M$.
The $G$-action on $\mathbb ZG \mathbin{\otimes_{\mathbb ZH}}
M$ is induced as follows: $g_0(g\otimes a) := g_0g \otimes a$
for $a\in M$, $g,g_0 \in G$.
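Note that $\mathbb ZG$ is a free right $\mathbb ZH$-module of rank $[G:H]$, so if $M$ is a free Abelian group of rank $r$, then $M \uparrow G$ is a free Abelian group of rank $[G:H]\,r$. For example, for $H = S_t \times S_{n-t} \subseteq S_n$ we have $[S_n : H] = \binom{n}{t}$.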
Now we prove an analog of Drensky's theorem~\cite[Theorem 12.5.4]{DrenKurs}:
\begin{theorem}\label{TheoremZSnOrdinaryProper}
Let $R$ be a unitary ring, $\ch R = \ell$, $\ell \in\mathbb Z_+$. Consider for every $n\in \mathbb N$
the series of $\mathbb Z S_n$-submodules
$$M_0 :=\frac{P_n(\mathbb Z)}{P_n(\mathbb Z) \cap \Id(R, \mathbb Z)}
\supsetneqq M_2 \supseteq M_3 \supseteq \dots \supseteq M_n \cong \frac{\Gamma_n(\mathbb Z)}{\Gamma_n(\mathbb Z)
\cap \Id(R, \mathbb Z)}$$
where each $M_k$ is the image of $\bigoplus_{t=k}^n\mathbb Z S_n (x_1\dots x_{n-t} \Gamma_t(\mathbb Z))$ and
$M_{n+1}:=0$.
Then $M_0/M_2 \cong \mathbb Z_\ell$ (trivial $S_n$-action),
$$M_t/M_{t+1}\cong \left(\frac{\Gamma_t(\mathbb Z)}{\Gamma_t(\mathbb Z)
\cap \Id(R, \mathbb Z)} \mathbin{\otimes_{\mathbb Z}} \mathbb Z\right) \uparrow S_n := \mathbb ZS_n \mathbin{\otimes_{\mathbb Z(S_t\times S_{n-t})}} \left(
\frac{\Gamma_t(\mathbb Z)}{\Gamma_t(\mathbb Z) \cap \Id(R, \mathbb Z)} \otimes_{\mathbb Z} \mathbb Z\right)$$ for all $2\leqslant t \leqslant n$ where $S_{n-t}$ is permuting $x_{t+1}, \dots, x_n$
and $\mathbb Z$ is a trivial $\mathbb ZS_{n-t}$-module.
\end{theorem}
\begin{proof}
First we notice that $M_0/M_2$ is generated by the image of $x_1 x_2 \dots x_n$.
Suppose the image of $kx_1 x_2 \dots x_n$ belongs to $M_2$ for some $k\in\mathbb N$.
All the polynomials in $M_2$ vanish under the substitution $x_1=\dots = x_n = 1_R$
since each of them contain at least one commutator. Hence we get $k 1_R = 0$,
$\ell \mid k$, and $M_0/M_2 \cong \mathbb Z_\ell$.
Note that $\frac{\Gamma_t(\mathbb Z)}{\Gamma_t(\mathbb Z) \cap \Id(R, \mathbb Z)} \otimes_{\mathbb Z} \mathbb Z
\cong \frac{\Gamma_t(\mathbb Z)}{\Gamma_t(\mathbb Z) \cap \Id(R, \mathbb Z)}$ where $S_{n-t}$ acts trivially.
Consider the bilinear map $$\varphi \colon \mathbb ZS_n \times \frac{\Gamma_t(\mathbb Z)}{\Gamma_t(\mathbb Z) \cap \Id(R, \mathbb Z)} \to M_t/M_{t+1}$$
defined by $\varphi(\sigma, f)=x_{\sigma(t+1)}x_{\sigma(t+2)}
\dots x_{\sigma(n)}(\sigma f)$ for $\sigma \in S_n$, $f \in \frac{\Gamma_t(\mathbb Z)}{\Gamma_t(\mathbb Z) \cap \Id(R, \mathbb Z)}$.
Note that $\varphi(\sigma\pi, f)=\varphi(\sigma, \pi f)$ for all $\pi \in S_t\times S_{n-t}$
and $M_t/M_{t+1}$ is generated by all $\varphi(\sigma, f)$ for $\sigma \in S_n$
and $f\in \Gamma_t(\mathbb Z)$.
Suppose $L$ is an Abelian group and $\psi \colon \mathbb ZS_n \times \frac{\Gamma_t(\mathbb Z)}{\Gamma_t(\mathbb Z) \cap \Id(R, \mathbb Z)} \to L$ is a $\mathbb Z$-bilinear
map and $\psi(\sigma\pi, f)=\psi(\sigma, \pi f)$ for all $\pi \in S_t\times S_{n-t}$.
First we define $\bar \psi \colon M_t \to L$ on the elements that generate
$M_t$ modulo $M_{t+1}$:
$$\bar \psi(x_{i_1} x_{i_2}\dots x_{i_{n-t}}
f)=\psi(\sigma, \sigma^{-1} f)$$ where $\sigma \in S_n$ and $\sigma^{-1}f \in \frac{\Gamma_t(\mathbb Z)}{\Gamma_t(\mathbb Z) \cap \Id(R, \mathbb Z)}$ (e.g., we can take $\sigma(t+k) = i_k$ for $1 \leqslant k \leqslant n-t$). Clearly, $\bar\psi(x_{i_1} x_{i_2}\dots x_{i_{n-t}}f)$ does not depend on the choice
of $\sigma$.
Suppose the image $\bar f_0$ of a polynomial $$f_0 = \sum_{i_1 < \dots < i_{n-t}} x_{i_1} x_{i_2}\dots x_{i_{n-t}} f_{i_1, \dots, i_{n-t}}$$ belongs to $M_{t+1}$ for some $f_{i_1, \dots, i_{n-t}}\in\Gamma_t(\mathbb Z)$. Substituting $$x_{i_1}=x_{i_2}=\dots = x_{i_{n-t}}=1_R$$
and arbitrary values for the other $x_j$, we get zero for every $i_1 < \dots < i_{n-t}$.
Hence $f_{i_1, \dots, i_{n-t}} \in \Id(R, \mathbb Z)$ and $\bar\psi(\bar f_0)=0$.
Thus we can define $\bar \psi$ to be zero on $M_{t+1}$ and we may assume that $\bar \psi \colon M_t/M_{t+1} \to L$.
Note that $\bar\psi\varphi = \psi$.
Hence $M_t/M_{t+1} \cong \mathbb ZS_n \mathbin{\otimes_{\mathbb Z(S_t\times S_{n-t})}} \left(
\frac{\Gamma_t(\mathbb Z)}{\Gamma_t(\mathbb Z) \cap \Id(R, \mathbb Z)} \otimes_{\mathbb Z} \mathbb Z\right)$ (isomorphism of Abelian groups) where $\varphi(\sigma, f) \mapsto \sigma \otimes f$.
Therefore, this is an isomorphism of $\mathbb ZS_n$-modules too.
\end{proof}
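As a quick consistency check, each factor $M_t/M_{t+1}$ contributes $\binom{n}{t}$ copies of $\frac{\Gamma_t(\mathbb Z)}{\Gamma_t(\mathbb Z)\cap \Id(R,\mathbb Z)}$ as an Abelian group, i.e., $\binom{n}{t}\gamma_t(R, q)$ copies of $\mathbb Z_q$ for each $q$, so counting the cyclic summands in $M_0/M_2$ and in all the factors $M_t/M_{t+1}$ recovers Theorem~\ref{TheoremCodimProperAndOrdinary}.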
\section{A particular case of the Littlewood~--- Richardson rule}
Let $\mu \vDash n$, $\lambda \vdash n'$, $n'\leqslant n$. Suppose $\lambda_i \leqslant \mu_i$ for all $i\in\mathbb N$. Denote by $M(\mu)$ the free Abelian group generated by all $\mu$-tabloids.
Now we treat $D_\lambda$ as a Young subdiagram in $D_\mu$. Later on we always assume that in a pair
$(\lambda; \mu)$ we have $\lambda_1 = \mu_1$.
Following~\cite[Definition 17.4]{JamesSymm},
we define a $\mathbb ZS_n$-submodule $S(\lambda; \mu)\subseteq M(\mu)$ where
$$S(\lambda; \mu) := \langle e^{\lambda, \mu}_{T_\mu} \mid T_\mu \text{ is a tableau of the shape } \mu\rangle_\mathbb Z$$
and $e^{\lambda, \mu}_{T_\mu} := \sum_{\sigma \in C_{T_\lambda}} (\sign \sigma) \sigma [T_\mu]$.
Here $T_\lambda$ is the subtableau of $T_\mu$
defined by the partition $\lambda$, and $C_{T_\lambda} \subseteq S_n$ is the subgroup that fixes the numbers outside $T_\lambda$ and maps every number from each column of $T_\lambda$ to a number in the same column.
By $[T_\mu]$ we denote the tabloid corresponding to $T_\mu$. We assume $S(0;0)=0$
for the zero partitions $0\vdash 0$. Note that $S(\lambda; \lambda) \cong S(\lambda)$.
(The proof is completely analogous to the case when the coefficients are taken from a field.)
Let $F$ be a field and let $M^F(\mu)$ be the vector space over $F$ with the formal basis consisting
of all $\mu$-tabloids. In other words, $M^F(\mu)=M(\mu) \mathbin{\otimes_{\mathbb Z}}
F$. We define $S^F(\lambda; \mu)$
as the subspace in $M^F(\mu)$ generated by $S(\lambda; \mu) \mathbin \otimes 1$.
\begin{lemma}\label{LemmaNoTorsion} Let $\mu \vDash n$, $\lambda \vdash n'$, $n'\leqslant n$. Suppose $\lambda_i \leqslant \mu_i$ for all $i\in\mathbb N$. Then $M(\mu)/S(\lambda; \mu)$ has no torsion.
\end{lemma}
\begin{proof} Recall that $M(\mu)$ is a finitely generated free Abelian group and
$S(\lambda; \mu)$ is its subgroup. Hence we can choose a basis $a_1, a_2, \dots, a_t$
in $M(\mu)$ such that $m_1 a_1, m_2 a_2, \dots, m_k a_k$ is a basis of $S(\lambda; \mu)$
for some $m_i \in \mathbb N$. We claim that all $m_i=1$. First, we notice that
$a_1 \otimes 1, a_2 \otimes 1, \dots, a_t \otimes 1$ form a basis of
$M^F(\mu)$ and $m_1 a_1 \otimes 1, m_2 a_2 \otimes 1, \dots, m_k a_k \otimes 1$
generate $S^F(\lambda; \mu)$ for any field $F$. Thus $\dim_F S^F(\lambda; \mu)
= k$ for $\ch F = 0$ and $\dim_F S^F(\lambda; \mu)
< k$ if $\ch F \mid m_i$ for at least one $m_i$.
However, by~\cite[Theorem~17.13 (III)]{JamesSymm}, $\dim_F S^F(\lambda; \mu)$
does not depend on the field $F$. Therefore all $m_i=1$ and
$M(\mu)/S(\lambda; \mu)$ is a free Abelian group.
\end{proof}
Let $c \geqslant 2$ be a natural number satisfying the following conditions:
$\mu_{c-1} = \lambda_{c-1}$ and $\mu_c > \lambda_c$.
Then we define the operators $A_c$ (``adding'') and $R_c$ (``raising'') in the following way:
\begin{enumerate}
\item if $\lambda_{c} = \lambda_{c-1}$, then $A_c(\lambda; \mu)=(0;0)$ where $0 \vdash 0$ is a zero partition, otherwise $A_c(\lambda; \mu)=(\tilde\lambda; \mu)$
where $\tilde \lambda_i = \lambda_i$
for $i\ne c$ and $\tilde\lambda_{c} = \lambda_{c}+1$;
\item $R_c(\lambda; \mu)=(\tilde\lambda; \tilde\mu)$
where $\tilde \mu_i = \mu_i$
for $i\ne c-1,c$; $\tilde\mu_c = \lambda_c$, $\tilde\mu_{c-1} = \mu_{c-1}+(\mu_c-\lambda_c)$,
$\tilde\lambda_1 = \tilde\mu_1$ and
$\tilde \lambda_i = \lambda_i$
for $i > 1$.
\end{enumerate}
Fix $i \in\mathbb N$ and $0 \leqslant v \leqslant \mu_{i+1}$.
Let $\nu \vDash n$, $\nu_j = \mu_j$ for $j \ne i, i+1$, $\nu_i = \mu_i+\mu_{i+1}-v$,
$\nu_{i+1}=v$.
Then we define $\psi_{i,v} \in \Hom_{\mathbb ZS_n} (M(\mu),M(\nu))$
in the following way: $\psi_{i,v}[T_\mu] = \sum [T_\nu]$
where the summation runs over the set of all tabloids $[T_\nu]$ such that $[T_\nu]$
agrees with $[T_\mu]$ in all the rows except the $i$th and the $(i+1)$th,
and whose $(i+1)$th row is a subset of size $v$ of the $(i+1)$th row of $[T_\mu]$.
Analogously, we define $\psi^F_{i,v} \in \Hom_{FS_n} (M^F(\mu),M^F(\nu))$
for any field $F$.
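For example, let $\mu = (2,2)$, $i = 1$, and $v = 1$. Then $\nu = (3,1)$, and $\psi_{1,1}$ maps the $(2,2)$-tabloid with rows $\lbrace 1,2\rbrace$ and $\lbrace 3,4\rbrace$ to the sum of the two $(3,1)$-tabloids whose second rows are $\lbrace 3\rbrace$ and $\lbrace 4\rbrace$, respectively.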
\begin{lemma}\label{LemmaSlambdamuPsi}
\begin{enumerate}
\item $\psi_{c-1, \lambda_c} S(\lambda; \mu) = S(R_c(\lambda; \mu))$;
\item $\ker \psi_{c-1, \lambda_c} \cap S(\lambda; \mu)= S(A_c(\lambda; \mu))$.
\end{enumerate}
\end{lemma}
\begin{proof}
The proof of the first part of the lemma and of the embedding
$\ker \psi_{c-1, \lambda_c} \supseteq S(A_c(\lambda; \mu))$
is completely analogous to~\cite[Lemma 17.12]{JamesSymm}.
Now we notice that there exists a natural embedding $M(\mu)\otimes 1 \subset M^{\mathbb Q}(\mu)$.
By~\cite[Theorem 17.13]{JamesSymm}, $\ker \psi^{\mathbb Q}_{c-1, \lambda_c} \cap S^{\mathbb Q}(\lambda; \mu) = S^{\mathbb Q}(A_c(\lambda; \mu))$.
Thus if $\psi_{c-1, \lambda_c} a = 0$ for some $a\in S(\lambda; \mu)$,
then $ma \in S(A_c(\lambda; \mu))$ for some $m \in\mathbb N$ and
$a \in S(A_c(\lambda; \mu))$
since $M(\mu)/S(A_c(\lambda; \mu))$ is torsion-free by
Lemma~\ref{LemmaNoTorsion}.
\end{proof}
\begin{lemma}\label{LemmaSlambdamuSpechtSeries}
Let $n\in\mathbb N$, $\lambda \vdash n'$, $\mu \vDash n$, $n'\leqslant n$, $\lambda_i \leqslant \mu_i$
for all $i\in \mathbb N$. Then $S(\lambda; \mu)$
has a chain of submodules
$$S(\lambda; \mu) = M_0 \supsetneqq M_1 \supsetneqq M_2 \supsetneqq
\dots \supsetneqq M_t = 0$$
with factors $M_i/M_{i+1}$ isomorphic to Specht modules.
Moreover, $S(\lambda; \mu)/M_i$ is torsion-free for any $i$.
\end{lemma}
\begin{proof}
If $\mu = \lambda$, then $S(\lambda; \mu) = S(\lambda)$ and there is nothing
to prove. If $\mu \ne \lambda$, then we find $c\in\mathbb N$ such
that $\lambda_i = \mu_i$ for all $1 \leqslant i \leqslant c-1$
and $\lambda_c < \mu_c$. Since we always assume $\lambda_1 = \mu_1$,
we have $c \geqslant 2$. Now we apply Lemma~\ref{LemmaSlambdamuPsi}. Note that
$\tilde\lambda_c > \lambda_c$ where $A_c(\lambda; \mu)=(\tilde\lambda; \tilde \mu)$
and $R_c$ moves boxes of $D_\mu$ upwards.
Applying Lemma~\ref{LemmaSlambdamuPsi} many times, we get the first part of
Lemma~\ref{LemmaSlambdamuSpechtSeries} by induction.
Suppose $S(\lambda; \mu)/M_i$ is not torsion-free and $ma\in M_i$
for some $a \in S(\lambda; \mu)$, $a \notin M_i$, and $m\in\mathbb N$.
Then we can find an index $0 \leqslant k < i$ such that $a \in M_k$, $a \notin M_{k+1}$. However, $ma\in M_i \subseteq M_{k+1}$,
i.e., the Specht module $M_k/M_{k+1}$ is not torsion-free either.
We get a contradiction since all Specht modules are subgroups in finitely generated free Abelian groups.
\end{proof}
Now we can prove the $\mathbb Z$-analog of the particular case of the Littlewood~--- Richardson rule that sometimes is referred to as Young's rule~\cite[Theorem 2.3.3]{ZaiGia}, \cite[Theorem 12.5.2]{DrenKurs} and sometimes as Pieri's formula~\cite[(A.7)]{FultonHarris}.
\begin{theorem}\label{TheoremYoungsRule}
Let $t, n\in\mathbb N$, $m\in\mathbb Z_+$, $t < n$, and $\lambda \vdash t$ and let $\mathbb Z$ be the trivial $\mathbb ZS_{n-t}$-module. Then $$\bigl(S(\lambda)/mS(\lambda)\bigr) \uparrow S_n := \mathbb ZS_n
\otimes_{\mathbb Z(S_{t} \times S_{n-t})} (\bigl(S(\lambda)/mS(\lambda)\bigr) \otimes_{\mathbb Z} \mathbb Z)$$ has a series of submodules with factors $S(\nu)/mS(\nu)$
where $\nu$ runs over the set of all partitions $\nu \vdash n$ such that
$$\lambda_n \leqslant \nu_n \leqslant \lambda_{n-1} \leqslant \nu_{n-1}
\leqslant \dots \leqslant \lambda_2 \leqslant \nu_2 \leqslant \lambda_1 \leqslant \nu_1.$$
(Each factor occurs exactly once.)
\end{theorem}
\begin{proof} Suppose $\lambda=(\lambda_1, \dots, \lambda_s)$, $\lambda_s > 0$.
Then $S(\lambda) \uparrow S_n \cong S(\lambda; \mu)$
where $\mu=(\lambda_1, \dots, \lambda_s, n-t)$.
Now Lemma~\ref{LemmaSlambdamuSpechtSeries} implies the theorem for $m=0$.
Suppose $m > 0$. Then $\bigl(S(\lambda)/mS(\lambda)\bigr) \uparrow S_n \cong
\bigl(S(\lambda) \uparrow S_n\bigr) / \bigl(m(S(\lambda) \uparrow S_n)\bigr)$.
Let $$S(\lambda) \uparrow S_n = M_0 \supsetneqq M_1 \supsetneqq M_2 \supsetneqq
\dots \supsetneqq M_t = 0$$
where $M_{i-1}/M_i \cong S(\lambda^{(i)})$, $\lambda^{(i)} \vdash n$, $1\leqslant i
\leqslant t$.
Hence $$\bigl(S(\lambda) \uparrow S_n\bigr) / \bigl(m(S(\lambda) \uparrow S_n)\bigr) = \overline{M_0} \supsetneqq \overline{M_1} \supsetneqq \overline {M_2} \supsetneqq
\dots \supsetneqq \overline{M_t} = 0$$
where $\overline{M_i} \cong (M_i + m(S(\lambda) \uparrow S_n))/m(S(\lambda) \uparrow S_n)$
and \begin{equation*}\begin{split}
\overline{M_{i-1}} / \overline{M_i} \cong (M_{i-1} + m(S(\lambda) \uparrow S_n))/(M_i + m(S(\lambda) \uparrow S_n)) \cong \\ M_{i-1}/M_{i-1}\cap(M_i + m(S(\lambda) \uparrow S_n))
= \\ M_{i-1}/(M_i + M_{i-1}\cap m(S(\lambda) \uparrow S_n))
\cong (M_{i-1}/M_i)/((M_i + M_{i-1}\cap m(S(\lambda) \uparrow S_n))/M_i).
\end{split}\end{equation*}
By Lemma~\ref{LemmaSlambdamuSpechtSeries},
$(S(\lambda) \uparrow S_n) / M_{i-1}$ is torsion-free.
Hence
$M_{i-1}\cap m(S(\lambda) \uparrow S_n) = mM_{i-1}$ and \begin{equation*}\begin{split}\overline{M_{i-1}} / \overline{M_i} \cong
(M_{i-1}/M_i)/((M_i + mM_{i-1})/M_i) =\\ (M_{i-1}/M_i)/(m(M_{i-1}/M_i))\cong S(\lambda^{(i)})/mS(\lambda^{(i)}).\end{split}\end{equation*}
The description of $\lambda^{(i)}$ is obtained from the proof of Lemma~\ref{LemmaSlambdamuSpechtSeries}.
\end{proof}
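For example, let $\lambda = (1,1) \vdash 2$, $n = 3$, and $m = 0$. The conditions on $\nu$ leave exactly two partitions, $\nu = (2,1)$ and $\nu = (1,1,1)$, so $S(1,1) \uparrow S_3$ has a series of submodules with factors $S(2,1)$ and $S(1^3)$; over $\mathbb Q$ this agrees with the decomposition of the induced sign representation of $S_2$.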
\section{Algebras of upper triangular matrices}\label{SectionUT2R}
\subsection{Codimensions and multilinear identities}\label{SubsectionCodimUT2R}
Let $M$ be an $(R_1, R_2)$-bimodule for commutative rings $R_1$, $R_2$ with $1$
and let $R = \left(\begin{array}{rr} R_1 & M \\ 0 & R_2\end{array}\right)$.
In this section, we calculate $c_n(R, q)$ for all $q = p^k$ and $q=0$,
describe the structure of the $\mathbb Z S_n$-module
$\frac{P_n(\mathbb Z)}{P_n(\mathbb Z) \cap \Id(R, \mathbb Z)}$
and find such multilinear polynomials that elements of $\Id(R, \mathbb Z) \cap P_n(\mathbb Z)$
are consequences of them.
\begin{remark}
If $F$ is a field of characteristic $0$ and $A=\UT_2(F):=\left(\begin{array}{rr} F & F \\ 0 & F\end{array}\right)$,
then $c_n(A, F)$ and generators of $\Id(A, F)$ as a $T$-ideal can be found, e.g., in~\cite[Theorem 4.1.5]{ZaiGia}. The structure of the $F S_n$-module
$\frac{P_n(F)}{P_n(F) \cap \Id(A, F)}$ can be determined using proper cocharacters~\cite[Theorem 12.5.4]{DrenKurs}.
\end{remark}
\begin{theorem}\label{TheoremCodimUT2R}
All polynomials from $P_n(\mathbb Z) \cap \Id(R, \mathbb Z)$, $n\in\mathbb N$,
are consequences of the left-hand sides of the following polynomial
identities in $R$:
\begin{equation}\label{EqId1}[x,y][z,t]\equiv 0,
\end{equation} \begin{equation}\label{EqId2}\ell x\equiv 0,\end{equation}
\begin{equation}\label{EqId3}m [x,y]\equiv 0
\end{equation}
where $[x,y]:= xy-yx$, $$\ell := \min\left\lbrace n \in\mathbb N \mid na = 0 \text{ for all } a\in R_1\cup R_2 \right\rbrace,$$ $$m := \min\left\lbrace n \in\mathbb N \mid na = 0 \text{ for all } a\in M \right\rbrace.$$ (If one of the corresponding sets is empty, we define $\ell=0$ or $m=0$, respectively. Note that $m \mid \ell$.)
Moreover, $\frac{P_n(\mathbb Z)}{P_n(\mathbb Z) \cap \Id(R, \mathbb Z)} \cong \mathbb Z_\ell \oplus (\mathbb Z_m)^{(n-2)2^{n-1}+1}$ where $\mathbb Z_0 := \mathbb Z$.
\end{theorem}
\begin{remark}
Now $c_n(R, q)$ can be easily computed.
If $R_1=R_2=M$ is a field, we obtain the same numbers as in~\cite[Theorem 4.1.5]{ZaiGia}.
\end{remark}
\begin{proof}[Proof of Theorem~\ref{TheoremCodimUT2R}.]
Denote by $e_{ij}$ the matrix units. Then $R= R_1 e_{11} \oplus R_2 e_{22}\oplus M e_{12}$
(direct sum of subspaces), $[R,R]\subseteq M e_{12}$, and (\ref{EqId1})--(\ref{EqId3}) are indeed
polynomial identities of $R$.
Now we consider an arbitrary monomial from $P_n(\mathbb Z)$ and find the first inversion among the indexes of its variables.
We replace the corresponding pair of variables with the sum of their commutator and their product in the right order.
Note that $[x,y]u[z,t]=[x,y][z,t]u+[x,y][u,[z,t]] \equiv 0$ is a consequence of~(\ref{EqId1}).
Therefore, we may assume that all the variables to the right of the commutator have increasing indexes. For example: $$ \begin{array}{lcl}
x_{3}x_{1}x_{4} x_{2} & = & x_{1} x_{3} x_{4} x_{2} + [x_{3}, x_{1}] x_{4} x_{2} \\
& \stackrel{(\ref{EqId1})}{\equiv} & x_{1} x_{3} x_{2} x_{4} + x_{1} x_{3}[x_{4}, x_{2}] + [x_{3}, x_{1}] x_{2} x_{4} \\
& = & x_{1} x_{2} x_{3} x_{4} + x_{1} [x_{3}, x_{2}] x_{4} + x_{1} x_{3}[x_{4}, x_{2}] + [x_{3}, x_{1}] x_{2} x_{4}. \\
\end{array}$$
Continuing this procedure,
we present any element of $P_n(\mathbb Z)$
modulo the consequences of~(\ref{EqId1}) as a linear combination
of polynomials $f_0:=x_1 x_2 \dots x_n$ and \begin{equation}\label{EqPolynomial}
x_{i_{1}}\dots x_{i_{k}}[x_{s},x_{r}]x_{j_{1}}\dots x_{j_{n-k-2}} \text{ for } i_{1} < \dots < i_{k}<s,\ r < s,\ j_{1} < \dots < j_{n-k-2}.\end{equation}
Denote the set of polynomials~(\ref{EqPolynomial}) by $\Xi$.
Consider the free Abelian group $\mathbb Z(\Xi \cup \lbrace f_0\rbrace)$
with the basis $\Xi \cup \lbrace f_0\rbrace$.
Now we have the surjective homomorphism $\varphi \colon \mathbb Z(\Xi \cup \lbrace f_0\rbrace)
\to \frac{P_n(\mathbb Z)}{P_n(\mathbb Z) \cap \Id(R, \mathbb Z)}$
where $\varphi(f)$ is the image of $f \in \Xi \cup \lbrace f_0\rbrace$ in
$\frac{P_n(\mathbb Z)}{P_n(\mathbb Z) \cap \Id(R, \mathbb Z)}$.
We claim that $\ker \varphi$ is generated by $\ell f_0$ and all $m f$ where $f\in \Xi$.
Suppose that a linear combination $f_1$ of $f_0$ and elements from $\Xi$ is a polynomial identity
but $f_1$ is not a linear combination of $\ell f_0$ and $m f$, $f\in \Xi$.
If we substitute $$x_1 =x_2 = \dots = x_n = 1_{R_i}e_{ii} \text{ where } i \in \lbrace 1,2 \rbrace,$$
all $f \in \Xi$ vanish. Therefore, the coefficient of $f_0$ is a multiple of $\ell$.
Now we find $f_2 := x_{i_{1}}\dots x_{i_{k}}[x_{s},x_{r}]x_{j_{1}}\dots x_{j_{n-k-2}} \in \Xi$
with the greatest $k$ such that the coefficient $\beta$ of $f_2$ in $f_1$ is not a multiple of $m$.
Then we substitute $x_{i_{1}}=\dots =x_{i_{k}}=x_s = 1_{R_1}e_{11}$, $x_r = a e_{12}$, $x_{j_{1}}=\dots = x_{j_{n-k-2}}=1_{R_1}e_{11}+1_{R_2}e_{22}=1_R$ where $a\in M$ and $\beta a \ne 0$.
Our choice of $f_2$ implies that $f_2$ is the only summand in $f_1$ that could be nonzero under this substitution. Hence $f_1$ does not vanish and we get a contradiction.
Therefore, $\ker \varphi$ is generated by $\ell f_0$ and $m f$, $f\in \Xi$.
In particular,
$\frac{P_n(\mathbb Z)}{P_n(\mathbb Z) \cap \Id(R, \mathbb Z)} \cong \mathbb Z_\ell \oplus (\mathbb Z_m)^{|\Xi|}$ and every multilinear polynomial identity of $R$ is a consequence
of (\ref{EqId1})--(\ref{EqId3}).
Note that \begin{equation*}\begin{split}|\Xi|=\sum_{k=2}^n (k-1)\binom{n}{k} = \sum_{k=2}^n \frac{n!}{(k-1)!(n-k)!} -
\sum_{k=2}^n \binom{n}{k}=\\ n\sum_{k=1}^{n-1} \frac{(n-1)!}{k!(n-k-1)!} - (2^n-n-1)=n(2^{n-1}-1)- (2^n-n-1)
= (n-2)2^{n-1}+1\end{split}\end{equation*}
and the theorem follows.
\end{proof}
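In particular, if $R_1$, $R_2$, and $M$ are torsion-free (i.e., $\ell = m = 0$), then $c_n(R, 0) = (n-2)2^{n-1}+2$ and $c_n(R, q) = 0$ for all $q \ne 0$, which matches the classical codimensions of $\UT_2(F)$ over a field $F$ of characteristic $0$~\cite[Theorem 4.1.5]{ZaiGia}.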
\begin{corollary}
Multilinear polynomial identities of $\UT_2(\mathbb Q)$ as a ring are generated by~(\ref{EqId1}).
\end{corollary}
\subsection{$\mathbb ZS_n$-modules}
Note that the Jacobi identity and~(\ref{EqId1}) imply that $\frac{\Gamma_n(\mathbb Z)}{\Gamma_n(\mathbb Z)\cap\Id(R,\mathbb Z)}$ is generated as a $\mathbb Z$-module by $[x_i, x_n, x_1, x_2, \dots, \hat x_i, \dots, x_{n-1}]$ where $1 \leqslant i \leqslant n-1$.
\begin{lemma}\label{LemmaSymmKryuk}
Let $R$ be the ring from Subsection~\ref{SubsectionCodimUT2R}
and $T_\lambda = \begin{array}{|l|l|l|l|} \cline{1-4} 1 & 2 & \dots & n-1
\\ \cline{1-4} n \\ \cline{1-1} \end{array}$.
Then
\begin{equation}\label{EqSymm}b_{T_\lambda} a_{T_\lambda} [x_1, x_n, x_2, x_3, \dots, x_{n-1}]
\equiv n(n-2)![x_1, x_n, x_2, x_3, \dots, x_{n-1}]\ (\mathop\mathrm{mod} P_n(\mathbb Z)\cap \Id(R, \mathbb Z)).\end{equation}
\end{lemma}
\begin{proof} Indeed,
\begin{equation*}\begin{split}b_{T_\lambda} a_{T_\lambda} [x_1, x_n, x_2, x_3, \dots, x_{n-1}]
\equiv b_{T_\lambda} (n-2)! \sum_{i=1}^{n-1} [x_i, x_n, x_1, x_2, \dots, \hat x_i, \dots, x_{n-1}]
= \\ (n-2)! \sum_{i=2}^{n-1} \left([x_i, x_n, x_1, x_2, \dots, \hat x_i, \dots, x_{n-1}]
- [x_i, x_1, x_n, x_2, \dots, \hat x_i, \dots, x_{n-1}]\right)
+ \\2(n-2)! [x_1, x_n, x_2, x_3, \dots, x_{n-1}]\equiv n(n-2)![x_1, x_n, x_2, x_3, \dots, x_{n-1}]\end{split}\end{equation*}
since, by the Jacobi identity, $[x_i, x_1, x_n]=[x_i, x_n, x_1]+[x_n, x_1, x_i]$.
\end{proof}
First, we determine the structure of $\frac{\Gamma_n(\mathbb Z)}{\Gamma_n(\mathbb Z)\cap\Id(R,\mathbb Z)}$
for $R=\UT_2(\mathbb Q)$.
\begin{lemma}\label{LemmaIsoZSnUT2Q} Let $T_\lambda = \begin{array}{|l|l|l|l|} \cline{1-4} 1 & 2 & \dots & n-1
\\ \cline{1-4} n \\ \cline{1-1} \end{array}$. Then
$\frac{\Gamma_n(\mathbb Z)}{\Gamma_n(\mathbb Z)\cap\Id(\UT_2(\mathbb Q),\mathbb Z)}
\cong (\mathbb ZS_n)b_{T_\lambda}a_{T_\lambda}$.
\end{lemma}
\begin{proof}
We claim that if $u b_{T_\lambda}a_{T_\lambda} = 0$ for some $u\in\mathbb ZS_n$,
then $$u [x_1, x_n, x_2, x_3, \dots, x_{n-1}] \in \Gamma_n(\mathbb Z) \cap \Id(\UT_2(\mathbb Q),\mathbb Z).$$
Indeed, by~(\ref{EqSymm}), $$n(n-2)!\, u [x_1, x_n, x_2, x_3, \dots, x_{n-1}] \equiv
u b_{T_\lambda} a_{T_\lambda} [x_1, x_n, x_2, x_3, \dots, x_{n-1}] = 0.$$
Since $\UT_2(\mathbb Q)$ has no torsion, $u [x_1, x_n, x_2, x_3, \dots, x_{n-1}] \equiv 0$
is a polynomial identity of $\UT_2(\mathbb Q)$.
Thus we can define the surjective homomorphism $\varphi \colon (\mathbb ZS_n)b_{T_\lambda}a_{T_\lambda}
\to \frac{\Gamma_n(\mathbb Z)}{\Gamma_n(\mathbb Z)\cap\Id(\UT_2(\mathbb Q),\mathbb Z)}$
by $\varphi(\sigma b_{T_\lambda}a_{T_\lambda}) = \sigma [x_1, x_n, x_2, x_3, \dots, x_{n-1}]$
for $\sigma \in S_n$.
Analogously, we can define the surjective homomorphism
$$\varphi_0 \colon (\mathbb QS_n)b_{T_\lambda}a_{T_\lambda}
\to \frac{\Gamma_n(\mathbb Q)}{\Gamma_n(\mathbb Q)\cap\Id(\UT_2(\mathbb Q),\mathbb Q)}$$
by $\varphi_0(\sigma b_{T_\lambda}a_{T_\lambda}) = \sigma [x_1, x_n, x_2, x_3, \dots, x_{n-1}]$
for $\sigma \in S_n$. Since $(\mathbb QS_n)b_{T_\lambda}a_{T_\lambda}$ is an irreducible
$\mathbb QS_n$-module, $\varphi_0$ is an isomorphism of $\mathbb QS_n$-modules.
We claim that $\varphi$ is an isomorphism of $\mathbb ZS_n$-modules.
Indeed, suppose $$u [x_1, x_n, x_2, x_3, \dots, x_{n-1}] \in \Gamma_n(\mathbb Z) \cap \Id(\UT_2(\mathbb Q), \mathbb Z)$$ for some $u\in\mathbb ZS_n$. Then $\varphi_0(u b_{T_\lambda}a_{T_\lambda})=u [x_1, x_n, x_2, x_3, \dots, x_{n-1}] \in \Gamma_n(\mathbb Q) \cap \Id(\UT_2(\mathbb Q),\mathbb Q)$ and $u b_{T_\lambda}a_{T_\lambda}=0$.
Hence $\varphi$ is an isomorphism and the lemma is proven.
\end{proof}
\begin{theorem}\label{TheoremProperZSnUT2}
Let $R$ and $m$ be, respectively, the ring and the number from Subsection~\ref{SubsectionCodimUT2R}.
Then $\frac{\Gamma_n(\mathbb Z)}{\Gamma_n(\mathbb Z)\cap\Id(R,\mathbb Z)} \cong S(\lambda)/mS(\lambda)$
where $\lambda = (n-1, 1)$, for all $n \geqslant 2$.
\end{theorem}
\begin{proof}
Recall that $\frac{\Gamma_n(\mathbb Z)}{\Gamma_n(\mathbb Z)\cap\Id(R,\mathbb Z)}$ is generated as a $\mathbb Z$-module by $[x_i, x_n, x_1, x_2, \dots, \hat x_i, \dots, x_{n-1}]$ where $1 \leqslant i \leqslant n-1$.
We exploit the same trick as in the proof of Theorem~\ref{TheoremCodimUT2R}.
Using the substitution $x_1=\dots =x_{i-1}=x_{i+1}= \dots = x_n = 1_{R_1}e_{11}$, $x_i = a e_{12}$ where $a\in M$, we obtain that $\frac{\Gamma_n(\mathbb Z)}{\Gamma_n(\mathbb Z)\cap\Id(R,\mathbb Z)}$
is the direct sum of $n-1$ cyclic groups isomorphic to $\mathbb Z_m$ and generated by
$[x_i, x_n, x_1, x_2, \dots, \hat x_i, \dots, x_{n-1}]$ where $1 \leqslant i \leqslant n-1$.
By Theorem~\ref{TheoremCodimUT2R} and its corollary, we have the natural surjective
homomorphism $\frac{\Gamma_n(\mathbb Z)}{\Gamma_n(\mathbb Z)\cap\Id(\UT_2(\mathbb Q),\mathbb Z)}
\to \frac{\Gamma_n(\mathbb Z)}{\Gamma_n(\mathbb Z)\cap\Id(R,\mathbb Z)}$.
The remarks above imply that the kernel equals $m\frac{\Gamma_n(\mathbb Z)}{\Gamma_n(\mathbb Z)\cap\Id(\UT_2(\mathbb Q),\mathbb Z)}$. Now the theorem follows from Lemma~\ref{LemmaIsoZSnUT2Q}.
\end{proof}
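As a consistency check, Theorems~\ref{TheoremCodimProperAndOrdinary} and~\ref{TheoremProperZSnUT2} recover the codimensions of Theorem~\ref{TheoremCodimUT2R}: the summand $\gamma_0$ contributes one copy of $\mathbb Z_\ell$, each $\Gamma_j$-part contributes $j-1$ copies of $\mathbb Z_m$ (the rank of $S(j-1,1)$), and $$\sum_{j=2}^n \binom{n}{j}(j-1) = (n-2)2^{n-1}+1.$$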
Applying Theorems~\ref{TheoremZSnOrdinaryProper}, \ref{TheoremYoungsRule}, and \ref{TheoremProperZSnUT2}
we immediately get
\begin{theorem}\label{TheoremOrdinaryZSnUT2}
Let $R$, $\ell$, and $m$ be, respectively, the ring and the numbers from Subsection~\ref{SubsectionCodimUT2R}.
Then there exists a chain of $\mathbb ZS_n$-submodules in $\frac{P_n(\mathbb Z)}{P_n(\mathbb Z)\cap\Id(R,\mathbb Z)}$ with the set of factors that
consists of one copy of $\mathbb Z_\ell$ and $(\lambda_1-\lambda_2+1)$ copies of $S(\lambda_1, \lambda_2, \lambda_3)/mS(\lambda_1, \lambda_2, \lambda_3)$
where $(\lambda_1, \lambda_2, \lambda_3) \vdash n$, $\lambda_2 \geqslant 1$, $\lambda_3 \in \lbrace 0,1 \rbrace$.
\end{theorem}
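For example, for $n = 3$ the factors are one copy of $\mathbb Z_\ell$, two copies of $S(2,1)/mS(2,1)$, and one copy of $S(1,1,1)/mS(1,1,1)$. Counting ranks, this gives $2\cdot 2+1\cdot 1 = 5 = (3-2)2^{3-1}+1$ copies of $\mathbb Z_m$, in accordance with Theorem~\ref{TheoremCodimUT2R}.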
\section{Grassmann algebras}\label{SectionGrassmannR}
Let $R$ be a commutative ring with a unit element $1_R$, $\ch R = \ell$ where either $\ell$ is an odd natural number or $\ell = 0$.
We define the Grassmann algebra~$G_R$ over $R$ as the $R$-algebra with a unit, generated by
the countable set of generators $e_i$, $i\in\mathbb N$, and the anti-commutative relations
$e_{i} e_{j} = -e_{j}e_{i}$, $i,j\in\mathbb N$.
Here we consider the same questions as for the upper triangular matrices.
\subsection{Codimensions and polynomial identities} \label{grassmann}
The following lemma is known, but we provide its proof for the reader's convenience.
\begin{lemma}\label{identities}
The polynomial identity $[y,x][z,t] + [y,z][x,t]\equiv 0$
is a consequence of $[x_{1},x_{2},x_{3}] \equiv 0$.
In particular, $[x,y]u[z,t] + [x,t]u[z,y]\equiv 0$
for all $u \in \mathbb Z \langle X \rangle$.
\end{lemma}
\begin{proof}
Note that \begin{equation}\begin{split}
[x, yt, z] = [[x, y]t, z]+[y[x, t], z]
= [x, y, z]t+ [x, y][t, z]+ \\ [y,z][x, t]+y[x,t,z]\equiv [x, y][t, z]+[y,z][x, t]
= [y, x][z, t]+[y,z][x, t]\end{split}
\end{equation}
modulo $[x_{1},x_{2},x_{3}] \equiv 0$. (Here we have also used the Jacobi identity.)
Hence
\begin{equation}\begin{split}[x,y]u[z,t] + [x,t]u[z,y] = [x,y][u,[z,t]] + [x,y][z,t]u +\\ [x,t][u,[z,y]]
+[x,t][z,y]u\equiv [x,y][z,t]u + [x,t][z,y]u \equiv 0.\end{split}
\end{equation}
\end{proof}
\begin{theorem}\label{TheoremCodimGrass}
All polynomials from $P_n(\mathbb Z) \cap \Id(G_{R}, \mathbb Z)$, $n\in\mathbb N$,
are consequences of the left-hand sides of the following polynomial
identities in $G_{R}$:
\begin{equation}\label{EqId1bis}[x,y,z]\equiv 0,
\end{equation} \begin{equation}\label{EqId2bis}\ell x\equiv 0.\end{equation}
Moreover, $\frac{P_n(\mathbb Z)}{P_n(\mathbb Z) \cap \Id(G_{R}, \mathbb Z)} \cong (\mathbb Z_\ell)^{2^{n-1}}$.
\end{theorem}
\begin{proof}
Define $G_R^{(0)} = \langle e_{i_1} e_{i_2}\dots e_{i_{2k}} \mid k\in\mathbb Z_+\rangle_R$
and $G_R^{(1)} = \langle e_{i_1} e_{i_2}\dots e_{i_{2k+1}} \mid k\in\mathbb Z_+\rangle_R$.
Clearly, $G_R= G_R^{(0)} \oplus G_R^{(1)}$ (direct sum of $R$-submodules), $[G_R,G_R]\subseteq G_R^{(0)}$,
$G_R^{(0)} = Z (G_{R})$.
Hence $[x_{1},x_{2},x_{3}]\equiv 0$ is a polynomial identity.
Obviously, (\ref{EqId2bis}) is a polynomial identity too.
Let \begin{equation*}\begin{split}\Xi = \{ x_{i_{1}} \dots x_{i_{k}} [x_ {j_{1}},x_{j_{2}}] \dots [x_{j_{2m-1}}, x_{j_{2m}}] \mid i_{1} < \dots < i_{k}, \\ j_{1} < \dots < j_{2m},\ k+2m = n,\ k,m \in \mathbb Z_+ \}\subset P_n(\mathbb Z).\end{split}\end{equation*}
By Lemma~\ref{identities}, every
polynomial from $P_n(\mathbb Z)$ can be presented modulo (\ref{EqId1bis})
as a linear combination of polynomials from $\Xi$.
For example, \begin{equation*}\begin{split}x_3 x_2 x_4 x_1 = -[x_2, x_3] x_4 x_1
+ x_2 x_3 x_4 x_1 = \\ ([x_2, x_3] [x_1, x_4] - [x_2, x_3] x_1 x_4)
+ (x_2 x_3 x_1 x_4 - x_2 x_3 [x_1, x_4]) \equiv \\
-[x_2, x_1] [x_3, x_4] - x_1 x_4 [x_2, x_3]
+ x_2 x_1 x_3 x_4 - x_2 [x_1, x_3] x_4 - x_2 x_3 [x_1, x_4]
\equiv \\ [x_1, x_2] [x_3, x_4] - x_1 x_4 [x_2, x_3] +
x_1 x_2 x_3 x_4 - [x_1, x_2] x_3 x_4 - x_2 x_4 [x_1, x_3] - x_2 x_3 [x_1, x_4]
\equiv \\ [x_1, x_2] [x_3, x_4] - x_1 x_4 [x_2, x_3]+
x_1 x_2 x_3 x_4 - x_3 x_4[x_1, x_2] - x_2 x_4 [x_1, x_3] - x_2 x_3 [x_1, x_4]
.\end{split}\end{equation*}
Consider the free Abelian group $\mathbb Z\Xi$
with the basis $\Xi$.
Now we have the surjective homomorphism $\varphi \colon \mathbb Z\Xi
\to \frac{P_n(\mathbb Z)}{P_n(\mathbb Z) \cap \Id(G_R, \mathbb Z)}$
where $\varphi(f)$ is the image of $f \in \Xi $ in
$\frac{P_n(\mathbb Z)}{P_n(\mathbb Z) \cap \Id(G_R, \mathbb Z)}$.
We claim that $\ker \varphi$ is generated by $\ell f$ where $f\in \Xi$.
Suppose that a linear combination $f_1$ of elements from $\Xi$ is a polynomial identity
but $f_1$ is not a linear combination of $\ell f$, $f\in \Xi$.
Now we find $$f_2 := x_{i_{1}} \dots x_{i_{k}} [x_ {j_{1}},x_{j_{2}}] \dots [x_{j_{2m-1}}, x_{j_{2m}}] \in \Xi$$
with the greatest $k$ such that the coefficient $\beta$ of $f_2$ in $f_1$ is not a multiple of $\ell$.
Then we substitute $x_{i_{1}}=\dots =x_{i_{k}}= 1_{G_R}$, $x_{j_i}=e_i$, $1\leqslant i \leqslant 2m$.
Our choice of $f_2$ implies that $f_2$ is the only summand in $f_1$ that could be nonzero under this substitution. Hence the value of $f_1$ equals $ (2^m \beta\, 1_R)\, e_1 e_2 \dots e_{2m}$, which must vanish since $f_1$ is a polynomial identity.
However, $G_R$ is a free $R$-module and $e_1 e_2 \dots e_{2m}$ is one of its basis elements.
Therefore $2^m \beta\, 1_R = 0$, i.e. $\ell \mid (2^m \beta)$, and $\ell \mid \beta$ since $2 \nmid \ell$.
We get a contradiction.
Thus $\ker \varphi$ is generated by $\ell f$, $f\in \Xi$.
In particular,
$\frac{P_n(\mathbb Z)}{P_n(\mathbb Z) \cap \Id(G_R, \mathbb Z)} \cong (\mathbb Z_\ell)^{|\Xi|}$ and every multilinear polynomial identity of $G_R$ is a consequence
of (\ref{EqId1bis}) and (\ref{EqId2bis}).
We now calculate $| \Xi |$.
The number of these polynomials equals the number of ways to choose the indices $i_{1} < \dots < i_{k}$ (the remaining indices, taken in increasing order, then form $j_{1} < \dots < j_{2m}$); here $k$ has the same parity as $n$. If $n$ is odd, this number equals
${n \choose 1} + {n \choose 3} + \dots + {n \choose n}$;
if $n$ is even, it equals ${n \choose 0} + {n \choose 2} + \dots + {n \choose n}$. Both sums equal $2^{n-1}$. Indeed, denote $s_{0} = \sum\limits_{i \mbox{ \tiny{even}}} { n \choose i }$ and $s_{1} = \sum\limits_{i \mbox{ \tiny{odd}}} { n \choose i }$. Then $2^{n} = (1+1)^{n} = s_{0} + s_{1}$ and $0 =(1-1)^{n}=s_{0} -s_{1}$. So $| \Xi |=s_{0}=s_{1}= 2^{n-1}$.
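For example, for $n = 3$ we have $\Xi = \{x_1x_2x_3,\ x_1[x_2,x_3],\ x_2[x_1,x_3],\ x_3[x_1,x_2]\}$, and indeed $|\Xi| = 4 = 2^{3-1}$.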
\end{proof}
\subsection{$\mathbb{Z}S_{n}$-modules}
First we determine the structure of $\mathbb{Z}S_{n}$-modules of proper polynomial
functions.
\begin{theorem} \label{TheoremProperZSnR}
Let $G_{R}$ be the Grassmann algebra over $R$.
Let $\lambda = (1^{2m})$ and $T_\lambda = \begin{array}{|l|} \cline{1-1} 1 \\ \cline{1-1} 2 \\ \cline{1-1} \vdots \\ \cline{1-1} 2m\\ \cline{1-1} \end{array}$. Then $\frac{\Gamma_{2m}(\mathbb Z)}{\Gamma_{2m}(\mathbb Z) \cap \Id(G_{R}, \mathbb Z)} \cong S(\lambda)/ \ell S(\lambda) $ for all $m\in\mathbb N$, where $\ell = \ch R$,
and $\frac{\Gamma_{2m+1}(\mathbb Z)}{ \Gamma_{2m+1}(\mathbb Z) \cap \Id(G_R, \mathbb Z)}=0$
for all $m\in\mathbb Z_+$.
\end{theorem}
\begin{proof}
Note $S(\lambda)$ is a free cyclic group generated by $b_{T_\lambda}[T_\lambda]$
and $\sigma b_{T_\lambda}[T_\lambda] = (\sign \sigma) b_{T_\lambda}[T_\lambda]$
for all $\sigma \in S_n$.
The proof of Theorem~\ref{TheoremCodimGrass} implies that $\frac{\Gamma_{2m}(\mathbb Z)}{\Gamma_{2m}(\mathbb Z) \cap \Id(G_{R}, \mathbb Z)} \cong \mathbb Z_{\ell}$ is a cyclic group generated by $[x_ {1},x_{2}] \dots [x_{2m-1},x_{2m}]$.
By Lemma~\ref{identities}, $$\sigma [x_{1}, x_{2}] \dots [x_{2m-1},x_{2m}] = (\sign \sigma) [x_{1}, x_{2}] \dots [x_{2m-1},x_{2m}] \text{ for all }\sigma \in S_n.$$
Hence $\frac{\Gamma_{2m}(\mathbb Z)}{\Gamma_{2m}(\mathbb Z) \cap \Id(G_R, \mathbb Z)} \cong S(\lambda)/ \ell S(\lambda)$.
The first assertion is proved.
The second assertion is evident since every long commutator of length greater than $2$
is a polynomial identity of $G_R$.
\end{proof}
\begin{theorem}\label{TheoremOrdinaryZSnGrass}
Let $G_{R}$ be the Grassmann algebra over $R$ and let $\ell = \ch R$. Then there exists a chain of $\mathbb Z S_n$-submodules in $\frac{P_n(\mathbb Z)}{P_n(\mathbb Z)\cap\Id(G_{R},\mathbb Z)}$ with factors $S(n-k, 1^{k})/\ell S(n-k, 1^{k})$ for each $0 \leqslant k \leqslant n-1$, each factor occurring exactly once.\end{theorem}
\begin{proof}
We apply Theorems~\ref{TheoremZSnOrdinaryProper}, \ref{TheoremYoungsRule}, and \ref{TheoremProperZSnR}.
By Theorem~\ref{TheoremYoungsRule}, a diagram consisting of a single column can generate
only the hook diagrams $D_{(n-k, 1^{k})}$. By Theorem~\ref{TheoremProperZSnR}, only columns of even length occur, and each hook $(n-k, 1^k)$ can arise from exactly one even column length (namely $2m = k$ or $2m = k+1$), so each factor occurs exactly once.
\end{proof}
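As a consistency check, note that the $\mathbb Z_\ell$-rank of $S(n-k, 1^{k})/\ell S(n-k, 1^{k})$ equals the number of standard Young tableaux of the hook shape $(n-k, 1^{k})$, which is ${n-1 \choose k}$. Hence the factors of the chain have total rank $\sum_{k=0}^{n-1}{n-1 \choose k} = 2^{n-1}$, in agreement with Theorem~\ref{TheoremCodimGrass}.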
\section*{Acknowledgements}
The authors are grateful to Eric Jespers and Mikhail Zaicev for helpful discussions.
They also thank the referee for the valuable remarks.
\section{Introduction}
The geometric rigidity estimate for gradient fields proved in~\cite{FJM} plays a crucial role in nonlinear elasticity theory. However, in the study of lattices with dislocations, a geometric rigidity estimate for incompatible fields (i.e., fields not arising from gradients) becomes necessary (cf. e.g.~\cite{MSZ} and~\cite{LL}). In~\cite{MSZ}, the authors proved a (\emph{scaling invariant}) version of the geometric rigidity theorem in~\cite{FJM} for incompatible fields in dimension $2$ for the critical exponent.\\
In this work we give a proof of the analogous result in dimension $\ge 3$, for the supercritical regime $p > 1^* = \frac{n}{n-1}$ (Theorem~\ref{thm:rig_LL_1}). The approach is to write an incompatible field as the sum of a compatible term, for which we can use the classical geometric rigidity from~\cite{FJM}, and a remainder, which is controlled in $L^p$ by a weakly singular operator (the \emph{averaged linear homotopy operator}) whose derivative is a Calder\'on-Zygmund operator. This allows us to obtain the bounds in the supercritical case. On the other hand, for the critical exponent we can still use the weak geometric rigidity estimate proved in~\cite{CDM} in order to find a scaling invariant estimate for the weak-$L^{1^*}$ norm (Theorem~\ref{thm:useless1}). From Theorem~\ref{thm:useless1}, we deduce directly in Proposition~\ref{prop:curl_bounds_D_SOn} that the $\Curl$ of a matrix field $A \in L^{1^*, \infty}(\Omega)^{n\times n}$ (where $\Omega$ is an open bounded set in $\mathbb{R}^n$) taking values in $SO(n)$ bounds its gradient.
\section{Notations and Preliminaries}
In what follows, $C$ will denote a (universal) constant whose value is allowed to change from line to line. We put $\widehat{x}:=\frac{x}{\modulus{x}}$, while $L^p(U, \Lambda^r)$ ($W^{m, p}(U, \Lambda^r)$) denotes the space of $r$-forms on $U$ whose coefficients are $L^p$ ($W^{m,p }$) functions. Moreover, recall that we can identify a tensor field $A\in L^1(\Omega)^{n\times n}$ with a vector of $1$-forms of length $n$, that is with $\omega:=\rB{\omega^i}_{i=1}^n$, $\omega^i = A^i_j \ensuremath{\mathrm{d}} x^j$, and its $\Curl$ with $\ensuremath{\mathrm{d}} \omega$ (or, more precisely, with $\rB{\star \ensuremath{\mathrm{d}} \omega}^{\flat}$), given by \[\displaystyle \ensuremath{\mathrm{d}} \omega^i = \sum_{j < k} \rB{\frac{\partial A^i_j}{\partial x^k} - \frac{\partial A^i_k}{\partial x^j}} \ensuremath{\mathrm{d}} x^j \wedge \ensuremath{\mathrm{d}} x^k .\]
We recall that a real-valued function $f$ on a measure space $(X, \mu)$ belongs to $L^{p, \infty}(X, \mu)$ (or $L^p_w(X, \mu)$) if
\[
\norm{f}_{L^{p, \infty}(X, \mu)}:=\sup_{t > 0} t \mu\rB{\cB{x \in X\biggr| \modulus{f(x)} > t}}^{\frac{1}{p}} < \infty.
\]
It is easy to check that $\norm{\cdot}_{L^{p, \infty}(X, \mu)}$ is only a quasi-norm, that is, the triangle inequality holds only in the weak form
\[
\norm{f+g}_{L^{p, \infty}(X, \mu)} \le C_{p} \rB{\norm{f}_{L^{p, \infty}(X, \mu)} + \norm{g}_{L^{p, \infty}(X, \mu)}}.
\]
We write $L^{p, \infty}(\Omega)$ for $L^{p, \infty}(\Omega, \modulus{\cdot})$, when $\Omega\subset \mathbb{R}^n$ and $\modulus{\cdot}$ is the Lebesgue measure.\\
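A standard example is $f(x) = \modulus{x}^{-\frac{n}{p}}$, which belongs to $L^{p, \infty}(\mathbb{R}^n)$ but not to $L^p(\mathbb{R}^n)$: indeed, $\modulus{\cB{\modulus{f} > t}} = \modulus{B(0, t^{-p/n})} = C_n t^{-p}$, while $\int_{\mathbb{R}^n} \modulus{f}^p \ensuremath{\mathrm{d}} x = \int_{\mathbb{R}^n} \modulus{x}^{-n} \ensuremath{\mathrm{d}} x$ diverges both near the origin and at infinity.\\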
We recall the
\begin{definition}
Let $U \subset \mathbb{R}^n$ be a star-shaped domain with respect to the point $y \in U$. The \emph{linear homotopy operator} at the point $y$ is the operator
\[
k_y= k_{y, r} : \Omega^r(U) \to \Omega^{r-1}(U),
\]
defined as
\[
(k_y \omega) (x):=\int_0^1{s^{r-1} \omega(sx + (1-s)y)\zak (x-y)\ensuremath{\mathrm{d}} s},
\]
where $(\omega(x)\zak v)\sB{v_1, \cdots, v_{n-1}}:=\omega(x)\sB{v, v_1, \cdots, v_{n-1}}$.
\end{definition}
It is well known that the linear homotopy operator satisfies
\begin{equation}
\label{eq:lho1}
\omega = k_{y, r+1} \ensuremath{\mathrm{d}} \omega + \ensuremath{\mathrm{d}} k_{y, r} \omega\quad \forall \omega \in \Omega^r(U).
\end{equation}
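For example, if $\omega = \ensuremath{\mathrm{d}} f$ is an exact $1$-form, then
\[
(k_{y, 1}\, \ensuremath{\mathrm{d}} f)(x) = \int_0^1 \ensuremath{\mathrm{d}} f(sx + (1-s)y)\zak (x-y)\, \ensuremath{\mathrm{d}} s = \int_0^1 \frac{\ensuremath{\mathrm{d}}}{\ensuremath{\mathrm{d}} s} f(sx + (1-s)y)\, \ensuremath{\mathrm{d}} s = f(x) - f(y),
\]
so $k_y$ recovers the potential of an exact form, consistently with~\eqref{eq:lho1} (for $\omega = \ensuremath{\mathrm{d}} f$ one has $k_{y, 2}\,\ensuremath{\mathrm{d}}\omega = 0$).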
In order to get more regularity, we consider the following \emph{averaged} linear homotopy operator on $B := B(0, 1)$, which coincides with the one introduced by Iwaniec and Lutoborski in~\cite{IL}, except for the choice of the weight function:
\[
T = T_r : \Omega^r(B) \to \Omega^{r-1}(B),
\]
\[
T\omega(x):=\int_B \phi(y) \rB{k_y\omega}(x) \ensuremath{\mathrm{d}} y,
\]
where $\phi \in \mathcal{C}^{\infty}_c(B(0, 2))$ is a positive cut-off function, with $\phi \equiv 1$ in $B$ and
\[\max\cB{\norm{\phi}_{L^{\infty}(\mathbb{R}^n)}, \norm{\nabla \phi}_{L^{\infty}(\mathbb{R}^n)}}\le 3.\]
Clearly,~\eqref{eq:lho1} holds for $T$ as well:
\begin{equation}
\label{eq:lho2}
\omega = T\ensuremath{\mathrm{d}} \omega + \ensuremath{\mathrm{d}} T \omega.
\end{equation}
An essential result is the rigidity estimate due to Friesecke, James and M\"uller:
\begin{theorem}[~\cite{FJM}]
\label{thm:FJM}
Let $\Omega\subset\mathbb{R}^n$ be a bounded Lipschitz domain, $n \ge 2$, and let $1<p<\infty$. There exists a constant $C = C(p, \Omega)$ such that for every $u \in W^{1, p}(\Omega)^{n}$ there exists a rotation $R \in SO(n)$ such that
\[
\norm{\nabla u - R}_{L^p(\Omega)^{n\times n}} \le C\norm{\dist(\nabla u, SO(n))}_{L^p(\Omega)^{n\times n}}.
\]
\end{theorem}
For the weak-$L^p$ estimate, we shall need the following theorem proved by Conti, Dolzmann and M\"uller:
\begin{theorem}[~\cite{CDM}]
\label{thm:cdm}
Let $p \in (1, \infty)$ and $\Omega\subset\mathbb{R}^n$ be a bounded connected domain. There exists a constant $C > 0$ depending only on $p, n$ and $\Omega$ such that for every $u \in W^{1, 1}(\Omega)^n$ such that $\dist(\nabla u, SO(n)) \in L^{p, \infty}(\Omega)^{n\times n}$ there exists a rotation $R \in SO(n)$ such that
\begin{equation}
\label{eq:cdm}
\norm{\nabla u - R}_{L^{p, \infty}(\Omega)^{n \times n}} \le C \norm{\dist(\nabla u, SO(n))}_{L^{p, \infty}(\Omega)^{n\times n}}.
\end{equation}
\end{theorem}
We also recall that, as proved in~\cite{IL}, $T$ satisfies (for smooth forms $\omega$) the pointwise bound
\begin{equation}
\label{eq:bound_LHO}
\modulus{T\omega(x)} \le C_{n, r} \int_B \frac{\modulus{\omega(y)}}{\modulus{x-y}^{n-1}} \ensuremath{\mathrm{d}} y.
\end{equation}
Indeed, for $\omega = \omega_{\alpha} \ensuremath{\mathrm{d}} x^{\alpha}\in \Omega^r(B)$ we have
\[
T\omega (x) = \rB{\int_B \ensuremath{\mathrm{d}} y \phi(y) \int_0^1 t^{r-1} \scal{x-y}{e_i} \omega_{\alpha}(tx+(1-t)y)\,\ensuremath{\mathrm{d}} t} \ensuremath{\mathrm{d}} x^{\alpha} \zak e_i.
\]
We then make the substitution $\Phi(y, t):=\rB{tx+(1-t)y, \frac{t}{1-t}} \equiv (z(t, y), s(t))$, $\Phi: B(0, 1)\times (0, 1)\to B(0,1) \times (0, \infty)$, which gives
\[
\begin{split}
T\omega(x)
&= \rB{\int_B \ensuremath{\mathrm{d}} z \omega_{\alpha}(z)\frac{\scal{x-z}{e_i}}{\modulus{x-z}^n} \int_0^2 s^{r-1}(1+s)^{n-r}\phi(z+s\widehat{z - x})} \ensuremath{\mathrm{d}} x^{\alpha}\zak e_i \equiv\\
&\equiv \rB{\int_B K^i_r(z, x-z) \omega_{\alpha}(z) \ensuremath{\mathrm{d}} z} \ensuremath{\mathrm{d}} x^{\alpha} \zak e_i,
\end{split}
\]
where
\[
K^i_r(x, h):=\frac{\scal{h}{e_i}}{\modulus{h}^n} \int_0^2 s^{r-1} (1+s)^{n-r} \phi(x-s\widehat{h}) \ensuremath{\mathrm{d}} s,
\]
and we noticed that, since $\phi$ has compact support, the integral from $0$ to $\infty$ actually reduces to an integral over a finite interval. That is, we get~\eqref{eq:bound_LHO}. It also follows easily from~\eqref{eq:bound_LHO} that $T$ is a compact operator from $L^p(B, \Lambda^r)$ to $L^p(B, \Lambda^{r-1})$. Moreover, by density,~\eqref{eq:lho2} extends to every differential form $\omega \in W^{1, p}(B, \Lambda^r)$, and to every differential form $\omega \in L^1(B, \Lambda^r)$ whose differential is a bounded Radon measure, $\ensuremath{\mathrm{d}} \omega \in \mathcal{M}_b(B, \Lambda^{r+1})$.
\section{Proof of the Main Results}
Using the homotopy operator, we get the following weak-$L^p$ geometric rigidity estimate for incompatible fields:
\begin{theorem}
\label{thm:useless1}
Let $1^* = 1^*(n):=\frac{n}{n-1}$, and let $B \subset \mathbb{R}^n$ be the unit ball of $\mathbb{R}^n$.
There exists a constant $C = C(n) >0 $ such that for every $A \in L^{1^{*}}(B)$ whose $\Curl(A)$ is a vector measure with bounded total variation and with support compactly contained in $B$, i.e. $\spt \Curl(A) \Subset B$, there exists a rotation $R \in SO(n)$ such that
\[
\norm{A - R}_{L^{1^*, \infty}(B)} \le C\rB{\norm{\dist(A, SO(n))}_{L^{1^*, \infty}(B)} + \modulus{\Curl(A)}(B)}.
\]
\end{theorem}
\begin{proof}
Take any measurable subset $E \subset B$, and let $r > 0$ be such that $\modulus{B(0, r)} = \modulus{E}$. Then, using~\eqref{eq:bound_LHO} and the Hardy-Littlewood rearrangement inequality,
\[
\begin{split}
\int_E \ensuremath{\mathrm{d}} x \modulus{(T\omega)(x)} & \le C \int_E \ensuremath{\mathrm{d}} x \int_B \ensuremath{\mathrm{d}} y \frac{\modulus{\omega(y)}}{\modulus{x - y}^{n-1}} = \\
&=C \int_B \ensuremath{\mathrm{d}} y \modulus{\omega(y)} \int_E \frac{\ensuremath{\mathrm{d}} x}{\modulus{x - y}^{n-1}} \le\\
&\le C \int_B \ensuremath{\mathrm{d}} y \modulus{\omega(y)} \int_{\mathbb{R}^n} \rchi_{E - y}(z)\frac{\ensuremath{\mathrm{d}} z}{ \modulus{z}^{n-1}} \le\\
&\le C \int_B \ensuremath{\mathrm{d}} y \modulus{\omega(y)} \int_{\mathbb{R}^n} \rchi_{B(0, r)}(z) \frac{\ensuremath{\mathrm{d}} z}{\modulus{z}^{n-1}} = \\
&= C \int_B \ensuremath{\mathrm{d}} y \modulus{\omega(y)} \int_0^r \ensuremath{\mathrm{d}} t \int_{\partial B(0, t)} \frac{\ensuremath{\mathrm{d}} \mathcal{H}^{n-1}}{t^{n-1}} = \\
&= C r \norm{\omega}_{L^1(B)} = C \modulus{E}^{\frac{1}{n}} \norm{\omega}_{L^1(B)}.
\end{split}
\]
This gives immediately
\[
\norm{T\omega}_{L^1(B)} \le C_n \norm{\omega}_{L^1(B)},
\]
and thus, using~\eqref{eq:lho2}, $\norm{A - T\ensuremath{\mathrm{d}} A}_{L^1(B)} \le C \norm{\ensuremath{\mathrm{d}} A}_{L^1(B)}$, which extends immediately by density to the case when $\ensuremath{\mathrm{d}} A$ is a vector measure with bounded total variation. Choosing $E = \cB{x \in B \biggr| \modulus{T\ensuremath{\mathrm{d}} A(x)} > t}$, for $t > 0$
\[
t \modulus{E} \le \int_E \modulus{T\ensuremath{\mathrm{d}} A(x)} \ensuremath{\mathrm{d}} x \le C \modulus{E}^{\frac{1}{n}} \modulus{\ensuremath{\mathrm{d}} A}(B).
\]
Passing to the supremum over $t > 0$, we find
\begin{equation}
\label{eq:weak_Lp_lho}
\norm{T\ensuremath{\mathrm{d}} A}_{L^{1^*, \infty}(B)} \le C_n \modulus{\ensuremath{\mathrm{d}} A}(B).
\end{equation}
Since $B$ is convex and $\ensuremath{\mathrm{d}}(A - T\ensuremath{\mathrm{d}} A) = \ensuremath{\mathrm{d}}^2 TA = 0$, we can find a function $g$ such that $\ensuremath{\mathrm{d}} g = A - T\ensuremath{\mathrm{d}} A$. From the estimates proven above, it is possible to apply Theorem~\ref{thm:cdm} to $g$ and find
\[
\norm{\ensuremath{\mathrm{d}} g - R}_{L^{1^*, \infty}(B)} \le C \norm{\dist(\ensuremath{\mathrm{d}} g, SO(n))}_{L^{1^*, \infty}(B)}.
\]
But
\[
\norm{\ensuremath{\mathrm{d}} g - R}_{L^{1^*, \infty}(B)} \ge C \norm{A - R}_{L^{1^*, \infty}(B)} - \norm{T \ensuremath{\mathrm{d}} A}_{L^{1^*, \infty}(B)}
\]
and
\[
\norm{\dist(\ensuremath{\mathrm{d}} g, SO(n))}_{L^{1^*, \infty}(B)} \le C\rB{\norm{\dist(A, SO(n))}_{L^{1^*, \infty}(B)} + \norm{T\ensuremath{\mathrm{d}} A}_{L^{1^*, \infty}(B)}}.
\]
In particular,
\[
\norm{A - R}_{L^{1^*, \infty}(B)} \le C\rB{\norm{\dist(A, SO(n))}_{L^{1^*, \infty}(B)} + \modulus{\Curl(A)}(B)}.\qedhere
\]
\end{proof}
We now give another estimate for $L^p$ norms. It requires an $L^{\infty}$-bound on the matrix field $A$, which is natural in the context of the theory of elasticity.
\begin{theorem}
\label{thm:rig_LL_1}
Let $n \ge 3$, $1^*:=1^*(n):=\frac{n}{n-1}$, $p \in [1^*, 2]$ and fix $M > 0$. There exists a constant $C = C(n, M, p) > 0$, depending only on the dimension $n$, the exponent $p$ and the constant $M$, such that for every $A \in L^{\infty}(B)$, with $\norm{A}_{\infty} \le M$ and $\Curl(A) \in \mathcal{M}_b(B, \Lambda^2)$, $B:=B(0, 1)$, there exists a corresponding rotation $R \in SO(n)$ for which, if $p > 1^*$
\begin{equation}
\label{eq:rig_LL_p}
\int_B \modulus{A - R}^p \ensuremath{\mathrm{d}} x \le C\rB{\int_B\dist^{p}(A, SO(n))\ensuremath{\mathrm{d}} x + \modulus{\Curl(A)}^{1^*}(B)},
\end{equation}
while, if $p = 1^*$,
\begin{equation}
\label{eq:rig_LL}
\begin{split}
\int_B \modulus{A - R}^{1^*} \ensuremath{\mathrm{d}} x \le& C\int_B\dist^{1^*}(A, SO(n))\ensuremath{\mathrm{d}} x + \\
& + C\modulus{\Curl(A)}^{1^*}(B)\cB{\modulus{\log\rB{\modulus{\Curl(A)}(B)}}+1}.
\end{split}
\end{equation}
\end{theorem}
\begin{remark}
\label{rmk:everything_pointless}
The constant $C$ in~\eqref{eq:rig_LL} is \emph{not} scaling invariant in the critical regime $p = 1^*$.
\end{remark}
\begin{proof}[Proof of Theorem~\ref{thm:rig_LL_1}]
Without loss of generality, we can assume that $T\ensuremath{\mathrm{d}} A$ is not identically constant. Indeed, if $T\ensuremath{\mathrm{d}} A$ is identically constant, from the identity $T\ensuremath{\mathrm{d}} A = A - \ensuremath{\mathrm{d}} T A$ we see that $\ensuremath{\mathrm{d}} A = 0$, hence the result follows by applying Theorem~\ref{thm:FJM}. As in the proof of Theorem~\ref{thm:useless1}, applying Theorem~\ref{thm:FJM} (and using $\modulus{a - b}^p \ge 2^{1-p}\modulus{a}^p - \modulus{b}^p$) we find a rotation $R \in SO(n)$ for which the inequality
\begin{equation}
\label{eq:useless11}
\int_B \modulus{A - R}^p \ensuremath{\mathrm{d}} x \le C_n\rB{\int_B\modulus{\dist(A, SO(n))}^p\ensuremath{\mathrm{d}} x + \int_B \modulus{T\ensuremath{\mathrm{d}} A(x)}^p \ensuremath{\mathrm{d}} x}
\end{equation}
holds. We then just need to estimate the last term on the right hand side of~\eqref{eq:useless11}. To this end, fix $\Lambda > 1$ (to be chosen later), and define the integrals
\[
I:=\int_{\modulus{T\ensuremath{\mathrm{d}} A } > \Lambda} \modulus{T\ensuremath{\mathrm{d}} A}^p \ensuremath{\mathrm{d}} x,\qquad II:=\int_{\modulus{T\ensuremath{\mathrm{d}} A}\le \Lambda} \modulus{T \ensuremath{\mathrm{d}} A}^p \ensuremath{\mathrm{d}} x.
\]
We now give an estimate for $I$. Firstly, we recall that $T$ is a bounded operator from $L^p(B, \Lambda^r)$ into $W^{1, p}(B, \Lambda^{r-1})$, whenever $p\in (1, \infty)$ (cf. \cite[Proposition 4.1]{IL}). Moreover, $T\ensuremath{\mathrm{d}} A = A - \ensuremath{\mathrm{d}} TA $, and $\nabla T = S_1 + S_2$, where $S_1$ is a ``weakly'' singular operator which maps $L^{\infty}$ continuously into itself, while $S_2$ is a Calder\'on-Zygmund operator (cf. \cite[Proposition 4.1]{IL}). In particular,
\[
\norm{T \ensuremath{\mathrm{d}} A}_{\text{BMO}}\le C_n \norm{A}_{\infty} \le C_n M,
\]
where $C_n > 0$ is a constant depending only on the dimension. Now, we can write
\begin{equation}
\label{eq:useless777}
I = \Lambda^{p - \pst}\Lambda^{\pst} \modulus{\cB{\modulus{T \ensuremath{\mathrm{d}} A} > \Lambda}} + p\, I',\qquad I':=\int_{\Lambda}^{\infty} \lambda^{p - 1}\modulus{\cB{\modulus{T\ensuremath{\mathrm{d}} A} > \lambda}} \ensuremath{\mathrm{d}} \lambda.
\end{equation}
Clearly,
\[
\Lambda^{\pst} \modulus{\cB{\modulus{T \ensuremath{\mathrm{d}} A} > \Lambda}} \le \norm{T\ensuremath{\mathrm{d}} A}_{L^{\pst, \infty}}^{\pst} \le C \modulus{\ensuremath{\mathrm{d}} A}(B)^{\pst}.
\]
We now take a Calder\'on-Zygmund decomposition of $F(x):=\modulus{T\ensuremath{\mathrm{d}} A(x)}^p$: namely, we find a function $g \in L^{\infty}$, with $\norm{g}_{\infty} \le 2^{-n}\Lambda^p$ and disjoint cubes $\cB{Q_j}_{j \ge 1}$ such that, if $b:=\sum_{j \ge 1} \rchi_{Q_j} F$,
\[
\begin{cases}
F = g + b,\\
2^{-n} \Lambda^p < \fint_{Q_j} F \ensuremath{\mathrm{d}} x\le \Lambda^p \quad \rB{\text{Jensen }\Rightarrow \modulus{\fint_{Q_j} T\ensuremath{\mathrm{d}} A(x) \ensuremath{\mathrm{d}} x} \le \Lambda},\\
\modulus{\bigcup_{j \ge 1} Q_j} < \frac{2^n}{\Lambda^p} \int \modulus{T \ensuremath{\mathrm{d}} A}^p \ensuremath{\mathrm{d}} x.
\end{cases}
\]
With such a decomposition, outside the cubes $Q_j$, $\modulus{T \ensuremath{\mathrm{d}} A}^p = \modulus{g(x)} \le 2^{-n}\Lambda^p \le \Lambda^p$. Hence, using the John-Nirenberg inequality and the elementary estimate
\[
\int_x^{\infty} \lambda^q e^{-\lambda} \ensuremath{\mathrm{d}} \lambda \le e^{-x}(1+x),\quad \forall q \le 1 \text{ and } x \ge 1,
\]
we find that (provided $p \le 2$)
\begin{equation}
\label{eq:useless888}
\begin{split}
I' &= \int_{\Lambda}^{\infty} \lambda^{p - 1} \sum_{j \ge 1} \modulus{\cB{x \in Q_j \biggr| \modulus{T\ensuremath{\mathrm{d}} A} > \lambda}} \ensuremath{\mathrm{d}} \lambda \le \\
&\le \int_{\Lambda}^{\infty} \lambda^{p - 1} \sum_{j \ge 1} \modulus{\cB{x \in Q_j\biggr| \modulus{T\ensuremath{\mathrm{d}} A(x) - \fint_{Q_j} T\ensuremath{\mathrm{d}} A \ensuremath{\mathrm{d}} x} > \lambda - \Lambda}} \ensuremath{\mathrm{d}} \lambda \le \\
&\le C_1 \int_{\Lambda}^{\infty} \lambda^{p - 1} \rB{\sum_{j \ge 1} \modulus{Q_j}} \exp\rB{-C_2 \frac{\lambda - \Lambda}{\norm{T\ensuremath{\mathrm{d}} A}_{\text{BMO}}}} \ensuremath{\mathrm{d}} \lambda < \\
&< C_1 \frac{2^n}{\Lambda^p} \rB{\int \modulus{T\ensuremath{\mathrm{d}} A}^p} e^{C_2 \frac{\Lambda}{\norm{T\ensuremath{\mathrm{d}} A}_{\text{BMO}}}} \rB{\frac{\norm{T\ensuremath{\mathrm{d}} A}_{\text{BMO}}}{C_2}}^{p} \int_{\frac{C_2}{\norm{T\ensuremath{\mathrm{d}} A}_{\text{BMO}}} \Lambda}^{\infty}\lambda^{p - 1} e^{-\lambda} \ensuremath{\mathrm{d}} \lambda \le \\
&\le C_1 \frac{2^n}{\Lambda^p} \rB{\int \modulus{T\ensuremath{\mathrm{d}} A}^p} \rB{\frac{\norm{T\ensuremath{\mathrm{d}} A}_{\text{BMO}}}{C_2}}^p \rB{1 + \frac{C_2}{\norm{T\ensuremath{\mathrm{d}} A}_{\text{BMO}}}\Lambda} \le \\
&\le C_{n, M} \rB{\int \modulus{T \ensuremath{\mathrm{d}} A}^p} \frac{1 + \Lambda}{\Lambda^p}.
\end{split}
\end{equation}
Hence, if we choose $\Lambda$ big enough (depending only on $n$, $M$ and $p$) in~\eqref{eq:useless888},
\begin{equation}
\label{eq:useless999}
p\, I' \le \frac{1}{2} \int \modulus{T\ensuremath{\mathrm{d}} A}^p.
\end{equation}
Let us now estimate $II$. If $p > \pst$, we can write
\[
\begin{split}
\int_{\modulus{T\ensuremath{\mathrm{d}} A} \le \Lambda} \modulus{T\ensuremath{\mathrm{d}} A}^p \ensuremath{\mathrm{d}} x &= \int_{1 < \modulus{T\ensuremath{\mathrm{d}} A}\le \Lambda} \modulus{T\ensuremath{\mathrm{d}} A}^p \ensuremath{\mathrm{d}} x + \sum_{j \ge 0} \int_{2^{-j-1} < \modulus{T\ensuremath{\mathrm{d}} A} \le 2^{-j}} \modulus{T\ensuremath{\mathrm{d}} A}^p \ensuremath{\mathrm{d}} x \le \\
&\le C \cB{\Lambda^p \modulus{\ensuremath{\mathrm{d}} A}^{\pst}(B) + \sum_{j \ge 0} 2^{-jp} \modulus{\cB{\modulus{T\ensuremath{\mathrm{d}} A} > 2^{-(j+1)}}} } \le \\
&\le C\modulus{\ensuremath{\mathrm{d}} A}^{\pst}(B)\rB{\Lambda^p + \sum_{j \ge 0} 2^{-j(p - \pst)}} \le \\
&\le C(n, p, M) \modulus{\ensuremath{\mathrm{d}} A}^{\pst}(B),
\end{split}
\]
which gives~\eqref{eq:rig_LL_p}. In the case $p = \pst$, we are going to make use of the increasing convex function $\Psi$, defined as the linear (convex) continuation of $t \mapsto t^{\pst}$ for $t \ge \Lambda$:
\[
\Psi(t):=\begin{cases}
t^{\pst} & \text{if } t \le \Lambda,\\
\pst \Lambda^{\pst - 1} t + (1-\pst) \Lambda^{\pst}& \text{if }t \ge \Lambda.
\end{cases}
\]
Using the monotonicity and convexity of $\Psi$, the pointwise bound~\eqref{eq:bound_LHO} and Jensen's inequality (with respect to the normalized measure $\frac{\ensuremath{\mathrm{d}} \modulus{\ensuremath{\mathrm{d}} A}}{\modulus{\ensuremath{\mathrm{d}} A}(B)}$), we can estimate
\begin{equation}
\label{eq:uselessB}
\begin{split}
II &\le \int_B \Psi(\modulus{T\ensuremath{\mathrm{d}} A(x)}) \ensuremath{\mathrm{d}} x \le \int_B \Psi\rB{\fint_B \frac{C\modulus{\ensuremath{\mathrm{d}} A}(B) \ensuremath{\mathrm{d}} \modulus{\ensuremath{\mathrm{d}} A}(y)}{\modulus{x-y}^{n-1}}} \le\\
&\le \int_B \fint \Psi\rB{\frac{C\modulus{\ensuremath{\mathrm{d}} A}(B)}{\modulus{x-y}^{n-1}}} \ensuremath{\mathrm{d}} \modulus{\ensuremath{\mathrm{d}} A}(y) \ensuremath{\mathrm{d}} x = \\
&= \fint_B \ensuremath{\mathrm{d}} \modulus{\ensuremath{\mathrm{d}} A}(y) \int_B \Psi\rB{\frac{C\modulus{\ensuremath{\mathrm{d}} A}(B)}{\modulus{x-y}^{n-1}}} \ensuremath{\mathrm{d}} x \le \\
&\le \int_{B(0, 2)} \Psi\rB{\frac{C\modulus{\ensuremath{\mathrm{d}} A}(B)}{\modulus{z}^{n-1}}} \ensuremath{\mathrm{d}} z = C\int_0^2 \ensuremath{\mathrm{d}} \rho \rho^{n-1} \Psi\rB{\frac{C\modulus{\ensuremath{\mathrm{d}} A}(B)}{\rho^{n-1}}} = \\
&= \int_0^{C\rB{\modulus{\ensuremath{\mathrm{d}} A}(B)\Lambda^{-1}}^{\frac{1}{n-1}}} \rho^{n-1}\rB{1^*\Lambda^{\pst - 1} \frac{C\modulus{\ensuremath{\mathrm{d}} A}(B)}{\rho^{n-1}} + (1 - 1^*)\Lambda^{\pst}}\ensuremath{\mathrm{d}} \rho + \\
&\quad+ C \int_{C\rB{\modulus{\ensuremath{\mathrm{d}} A}(B)\Lambda^{-1}}^{\frac{1}{n-1}}}^2 \frac{\modulus{\ensuremath{\mathrm{d}} A}(B)^{1^*}}{\rho} \ensuremath{\mathrm{d}} \rho \le \\
&\le C \modulus{\ensuremath{\mathrm{d}} A}(B)^{1^*}\rB{1 + \modulus{\log\rB{\modulus{\ensuremath{\mathrm{d}} A}(B)}}}.
\end{split}
\end{equation}
Combining together~\eqref{eq:useless777}, ~\eqref{eq:useless999} and~\eqref{eq:uselessB}, we obtain~\eqref{eq:rig_LL}.
\end{proof}
\begin{remark}
The same conclusions can be obtained by considering the operator defined by an average over the sphere:
\[
\tilde{T}\omega(x):=\int_{\mathbb{S}^{n-1}} \ensuremath{\mathrm{d}} \mathcal{H}^{n-1}(y) k_y \omega(x).
\]
\end{remark}
\begin{remark}
Using Korn's inequality instead of Theorem~\ref{thm:FJM}, one can easily prove the linear counterpart of Theorem~\ref{thm:rig_LL_1}.
\end{remark}
\begin{proposition}
\label{prop:curl_bounds_D_SOn}
Let $\Omega \subset \mathbb{R}^n$ be a bounded open set, and suppose $A \in L^2(\Omega)$ and $\spt(A) \Subset \Omega$. Consider a tessellation of $\mathbb{R}^n$ with cubes $\cB{Q^{(\rho)}_i}_{i \ge 1} \equiv \cB{Q(x_i, \rho)}$ of side $\rho$, and define $A_{\rho}$ as the piecewise constant function
\begin{equation}
\label{eq:def_A_rho}
A_{\rho}:=\sum_{i \ge 1} R^{(\rho)}_i \rchi_{Q^{(\rho)}_i},
\end{equation}
where the rotations $R^{(\rho)}_i$ are the ones given by Theorem~\ref{thm:useless1} applied to $A$ on the balls $B(x_i, \frac{3}{2}\rho)$. There exists a constant $C = C(n) > 0$, depending only on the dimension $n$, such that
\begin{equation}
\label{eq:prop1_1}
\frac{1}{\rho}\norm{A - A_{\rho}}_{L^1(\Omega)} + \modulus{DA_{\rho}}(\Omega) \le C\rB{\rho^{\frac{n-2}{2}}\norm{\dist(A, SO(n))}_{L^2(\Omega)} + \modulus{\Curl(A)}(\Omega)}.
\end{equation}
In particular, if $A \in SO(n)$ almost everywhere,
\begin{equation}
\label{eq:prop1_2}
\modulus{DA}(\Omega) \le C \modulus{\Curl(A)}(\Omega).
\end{equation}
That is, $A \in BV(\Omega, SO(n))$ provided $\modulus{\Curl(A)}(\Omega)$ is finite.
\end{proposition}
\begin{proof}
By definition, the rotations $R^{(\rho)}_i$ in~\eqref{eq:def_A_rho} satisfy
\[
\norm{A - R^{(\rho)}_i}_{L^{1^*, \infty}(Q^{(\rho)}_i)} \le C_n\rB{\norm{\dist(A, SO(n))}_{L^{1^*, \infty}(2Q^{(\rho)}_i)} + \modulus{\Curl(A)}(2Q^{(\rho)}_i)}.
\]
Let $\phi \in \mathcal{C}^1_c(\Omega)$. Then
\[
\modulus{\int A_{\rho}\text{div}(\phi) \ensuremath{\mathrm{d}} x} \le \sum_{\substack{i, j \text{ s.t. }\\ \partial Q^{(\rho)}_i \cap \partial Q^{(\rho)}_j \ne \emptyset}} \rho^{n-1} \modulus{R^{(\rho)}_i - R^{(\rho)}_j}.
\]
Now, for any two adjacent cubes $Q^{(\rho)}_i$ and $Q^{(\rho)}_j$, take the rotation $R'_{\rho, i}$ given by applying Theorem~\ref{thm:useless1} to the cube $2Q^{(\rho)}_i$. Then
\[
\begin{split}
\modulus{R^{(\rho)}_i - R^{(\rho)}_j} \rho^{n - 1} &\le \rB{\modulus{R^{(\rho)}_i - R'_{\rho, i}} + \modulus{R'_{\rho, i} - R^{(\rho)}_j}} \rho^{n-1} \le \\
&\le C_n \rB{\norm{R^{(\rho)}_i - R'_{\rho, i}}_{L^{\pst, \infty}(Q^{(\rho)}_i)} + \norm{R'_{\rho, i} - R^{(\rho)}_j}_{L^{\pst, \infty}(Q^{(\rho)}_j)}} \le \\
&\le C_n \rB{\norm{A - R^{(\rho)}_i}_{L^{\pst, \infty}(Q^{(\rho)}_i)} + \norm{A - R^{(\rho)}_j}_{L^{\pst, \infty}(Q^{(\rho)}_j)} + \norm{A - R'_{\rho, i}}_{L^{\pst, \infty}(2Q^{(\rho)}_i)}} \le \\
&\le C_n\rB{\norm{\dist(A, SO(n))}_{L^{1^*, \infty}(4Q^{\rho}_i)} +\modulus{\Curl(A)}(4Q^{(\rho)}_i)} \\
&\le C_n\rB{\rho^{\frac{n-2}{2}}\norm{\dist(A, SO(n))}_{L^2(4Q^{\rho}_i)} +\modulus{\Curl(A)}(4Q^{(\rho)}_i)}.
\end{split}
\]
Taking the supremum over $\phi$, since the cubes $4Q^{(\rho)}_i$ overlap only finitely many times, we obtain
\[
\modulus{DA_{\rho}}(\Omega) \le C_n\rB{\rho^{\frac{n-2}{2}}\norm{\dist(A, SO(n))}_{L^2(\Omega)}+ \modulus{\Curl(A)}(\Omega)}.
\]
Moreover, since $L^{\pst, \infty}(Q^{(\rho)}_i)$ embeds into $L^1(Q^{(\rho)}_i)$ with a constant proportional to $\modulus{Q^{(\rho)}_i}^{\frac{1}{n}} = \rho$:
\[
\frac{1}{\rho} \int_{Q^{(\rho)}_i} \modulus{A - A_{\rho}} \ensuremath{\mathrm{d}} x \le C_n \norm{A - A_{\rho}}_{L^{\pst, \infty}(Q^{(\rho)}_i)} \le C_n\rB{\norm{\dist(A, SO(n))}_{L^{1^*, \infty}(4Q^{\rho}_i)} + \modulus{\Curl(A)}(4Q^{(\rho)}_i)}.
\]
This gives in particular~\eqref{eq:prop1_1}. Moreover
\[
\begin{split}
\norm{A - A_{\rho}}_{L^1(\Omega)} &\le \sum_{i\ge 1} \norm{A - A_{\rho}}_{L^1(Q^{(\rho)}_i)} \le C_n \rho \sum_{i \ge 1}\rB{\norm{A - A_{\rho}}_{L^{\pst, \infty}(2Q^{(\rho)}_i)} + \modulus{\Curl(A)}(2Q^{(\rho)}_i) }\le\\
&\le C_n \rho \rB{\norm{A - A_{\rho}}_{L^{\pst, \infty}(\Omega)} + \modulus{\Curl(A)}(\Omega)} \xrightarrow[\rho \to 0]{} 0.
\end{split}
\]
That is, $A_{\rho} \to A$ strongly in $L^1$. Thus, if we let $\rho\to 0$, we obtain~\eqref{eq:prop1_2} provided $A \in SO(n)$ almost everywhere.
\end{proof}
\section{Introduction}
Both the black hole horizon and the cosmological horizon are described by
the so-called ``fluid'' metric which is characterized by the ``velocity'' field ${\bf v}$ \cite{Visser2005}:
\begin{equation}
ds^2=g_{\mu\nu}dx^\mu dx^\nu= -dt^2 + \left(d {\bf r}- {\bf v}dt\right)^2~,
\label{FluidMetric1}
\end{equation}
where we used the units with $c=1$.
The de Sitter spacetime is characterized by the radial velocity field
\begin{equation}
{\bf v}({\bf r})=v(r)\hat{\bf r} ~~,~~v(r) =Hr= \frac{r}{r_H} ~,
\label{FRWfluid}
\end{equation}
where $r_H=1/H$ is the radius of cosmological horizon.
The ``fluid'' metric for a black hole at the end of the gravitational collapse is the Painlev\'e-Gullstrand metric
\cite{Painleve}, which corresponds to the radial flow field in the form:
\begin{equation}
{\bf v}({\bf r})=v(r)\hat{\bf r} ~~,~~v(r)= -\sqrt{\frac{r_H}{r}}~~,
\label{Schwarzschild}
\end{equation}
where $r_H$ is the radius of the black hole horizon.
The ``fluid'' metric is best suited for the derivation of the Hawking radiation using the semiclassical tunneling picture \cite{Volovik1999a,ParikhWilczek}, because this metric is stationary and thus the energy is well defined. The classical energy spectrum of a particle with mass $m$ in the ``fluid'' spacetime is given by
\begin{equation}
E({\bf p},{\bf r}) = \sqrt{m^2 + p^2}+ {\bf p}\cdot {\bf v}({\bf r}) ~,
\label{Spectrum}
\end{equation}
where the first term is the spectrum in the frame ``comoving with the vacuum'', while the last term plays the role of the Doppler frequency shift.
The tunneling probability is obtained from the imaginary part of the action along the semiclassical trajectory
\begin{equation}
w=w_0 \exp(-2{\bf Im}~S)
~.
\label{TunnelingProbability}
\end{equation}
The radial trajectory $p_r(r)$ is obtained from the energy conservation along the trajectory:
\begin{equation}
E(p_r,r) = \sqrt{m^2 + p_r^2}+ p_r v(r)= E ~,
\label{SpectrumTraj}
\end{equation}
which gives the tunneling exponent
\begin{equation}
2{\bf Im}~S=2{\bf Im}\int dr ~p_r(r) = \frac{2\pi E} {|dv/dr|_{r=r_H}}~.
\label{Action}
\end{equation}
The quantum tunneling thus simulates the thermal radiation from the horizon with Hawking temperature
\begin{equation}
T_{\rm H}= \frac{\hbar}{2\pi } \Big |\frac{dv} {dr}\Big |_{r=r_H}~.
\label{HawkingT}
\end{equation}
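As a consistency check, for the Painlev\'e-Gullstrand flow \eqref{Schwarzschild} one has $\left|dv/dr\right|_{r=r_H}=1/(2r_H)$, so that Eq. \eqref{HawkingT} reproduces the standard black hole result
\begin{equation}
T_{\rm H}^{\rm bh}= \frac{\hbar}{4\pi r_H}= \frac{\hbar}{8\pi G M}~,
\label{HawkingTbh}
\end{equation}
where we used $r_H=2GM$ for the Schwarzschild horizon.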
For the de Sitter Universe with its $v(r)=Hr$, the corresponding temperature
would be
\begin{equation}
T_{\rm H}^{\rm dS}= \frac{\hbar H}{2\pi }~.
\label{HawkingTDS}
\end{equation}
However, in this semiclassical description the prefactor $w_0$ in \eqref{TunnelingProbability} remains unknown, and there are arguments that the symmetry of the de Sitter background nullifies the prefactor
\cite{Volovik2009,volovik1}. For a discussion of the controversies
concerning the stability of the de Sitter vacuum towards the Hawking radiation, see e.g.
Refs.~\cite{Starobinskii1979,StarobinskyYokoyama1994,GarrigaTanaka2007,
TsamisWoodard2007,Polyakov2008,Busch2008}.
While the de Sitter vacuum may be stable, the particles living in the de Sitter environment are certainly not
\cite{Nachtmann1967}. This is because the mass of a particle is not well defined in the de Sitter background. Calculations of the decay rate of composite particles have been performed in
Refs. \cite{Bros2009,Bros2008,Bros2006}. Let us stress that, contrary to the conclusion made in Ref. \cite{AkhmedovBuividovichSingleton2009}, we argue that the possibility for massive free falling particles to radiate other massive particles does not mean that the de Sitter space cannot exist eternally.
In the presence of an external body (a detector or a composite particle), radiation occurs which takes its energy from the body. But the pure de Sitter vacuum (i.e. without any impurity) may be stable.
Here we use the semiclassical tunneling picture for the calculation of the decay rate and compare it with the Hawking radiation.
\section{Ionization rate and Hawking temperature}
\label{DdSb}
We consider two examples of the radiation caused by the presence of an external object in the de Sitter vacuum: the ionization of an atom caused by the de Sitter expansion, discussed in \cite{Volovik2009}, and the decay of a composite particle into two particles in the de Sitter background, discussed in
Refs. \cite{Bros2009,Bros2008,Bros2006}.
The atom (or any other composite or massive particle) plays two roles: it serves as the detector of radiation, and it violates the de Sitter symmetry, providing the nonzero matrix element for the radiation, since, as we argue, the pure de Sitter vacuum is not radiating due to its symmetry.
Let us start with an atom \cite{Volovik2009}, which is at rest in the comoving reference frame. In the reference frame of the atom its position is at the origin, $r=0$. The electron bound to the atom absorbs energy from the gravitational field of the de Sitter background, which is sufficient to escape from the electric potential barrier that originally confined it.
If the electron is originally
sitting at the energy level $E_n$, then the ionization potential is $\epsilon_0=-E_n$. If the ionization potential is much smaller than the electron mass, $\epsilon_0\ll m$, one can use the non-relativistic quantum mechanics to estimate the tunneling rate through the barrier.
The corresponding radial trajectory $p_r(r)$ is obtained from the classical equation $E(p_r,r)= -\epsilon_0$,
which in the non-relativistic approximation reads
\begin{equation}
-\epsilon_0=\frac{p_r^2(r)}{2m} + p_r(r)Hr~.
\label{RadialTrajDS}
\end{equation}
Here $p_r$ is the radial momentum of the electron, and the last term
is the Doppler shift $p_rv(r)$ in Eq. \eqref{Spectrum} provided by the de Sitter expansion \eqref{FRWfluid}. This gives the following radial trajectory of the electron:
\begin{equation}
p_r(r)=-mHr + \sqrt{m^2H^2r^2 -2m\epsilon_0}~.
\label{RadialTrajDS2}
\end{equation}
The sign in front of the square root is chosen such that it corresponds to the flux from the center, i.e. the radial velocity of the particle $u_r=dE/dp_r= p_r/m +Hr$ is positive in the classically allowed region $r>r_0$, where
\begin{equation}
r_0^2=\frac{2\epsilon_0}{mH^2}~.
\label{RegionBarrier}
\end{equation}
The momentum $p_r$ is imaginary in the classically forbidden region $0<r<r_0$,
which demonstrates that there is an energy barrier between the position of the electron in the atom, i.e. at $r=0$, and the position of the free electron with the same energy at $r=r_0$.
Since we assume that $\epsilon_0\ll m$, one has $r_0\ll r_H=1/H$, which means that tunneling occurs well within the horizon. We also assume that $H\ll \epsilon_0 (\epsilon_0/m)^{1/2} \alpha^{-1}$, which
allows us to neglect the region close to the origin where the contribution of the Coulomb potential $-\alpha/r$ to Eq.(\ref{RadialTrajDS}) is important.
The imaginary part of the action
\begin{equation}
{\bf Im}\int dr ~p_r(r)=mH\int_0^{r_0}dr \sqrt{r_0^2-r^2}=
\frac{\pi\epsilon_0}{2H}~,
\label{IonizationExponent}
\end{equation}
gives the probability of ionization
\begin{equation}
w\propto \exp(-2{\bf Im}~S)
=\exp\left(-\frac{\pi\epsilon_0}{H}\right)~.
\label{IonizationProbability}
\end{equation}
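To get a feeling for the numbers: for a typical atomic ionization potential $\epsilon_0\sim 10$ eV and the present-day Hubble rate, for which $\hbar H\sim 10^{-33}$ eV, the exponent $\pi\epsilon_0/\hbar H$ is of order $10^{34}$, so the ionization caused by the cosmological expansion is utterly negligible in the present epoch.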
The quantum tunneling of the electron in the gravitational field of the de Sitter spacetime thus simulates the thermal activation of an atom by a heat bath with effective temperature $T$, which is twice as large as the corresponding Hawking temperature in \eqref{HawkingTDS}:
\begin{equation}
w\propto \exp\left(-\frac{\epsilon_0}{T}\right)~~,~~T=\frac{\hbar H}{\pi}=2T_{\rm H}^{\rm dS}~.
\label{ActivationT}
\end{equation}
\section{Decay rate of composite particle and Hawking temperature}
The same result is obtained in Refs.
\cite{Bros2009,Bros2008,Bros2006}, whose authors considered the decay of a composite particle with mass $m_0$ into two particles, each with mass $m_1 > m_0/2$. Such decay is energetically forbidden in the Minkowski spacetime, but is allowed in the de Sitter background.
It is instructive to derive the results of Refs. \cite{Bros2009,Bros2008,Bros2006} also using the semiclassical tunneling picture. The trajectory of each of the two particles with mass $m_1$ moving in the radial direction from the origin at $r=0$ is obtained from the equation
\begin{equation}
E(p_r,r) = \sqrt{p_r^2 + m_1^2} + p_rHr= \frac{m_0}{2}~.
\label{ParticleTraj1}
\end{equation}
We took into account that each of the two particles carries one half of the energy of the original particle, i.e. $E=m_0/2$. The momentum along the trajectory is
\begin{equation}
p_r(r) = \frac{1}{1-H^2r^2} \left[- \frac{m_0}{2}Hr +\sqrt{ \frac{m_0^2}{4}-m_1^2+ m_1^2H^2r^2}\right].
\label{ParticleTraj2}
\end{equation}
Here again we choose the sign in front of the square root, which in the classically allowed region at $r>r_0$, where $r_0=\sqrt{1-m_0^2/4m_1^2}/H$, corresponds to the classical motion from the center. The momentum is imaginary in the classically forbidden region $r<r_0$, which gives the imaginary contribution to the action:
\begin{equation}
{\bf Im}\int dr ~p_r(r)=\frac{m_1}{H}\int_0^{r_0}dr \frac{\sqrt{r_0^2-r^2}}{r_H^2-r^2} = \frac{\pi m_1}{2H}\left(1-\sqrt{1-\frac{r_0^2}{r_H^2}}\right) = \frac{\pi}{4H}(2m_1-m_0),
\label{DecayExponent}
\end{equation}
where, as before, $r_H=1/H$ is the position of the de Sitter horizon, and the integral has been evaluated with the substitution $r=r_0\sin\theta$. We must take into account that, due to momentum conservation, the two particles tunnel simultaneously in opposite directions (which is called co-tunneling). This adds an extra factor of two in the exponent. As a result one obtains the decay rate:
\begin{equation}
w\propto\exp(-4{\bf Im}~S) =\exp\left(-\frac{\pi(2m_1 -m_0)}{H}\right)~.
\label{DecayRate2}
\end{equation}
This looks like thermal activation by a heat bath with a temperature which is again twice as large as
the corresponding Hawking temperature in \eqref{HawkingTDS}:
\begin{equation}
w\propto \exp\left(-\frac{\pi\Delta m}{H}\right)= \exp\left(-\frac{\Delta m}{T}\right)~~,~~T=\frac{\hbar H}{\pi}=2T_{\rm H}^{\rm dS}~.
\label{DecayRate}
\end{equation}
Here $\Delta m$ is the mass deficit. It is the analog of the ionization potential $\epsilon_0$ in \eqref{IonizationProbability}. In the case of the decay of the particle with mass $m_0$ into two particles with masses $m_1$, the mass deficit is
$\Delta m=2m_1-m_0>0$.
\section{Discussion}
The decay rate calculated using the semiclassical method reproduces the exact result
obtained in Refs.~\cite{Bros2009,Bros2008,Bros2006} (note that there is a misprint in Eq.(16) of Ref.~\cite{Bros2006}: the factor $\pi$ has been omitted in the exponent). Both approaches demonstrate that the effective temperature which characterizes the decay rate of composite particles in de Sitter space is twice as large as the Hawking temperature of the de Sitter horizon. The controversies concerning the factor of 2 for the Hawking and Unruh temperatures can be found in Refs. \cite{AkhmedovaPillingGillSingleton2008,Pilling2008,Akhmedova2-2008,Akhmedov2008} and references therein. However, the same semiclassical method applied to the black hole radiation
\cite{Volovik1999a} gives rise to the correct factor in the Hawking temperature. In the case of the Unruh effect \cite{Unruh1976}, the tunneling approach is different because of the time dependent potential \cite{Volovik1992}, but it also gives the correct factor for the Unruh temperature.
It is important that the effective temperature (\ref{ActivationT}) has nothing to do with
the existence of the cosmological horizon, since both for the atom and for the decaying particle the energy barrier
is situated within the horizon: $r_0<r_H=1/H$. Moreover, the residue of the pole at $r=r_H$ in Eq. \eqref{ParticleTraj2} vanishes.
That is why the possible subtleties, which may influence the semiclassical
tunneling approach in the presence of a horizon
and restore the `correct' factor \cite{AkhmedovaPillingGillSingleton2008,Pilling2008,Akhmedova2-2008,Akhmedov2008},
are irrelevant here. The extra factor of 2 appears in some calculations of the Hawking temperature when the Schwarzschild static coordinates are used in the tunneling method (see also the discussion in Ref. \cite{Ya-PengHu2009}). The Schwarzschild coordinates are not well suited for such calculations, since they have a coordinate singularity at the horizon and do not describe the interior of the black hole.
The Painlev\'e-Gullstrand ``fluid'' metric used in Refs. \cite{Volovik1999a,ParikhWilczek} and in the present paper does not suffer from such drawbacks. That is why the extra factor of 2 which appears in the decay of a composite particle is not an artefact of ill-chosen coordinates.
It is interesting that the `correct' factor in Eq. \eqref{DecayRate} may be restored in the limit of vanishingly small mass of the decaying particle, $m_0\ll m_1$. In this case, equation \eqref{DecayRate2} becomes
\begin{equation}
w\propto \exp\left(-\frac{2\pi m_1}{H}\right)= \exp\left(-\frac{m_1}{T_{\rm H}^{\rm dS}}\right)~.
\label{DecayRateHawking}
\end{equation}
The presence of the cosmological horizon does become important in this case, since in the limit $m_0/m_1 \rightarrow 0$ the position $r_0$, to which the particle with mass $m_1$ is tunneling, approaches the horizon: $r_0 \rightarrow r_H$ when $m_0/m_1\rightarrow 0$.
One may argue that the limit, when the mass $m_0$ of the decaying particle approaches zero, formally corresponds to the creation of the pair of particles with mass $m_1$ from the vacuum; and this corresponds to the Hawking radiation from the de Sitter vacuum.
However, this is not exactly true. The presence of the original particle is necessary for the radiation, otherwise the matrix element $\left<m_0|{\cal H}|m_1,m_1\right>$, which is needed for the decay of the original particle \cite{Bros2009,Bros2008,Bros2006}, drops out. The presence of the original particle, even with zero mass, violates the symmetry of the de Sitter vacuum, and the radiation becomes possible (see the discussion in Ref. \cite{Volovik2009}). Also one should not forget that the factor $2\pi$ in \eqref{DecayRateHawking} appears simply because two particles with masses $m_1$ tunnel simultaneously; that is why the effective activation temperature which appears in the decay of the composite particle in the de Sitter background is $T=2T_{\rm H}^{\rm dS}$. It is still unclear whether there is deep physics in this relation or just a coincidence.
Hawking radiation from the black hole can be also considered as the decay of the black hole with initial mass $M_{\rm i}$ concentrated in the singularity into a particle with the mass $m$ radiated away and the black hole with the smaller mass $M_{\rm f}<M_{\rm i}$. The corresponding trajectory of the radiated particle is
\begin{equation}
E(p_r,r) = \sqrt{m^2 + p_r^2}+ p_r v(r)= M_{\rm i}-M_{\rm f} ~.
\label{BH}
\end{equation}
If the black hole is immersed in the Minkowski spacetime, the energy conservation prescribes $M_{\rm i}=M_{\rm f}+m$ for particle radiated with zero momentum at infinity. Then the straightforward application of the semiclassical tunneling approach \cite{Volovik1999a} gives the radiation rate
\begin{equation}
w\propto \exp\left(-\frac{m}{T_{\rm H}^{\rm bh}}\right) ~,
\label{BH2}
\end{equation}
with the correct Hawking temperature $T_{\rm H}^{\rm bh}$ for the black hole \cite{volovik2}.
This demonstrates that the black hole immersed in the Minkowski spacetime is decaying, while the Minkowski vacuum itself remains stable. In the same manner the body (composite particle, atom, black hole or other object which serves as detector of radiation) immersed in the de Sitter vacuum is decaying, while the de Sitter vacuum may remain stable towards the Hawking radiation.
It is a pleasure to thank Vladimir Eltsov, Vincent Pasquier, Alexander Polyakov and Alexei Starobinsky for useful comments. This work is supported in part by the Russian Foundation
for Basic Research (grant 06--02--16002--a) and the
Khalatnikov--Starobinsky leading scientific school (grant
4899.2008.2).
\section{Introduction}
Medical professionals are faced with a large amount of textual patient information every day. Clinical decision support systems (CDSS) aim to help clinicians in the process of decision-making based on such data. We specifically look at a subtask of CDSS, namely the prediction of clinical diagnosis from patient admission notes. When clinicians approach the task of diagnosis prediction, they usually take similar patients into account (from their own experience, clinic databases or by talking to their colleagues) who presented with typical or atypical signs of a disease. They then compare the patient at hand with these previous encounters and determine the patient’s risk of having the same condition.
In this work, we propose ProtoPatient, a deep neural approach that imitates this reasoning process of clinicians: Our model learns prototypical characteristics of diagnoses from previous patients and bases its prediction for a current patient on the similarity to these prototypes. This results in a model that is both inherently interpretable and provides clinicians with pointers to previous prototypical patients. Our approach is motivated by \citet{chen2019looks} who introduced prototypical part networks (PPNs) for image classification. PPNs learn prototypical parts for image classes and base their classification on the similarity to these prototypical parts. We transfer this work into the text domain and apply it to the extreme multi-label classification task of diagnosis prediction. For this transfer, we apply an additional label-wise attention mechanism that further improves the interpretability of our method by highlighting the most relevant parts of a clinical note regarding a diagnosis.
\begin{figure}[t!]
\centering
\includegraphics[width=0.49\textwidth]{images/intro.pdf}
\caption{Basic concept of the ProtoPatient method. The model makes predictions for a patient (left side) based on the comparison to prototypical parts of earlier patients (right side).}
\label{fig:intro}
\end{figure}
While deep neural models have been widely applied to outcome prediction tasks in the past \cite{ml-outcomepred}, their black-box nature remains a large obstacle for clinical application \cite{van-aken-etal-2022-see}. We argue that decision support is only possible when model predictions are accompanied by justifications that enable clinicians to follow a lead or to potentially discard predictions. With ProtoPatient we introduce an architecture that allows such decision support. Our evaluation on publicly available data shows that the model can further improve state-of-the-art performance on predicting clinical outcomes.
\paragraph{Contributions} We summarize the contributions of this work as follows:
\noindent1. We introduce a novel model architecture based on prototypical networks and label-wise attention that enables interpretable diagnosis prediction. The system learns relevant parts in the text and points towards prototypical patients that have led to a certain decision.\\
2. We compare our model against several state-of-the-art baselines and show that it outperforms earlier approaches. Performance gains are especially visible in rare diagnoses.\\
3. We further evaluate the explanations provided by our model. The quantitative results indicate that our model produces explanations that are more faithful to its inner working than post-hoc explanations. A manual analysis conducted by medical doctors further shows the helpfulness of prototypical patients during clinical decision-making.\\
4. We release the code for the model and experiments for reproducibility.\footnote{Public code repository:\\ \url{https://github.com/bvanaken/ProtoPatient}}
\section{Task: Diagnosis Prediction from Admission Notes}
The task of outcome prediction from admission notes was introduced by \citet{van2021clinical} and assumes the following situation: A new patient $p$ gets admitted to the hospital. Information about the patient is written into an admission note $a_p$. The goal of the decision support system is to identify risk factors in the text and to communicate these risks to the medical professional in charge. For outcome diagnosis prediction in particular, the underlying model determines these risks by predicting the likelihood of a set of diagnoses $C$ being assigned to the patient at discharge.
\paragraph{Data}
\label{sec:data}
We evaluate our approach on the diagnosis prediction task from the clinical outcome prediction dataset introduced by \citet{van2021clinical}. The data is based on the publicly available MIMIC-III database \cite{johnson2016mimic}. It comprises de-identified data from patients in the Intensive Care Unit (ICU) of the Beth Israel Deaconess Medical Center in Massachusetts in the years 2001-2012. The data includes 48,745 admission notes written in English from 37,320 patients in total. They are split into train/val/test sets with no overlap in patients. The admission notes were created by extracting sections from MIMIC-III discharge summaries which contain information known at admission time such as \textit{Chief Complaint} or \textit{Family History}. The notes are labelled with diagnoses in the form of 3-digit ICD-9 codes that were assigned to the patients at discharge. On average, each patient has 11 assigned diagnoses per admission from a total set of 1266 diagnoses.
\begin{figure}[t]
\centering
\includegraphics[width=74mm]{images/line_plot.pdf}
\caption{Distribution of ICD-9 diagnosis codes in MIMIC-III training set. }
\label{fig:distribution}
\end{figure}
\begin{figure}[t!]
\captionsetup{width=1.02\linewidth}
\includegraphics[width=\columnwidth]{images/schema.pdf}
\caption{Schematic view of the ProtoPatient method. Starting at the bottom, document tokens get a contextualized encoding and are then transformed into a label-wise document representation $\mathbf{v_{pc}}$. The classifier simply considers the distance of this representation to a learned prototypical vector $\mathbf{u_c}$. The prototypical patient $\mathbf{v'_c}$ is the training example closest to the prototypical vector.}
\label{fig:schema}
\end{figure}
\paragraph{Challenges} Challenges surrounding diagnosis prediction can be divided into two main categories:
\begin{itemize}[leftmargin=2mm]
\item \textbf{Predicting the correct diagnoses} The number of possible diagnoses is large (>1K) and, as shown in Figure \ref{fig:distribution}, the distribution is extremely skewed. Since many diagnoses only have a few samples, learning plausible patterns is challenging. Further, each admission note describes multiple conditions, some being highly related, while others are not. The text in admission notes is also highly context dependent. Abbreviations like \textit{SBP} (i.a. for \textit{systolic blood pressure} or \textit{spontaneous bacterial peritonitis}) have completely different meanings based on their context. Our models must capture these differences and enable users to check the validity of features used for a prediction.
\item \textbf{Communicating risks to doctors} Apart from assigning scores to diagnoses, for a high-stakes task such as diagnosis prediction, a system must be designed for medical professionals to understand and act upon its predictions. Therefore, models must provide faithful explanations for their predictions and give clues that enable further clinical reasoning steps by doctors. These requirements are challenging, since interpretability of models often comes with a trade-off in their prediction performance \cite{xai-tradeoff}.
\end{itemize}
\section{Method: ProtoPatient}
To address the challenges above, we propose a novel model architecture called ProtoPatient, which adapts the concept of prototypical networks \cite{chen2019looks} to the extreme multi-label scenario by using label-wise attention and dimensionality reduction. Figure \ref{fig:schema} presents a schematic overview. We further show how our model can be efficiently initialized to improve both speed and performance.
\subsection{Learning Prototypical Representations}
We encode input documents $a_{p}$ ($p$ indexes patients)
into vectors $\mathbf{v_p}$ with dimension $D$ and measure their distance to a learned set of prototype vectors. Each prototype vector $\mathbf{u_c}$ represents a diagnosis $c \in C$ in the dataset. The prototype vectors are learned jointly with the document encoder so that patients with a diagnosis can best be distinguished from patients without it.
As a distance measure we use the Euclidean distance $d_{pc} = ||\mathbf{v_p} -\mathbf{u_c}||_2$, which \citet{snell-prototypical} identified as best suited for prototypical networks. We then calculate the sigmoid $\sigma$ of the negative distances to get a prediction $\hat{y}_{pc} = \sigma{(-d_{pc})}$, so that documents closer to a prototype vector get higher prediction scores. We define the loss $L$ as the binary cross entropy ($BCE$) between $\hat{y}_{pc}$ and the ground truth $y_{pc} \in \{0,1\}$.
\begin{equation}
L = \sum_p \sum_c BCE(\hat{y}_{pc}, y_{pc})
\end{equation}
\paragraph{Prototype initialization}
\citet{snell-prototypical} define each prototype as the mean of the embedded support set documents. In contrast, we learn the label-wise prototype vectors end-to-end while optimizing the multi-label classification. This leads to better prototype representations, since not all documents are equally representative of a class, as taking the mean would suggest.
However, using the mean of all support documents is a reasonable starting point.
We set the initial prototype vectors of a class as
$\mathbf{u_{c_{init}}}=\langle \mathbf{v_c} \rangle$, i.e. the mean of all document vectors $\mathbf{v_c}$ with class label $c$ in the training set. We then fine-tune their representation during training. Initial experiments showed that this initialization leads to model convergence in half the number of steps compared to random initialization.
\paragraph{Contextualized document encoder}
For the encoding of the documents, we choose a Transformer-based model, since Transformers are capable of modelling contextualized token representations. For initializing the document encoder, we use the weights of a pre-trained language model. At the time of our experiments, the PubMedBERT \cite{pubmedbert} model reached the best results on a range of biomedical NLP tasks \cite{blurb}. We thus initialize our document encoder with PubMedBERT weights\footnote{Model weights from:
\url{https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext}} and further optimize it with a small learning rate during training.
\subsection{Encoding Relevant Document Parts with Label-wise Attention}
Since we face a multi-label problem, having only one joint representation per document tends to produce document vectors located in the center of multiple prototypes in vector space. This way, important features for single diagnoses can get blurred, especially if these diagnoses are rare. To prevent this, we follow the idea of prototypical part networks of selecting parts of the note that are of interest for a certain diagnosis. In contrast to \citet{chen2019looks}, we use an attention-based approach instead of convolutional filters, since attention is an effective way for selecting relevant parts of text. For each diagnosis $c$, we learn an attention vector $\mathbf{w_c}$. To encode a patient note with regard to $c$, we apply a dot product between $\mathbf{w_c}$ and each embedded token $\mathbf{g_{pj}}$, where $j$ is the token index. We then apply a softmax.
\begin{equation}
\label{formula:att1}
s_{pcj} = softmax(\mathbf{g_{pj}^T \, w_c})
\end{equation}
We use the resulting scores $s_{pcj}$ to create a document representation $\mathbf{v_{pc}}$ as a weighted sum of token vectors.
\begin{equation}
\label{formula:dot}
\mathbf{v_{pc}} = \sum_j s_{pcj} \, \mathbf{g_{pj}}
\end{equation}
\noindent This way, the document representation for a certain diagnosis is based on the parts that are most relevant to that diagnosis.
We then measure the distance $d_{pc} = ||\mathbf{v_{pc}} -\mathbf{u_c}||_2$ to the prototype vector $\mathbf{u_c}$ based on the diagnosis-specific document representation $\mathbf{v_{pc}}$.
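For concreteness, the label-wise scoring can be summarized in a few lines of tensor code. The following PyTorch sketch of Eqs.~(\ref{formula:att1}) and~(\ref{formula:dot}) and of the distance-based prediction is illustrative only; the tensor names and shapes are our own and do not refer to the released implementation:
\begin{verbatim}
import torch

def proto_scores(G, W, U):
    # G: (B, T, D) contextualized token embeddings g_pj
    # W: (C, D)    label-wise attention vectors w_c
    # U: (C, D)    prototype vectors u_c
    # Attention scores s_pcj = softmax_j(g_pj^T w_c)
    S = torch.softmax(torch.einsum("btd,cd->bct", G, W), dim=-1)
    # Label-wise document vectors v_pc = sum_j s_pcj g_pj
    V = torch.einsum("bct,btd->bcd", S, G)
    # Euclidean distance to each prototype, sigmoid of its negative
    d = torch.linalg.norm(V - U.unsqueeze(0), dim=-1)  # (B, C)
    return torch.sigmoid(-d)
\end{verbatim}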
\paragraph{Attention initialization} \label{sec:att-init} The label-wise attention vectors $\mathbf{w_c}$ determine which tokens the final document representation is based on. Therefore, when initializing them randomly, we start our training with document representations which might carry little information about the patient and the corresponding diagnosis. To prevent this cold start, we initialize the attention vectors $\mathbf{w_{c_{init}}}$ with tokens informative for the diagnosis $c$. This way, at training start, these tokens reach higher initial scores $s_{pcj}$. We consider as informative those tokens $\tilde{t}$ that surpass a TF-IDF threshold of $h$. We then use the average of all embeddings $\mathbf{g_{c\tilde{t}}}$ of $\tilde{t}$ in documents corresponding to the diagnosis.
\begin{equation}
\mathbf{w_{c_{init}}} =
\langle \mathbf{g_{c\tilde{t}}} \rangle
\end{equation}
with $\tilde{t} \in \{t : \textrm{tf-idf}(t) > h\}$.
We found $h$=0.05 suitable to get 5-10 informative tokens per diagnosis.
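The following sketch illustrates one possible implementation of this initialization (assumptions: a scikit-learn TF-IDF vectorizer and a static token-to-embedding lookup; the paper averages contextual embeddings $\mathbf{g_{c\tilde{t}}}$ instead):
\begin{verbatim}
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def init_attention_vector(class_docs, embed_lookup, h=0.05):
    """Initialize w_c as the mean embedding of tokens whose TF-IDF score
    within the documents of class c exceeds h. For simplicity this sketch
    uses a static embedding lookup instead of contextual embeddings."""
    tfidf = TfidfVectorizer()
    matrix = tfidf.fit_transform(class_docs)
    max_scores = np.asarray(matrix.max(axis=0).todense()).ravel()
    vocab = tfidf.get_feature_names_out()
    informative = [tok for tok, s in zip(vocab, max_scores) if s > h]
    vectors = [embed_lookup[tok] for tok in informative if tok in embed_lookup]
    return np.mean(vectors, axis=0) if vectors else None
\end{verbatim}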
\subsection{Compressing representations} Label-wise attention vectors for a label space with more than a thousand labels lead to a considerable increase in model parameters and memory load. We compensate for this by reducing the dimensionality $D$ of the vector representations used in our model. We add a linear layer after the document encoder that both reduces the size of the document embeddings and acts as a regularizer, compressing the information encoded for each document. We find that reducing the dimensionality to one third ($D$=256) leads to improved results compared to the full-size model, indicating that denser representations are beneficial to our setup.
\subsection{Presenting prototypical patients} For retrieving prototypical patients $\mathbf{v'_c}$ for decision justifications at inference time, we simply take the label-wise attended documents from the training data that are closest to the diagnosis prototype. By presenting their distances to the prototype vector, we can provide further insights about the general variance of diagnosis presentations. Correspondingly, we can also present patients with atypical presentation of a diagnosis by selecting the ones furthest away from the learned prototype.
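A sketch of this retrieval step follows (our own illustration; it assumes the label-wise attended representations of all training documents have been precomputed):
\begin{verbatim}
import torch

def prototypical_patients(train_docs: torch.Tensor,  # (N, C, D) attended docs
                          prototypes: torch.Tensor,  # (C, D)
                          c: int, k: int = 3, atypical: bool = False):
    """Retrieve the k training patients whose label-wise representation is
    closest to prototype u_c (or furthest, for atypical presentations),
    together with their distances as a measure of variance."""
    dists = torch.linalg.norm(train_docs[:, c, :] - prototypes[c], dim=-1)
    idx = torch.argsort(dists, descending=atypical)[:k]
    return idx, dists[idx]
\end{verbatim}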
\section{Evaluating Diagnosis Predictions} \label{sec:experiments}
\subsection{Experimental Setup}
\begin{table*}[t!]
\begin{tabularx}{\textwidth}{lccc}
& ROC AUC \small{macro} & ROC AUC \small{micro} & PR AUC \small{macro} \\ \hline
HAN \cite{yanghan} & 83.38 \small{$\pm 0.13$} & 96.88 \small{$\pm 0.04$} & 13.56 \small{$\pm 0.01$} \\
HAN + Label Emb \cite{donghlan} & 83.49 \small{$\pm 0.18$} & 96.87 \small{$\pm 0.12$} & 13.07 \small{$\pm 0.14$} \\
HA-GRU \cite{baumelhagru} & 79.94 \small{$\pm 0.57$} & 96.65 \small{$\pm 0.12$} & 9.52 \small{$\pm 1.01$} \\
HA-GRU + Label Emb \cite{donghlan} & 80.54 \small{$\pm 1.67$} & 96.67 \small{$\pm 0.22$} & 10.33 \small{$\pm 1.70$} \\ \arrayrulecolor{gray}\hline\arrayrulecolor{black}
ClinicalBERT \cite{elsentzerclinicalbert} & 80.95 \small{$\pm 0.16$} & 94.54 \small{$\pm 0.93$} & 11.62 \small{$\pm 0.64$} \\
DischargeBERT \cite{elsentzerclinicalbert} & 81.17 \small{$\pm 0.30$} & 94.70 \small{$\pm 0.48$} & 11.24 \small{$\pm 0.88$} \\
CORe \cite{van2021clinical} & 81.92 \small{$\pm 0.09$} & 94.00 \small{$\pm 1.10$} & 11.65 \small{$\pm 0.78$} \\
PubMedBERT \cite{pubmedbert} & 83.48 \small{$\pm 0.21$} & 95.47 \small{$\pm 0.22$} & 13.42 \small{$\pm 0.57$} \\ \arrayrulecolor{gray}\hline\arrayrulecolor{black}
Prototypical Network & 81.89 \small{$\pm 0.22$} & 95.23 \small{$\pm 0.01$} & \textcolor{white}{--}9.94 \small{$\pm 0.36$} \\
ProtoPatient & 86.93 \small{$\pm 0.24$} & \textbf{97.32} \small{$\pm 0.00$} & \textbf{21.16} \small{$\pm 0.21$} \\
ProtoPatient + Attention Init & \textbf{87.93} \small{$\pm 0.07$} & 97.24 \small{$\pm 0.02$} & 17.92 \small{$\pm 0.65$} \\ \hline
\end{tabularx}
\caption{Results in \% AUC for diagnosis prediction task (1266 labels) based on MIMIC-III data. The ProtoPatient model outperforms the baselines in micro ROC AUC and PR AUC. The attention initialization further improves the macro ROC AUC. $\pm$ values are standard deviations. Label Emb: Label Embeddings. Attention Init: Attention vectors initialized as described in Section \ref{sec:att-init}.}
\label{table:results}
\end{table*}
\paragraph{Baselines} We compare ProtoPatient to hierarchical attention models and to Transformer models pre-trained on (bio)medical text, representing two state-of-the-art approaches for ICD coding and outcome prediction tasks, respectively.
\begin{itemize}[leftmargin=*]
\item \textbf{Hierarchical attention models} Hierarchical Attention Networks (\textbf{HAN}) were introduced by \citet{yanghan}. They are based on bidirectional gated recurrent units, with attention applied on both the sentence and token level. \citet{baumelhagru} built \textbf{HA-GRU} upon this concept using only sentence-wise attention, while adding a label-wise attention scheme comparable to ProtoPatient. \citet{donghlan} further show that pre-initialized \textbf{label embeddings} learned from ICD code co-occurrence improve results for both approaches. We thus evaluate the models with and without label embeddings.\footnote{Note that \citet{donghlan} also propose the H-LAN model, which is a combination of HAN and HA-GRU using label-wise attention on sentence and token level. However, the model is only applicable to smaller label spaces ($<$100) due to its memory footprint and thus cannot be evaluated on our task.}
\item \textbf{Transformers pre-trained on in-domain text}
\citet{elsentzerclinicalbert} applied clinical language model fine-tuning on two Transformer models based on the BioBERT model \cite{biobert}. \textbf{ClinicalBERT} was trained on all clinical notes in the MIMIC-III database, and \textbf{DischargeBERT} on all discharge summaries. They belong to the most widely used clinical language models and achieve high scores on multiple clinical NLP tasks. The \textbf{CORe} model \cite{van2021clinical} is also based on BioBERT, but further pre-trained with an objective specific to patient outcomes, which achieved higher scores on clinical outcome prediction tasks. \citet{pubmedbert} introduced \textbf{PubMedBERT} which was, in contrast to the other models, trained from scratch on articles from PubMed Central with a dedicated vocabulary. It is currently the best performing approach on the BLURB \cite{blurb} benchmark.
\end{itemize}
\paragraph{Training}
We train all baselines on the dataset introduced in Section \ref{sec:data}. For training HAN and HA-GRU we use the code and best performing hyperparameters as provided by \citet{donghlan}. We further use their pre-trained ICD-9 label embeddings (for details, see Appendix \ref{sec:label-embeddings}). For training the Transformer-based models and ProtoPatient, we use hyperparameters reported to perform best for BERT-based models by \citet{van2021clinical} and additionally optimize the learning rate and the number of warm-up steps with a grid search. We further truncate the notes to a context size of 512. See Appendix \ref{sec:hyperparameter} for all details on the chosen hyperparameters. We report the scores of all models as an average over three runs with different seeds.
\paragraph{Ablation studies}
ProtoPatient combines three strategies: prototypical networks, label-wise attention and dimensionality reduction. We conduct ablation studies to measure the impact of each strategy. To this end, we apply both label-wise attention and dimensionality reduction to a PubMedBERT model using a standard classification head. We further train a prototypical network without label-wise attention and ProtoPatient with different dimension sizes. The results are found in Tables \ref{table:ablation} and \ref{table:ablation-full}.
\paragraph{Transfer to second data set} Clinical text data varies from clinic to clinic. We want to test whether the patterns learned by the models are transferable to data sources other than MIMIC-III. We use another publicly available dataset from the i2b2 De-identification and Heart Disease Risk Factors Challenge \cite{i2b2}, further processed into admission notes by \citet{van2021clinical}. The data consists of 1,118 admission notes labelled with the ICD-9 codes for \textit{chronic ischemic heart disease}, \textit{obesity}, \textit{hypertension}, \textit{hypercholesterolemia} and \textit{diabetes}. We evaluate models without fine-tuning on the new data to simulate a model transfer to another clinic. The resulting scores are reported in Table \ref{table:i2b2}.
\subsection{Results} \label{sec:results} We present the results of all models on the diagnosis prediction task in Table \ref{table:results}. In addition, we show the macro ROC AUC score across codes depending on their frequency in the training set in Figure \ref{fig:buckets}. We summarize the main findings as follows.
\paragraph{ProtoPatient outperforms previous approaches} The results show that ProtoPatient achieves the best scores among all evaluated models. Pre-initializing the attention vectors further improves the macro ROC AUC score. Ablation studies show that all components play a role in improving the results. A prototypical network without label-wise attention is not able to cope with the extreme multi-label setting. PubMedBERT with a standard classification head also benefits from label-wise attention, but not to the same extent. Combining prototypical networks and label-wise attention thus brings additional benefits. The choice of dimension size is another important factor. Using 768 dimensions (the standard BERT base size) appears to lead to over-parameterization in the attention and prototype vectors. Using 256 dimensions also improves generalization, as shown by the best results on the i2b2 data set in Table \ref{table:i2b2}.
\begin{table}
\small
{\renewcommand{\arraystretch}{1.2}%
\setlength\tabcolsep{5pt}
\begin{tabularx}{\columnwidth}{lc}
& ROC AUC \scriptsize{macro} \\ \hline
\textbf{Dimensionality reduction} & \\
ProtoPatient 768 & 83.56 \small{$\pm 0.17$} \\
ProtoPatient \scriptsize{(our proposed model with $D$=256)} & \textbf{86.93} \small{$\pm 0.24$} \\
\hline
\textbf{Transformer vs. Prototypical} & \\
PubMedBERT 768 & 83.48 \small{$\pm 0.21$} \\
PubMedBERT 768 + Label Attention & \textbf{84.10} \small{$\pm 0.25$} \\
ProtoPatient 768 & 83.56 \small{$\pm 0.17$} \\ \arrayrulecolor{gray}\hline\arrayrulecolor{black}
\textbf{Label-wise attention} & \\
PubMedBERT 256 & 83.61 \small{$\pm 0.04$} \\
PubMedBERT 256 + Label Attention & \textbf{84.68} \small{$\pm 0.52$} \\ \hline
\end{tabularx}}
\caption{\textbf{Ablation studies} comparing different dimension sizes and how a standard Transformer (PubMedBERT) performs with additional label-wise attention.}
\label{table:ablation}
\end{table}
\paragraph{Improvements for rare diagnoses}
Figure \ref{fig:buckets} shows that the ROC AUC improvements are particularly large for codes that are rare ($\leq$50 times) in the training set. Prototypical networks are known for their few-shot capabilities \cite{snell-prototypical}, which also prove useful in our scenario with mixed label frequencies. For extremely rare codes that appear less than ten times, the attention initialization described in Section \ref{sec:att-init} further improves results. This indicates that the randomly initialized attention vectors need a certain number of samples to learn the most important tokens, and that pre-initializing them can accelerate this process.
\paragraph{PubMedBERT and HAN are the best baselines} The pre-trained PubMedBERT and the HAN model achieve the highest scores among the baselines. Interestingly, PubMedBERT outperforms the Transformer models pre-trained on clinical text. This indicates that training from scratch with a domain-specific vocabulary is beneficial for the task. The scores of the HAN model further emphasize the importance of label-wise attention. The addition of label embeddings to HAN and HA-GRU, however, does not add significant improvements in our case.
\begin{figure}
\centering
\includegraphics[width=0.42\textwidth]{images/bucket_results.pdf}
\caption{Macro ROC AUC scores regarding the frequency of ICD-9 codes in the training set. ProtoPatient models show the largest performance gain in rare codes ($\leq$100 samples). Attention initialization leads to large improvement for very rare codes ($<$10 samples).}
\label{fig:buckets}
\end{figure}
\begin{table}[t]
\small
{\renewcommand{\arraystretch}{1.2}%
\begin{tabularx}{\columnwidth}{lc}
& ROC AUC \small{macro} \\ \hline
PubMedBERT & 82.11 \small{$\pm 0.12$}\\
Prototypical Network & 69.65 \small{$\pm 0.22$} \\
ProtoPatient 768 & 85.28 \small{$\pm 0.49$} \\
ProtoPatient & \textbf{87.38} \small{$\pm 0.20$} \\
ProtoPatient + Attention Init & 86.72 \small{$\pm 1.52$} \\ \hline
\end{tabularx}}
\caption{Performance on a second data set based on clinical notes from the \textbf{i2b2 challenge} \cite{i2b2}. ProtoPatient shows the highest degree of transferability. Further metrics shown in Table \ref{table:i2b2-full}.}
\label{table:i2b2}
\end{table}
\section{Evaluating Interpretability}\label{sec:interpret}
We evaluate the interpretability of ProtoPatient with quantitative and qualitative analyses as follows.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{images/faithfulness.pdf}
\caption{Evaluating faithfulness of highlighted tokens. Lower scores indicate more faithful explanations. ProtoPatient's token highlights are part of the model decision and thus more faithful than post-hoc explanations.}
\label{fig:xai}
\end{figure}
\paragraph{Quantitative study on faithfulness} Faithfulness describes how well explanations correspond to the inner workings of a model, a property essential to their usefulness. We apply the explainability benchmark introduced by \citet{xai} to compare the faithfulness of ProtoPatient's token highlights to post-hoc explanation methods. Following the benchmark, faithfulness is measured by incrementally masking highlighted tokens, expecting a steep drop in model performance if the tokens are indeed relevant to the model prediction. See Appendix \ref{sec:interpret-full} for details. Due to the high computational costs of the evaluation, we limit our analyses to three diagnoses with a high severity in the ICU: sepsis, intracerebral hemorrhage and pneumonia. We compare against four common post-hoc explanation methods, namely Lime \cite{lime}, Occlusion \cite{occlusion}, InputXGradient \cite{inputxgradient}, and Gradient Backpropagation \cite{gradientbackpropagation}, which we apply to the PubMedBERT baseline. Figure \ref{fig:xai} shows the results, for which lower scores mean more faithful explanations (i.e. a steeper drop in model performance). We see that ProtoPatient's explanations reach the lowest scores for all three labels, showing that they are more faithful than the post-hoc explanations. This is a result of the interpretable structure of ProtoPatient, in which model decisions are directly based on the highlighted parts. We show these parts, i.e. the tokens that are most frequently highlighted by the model for the three analyzed diagnoses, in Appendix \ref{sec:relevant-tokens}.
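A schematic sketch of the masking procedure is given below (our own simplification of the benchmark of \citet{xai}; the \texttt{model\_score} callable is an assumption):
\begin{verbatim}
def faithfulness_curve(model_score, doc_tokens, ranked_idx,
                       mask="[MASK]", steps=(1, 2, 5, 10)):
    """Incrementally mask the top-ranked tokens of an explanation and
    record the model score after each step; a steep drop indicates a
    faithful explanation. `model_score(tokens)` is an assumed callable
    returning the probability for the diagnosis under evaluation."""
    scores = [model_score(doc_tokens)]
    tokens = list(doc_tokens)
    for k in steps:
        for i in ranked_idx[:k]:
            tokens[i] = mask
        scores.append(model_score(tokens))
    return scores
\end{verbatim}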
\begin{table}[b!]
\small
{\renewcommand{\arraystretch}{1.1}%
\begin{tabular}{|llll}
\hline
\multicolumn{4}{|c|}{\begin{tabular}[c]{@{}c@{}}\textbf{Analysis of prototypical patient cases}\\(principal diagnoses)\end{tabular}} \\ \hline
\multicolumn{4}{|c|}{Q1: Prototypical patient shows typical clinical signs} \\
\multicolumn{4}{|c|}{\begin{tabular}{p{2.5cm}|p{2.5cm}}\centering yes & \centering no \end{tabular}} \\
\hline
\multicolumn{4}{|c|}{\begin{tabular}{p{2.5cm}|p{2.5cm}}\centering 21 & \centering 2 \end{tabular}} \\
\hline
\multicolumn{4}{|c|}{Q2: Highlighted prototypical parts are relevant} \\
\multicolumn{4}{|c|}{\begin{tabular}{p{2cm}|p{2cm}|p{2cm}}\centering mostly & \centering partially & \centering hardly \end{tabular}} \\ \hline
\multicolumn{4}{|c|}{\begin{tabular}{p{2cm}|p{2cm}|p{2cm}}\centering 21 & \centering 2 & \centering 0 \end{tabular}} \\
\hline
\multicolumn{4}{|c|}{Q3: Prototypical patient is helpful for diagnosis decision} \\
\multicolumn{4}{|c|}{\begin{tabular}{p{2.5cm}|p{2.5cm}}\centering yes & \centering no \end{tabular}} \\ \hline
\multicolumn{4}{|c|}{\begin{tabular}{p{2.5cm}|p{2.5cm}}\centering 17 & \centering 6 \end{tabular}} \\
\hline
\multicolumn{4}{|c|}{\begin{tabular}[c]{@{}c@{}}\textbf{Analysis of highlighted parts}\\(all diagnoses)\end{tabular}} \\ \hline
\multicolumn{4}{|c|}{\begin{tabular}[c]{@{}c@{}}Q4: Highlighted tokens are relevant for diagnosis\\ (i.e. describe diagnosis, symptoms or risk factors)\end{tabular}} \\
\multicolumn{1}{|l|}{} & \multicolumn{3}{c|}{\begin{tabular}{p{1.6cm}|p{1.6cm}|p{1.6cm}}\centering mostly & \centering partially & \centering hardly \end{tabular}} \\ \hline
\multicolumn{1}{|l|}{TPs} & \multicolumn{3}{c|}{\begin{tabular}{p{1.6cm}|p{1.6cm}|p{1.6cm}}\centering 78 & \centering 3 & \centering 7 \end{tabular}} \\
\multicolumn{1}{|l|}{FPs} & \multicolumn{3}{c|}{\begin{tabular}{p{1.6cm}|p{1.6cm}|p{1.6cm}}\centering 50 & \centering 12 & \centering 9 \end{tabular}} \\
\multicolumn{1}{|l|}{FNs} & \multicolumn{3}{c|}{\begin{tabular}{p{1.6cm}|p{1.6cm}|p{1.6cm}}\centering 22 & \centering 10 & \centering 12 \end{tabular}} \\
\hline
\multicolumn{4}{|c|}{Q5: Important tokens are missing from highlights} \\
\multicolumn{1}{|l|}{} & \multicolumn{3}{c|}{\begin{tabular}{p{2.3cm}|p{2.3cm}}\centering yes & \centering no \end{tabular}} \\ \hline
\multicolumn{1}{|l|}{TPs} & \multicolumn{3}{c|}{\begin{tabular}{p{2.3cm}|p{2.3cm}}\centering 17 & \centering 71 \end{tabular}} \\
\multicolumn{1}{|l|}{FPs} & \multicolumn{3}{c|}{\begin{tabular}{p{2.3cm}|p{2.3cm}}\centering 13 & \centering 58 \end{tabular}} \\
\multicolumn{1}{|l|}{FNs} & \multicolumn{3}{c|}{\begin{tabular}{p{2.3cm}|p{2.3cm}}\centering 2 & \centering 42 \end{tabular}} \\
\hline
\end{tabular}}
\caption{Results of the manual analysis conducted by medical doctors on ProtoPatient outputs. The prototypical patients were analyzed for the principal diagnoses only, while the highlighted parts of the patient letter at hand were analyzed for all diagnoses. Q1..5 denote the questions answered regarding each patient case.}
\label{table:analysis}
\end{table}
\paragraph{Manual analysis by medical doctors} We conduct a manual analysis with two medical doctors (one specialized, one resident) to understand whether highlighted tokens and prototypical patients are helpful for their decisions. They used a demo application of ProtoPatient\footnote{Demo URL available at:\\\url{https://protopatient.demo.datexis.com}} and analyzed 20 random patient letters with 203 diagnoses in total. The results are shown in Table \ref{table:analysis}. The doctors first identified the principal diagnoses and then rated the corresponding prototypical patients presented by the model. Note that some patients have more than one principal diagnosis. In 21 of 23 cases, the prototypical samples showed typical signs of the respective diagnosis, and 17 of them were rated as helpful for making a diagnosis decision. Cases in which they were not helpful included very rare conditions or presentations differing strongly from the case at hand. The doctors further analyzed the highlighted tokens for all diagnoses and found that they contained mostly relevant information in 150 cases. Examples of highlighted risk factors judged as plausible were \textit{obesity}, known to relate to \textit{diabetes type II}, \textit{untreated hypertension} to \textit{heart failure}, or a medication history of the \textit{anticoagulant coumadin} to \textit{atrial fibrillation}. They also identified cases in which the highlighted tokens were partially or hardly relevant. In these cases, the highlighted tokens often included stop words or punctuation, indicating that the attention vector failed to learn relevant tokens. This was mainly observed in very frequent diagnoses such as \textit{hypertension} or \textit{anemia}, which corresponds to the lower model performance on these conditions (see Figure \ref{fig:buckets}). This is because conditions very common in the ICU are often either not indicated in the clinical note or not labelled, so that the model cannot learn clear patterns regarding their relevant tokens.
\begin{table*}
\scriptsize
{\renewcommand{\arraystretch}{1.35}%
\begin{tabularx}{\textwidth}{l|X l X}
Admission note & Relevant parts of admission note & similar to & Parts of prototypical patient notes\\ \hline
\multirow{3}{165pt}[-5.5pt]{PRESENT ILLNESS: Patient is a 35-year-old male pedestrian \hlc{pink1}{struck by a bicycle} from behind with \hlc{pink1}{positive loss of consciousness for 6 minutes} at the scene after landing on his head. At arrival at ER \hlc{green1}{patient was confused}, had multiple contusions noted on a \hlc{violet1}{head CT scan including bilateral frontal and right temporal contusions}. His cervical spine and abdominal examinations were negative radiologically. The patient was then \hlc{cyan1}{transferred to the Emergency Room. Patient had several episodes of vomiting} during flight and during the trauma workup. He was assessed and was \hlc{green1}{intubated for airway protection}. The patient was \hlc{pink1}{given coma score of 9} upon initial assessment. Patient remaining \hlc{violet1}{hemodynamically stable throughout the transfer} and throughout the workup in the ED. […]}
&
\hlc{pink1}{struck by a bicycle} …
\hlc{pink1}{loss of consciousness for 6 minutes} …
\hlc{pink1}{coma score 9} …
&
\multirow{1}{0pt}[-5pt]{
$\longrightarrow$ }
&
\textbf{cerebral hemorrhage}
loss of consciousness …
struck by vehicle …
with a gcs of 10 …
\\ \cline{2-4}
&
\hlc{violet1}{head CT scan} …
\hlc{violet1}{bilateral contusions} …
\hlc{violet1}{hemodynamically stable} …
&
\multirow{1}{0pt}[-5pt]{
$\longrightarrow$ }
&
\textbf{skull fracture}
head wound …
right and left contusions …
stable blood circulation …
\\ \cline{2-4}
&
\hlc{cyan1}{transferred to Emergency Room} …
\hlc{cyan1}{several episodes of vomiting} …
&
\multirow{1}{0pt}[-3pt]{
$\longrightarrow$ }
&
\textbf{shock}
patient had multiple episodes of vomiting during the day …
\\ \cline{2-4}
& \hlc{green1}{patient was confused} …
\hlc{green1}{intubated for airway protection} …
&
\multirow{1}{0pt}[-3pt]{
$\longrightarrow$ }
&
\textbf{acute respiratory failure}
patient was disoriented …
later intubated for protection…
\\ \hline
\end{tabularx}}
\caption{Exemplary output of ProtoPatient. The model identifies parts in an admission note that are similar to (i.e. \textit{"look like"}) parts from prototypical patient notes seen during training, leading to the prediction of this diagnosis.}
\label{table:example}
\end{table*}
\section{Related Work}
\paragraph{Diagnosis prediction from clinical notes}
Predicting diagnosis risks from clinical text has been studied using different methods. \citet{fakhraie2011s} analyzed the predictive value of clinical notes with bag-of-words and word embeddings. \citet{attention-clinical} experimented with adding attention modules to recurrent neural models. Recently, the use of Transformer models for diagnosis prediction has outperformed earlier approaches. \citet{van2021clinical} applied BERT-based models further pre-trained on clinical cases to predict patient outcomes. However, the black-box nature of these models hinders their application in clinical practice. We therefore introduce ProtoPatient, which uses Transformer representations, but provides interpretable predictions.
\paragraph{Prototypical networks for few-shot learning}
Prototypical networks were first introduced by \citet{snell-prototypical} for the task of few-shot learning. They initialized prototypes as centroids of support samples per episode and applied the approach to image classification tasks. \citet{sun-fewshot-text} adapted the approach to text documents with hierarchical attention layers. Recently, related approaches based on prototypical networks have been used for multiple few-shot text classification tasks \cite{wen21infproc,zhang21kdd,ren20coling,deng20wsdm,feng23compspeech}. In contrast to this body of work, we do not train our model in a few-shot scenario using episodic learning. However, our model shows related capabilities by improving results for diagnoses with few available samples.
\paragraph{Prototypical networks for interpretable models}
\citet{chen2019looks} used prototypical networks in a different setup to build an interpretable model for image classification. To this end, they learn prototypical parts of images to mimic human reasoning. We adapt their idea and show how to apply it to clinical natural language.
Recently, \citet{prototext} and \citet{prototex} applied the concept of prototypical networks to text classification and showed how prototypical texts help to interpret predictions. In contrast to their work and following \citet{chen2019looks}, we identify prototypical \textit{parts} rather than whole documents by using label-wise attention. This makes interpreting results easier and enables multi-label classification with over a thousand labels.
\paragraph{Label-wise attention}
\citet{mullenbach-caml} introduced label-wise attention for clinical text with the CAML model. Since then, the method has been further improved by hierarchical attention approaches \cite{baumelhagru,yanghan,donghlan}. Label-wise attention has mainly been used for ICD coding, a task related to diagnosis prediction that differs in the input data: ICD coding is done on notes that describe the whole stay at a clinic. In contrast, outcome diagnosis prediction uses admission notes as input and identifies diagnosis \textit{risks} rather than the diagnoses already mentioned in the text. Our method--combining prototypical networks with label-wise attention--is particularly focused on detecting and highlighting those risks to enable clinical decision support.
\section{Discussion}
\subsection{Reflection on the Challenges}
\citet{stopexplaining} urges the community to stop explaining black-box models and to build interpretable models instead. With ProtoPatient we introduce a model with a simple decision process--\textit{this patient looks like that patient}--that is understandable to medical professionals and inherently interpretable. An exemplary output is shown in Table \ref{table:example}.
Our results indicate that the model is able to deal with contextual text in clinical notes, e.g.~when identifying \textit{SBP} as a risk factor for sepsis in Appendix \ref{sec:relevant-tokens}. In addition, it improves results on rare diagnoses, which are especially challenging for doctors to detect due to lack of experience and sensitivity towards their signs. Overall, our approach demonstrates that interpretability can be improved without compromising performance. The modularity of the prototype vectors further allows clinicians to modify the model even after training. This can be done by adding prototypes whenever a new condition is found, or by directly defining certain patients as prototypical for the system.
\subsection{Limitations of this work}
Our model currently learns relations between diagnoses only indirectly, due to the label-wise nature of the classification. However, considering relations or conflicts between diagnoses is an important part of clinical decision-making. One way to include such relations is the addition of a loss term incorporating diagnosis relations, as proposed by \citet{mullenbach-caml}. Another limitation is that the current model only considers one prototype per diagnosis, even though most diagnoses have multiple presentations, varying among patient groups. We therefore propose further research towards including multiple prototypes into the system.
\section{Conclusion and Future Work}
In this work, we present ProtoPatient, which enables interpretable outcome diagnosis prediction from text. Our approach enhances existing methods in their prediction capability—especially for rare classes—and presents benefits to doctors by highlighting relevant parts in the text and pointing towards prototypical patients. The modularity of prototypical networks can be explored in future research. One promising approach is to introduce multiple prototypes per diagnosis, corresponding to the multiple ways diseases can present in a patient. Prototypes could also be added manually by medical professionals based on patients they consider prototypical. Another approach would be to initialize prototypes from medical literature and compare them to those learned from patients.
\section*{Acknowledgments}
We would like to thank the anonymous reviewers for their valuable feedback. This work was funded by the German Federal Ministry for Economic Affairs and Energy (BMWi) under grant agreements 01MD19003B (PLASS) and 01MK2008MD (Service-Meister), as well as the Federal Ministry of Education and Research (BMBF) under grant agreement 16SV8845 (KIP-SDM).
\section{Introduction}
The Internet of Things (IoT) refers to the idea of connecting everyday objects to the Internet, enabling them to send and receive data. There is a wide range of applications for IoT in the areas of smart cities, asset tracking, smart agriculture, health monitoring and so on. The IoT landscape consists of wireless technologies that operate in licensed or unlicensed bands, achieving ranges from less than ten meters up to tens of kilometers with data rates from a few bps to Mbps. Low Power Wide Area Network (LPWAN) targets low-power and long-range applications with data rates from 10 bps up to a few kbps. Narrowband-IoT (NB-IoT) \cite{spec} is a licensed LPWAN technology, which was standardized in 2016 by the Third Generation Partnership Project (3GPP). NB-IoT can be deployed in Global System for Mobile Communications (GSM) or Long-Term Evolution (LTE) networks, and can co-exist with LTE. NB-IoT uses a new physical layer design that facilitates a wide range of IoT applications in the licensed spectrum that require long range, deep indoor penetration, low cost, low data rate, low power consumption, and massive capacity \cite{overview1}.
Among the aforementioned requirements, this paper focuses on uplink coverage enhancement. Many solutions are proposed in the standard to achieve coverage enhancement for NB-IoT. The first solution, referred to as \textit{tones}, is to reduce the bandwidth and to perform resource allocation based on tones (or subcarriers) instead of Resource Blocks (RBs). A lower number of tones enables the User Equipment (UE) to transmit in a narrower bandwidth. The second solution is \textit{repetitions}, which refers to repeating the data transmission multiple times. The last solution is the \textit{Modulation and Coding Scheme (MCS)}, which is already used in LTE to achieve better coverage \cite{overview3}. Considering the new features of tones and repetitions, uplink link adaptation needs to be performed in three dimensions: tones, repetitions and MCS. In this paper, the coverage enhancement features of NB-IoT are implemented in the NS-3 simulator and the effect of each of these features on reliability and latency is evaluated and analyzed. Furthermore, a hybrid link adaptation considering tones, repetitions and MCS is provided such that the latency per user is minimal while good reliability is achieved. Different optimization techniques are evaluated and compared in terms of execution time and accuracy.
\subsection{Background}
NB-IoT has a bandwidth of 180 kHz which corresponds to one RB of LTE. In the uplink, the bandwidth of 180 kHz can be distributed among 12 subcarriers or tones with 15 kHz spacing, or 48 subcarriers with 3.75 kHz spacing. The subframe duration for 3.75 kHz spacing is 4 ms, which is four times that of 15 kHz spacing \cite{primer}.
NB-IoT supports single-tone and multi-tone communication in the uplink. In case of multi-tone, there are three options with 12, 6 and 3 subcarriers. In case of single-tone, there is only 1 subcarrier with either 15 kHz or 3.75 kHz spacing. A higher number of tones is used to provide higher data rates for devices in normal coverage, while a lower number of tones is used for devices that need extended coverage. A single packet of a fixed size is transmitted over 1 ms in case of 12 tones, 2 ms in case of 6 tones, 4 ms in case of 3 tones, 8 ms in case of 1 tone (15 kHz spacing) and 32 ms in case of 1 tone (3.75 kHz spacing) \cite{rohde}.
MCS is the feature that influences the type of modulation and the code rate. MCS is directly proportional to the code rate and Transport Block Size (TBS) and can take values from 0 to 12 \cite{3gpp}. As the channel quality deteriorates, the MCS becomes lower and thus the code rate and TBS become lower. MCS, tones and repetitions are assigned based on channel quality. Repetitions of uplink data can take values of 1, 2, 4, 8, 16, 32, 64 and 128. When channel quality is poor, tones and MCS are decreased and repetitions are increased.
\subsection{State-of-the-art}
As NB-IoT is a relatively new technology, many open issues need to be investigated, such as performance analysis, link adaptation, design optimization, and co-existence with other technologies. The performance of NB-IoT with respect to coverage, capacity, and co-existence with LTE has been studied in, for instance, \cite{coverage1}, \cite{coverage2}, \cite{coverage3} and \cite{overview4}. The focus of our paper is on the implementation and evaluation of coverage enhancement techniques and on link adaptation based on coverage enhancement methods.
NS-3 is an open source network simulator commonly used for evaluating wireless technologies such as LTE. The NS-3 LTE module is well-tested and can be used as a base for developing the NB-IoT module. The work on an NB-IoT module in NS-3 began in \cite{ns3a}, in which the authors modified downlink signaling traffic such as the Master Information Block (MIB) and the System Information Block (SIB) to comply with the NB-IoT specification. In \cite{ns3b}, the authors restricted the bandwidth to one Resource Block (RB), which is 180 kHz, and separated the control and data channels. This paper aims to extend \cite{ns3b} by modifying the resolution of resources from RB to subcarriers, implementing the single and multi-tone uplink features, and including repetitions in the uplink.
With respect to uplink link adaptation of NB-IoT, the authors of \cite{2D} propose a 2D link adaptation strategy based on MCS and repetitions and use link-level simulations to evaluate the performance of their solution. In this paper, however, we use a system and network level simulator (NS-3) to evaluate our solution through end-to-end simulations. Further, they do not take tones into account, which is an important dimension to be considered for link adaptation. Furthermore, they do not consider a hybrid solution; instead, they fix one parameter while varying the other. In \cite{coverage4}, the authors derive analytic equations that model the impact of repetitions, tones and MCS. They also propose an exhaustive search method that searches all possible combinations of repetitions, tones, and MCS to minimize the transmission latency. However, their analysis of the coverage enhancement features is entirely based on analytic models and has not been verified using network simulations. This paper first performs the hybrid link adaptation using analytic approaches and compares the outcome to the results of end-to-end simulations to verify the accuracy of the solution. Furthermore, instead of an exhaustive search method, we propose a closed-form solution that achieves the optimum result with lower complexity.
\section{NB-IoT implementation and evaluation}
\label{Imp}
The NB-IoT module of NS-3 is built using the existing LTE module. The LTE module in NS-3 includes aspects such as Radio Resource Control (RRC), a physical layer error model \cite{error}, QoS-aware packet scheduling, inter-cell interference coordination, and dynamic spectrum access. Based on the LTE module in NS-3, the authors of \cite{ns3b} implemented the basic features for the eMTC and NB-IoT modules. Based on the NB-IoT module described in \cite{ns3b}, we implement the uplink coverage enhancement features.
\subsection{Implementation of tones and repetitions}
In order to implement tones, modifications are made in both the time domain (extending the packet duration according to the number of tones) and the frequency domain (transmitting over a narrower bandwidth). It is known that reducing the bandwidth improves the Signal-to-Noise Ratio (SNR), as the transmitted power spectral density increases. In order to support bandwidths lower than 180 kHz (1 RB), the existing resource allocation is modified from RB-based allocation to subcarrier-based allocation.
In order to implement repetitions, major modifications are made in the time domain (repeating a data packet). Whenever repetition is used, the subsequent repetitions of the same data are aggregated at the eNodeB. Hence, the resulting SNR after the aggregation is the sum of the SNRs of each received repetition. Therefore, a repetition of two results in an improvement of approximately 3 dB in SNR \cite{coverage4}. In order to achieve this behavior, we have modified the physical layer of the base station in NS-3 to aggregate all the repetitions, and use the final sum of SNRs as input to the error model described in \cite{error}.
\subsection{Implementation of link adaptation}
\begin{figure} [t]
\centering
\includegraphics [scale=1,width=0.7\columnwidth]{link__1_.eps}
\vspace{-4 mm}
\caption{The link adaptation mechanism.}
\vspace{-2 mm}
\label{link}
\end{figure}
\begin{figure*} [t]
\hspace*{-.2in}
\centering
\includegraphics [width=\columnwidth]{combined.eps}
\text{\hspace{8mm}(a) Assigned value vs zone\hspace{8mm}(b) Achieved PDR vs zone\hspace{8mm}(c) Achieved delay vs zone}
\caption{Performance of link adaptation in open area and urban scenarios.}
\label{dummy}
\end{figure*}
Link adaptation is performed based on the SNR received from the Secondary Reference Signal (SRS). The SRS is a signal that is sent periodically by the UE. Fig.~\ref{link} shows the link adaptation mechanism. The SNR received from the SRS is provided as input to the error model of NS-3 to find the Block Error Rate (BLER) corresponding to that SNR \cite{error}. If the BLER is less than the target BLER of 0.1, the MCS and the number of tones are fixed to their highest values (MCS 12 and 12 tones) and the repetitions are fixed to the lowest value (1 repetition). If the target BLER is not met, MCS, tones and repetitions are adapted and re-evaluated using the error model. This process is repeated until a BLER of 0.1 or less is reached. The final value of the MCS, tones or repetitions that resulted in a BLER of 0.1 or less is assigned to the UE.
Three independent methods of link adaptation are performed (a code sketch of this sweep follows the list below):
\begin{enumerate}
\item MCS is adapted based on SNR (repetitions are fixed to 1 and tones are fixed to 12).
\item Tones are adapted based on SNR (repetitions are fixed to 1 and MCS is fixed to 12).
\item Repetitions are adapted based on SNR (MCS is fixed to 12 and tones are fixed to 12).
\end{enumerate}
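For illustration, the following Python sketch (our own, not part of the NS-3 implementation; the \texttt{bler\_of} interface is an assumed stand-in for the NS-3 error model) captures the sweep common to the three methods:
\begin{verbatim}
TONES = [12, 6, 3, 1]                # 15 kHz; the full scheme adds 1 tone at 3.75 kHz
REPETITIONS = [1, 2, 4, 8, 16, 32, 64, 128]
MCS_VALUES = list(range(12, -1, -1)) # from highest (12) down to 0

def adapt_one_parameter(snr, bler_of, mode, target_bler=0.1):
    """Sweep a single parameter from its most to least aggressive setting
    until the predicted BLER meets the 0.1 target; the other two
    parameters stay at their fixed defaults."""
    mcs, tone, rep = 12, 12, 1
    sweep = {"mcs": MCS_VALUES, "tone": TONES, "rep": REPETITIONS}[mode]
    for value in sweep:
        if mode == "mcs":
            mcs = value
        elif mode == "tone":
            tone = value
        else:
            rep = value
        if bler_of(snr, mcs, tone, rep) <= target_bler:
            break  # keep the first setting that meets the target
    return mcs, tone, rep
\end{verbatim}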
\subsection{Evaluation}
\label{eval}
The three link adaptation strategies are evaluated using NS-3. The performance evaluation is carried out for random deployment scenarios. We consider two scenarios: open area and urban. In the open area scenario, the eNodeB is located in the center and the UEs are placed randomly at different distances from the eNodeB, up to a distance of 25 km. Note that as the distance increases, the SNR becomes lower. In the urban scenario, we include buildings and assume that 80-90\% of the users are located inside the buildings. For a given distance, the SNR is relatively lower inside a building than outside. The simulation parameters for these scenarios are shown in Table~\ref{main}.
\begin{table}[b]
\centering
\caption{Simulation parameters}
\label{main}
\begin{tabular}{|l|l|}
\hline
\textbf{Parameter} & \textbf{Value} \\ \hline
Number of UE & 100 - 600 \\ \hline
UEs distribution & random \\ \hline
Propagation model & \begin{tabular}[c]{@{}l@{}}Okumura-Hata propagation model (Open area)\\ Hybrid building propagation model (Urban)\end{tabular} \\ \hline
Frequency Band & DL: 925 MHz, UL: 880 MHz \\ \hline
Tx Power & eNodeB: 46 dBm, UE: 20 dBm \\ \hline
Packet Size & 12 bytes \\ \hline
\# Runs & 100 runs \\ \hline
Inter-packet interval & 10 seconds \\ \hline
Zone start (m) & \begin{tabular}[c]{@{}l@{}}0, 200, 600, 800, 1000, 2000, 2500, 2750, 3000\\ 3500, 4000, 5000, 6000, 8000, 10000 \end{tabular} \\
\hline
Zone width (m) & 200, 250, 500, 1000 \\ \hline
\end{tabular}
\end{table}
In each scenario, the nodes are grouped into different zones. There are 16 zones which start at different distances from the eNodeB, as indicated by the ``Zone start'' field in Table~\ref{main}. The zones are separated by different intervals, as indicated by the ``Zone width'' field in Table~\ref{main}.
Fig.~\ref{dummy} shows the results for the open area and urban scenarios. Fig.~\ref{dummy}(a) shows the average value of the assigned MCS, repetitions and tones in the different zones. It is important to note that the farther the zone, the lower the value of the SNR. We can observe that, due to the indoor deployment in the urban scenario, the values of MCS, tones and repetitions are already adapted at closer distances. In the urban scenario, UEs that are located inside buildings have very low SNR values compared to the open area, and the MCS, repetitions and tones are adapted more rapidly in order to improve reliability.
Similarly, as shown in Fig.~\ref{dummy}(b), the reduction in Packet Delivery Ratio (PDR) is steeper in the urban scenario. We can observe from the PDR graph that MCS adaptation provides good reliability until zone 11 (4 km) in the open area scenario, while it starts to fail in zone 6 (2 km) in the urban scenario. Tones start to fail at zone 14 (8 km) in the open area and zone 9 (3 km) in the urban scenario. Repetitions follow the same trend and achieve good reliability until zone 16 (10 km) in the open area and zone 11 (4 km) in the urban scenario. Therefore, we can achieve good reliability up to a maximum distance of 10 km in open areas and 4 km in urban areas. Repetitions have the best performance in both urban and open areas. However, an increase in repetitions has to be traded off against a corresponding increase in power consumption. Fig.~\ref{dummy}(c) shows the average delay or latency in the different zones. We can observe that the delay starts to increase at a shorter distance in the urban scenario compared with the open area. This clearly shows that the transmission latency increases as we move from open to urban areas. The delay follows the adapted value and increases towards the farther zones.
Based on the above results, we can conclude that the improvement in coverage comes at the cost of a higher delay. The link adaptation strategies illustrated above adapt only one of the features, i.e. tones, MCS or repetitions. However, in practice, a more useful solution is to adapt all three of them in an optimal manner.
\section{Hybrid solution}
The link adaptation strategies described in the previous section adapt only one of the three coverage enhancement parameters, which results in saturation before good coverage is achieved. In order to extend coverage, combining these parameters into a hybrid solution is inevitable. When MCS, tones or repetitions are adapted to improve the reliability of a UE that has poor coverage, there is a corresponding increase in the transmission delay of the UE. Therefore, in the hybrid solution, the values of tones, repetitions and MCS are chosen in an optimized manner such that the delay per user is minimal, while the reliability is not compromised. To achieve this, we formulate an optimization problem with the transmission delay per user as the objective function and the reliability as the constraint. In addition to the transmission delay, energy consumption would also be an interesting objective for minimization. In this paper, however, we only focus on the delay.
The delay of a UE is composed of the synchronization delay, the Random Access Channel (RACH) delay and the data transmission delay. In this paper, we only consider the \textit{data transmission delay}, as it is the delay that can stretch in time based on the amount of data. The uplink data transmission delay per UE consists of the transmission of the Downlink Control Information (DCI), the transmission of the data, and the transmission and reception of the acknowledgment. The data transmission delay per UE for the uplink (UL) transmissions can be written as \cite{ns3b},
\begin{equation}\label{Delay}
Delay = {TL} \times \ceil{\dfrac{Datalength}{TBS(MCS,RU)}},
\end{equation}
where $TL$ is the transmission latency, $Datalength$ is the data size per user and $TBS$ is the transport block size. $TL$ depends on the duration of a single transmission of the DCI ($t_{PDCCH}$), the repetitions of the control transmission ($RLDC$), the downlink to uplink switching delay ($t_{DUS}$), the duration of a single subframe ($t_{PUSCH}$), the time factor ($t$), the number of repetitions of the data transmissions ($RLUS$), the uplink to downlink switching delay ($t_{UDS}$), the repetitions of the acknowledgement ($RLUC$) and the time taken for the acknowledgement ($t_{ACK}$), as shown in Fig.~\ref{latency}. The Narrowband Physical Uplink Shared Channel (NPUSCH) is used for uplink data transmission and the Narrowband Physical Downlink Control Channel (NPDCCH) is used for downlink control transmission.
Hence, $TL$ can be written as,
\begin{align}
TL =&{}\, RLDC \times t_{PDCCH}+t_{DUS}+RLUS\times t\times t_{PUSCH}\nonumber \\
&{}+t_{UDS}+ RLUC \times t_{ACK}.\label{TL}
\end{align}
The time factor $t$ depends on the number of tones assigned to the UE and can take the values 1, 2, 4, 8, and 32 for 12, 6, 3, 1 tones of 15 kHz spacing and 1 tone of 3.75 kHz spacing, respectively. The acknowledgement and retransmissions are disabled to better analyze the performance of our solution, i.e., $t_{UDS}$ and $t_{ACK}$ are set to zero. For simplicity, we assume that there are no repetitions in the DCI ($RLDC=0$) and that the number of resource units is one ($RU=1$).
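For illustration, the following Python sketch evaluates the resulting reduced delay expression (our own illustration; the TBS values and timing constants are illustrative placeholders, not taken from the specification tables):
\begin{verbatim}
import math

# Illustrative TBS values in bits per MCS index for one resource unit;
# the exact mapping is given in the 3GPP specification.
TBS = [16, 24, 32, 40, 56, 72, 88, 104, 120, 136, 144, 176, 208]

def uplink_delay(mcs, rlus, t_factor, datalength, t_pusch=1.0, t_dus=8.0):
    """Data transmission delay per UE following Eqs. (1)-(2) with the
    paper's simplifications: RLDC = 0, acknowledgements disabled
    (t_UDS = t_ACK = 0) and RU = 1. All times in ms; the defaults for
    t_pusch and t_dus are assumptions."""
    tl = t_dus + rlus * t_factor * t_pusch        # reduced Eq. (2)
    return tl * math.ceil(datalength / TBS[mcs])  # Eq. (1)
\end{verbatim}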
\begin{figure} [t]
\centering
\includegraphics [scale=1,width=0.5\columnwidth]{uldelay.eps}
\caption{Uplink transmission latency in NB-IoT }
\label{latency}
\vspace{-3 mm}
\end{figure}
Let us denote $t_{PUSCH}$ by $K_0$, the remaining constant terms of \eqref{TL} by $K_1$, $Datalength$ by $K_2$ and $RLUS$ by $r$. Hence, we can rewrite \eqref{Delay}
as follows,
\begin{align}
\label{Delay_final}
Delay &= \left({K_1} + {K_0}\, \times {r}\, \times {t}\,\right) \ceil{\dfrac{{K_2}}{TBS(m)}},
\end{align}
where $r$ is the number of repetitions, $t$ is the time factor, $K_2$ is the data length, $K_0$ and $K_1$ are constants and $TBS$ is the transport block size that depends on the MCS, denoted by $m$. The table showing the relationship between MCS and TBS is specified in \cite{spec}. Considering the delay expression given in \eqref{Delay_final}, the optimization problem can be formulated as,
\begin{equation}
\begin{aligned}
& \underset{r,t,m}{\text{min}}
& & Delay(r,t,m) \\
& \text{s. t.}
& & \text{SNR} \geq \text{SNR}_{\text{Th}}(m)\\
& \text{}
& & r \in R,\: t \in T,\: m \in M,
\end{aligned}
\end{equation}
where $\text{SNR}_{\text{Th}}(m)$ is the threshold SNR value that depends on the MCS, denoted by $m$. MCS is an integer value that belongs to the set $M=\{0,1,2,\ldots,12\}$, $r$, representing repetitions, is an integer value that belongs to the set $R=\{1,2,4,8,16,32,64,128\}$ and $t$, representing the time factor, is an integer value that belongs to the set $T=\{1,2,4,8,32\}$. In order to achieve good reliability, the received SNR should be above $\text{SNR}_{\text{Th}}(m)$. The received SNR depends on the propagation loss, the repetitions and the tones. The number of tones influences the transmission bandwidth, which is given by $BW=180\,\text{kHz}/f$, where $f$ is the frequency factor. The frequency factor $f$ can take the values 1, 2, 4, 12, and 48 for 12, 6, 3, 1 tones of 15 kHz spacing and 1 tone of 3.75 kHz spacing, respectively.
The transmitted power spectral density ($PSD_{TX}$) depends on the frequency factor ($f$) and is given by $P_{TX}/BW$, where $P_{TX}$ is the transmitted power.
Hence, the received SNR is calculated as,
\begin{equation}\label{SNR_final}
\text{SNR} = K_3\times f\times r,
\end{equation}
where $K_3=P_{TX}/(180\text{kHz} \times N_{0}\times PL)$, $N_0$ is the noise power spectral density and $PL$ is the pathloss.
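The following minimal sketch evaluates \eqref{SNR_final} directly (our own illustration; all quantities are assumed to be in linear scale):
\begin{verbatim}
def received_snr(p_tx, pathloss, n0, f, r, bw=180e3):
    """Effective SNR of Eq. (5): the frequency factor f narrows the
    bandwidth (raising the transmit PSD) and the r repetitions are
    summed up at the eNodeB."""
    k3 = p_tx / (bw * n0 * pathloss)
    return k3 * f * r
\end{verbatim}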
The SNR obtained in \eqref{SNR_final} should be greater than a given threshold ($\text{SNR}_{\text{Th}}(m)$) to achieve good reliability and a low BLER. The value of $\text{SNR}_{\text{Th}}(m)$ depends on the MCS and can be obtained from the NB-IoT BLER curves generated for each MCS. These BLER curves are generated by performing link-level simulations. Fig.~\ref{BLER} shows the generated BLER curves on the uplink for different MCS values under an Additive White Gaussian Noise (AWGN) channel. Hence, $\text{SNR}_{\text{Th}}(m)$, for all $m$, can be obtained from Fig.~\ref{BLER} by setting the value of BLER to 0.1.
\begin{figure} [htb]
\centering
\includegraphics[width=0.5\columnwidth]{bler-2.eps}
\caption{BLER curves under different MCS values for AWGN channel.}
\label{BLER}
\end{figure}
The obtained $\text{SNR}_{\text{Th}}(m)$ needs to be met in order to guarantee that the packet is received at the base station without any corruption.
Using the expressions given in \eqref{Delay_final} and \eqref{SNR_final}, the optimization problem can be re-written as follows:
\begin{equation}
\begin{aligned}
& \underset{r,t,m}{\text{min}}
& & \dfrac{{K_2}\, \left({K_1} + {K_0}\, {r}\, {t}\,\right)}{TBS(m)} \\
& \text{s. t.}
& & K_3\times f\times r \geq \text{SNR}_{\text{Th}}(m)\\
& \text{}
& & r \in R,\: t \in T,\: m \in M.
\end{aligned}
\label{actual}
\end{equation}
Note that the ceiling in \eqref{Delay} is dropped since it will not alter the outcome of the optimization. The objective function given in \eqref{actual} is non-convex and hard to solve analytically without any approximations. The optimization problem is solved using three methods, namely the exhaustive search, Lagrange and fsolve methods. In order to simplify the optimization problem \eqref{actual} for the Lagrange and fsolve methods, some approximations are made. Furthermore, the integer constraints on $r$, $t$ and $m$ are relaxed. In order to obtain these approximations, we use the curve fitting function in MATLAB.
The first approximation is made for $TBS(m)$, which is the denominator of the objective function. The obtained approximation is given by,
\begin{equation}
TBS(m) = a\, m^2 + b\, m + c,
\label{approxTBS}
\end{equation}
where $a = 0.65$, $b = 7.5$, $c = 15.5$, and the mean square error between the actual $TBS(m)$ given in \cite{spec} and the obtained approximation is equal to 20.
The second approximation is for $\text{SNR}_{\text{Th}}(m)$ in \eqref{actual}. The approximation of $\text{SNR}_{\text{Th}}(m)$ is derived from the BLER curves in Fig.~\ref{BLER} and is given by,
\begin{equation}
\text{SNR}_{\text{Th}}(m) = q_1\, m^3 +\, q_2\, m^2 + \, q_3\, m + q_4,
\label{approxthreshold}
\end{equation}
where $q_1 = 0.001055$, $q_2 = 0.007623$, $q_3 = 0.01359$, and $q_4 = 0.3615$. The mean square error between the actual and the approximated $\text{SNR}_{\text{Th}}(m)$ is 0.0047.
The final approximation concerns the time and frequency factors. The objective function is based on $t$, whereas the SNR is based on $f$. Parameters $f$ and $t$ are both based on the number of tones and are interrelated. For example, for a 15 kHz single tone, $t$ is equal to 8 and $f$ is equal to 12. Hence, we create an expression that relates $f$ to $t$, which is given by,
\begin{equation}
f = {p_1}\, t^3 + {p_2}\, t^2 + {p_3}\, t + {p_4},
\label{approxtone}
\end{equation}
where $p_1= -0.004994$, $p_2 = 0.2031$, $p_3 = 0.08811$, and $p_4 = 0.834$. The mean square error between the actual and the approximated function is 0.015.
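The fitted polynomials can be evaluated directly, e.g. as in the following sketch, which uses the coefficients reported above; as a sanity check, the frequency factor for a 15 kHz single tone ($t=8$) evaluates to approximately 12:
\begin{verbatim}
import numpy as np

# Fitted coefficients taken from Eqs. (7)-(9) in the text.
def tbs_approx(m):                    # Eq. (7)
    return 0.65 * m**2 + 7.5 * m + 15.5

def snr_th_approx(m):                 # Eq. (8), linear scale
    return np.polyval([0.001055, 0.007623, 0.01359, 0.3615], m)

def freq_factor(t):                   # Eq. (9)
    return np.polyval([-0.004994, 0.2031, 0.08811, 0.834], t)

# freq_factor(8) evaluates to roughly 11.98, close to the exact f = 12.
\end{verbatim}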
Based on the above approximations, the optimization problem \eqref{actual} can be re-written as,
\begin{multline}
\begin{aligned}
\underset{r,t,m}{\text{min}}
& \qquad \dfrac{{K_2}\, \left({K_1} + {K_0}\, {r}\, {t}\,\right)}{a\, m^2 + b\, m + c} \\
\text{s. t.}
& \qquad K_3 \times \left({p_1}\, t^3 + {p_2}\, t^2 + {p_3}\, t + {p_4}\right) \times r \geq \\ & \qquad q_1\, m^3 +\, q_2\, m^2 + \, q_3\, m + q_4\\
& \qquad 0 \leq r \leq 128, \, 0 \leq m \leq 12,\, 0 \leq t \leq 32.
\end{aligned}
\label{approx}
\end{multline}
Based on the formulations of the optimization problem in equations \eqref{actual} and \eqref{approx}, we solve the optimization problem using different methods.
\subsubsection{Lagrange}
The method of Lagrange multipliers is used to solve the minimization problem described in \eqref{actual} and \eqref{approx}. In order to simplify the optimization problem and to obtain a closed-form solution, we fix the value of the MCS, $m$. Thus, we search for the optimum values of $r$ and $t$ for a given value of $m$. Hence, in \eqref{approx}, since $m$ is a constant, there is no need to use the approximation of $TBS(m)$ given in \eqref{approxTBS}.
Furthermore, we relax the integer constraint on $r$ and $t$. Based on \eqref{approx} and these assumptions, the objective function and the constraints for a given $m$ are written as,
\begin{equation*}
\begin{aligned}
& \underset{r,t,m}{\text{min}}
& & \dfrac{{K_2}\, \left({K_1} + {K_0}\, {r}\, {t}\,\right)}{TBS(m)} \\[6pt]
& \text{s. t.}
& & {K_3}\, {r}\, \left({p_1}\, t^3 + {p_2}\, t^2 + {p_3}\, t + {p_4}\right)- \text{SNR}_{\text{Th}} (m) \geq 0&
\end{aligned}
\end{equation*}
\begin{equation*}
\begin{aligned}
& 0 \leq r \leq 128, \, 0 \leq m \leq 12,\, 0 \leq t \leq 32&\\
\end{aligned}
\end{equation*}
The Lagrangian $L$ is defined as:
\begin{align}
L=&\,\frac{{K_2}\, \left({K_1} + {K_0}\, {r}\, t\right)}{TBS(m)}-
{\lambda}\, {K_3}\, {r}\, \left({p_1}\, t^3 + {p_2}\, t^2 + {p_3}\, t + {p_4}\right) \nonumber \\ &+ \lambda\,\text{SNR}_{\text{Th}}(m)
\end{align}
where $r$, $t$ and the Lagrange multiplier $\lambda$ are the variables or unknowns. The partial derivatives of the Lagrangian $L$ are calculated with respect to $r$, $t$ and $\lambda$ as shown below:
\begin{align}
&\dfrac{\partial L}{\partial r}=0, \quad \dfrac{\partial L}{\partial \lambda}=0, \quad \dfrac{\partial L}{\partial t}=0
\end{align}
\begin{align}
& \frac{{K_0}\, {K_2}\, t}{TBS(m)} - \, {K_3}\, {\lambda}\, \left({p_1}\, t^3 + {p_2}\, t^2 + {p_3}\, t + {p_4}\right)=0, \label{op1}
\end{align}
\begin{align}
& \frac{{K_0}\, {K_2}\, {r}}{TBS(m)} - \, {K_3}\, {\lambda}\, {r}\, \left(3\, {p_1}\, t^2 + 2\, {p_2}\, t + {p_3}\right)=0,\label{op2}
\end{align}
\begin{align}
& \text{SNR}_{\text{Th}}(m) - \, {K_3}\, {r}\, \left({p_1}\, t^3 + {p_2}\, t^2 + {p_3}\, t + {p_4}\right)=0 \label{op3}
\end{align}
Solving \eqref{op1} and \eqref{op2} for $t$ for a given $m$, we get
\begin{equation}
\frac{2p_1t^3+p_2t^2-p_4}{3p_1t^2+2p_2t+p_3}=0.\label{t_opti}
\end{equation}
From \eqref{t_opti}, we can see that $t$ depends only on the SNR. In order to get $t$, we solve $2p_1t^3+p_2t^2-p_4=0$ such that $3p_1t^2+2p_2t+p_3\neq 0$. Solving these equations for the given parameters $p_i$, we get the following
\begin{align}
t& =-1.936,\: 2.142,\: 20.128 \\
t&\neq -0.215, \:27.327
\end{align}
Out of these $t$ values, the negative value is discarded and the only possible values are 20.128 and 2.142. We can obtain $r$ by substituting the values of $t$ in \eqref{op3}. Then, we search for the integer combination of $r$ and $t$ that gives minimal delay and an SNR value higher than $\text{SNR}_{\text{Th}}$. The value of $m$ is chosen by performing an exhaustive search and obtaining the values of $t$ and $r$ for each value of $m$. The optimum solution is obtained by choosing the combination of $r$, $t$ and $m$ that yields the lowest delay, while achieving good reliability, i.e., $\text{SNR} \geq \text{SNR}_{\text{Th}}$.
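The following sketch summarizes the resulting procedure (our own illustration; \texttt{snr\_th} and \texttt{tbs} stand for the threshold and TBS functions, and the constants \texttt{k0}, \texttt{k1}, \texttt{k2} are placeholders):
\begin{verbatim}
import math
import numpy as np

P = [-0.004994, 0.2031, 0.08811, 0.834]   # coefficients of f(t), Eq. (9)
T_SET = [1, 2, 4, 8, 32]
R_SET = [1, 2, 4, 8, 16, 32, 64, 128]

def lagrange_adaptation(k3, snr_th, tbs, k0=1.0, k1=8.0, k2=96):
    """Closed-form search: the stationary time factors solve
    2*p1*t^3 + p2*t^2 - p4 = 0, r follows from the SNR constraint, and a
    loop over the 13 MCS values picks the cheapest feasible triple."""
    roots = np.roots([2 * P[0], P[1], 0.0, -P[3]])
    t_stars = [x.real for x in roots if abs(x.imag) < 1e-9 and x.real > 0]
    best = None
    for m in range(13):
        for t_star in t_stars:
            t = min(T_SET, key=lambda v: abs(v - t_star))  # nearest feasible t
            f = np.polyval(P, t)
            needed = snr_th(m) / (k3 * f)
            r = next((v for v in R_SET if v >= needed), None)
            if r is None:
                continue                                   # infeasible for this m
            delay = (k1 + k0 * r * t) * math.ceil(k2 / tbs(m))
            if best is None or delay < best[0]:
                best = (delay, m, t, r)
    return best                                            # (delay, m, t, r)
\end{verbatim}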
\subsubsection{fsolve}
The second method used to solve the optimization problem is fsolve, a MATLAB function used to solve a system of multivariate non-linear equations.
This method is based on the approximated objective function \eqref{approx}.
\subsubsection{Exhaustive search}
The most straightforward method to solve the optimization problem is through an exhaustive search. For this method, we consider the optimization problem without any approximations, as given by \eqref{actual}. This method is implemented by searching over all possible combinations of $m$, $r$ and $t$. Then, we select the combination that yields the smallest delay and satisfies the SNR constraint.
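A corresponding sketch of the exhaustive search (our own illustration, with the same assumed interfaces and placeholder constants as above):
\begin{verbatim}
import math
from itertools import product

TF_PAIRS = [(1, 1), (2, 2), (4, 4), (8, 12), (32, 48)]  # (t, f) per tone option
R_SET = [1, 2, 4, 8, 16, 32, 64, 128]

def exhaustive_adaptation(k3, snr_th, tbs, k0=1.0, k1=8.0, k2=96):
    """Brute force over all 13 x 8 x 5 = 520 combinations of Eq. (4):
    exact, but the slowest of the three methods."""
    best = None
    for m, r, (t, f) in product(range(13), R_SET, TF_PAIRS):
        if k3 * f * r < snr_th(m):
            continue                        # reliability constraint violated
        delay = (k1 + k0 * r * t) * math.ceil(k2 / tbs(m))
        if best is None or delay < best[0]:
            best = (delay, m, t, r)
    return best
\end{verbatim}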
\begin{table}[!b]
\centering
\caption{Accuracy and speed of fsolve and Lagrange}
\begin{tabular}{|l|l|l|}
\hline
\textbf{Method} & \textbf{Mean square error} &\textbf{Speed-up factor} \\ \hline
fsolve & 0.0018 & 1.5 \\ \hline
Lagrange & 0.0001028 & 8\\ \hline
\end{tabular}
\label{accuracy}
\end{table}
\section{Numerical Results}
The exhaustive, Lagrange and fsolve algorithms are first implemented in MATLAB. The results from the MATLAB implementation do not include network delays and are based on theoretical models. The exhaustive search method is chosen as the baseline for evaluation since it is the most accurate approach without any approximations. Table~\ref{accuracy} shows the obtained mean square error of the fsolve and Lagrange methods compared with the exhaustive method. The Lagrange solution has better accuracy than fsolve because fewer approximations are used.
We can observe in Table~\ref{accuracy} that the Lagrange method is the fastest, with a speed-up factor of about eight relative to the exhaustive method. This is achieved because the Lagrange method only iterates over the value of $m$.
fsolve is faster than the exhaustive search but slower than the Lagrange method. This is because fsolve is an iterative approach that tries to find the three unknowns simultaneously. In order to allocate tones, repetitions and MCS, the base station needs to perform link adaptation at runtime for all the UEs whenever there is a change in SNR. Hence, the speed of the optimization algorithm is an important factor to consider when choosing the algorithm.
\begin{figure} [t!]
\centering
\includegraphics [scale=1,width=0.5\columnwidth]{hybrid.eps}
\caption{Delay per user for different approaches }
\label{a}
\vspace{-0.6cm}
\end{figure}
In order to evaluate our theoretical delay model given in \eqref{Delay}, the Lagrange and the exhaustive approaches are implemented in the NS-3 network simulator. The same random deployment scenario for open areas described in \ref{eval} is used to perform the simulations in NS-3.
Fig.~\ref{a} depicts the delay obtained by adapting MCS, adapting tones, adapting repetitions, and adapting all three parameters, i.e., hybrid optimization, in NS-3. We should note that the optimum values of the parameters in the hybrid solution are obtained using the Lagrange method. The delay obtained from NS-3 simulations of the Lagrange approach is denoted by `Lagrange (NS-3)' in Fig.~\ref{a}. The delay obtained from MATLAB using the theoretical expression in \eqref{Delay}, optimized using the Lagrange method, is denoted in Fig.~\ref{a} by `Lagrange (model)'. We can observe that the delays obtained using `Lagrange (model)' and `Lagrange (NS-3)' are similar. This confirms that the expression for the delay in \eqref{Delay} is correct. In the zoomed part of Fig.~\ref{a}, we can observe that between zones 5 and 15 the hybrid solution, `Lagrange (NS-3)', gives the lowest delay among the methods, and it yields similar delay values at closer zones. Furthermore, the MCS-only, tone-only and repetition-only approaches maintain good reliability only up to a maximum of 4 km, 8 km and 10 km, respectively. However, through experiments, the hybrid Lagrange approach provides good reliability up to a distance of 40 km in the open-areas scenario. Thus, the hybrid solution offers better network efficiency and lower delay (latency) per user, which also means lower power consumption.
In addition to latency and power consumption, we also evaluate the network performance in terms of scalability, i.e., the maximum number of users that can be supported in a network. This maximum is obtained from \cite{ns3b} and is given by
\begin{equation}
\max N_{UE} = \floor{\dfrac{Reporting\,Period}{Delay_{UE}}}\times\floor{\dfrac{N_{SC}}{SCU}},
\label{numue1}
\end{equation}
where $N_{SC}$ is the total number of subcarriers available for allocation, $Delay_{UE}$ is the average delay per user obtained in \eqref{Delay}, and $SCU$ is the number of subcarriers allocated to one user. The reporting period ($Reporting\, Period$) is assumed to be the same for all users. Fig.~\ref{b} depicts the results obtained using the NS-3 simulator and using the theoretical expression given in \eqref{numue1} for the different aforementioned methods.
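A direct transcription of \eqref{numue1} is given below; the numerical inputs in the example call are hypothetical.
\begin{verbatim}
import math

def max_users(reporting_period_s, delay_per_ue_s, n_sc=24, scu=12):
    """Eq. (numue1): floor(T_rep / Delay_UE) * floor(N_SC / SCU)."""
    return (math.floor(reporting_period_s / delay_per_ue_s)
            * math.floor(n_sc / scu))

# Hypothetical 0.5 s per-user delay, 10 s reporting period, 2 RBs:
print(max_users(10.0, 0.5))   # -> 20 * 2 = 40 users
\end{verbatim}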
\begin{figure} [t!]
\centering
\includegraphics [width=0.5\columnwidth]{scalability.eps}
\caption{Scalability for a reporting period of 10 s.}
\label{b}
\end{figure}
Fig.~\ref{b} shows the maximum number of users that can be supported when $N_{SC}$ is 24, i.e., the number of RBs is two, and the reporting period is 10 s. We can observe that the Lagrange (hybrid) method supports the highest maximum number of users, mainly because it is optimized to achieve the lowest delay per user ($Delay_{UE}$). Furthermore, in the tone and hybrid approaches, resource allocation is performed in terms of subcarriers (SC) and multiple users can share the same RB, whereas in the repetition and MCS approaches the resource allocation is performed in terms of resource blocks (RB) and every user is allocated a minimum of one RB (12 SC). This means that the number of subcarriers per user (SCU) is fixed at 12 in these approaches, resulting in a lower maximum number of users than in the tone and hybrid approaches. In the tone and hybrid approaches, there is a difference between the NS-3 and the theoretical results because it is difficult to simulate beyond 600 users in NS-3 due to memory and processing constraints.
\section{Conclusion}
In this paper, we describe an implementation of the uplink coverage enhancement methods of NB-IoT in the NS-3 simulator. We evaluate the performance of tones, repetitions and MCS with respect to reliability and latency. We show that an improvement in reliability at longer ranges comes at the cost of a corresponding increase in latency. In order to achieve improved coverage and lower latency, we propose a hybrid optimization strategy with latency as the objective function and SNR as the constraint. We propose and implement three optimization methods, namely the exhaustive search, fsolve and Lagrange methods, and we evaluate them based on accuracy and speed. We show that the Lagrange method outperforms the other two methods in terms of execution speed and yields the same latency as the exhaustive method. We implement the Lagrange method in the NS-3 simulator and verify our optimization formulation. Through numerical results, we show that the Lagrange method for hybrid link adaptation is eight times faster than the exhaustive search approach and yields similar latency. Furthermore, it achieves a range of 40 km in open areas and has better scalability than the optimized tone, optimized repetition and optimized MCS approaches.
\section{ACKNOWLEDGMENT}
This work was partially funded by the Flemish FWO SBO S004017N IDEALIoT (Intelligent DEnse And Long range IoT networks) project and by the SCOTT project (SCOTT, www.scott-project.eu, has received funding from the Electronic Component Systems for European Leadership Joint Undertaking under grant agreement No. 737422; this Joint Undertaking receives support from the European Union's Horizon 2020 research and innovation programme and Austria, Spain, Finland, Ireland, Sweden, Germany, Poland, Portugal, Netherlands, Belgium, Norway).
\section{Introduction}
The effects of channel coupling, that is,
of couplings of the relative motion
between the colliding nuclei to their intrinsic motions as well as
to transfer reactions, are well known
in heavy-ion collisions around the Coulomb barrier.
In heavy-ion fusion reactions at sub-barrier energies, the channel
coupling effects enhance considerably the fusion cross sections as compared
to the prediction of potential model calculation
\cite{beck-88,bah-tak98,das-98}.
It has been well established by now that the
channel coupling gives rise to a distribution of
potential barriers \cite{esb-81,nag-86,hag-95}. Based on this idea, a
method was proposed to extract barrier distributions directly from
experimental fusion excitation functions by taking the second derivative of the
product of the center-of-mass energy, $E$, and the fusion cross section,
$\sigma_{\rm fus}(E)$, with respect to $E$ \cite{row-91}.
Coupled-channels calculations as well as high precision fusion data
have shown that the fusion barrier
distribution, $D^{\rm fus}=d^2[E\sigma_{\rm fus}(E)]/dE^2$, is sensitive to
the details of channel couplings, while the sensitivity is much more
difficult to see in the fusion cross sections \cite{das-98,das-981,leigh-95}.
Information similar to that contained in the fusion cross section can also be
obtained
from the quasi-elastic scattering (a sum of elastic, inelastic and
transfer processes) at backward angles \cite{ARN88}.
Timmers {\it et al.}
measured the quasi-elastic scattering cross section for several
systems \cite{thim-95},
for which the fusion barrier distribution had already
been extracted \cite{leigh-95}.
They proposed that the corresponding
barrier distribution can be extracted
by taking the first derivative of the ratio of the quasi-elastic
to the Rutherford cross sections, $d\sigma_{\rm qel}/d\sigma_R$, with
respect to the energy, $E$, {\it i.e.,}
$D^{\rm qel}=-d(d\sigma_{\rm qel}/d\sigma_R)/dE$.
The properties of the quasi-elastic barrier distributions have been studied
in more detail in Ref. \cite{hag-04}. These studies show that
the quasi-elastic barrier distribution is similar to the
fusion barrier distribution, although the former is somewhat smeared
and less sensitive to the nuclear structure effects.
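In practice, both $D^{\rm fus}$ and $D^{\rm qel}$ are obtained from sampled
excitation functions by numerical differentiation. The following minimal
sketch illustrates the two definitions on smooth synthetic data (toy
stand-ins, not the data of Refs. \cite{leigh-95,thim-95}; real data require
careful smoothing and error propagation):
\begin{verbatim}
import numpy as np

E = np.linspace(54.0, 70.0, 161)    # c.m. energy grid (MeV)
VB, S0 = 61.0, 1100.0               # toy barrier height (MeV) and scale (mb)

# Toy single-barrier excitation functions standing in for data:
sigma_fus = np.where(E > VB, S0 * (1.0 - VB / E), 0.0)   # fusion (mb)
ratio_qel = 0.5 * (1.0 - np.tanh((E - VB) / 1.5))        # dsig_qel/dsig_R

D_fus = np.gradient(np.gradient(E * sigma_fus, E), E)    # d^2(E*sigma)/dE^2
D_qel = -np.gradient(ratio_qel, E)                       # -d(ratio)/dE
\end{verbatim}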
\begin{figure
\includegraphics{16O144Sm.ps}
\caption{Comparison between the experimental fusion (the filled circles)
and quasi-elastic (the open squares) barrier
distributions for the $^{16}$O$+^{144}$Sm reaction. They are
normalized to unit area in the energy interval between
$E_{\rm c.m.}=$ 56 and 70 MeV.
The experimental data are
taken from Refs. \cite{leigh-95} and \cite{thim-95}.}
\end{figure}
One of the systems which Timmers {\it et al.} measured is
$^{16}$O$+^{144}$Sm \cite{thim-95}.
Figure 1 shows the comparison of the
experimental barrier distribution extracted from fusion (the filled
circles) and quasi-elastic (the open squares) processes.
In order to compare the two barrier distributions,
we scale them so that the energy integral between $E_{\rm c.m.}$= 56
and 70 MeV is unity. For energies below 62 MeV,
the two barrier distributions resemble each other.
However, at higher energies,
they behave rather differently, although
the overall width of the distributions is similar to each other.
That is, the quasi-elastic barrier distribution
decreases monotonically as a function of energy while
the fusion barrier distribution exhibits a distinct peak at energy around
$E_{\rm c.m.}=65$ MeV.
So far, no theoretical calculations have succeeded in explaining
this difference.
The coupled-channels calculations of Timmers {\it et al.} \cite{thim-95}
with the computer code {\tt ECIS} \cite{ecis}, which took
into account the single quadrupole ($2^+$) and octupole ($3^-$)
phonon excitations of $^{144}$Sm,
were unable to reproduce both the
experimental data of the quasi-elastic cross sections and
the quasi-elastic barrier distribution.
The {\tt ECIS} results for the ratio of quasi-elastic scattering
to the Rutherford cross sections fall off more steeply than
the experimental data, while the obtained barrier distribution
has a secondary peak similar
to the fusion barrier distribution.
They argued that this failure is largely due to the residual
excitations not
included in the {\tt ECIS} calculations, which they
postulated to be transfer channels.
Esbensen and Buck have also performed the coupled-channels
calculations for this system taking into account the second order couplings
\cite{Esb-96}. However, they did not analyze the quasi-elastic
barrier distribution.
These previous coupled-channels calculations took into account
only the single phonon excitations in $^{144}$Sm.
On the other hand,
Hagino {\it et al.} \cite{hag-97,hag-971} have shown that
the double anharmonic quadrupole and octupole
phonon excitations play an important role in reproducing the experimental
fusion barrier distribution for this system.
However, its effect on the quasi-elastic
scattering
has not yet been clarified. The aim of this paper is then to
study whether the double anharmonic vibrational excitations of the
$^{144}$Sm nucleus can explain the difference in shape between the
fusion and quasi-elastic barrier distributions.
The role of proton
transfer reactions in this system is also discussed.
The paper is organized as follows. In the next section, we briefly
explain the coupled-channels formalism which takes
into account the anharmonicities of the
vibrational excitations. We present the results of our calculations
in Sec. III. We then summarize the paper in Sec. IV.
\section{Coupled-channels formalism for anharmonic vibration}
In this section, we briefly describe the coupled-channels formalism
which includes the effects of anharmonic excitations of the
vibrational states.
We follow the procedure of Refs. \cite{hag-97,hag-971}, which was
successfully applied to describe the experimental fusion cross
sections as well
as the fusion barrier distributions of $^{16}$O+$^{144,148}$Sm systems.
The total Hamiltonian of the system is assumed to be
\begin{eqnarray}
H&=&-\frac{\hbar^2}{2\mu}\nabla^2+H_{\rm vib}+V_{\rm coup}(\boldsymbol{r},\xi)
\end{eqnarray}
where $\boldsymbol{r}$ is the coordinate of the relative motion between the
target and the projectile nuclei, $\mu$ is the reduced mass and $\xi$
represents the internal vibrational degrees of freedom of the target
nucleus. $H_{\rm vib}$ describes the
vibrational spectra in the target nucleus.
The coupling between the relative motion and the intrinsic motion of
the target nucleus is described by the coupling potential $V_{\rm coup}$
in Eq.(1), which consists
of the Coulomb and nuclear parts. Using the no-Coriolis
(iso-centrifugal) approximation
\cite{bah-tak98,hag-99}, they are given as
\begin{eqnarray}
V_{\rm coup}(r,\xi)=V_C(r,\xi)+V_N(r,\xi),\qquad\qquad\qquad\qquad \\
V_C(r,\xi)=\frac{Z_PZ_Te^2}{r}\left(1+\frac{3R_T^2}{5r^2}
\frac{\hat{O}_{20}}{\sqrt{4\pi}}+\frac{3R_T^3}{7r^3}
\frac{\hat{O}_{30}}{\sqrt{4\pi}}\right),
\label{vcoupc}
\\
V_{N}(r,\xi)=\frac{-V_0}{\left[1+\textrm{exp}\left(\frac{
[r-R_0-R_T(\hat{O}_{20}+\hat{O}_{30})/\sqrt{4\pi}]}{a}\right)\right]}.
\quad\,\label{vcoupn}
\end{eqnarray}
Here $\hat{O}_{20}$ and $\hat{O}_{30}$ are the excitation operators for
the quadrupole and octupole vibrations, respectively, and $R_T$ is
the target radius.
The effects of
anharmonicities for the quadrupole and octupole vibrations are taken into
account based on the U(5) limit of the
Interacting Boson Model (IBM). The
matrix elements of the operator
$\hat{O}=\hat{O}_{20}+\hat{O}_{30}$
in Eqs.(\ref{vcoupc}) and (\ref{vcoupn}) then read
\cite{baha-9394,hag-97,hag-971},
\begin{widetext}
\begin{equation}
O_{ij}=
\left[\begin{array}{cccccc}
0&\beta_2&\beta_3&0&0&0\\
\beta_2&-\frac{2}{\sqrt{14N}}\chi_2 \beta_2&-\frac{2}{\sqrt{15N}}\chi_3\beta_3&
\sqrt{2(1-1/N)}\beta_2&\sqrt{1-1/N}\beta_3&0\\
\beta_3&-\frac{2}{\sqrt{15N}}\chi_3\beta_3&
-\frac{2}{\sqrt{21N}}\chi_{2f}\beta_2&0&
\sqrt{1-1/N}\beta_2&
\sqrt{2(1-1/N)}\beta_3\\
0&\sqrt{2(1-1/N)}\beta_2&0&-\frac{4}{\sqrt{14N}}\chi_2\beta_2&
-\sqrt{\frac{8}{15N}}\chi_3\beta_3&0\\
0&\sqrt{1-1/N}\beta_3&\sqrt{1-1/N}\beta_2&-\sqrt{\frac{8}{15N}}\chi_3\beta_3&
(-\frac{2}{\sqrt{14N}}\chi_2-\frac{2}{\sqrt{21N}}\chi_{2f})\beta_2&
-\sqrt{\frac{8}{15N}}\chi_3\beta_3\\
0&0&\sqrt{2(1-1/N)}\beta_3&0&-\sqrt{\frac{8}{15N}}\chi_3\beta_3&
-\frac{4}{\sqrt{21N}}\chi_{2f}\beta_2\\
\end{array} \right]
\end{equation}
\end{widetext}
for 6 low-lying states ($i,j$=1-6), where
$|1\rangle = |0^+\rangle$,
$|2\rangle = |2^+\rangle$,
$|3\rangle = |3^-\rangle$,
$|4\rangle = |2^+\otimes2^+\rangle$,
$|5\rangle = |2^+\otimes3^-\rangle$, and
$|6\rangle = |3^-\otimes3^-\rangle$.
In Eq.(5), $\beta_2$ and $\beta_3$ are
the quadrupole and the octupole deformation parameters, respectively,
which can be estimated from the electric transition probabilities.
The scaling of
coupling strength with $\sqrt{N}$, $N$ being the number of bosons in
the system, is introduced to ensure the
equivalence between the IBM and the geometric
model in the large $N$ limit \cite{baha-9394}.
When all the $\chi$ parameters in Eq.(5)
are set to zero, the quadrupole moments of all the states
vanish, and one obtains the harmonic
limit for large $N$. Nonzero values of $\chi$ generate the
quadrupole moments, and, together with the finite boson number, they are
responsible for the anharmonicities in the vibrational excitations.
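For concreteness, Eq.(5) is straightforward to assemble numerically. The
sketch below builds the symmetric $6\times6$ array in the basis
$\{|1\rangle,\ldots,|6\rangle\}$ listed above; the parameter values in the
example call are placeholders, not those of Ref. \cite{hag-97}.
\begin{verbatim}
import numpy as np

def coupling_matrix(b2, b3, N, chi2, chi2f, chi3):
    """Matrix O of Eq. (5) in the basis {0+, 2+, 3-, 2x2, 2x3, 3x3}."""
    s1 = np.sqrt(1.0 - 1.0/N)
    s2 = np.sqrt(2.0*(1.0 - 1.0/N))
    c2  = -2.0/np.sqrt(14.0*N) * chi2
    c2f = -2.0/np.sqrt(21.0*N) * chi2f
    c3  = -2.0/np.sqrt(15.0*N) * chi3
    c3d = -np.sqrt(8.0/(15.0*N)) * chi3
    O = np.zeros((6, 6))
    O[0, 1] = b2;        O[0, 2] = b3
    O[1, 1] = c2*b2;     O[1, 2] = c3*b3
    O[1, 3] = s2*b2;     O[1, 4] = s1*b3
    O[2, 2] = c2f*b2;    O[2, 4] = s1*b2;   O[2, 5] = s2*b3
    O[3, 3] = 2.0*c2*b2; O[3, 4] = c3d*b3
    O[4, 4] = (c2 + c2f)*b2;                O[4, 5] = c3d*b3
    O[5, 5] = 2.0*c2f*b2
    return O + np.triu(O, 1).T   # fill the lower triangle by symmetry

# Placeholder deformation and anharmonicity parameters:
O = coupling_matrix(b2=0.11, b3=0.21, N=10, chi2=-1.0, chi2f=-1.0, chi3=-1.0)
\end{verbatim}
Diagonalising $O$ then allows the operator-valued potentials in
Eqs.(\ref{vcoupc}) and (\ref{vcoupn}) to be evaluated by spectral
decomposition.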
\section{$^{16}$O$+^{144}$Sm reaction : Comparison with experimental data}
We now apply the formalism to analyze the quasi-elastic
scattering data of $^{16}$O$+^{144}$Sm \cite{thim-95}. The
calculations are performed with a version \cite{hag2} of the coupled-channels
code {\tt CCFULL} \cite{hag-99} once the coupling matrix elements
are determined from Eq.(5).
Notice that the iso-centrifugal approximation employed in this code
works well for
quasi-elastic scattering at backward angles \cite{hag-04}. In the code,
the regular boundary condition is imposed at the origin instead of the
incoming wave boundary condition.
\subsection{Effect of anharmonicities of nuclear vibrations}
In the calculations presented below, we include only the excitations
in the $^{144}$Sm nucleus, whilst the excitations of $^{16}$O
are not explicitly included. For sub-barrier fusion reactions,
the latter has been shown to lead only to a shift of the fusion
barrier distribution in energy
without significantly altering its shape \cite{hag-972}, and
hence can be incorporated in the choice of the bare potential.
This is a general feature for reactions with the $^{16}$O as a
projectile. We have confirmed that it is
the case also for the quasi-elastic barrier distribution. That is,
although the $^{16}$O excitations contribute to the
absolute value of quasi-elastic
cross sections themselves, the shape of quasi-elastic barrier
distribution is not altered much. Since we are interested
mainly in the difference of the shape between the fusion and the
quasi-elastic barrier distributions, we simply
do not include the $^{16}$O excitations and instead adjust the
inter-nuclear potential.
For simplicity, we take the eigenvalues of the $H_{\rm vib}$ in Eq.(1)
to be $\epsilon=n_2\epsilon_2+n_3\epsilon_3$,
where $n_2$ and $n_3$ are the number of quadrupole and octupole
phonons, respectively. $\epsilon_2$ and $\epsilon_3$ are
the excitation energies of the quadrupole
and the octupole phonon states of the target nucleus,
{\it i.e.}, $\epsilon_2=1.61$ MeV and $\epsilon_3=1.81$ MeV, respectively.
Notice that we assume the harmonic spectra for the phonon
excitations. It has been shown in Refs. \cite{hag-97,hag-971}
that the effect of anharmonicity with respect to the excitation energy
on the barrier distribution is insignificant once the energy of the
single phonon states is fixed. The radius and diffuseness parameters
of the real part of the nuclear potential are taken to be
the same as those in Ref. \cite{hag-97},
{\it i.e.,} $r_{0}=1.1$ fm and
$a=0.75$ fm, respectively, while the depth parameter
is slightly adjusted in order to reproduce
the experimental quasi-elastic cross sections.
The optimum value is obtained as $V_0=112$ MeV.
As usually done, we use a short-range imaginary potential
with $W_{0}=30$ MeV, $r_{w}=1.0$ fm and $a_w=0.3$ fm to simulate the
compound nucleus formation. Finally, the target radius is taken to be
$R_T=1.06A_T^{1/3}$ fm. We use the same values for the parameters
$\beta_2,\beta_3, N, \chi_2, \chi_{2f}$, and $\chi_3$ as in
Ref. \cite{hag-97}. All the calculations presented below are
performed at $\theta_{\rm c.m.}=170^\circ$.
\begin{figure
\includegraphics{16O144Smqel.ps}
\caption{Comparison of the experimental data
with the coupled-channels calculations for $^{16}$O$+^{144}$Sm
reaction for (a) the ratio of quasi-elastic
to the Rutherford cross sections and for (b) quasi-elastic barrier
distribution. The dotted and dashed lines are obtained by including
up to the single and the double phonon excitations in the harmonic
limit, respectively. The solid line is the result of the
coupled-channels calculations with the double anharmonic phonon excitations.
The experimental data are taken from Ref. \cite{thim-95}.}
\end{figure}
\begin{figure
\includegraphics{16O144Smelcom.ps}
\caption{(a)
Comparison of the measured pure elastic (the open squares), the
$Z=8\,(-\,\textrm{el})$ (the open circles) and the residual (the
filled circles) components of $d\sigma_{\rm qel}/d\sigma_R$ with the
coupled-channels calculations for $^{16}$O$+^{144}$Sm reaction.
The $Z=8\,(-\,\textrm{el})$ component is defined as the $Z=8$ yield
with the elastic component subtracted, while the residual component is the
sum of the $Z=6$ and 7 yields.
The dashed line is the result of elastic scattering, while the dotted
line shows the inelastic cross sections for the single 2$^+$ and 3$^-$
phonon states. The solid line is the result of the sum of inelastic
cross sections for the double phonon states in $^{144}$Sm.
(b) The same as (a)
but for the pure elastic and the total inelastic cross sections.
The experimental data are
taken from Ref. \cite{thim-95}.}
\end{figure}
The results of the coupled-channels calculations are compared with the
experimental data in Fig. 2. Figures 2(a) and 2(b) show the ratio of the
quasi-elastic to the Rutherford cross sections,
$d\sigma_{\rm qel}/d\sigma_R$, and the quasi-elastic barrier
distributions, $D^{\rm qel}$, respectively. The dotted line denotes the
result in the harmonic limit,
where the couplings to the quadrupole and octupole vibrations in
$^{144}$Sm are truncated at the single phonon level, {\it i.e.,} only the
$2^+$ and $3^-$ states are taken into account and all the
$\chi$ parameters in Eq.(5) are set to zero.
As can be seen, this calculation fails to reproduce the
experimental data. The obtained quasi-elastic cross sections,
$d\sigma_{\rm qel}/d\sigma_R$, drop much faster than the experimental
data at high energies. Also the quasi-elastic barrier distribution,
$D^{\rm qel}$, exhibits a distinct peak at energy around
$E_{\rm c.m.}=65$ MeV. These results are similar to those obtained
in Ref. \cite{thim-95}. The dashed line represents the result when the
coupling to the quadrupole and octupole vibrations of $^{144}$Sm is
truncated at the double phonon states in the harmonic limit. In this
case, we take into account the couplings to the $2^+$, $3^-$,
$2^+\otimes2^+$,$2^+\otimes 3^-$ and \mbox{$3^-\otimes3^-$} states.
It is obvious that the results are inconsistent with the experimental data.
To see the effect of anharmonicities of the vibrations, we then perform the
same calculations using the coupling matrix elements given in Eq.(5).
The resultant quasi-elastic excitation
function and the quasi-elastic barrier distribution are shown
by the solid line.
The calculated ratio of
quasi-elastic to Rutherford cross sections agrees quite well with the
experimental data.
This suggests that the inclusion of anharmonic effects in the
vibrational motions is important for the description of the
quasi-elastic excitation functions for the $^{16}$O$+^{144}$Sm reaction.
On the other hand,
the result for $D^{\rm qel}$ is still similar to the
barrier distribution obtained by assuming the harmonic limit
truncated at the one
phonon level (the dotted line), although
the former has a smoother peak.
Figure 3 shows the decomposition of the quasi-elastic
cross sections to each channel for the calculation with
the coupling to the double anharmonic vibrations
(the solid line in Fig. 2).
The fraction of cross section for each channel $i$ in the
quasi-elastic cross section,
$d\sigma_i/d\sigma_{\rm qel}=d\sigma_i/[\sum_jd\sigma_j]$,
is also shown in Fig. 4. The open squares are the experimental elastic cross
section, while the open circles are the measured excitation function for
$Z=8$ with the contribution from the elastic channel subtracted.
The latter contains not only
the neutron transfer components but also the contributions of
inelastic cross sections. The filled circles
are the experimental residual (a sum of $Z=7$ and
$Z=6$ yields) components of the
$d\sigma_{\rm qel}/d\sigma_R$. The dashed line shows results of the
coupled-channels
calculations for the elastic channel. It reproduces reasonably well the
experimental data for elastic scattering. The $Z=8$ component of
quasi-elastic cross sections is almost exhausted by the single phonon
excitations, that is, the combined $2^+$ and $3^-$ channels, as
shown by the dotted line in Figs. 3(a) and 4(a).
The cross sections for the double phonon channels are given by the
solid line in Figs. 3(a) and 4(a).
These are important at energies higher than around 66 MeV.
If the components of all the inelastic
channels included in the calculation are summed up,
we obtain the dot-dashed line in Figs. 3 (b) and 4(b).
\begin{figure
\includegraphics{fraction.ps}
\caption{ Same as Fig. 3, but for the fraction of cross section for
each channel in the quasi-elastic cross sections. }
\end{figure}
\subsection{Effects of proton transfer reactions}
In the previous subsection we have shown that the experimental
quasi-elastic cross sections can be well
explained within the present coupled-channels calculations, which
take into account only the inelastic excitations in $^{144}$Sm.
However, the shape of quasi-elastic barrier
distribution is still somewhat inconsistent with the experimental data.
As one sees in Figs. 3(a) and 4(b), the experimental data indicate that
the charged particle transfer reactions
may also play some role (see the filled circles in the figures).
In this subsection, we therefore investigate the effects of
proton transfer reactions, in addition to the
anharmonic double phonon excitations.
To this end, we use the macroscopic
form factor for the transfer coupling \cite{dasso-8586},
\begin{equation}
F_{\rm trans}(r)=-F_{\rm tr}\frac{dV(r)}{dr}
\end{equation}
where $F_{\rm tr}$ is the coupling strength and $V(r)$ is the real part
of the nuclear potential.
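A direct transcription of Eq.(6) is immediate; the sketch below assumes a
Woods--Saxon form for $V(r)$ with the real-potential parameters quoted above
and the radius convention $R_0=r_0(A_P^{1/3}+A_T^{1/3})$, which is our
assumption rather than a detail spelled out in the text.
\begin{verbatim}
import numpy as np

V0, r0, a = 112.0, 1.1, 0.75               # MeV, fm, fm
R0 = r0 * (16**(1/3.0) + 144**(1/3.0))     # fm, assumed convention

def V(r):                                  # Woods-Saxon real potential
    return -V0 / (1.0 + np.exp((r - R0) / a))

def F_trans(r, F_tr):                      # Eq. (6): -F_tr * dV/dr
    ex = np.exp((r - R0) / a)
    return -F_tr * V0 * ex / (a * (1.0 + ex)**2)
\end{verbatim}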
In this paper, we consider a single proton transfer as well as
the direct proton pair transfer reactions, although the experimental
$Z=6$ component may also include the alpha-particle transfer channel.
The corresponding optimum
$Q$-values for the transfer between the ground states
are $Q_{\rm opt}(1p)=-1.79$ MeV and $Q_{\rm opt}(2p)=0.13$ MeV,
respectively.
The coupling strength $F_{\rm tr}$ in Eq.(6)
is determined so that the experimental transfer cross sections
for the $Z=6$ and $Z=7$ components \cite{thim-thes} are reproduced.
The optimum values for $F_{\rm tr}$ are found to be 0.12 and 0.16 fm for
the one and the two proton transfer channels, respectively.
\begin{figure
\includegraphics{16O144Smqelptrans.ps}
\caption{Effect of proton transfers on the quasi-elastic scattering
cross sections (the upper panel) and on the quasi-elastic barrier
distribution (the lower panel) for $^{16}$O$+^{144}$Sm reaction.
The solid line is the result of the coupled-channels calculations
including the effect of double anharmonic vibrations only.
The dashed line is obtained by including, in addition,
the couplings to the proton transfer channels.
The experimental data are taken from Ref. \cite{thim-95}.}
\end{figure}
\begin{figure
\includegraphics{16O144Smelcomptrans.ps}
\caption{
Contribution of quasi-elastic cross sections from several channels.
The solid and dashed line are the results of the
coupled-channels calculations for
the proton transfer and the elastic cross sections, respectively.
The dotted line denotes the sum of total inelastic
and proton transfer cross sections.
The corresponding experimental data are shown by the filled circles,
the open squares, and the open triangles, respectively, which are
taken from Ref. \cite{thim-95}.}
\end{figure}
\begin{figure
\includegraphics{fractionptrans.ps}
\caption{
Same as Fig. 6, but for the fraction in the quasi-elastic
cross sections. }
\end{figure}
\begin{figure
\includegraphics[angle=0,width=0.457\textwidth]{16O144Sm1.ps}
\caption{Comparison of the theoretical fusion barrier distribution
(dashed line) with the quasi-elastic barrier distribution (solid-line)
obtained with different coupling schemes for
$^{16}$O$+^{144}$Sm system. Both functions are normalized to unit
area in energy interval between 54 and 70 MeV. (a) The results of
the coupling to one phonon state of quadrupole and
octupole excitations of $^{144}$Sm in the harmonic oscillator limit.
(b) The same as (a) but for the coupling up to double phonon states.
(c) The result when the coupling to anharmonic vibration of double
quadrupole and octupole excitations in $^{144}$Sm is taken into account.}
\end{figure}
The effects of proton transfer reactions on the quasi-elastic scattering
is illustrated in Fig. 5. The solid line represents the results of the
calculations including only the coupling to the double
anharmonic vibrations. The dashed line is obtained by taking the
coupling to the proton transfer channels into account, in
addition to the anharmonic vibration channels.
The upper panel shows the quasi-elastic cross sections, while the
lower panel the quasi-elastic barrier distribution.
We observe from Fig. 5(a) that the inclusion of proton transfer reactions
overestimates the experimental $d\sigma_{\rm qel}/d\sigma_R$ at
energies between 62 and 68 MeV. Also the
higher peak in the quasi-elastic
barrier distribution becomes more distinct, and the agreement thus worsens
compared with the calculation without the transfer channels.
Figure 6 shows the contribution of each channel to the quasi-elastic
cross sections. The fraction of each contribution is also shown in
Fig. 7. The open squares are the
experimental elastic cross sections, while the filled circles and the open
triangles are the experimental proton transfer cross sections and the
sum of total inelastic and transfer cross sections, respectively.
The coupled-channels calculations for the elastic cross sections are
shown by the dashed-line.
Although it reproduces the experimental data below around 62 MeV, it
overestimates the data at higher energies.
The sum of the contributions from the
total inelastic and the proton transfer channels is denoted by the
dotted line, which reproduces the experimental data reasonably well,
although the proton transfer cross sections themselves are
underestimated at energies larger than 60 MeV (the solid line).
The overestimation of the quasi-elastic cross section indicated in
Fig. 5(a) is therefore largely due to the contribution of elastic channel.
From this study, we conclude that the inclusion of the
proton transfer reactions in the coupled-channels calculations
does not explain the difference of the shape between the fusion
and quasi-elastic barrier distributions for the
$^{16}$O$+^{144}$Sm system.
\subsection{Discussions}
We have argued that the presence of a high-energy shoulder, instead
of a high-energy peak, in the quasi-elastic barrier distribution for
the scattering between
$^{16}$O and $^{144}$Sm nuclei cannot be accounted for within the present
coupled-channels calculations, which take into account the anharmonic
double phonon excitations in $^{144}$Sm as well as the proton transfer
channels.
Figure 8 compares the calculated
fusion barrier distribution $D^{\rm fus}$ and the
corresponding quasi-elastic barrier distribution $D^{\rm qel}$
for the several coupling schemes
used in the coupled-channels calculations of Fig. 2.
The solid line shows the quasi-elastic barrier
distribution while the dashed line is for the fusion barrier
distribution. They are normalized so that the
energy integral between 54 and 70 MeV is unity.
Figures 8(a) and 8(b) are obtained by including the one phonon and the
two phonon excitations in $^{144}$Sm in the
harmonic limit, respectively. Figure 8(c) is the result of
the double anharmonic vibration coupling.
From these figures, it is evident that the theoretical fusion and
quasi-elastic barrier distributions
are always similar to each other within the same coupling scheme,
although the latter is slightly more smeared due to the low-energy
tail \cite{hag-04}.
This would be the case even with the excitations in $^{16}$O as well
as neutron transfer channels, which are not included in the present
coupled-channels calculations.
Therefore, it seems unlikely that the experimental fusion and
quasi-elastic barrier distributions can be explained simultaneously
within the standard coupled-channels approach.
\section{Conclusion}
We have studied the effects of double anharmonic vibrations
of the $^{144}$Sm nucleus on the large angle quasi-elastic scattering for
$^{16}$O$+^{144}$Sm system. We have
shown that the experimental data for the quasi-elastic scattering
cross sections for this reaction can be reasonably well explained.
However, we found that the obtained quasi-elastic barrier
distribution still shows a clear double-peaked structure,
which is not seen in the experimental data.
This was not resolved even if we took the proton transfer channels
into account. Our coupled-channels calculations indicate
that, within the same coupling scheme, the quasi-elastic and fusion barrier
distributions are always similar to each other.
Although detailed analyses including neutron transfer channels in
a consistent manner are still necessary, it is thus unlikely that
the fusion and quasi-elastic barrier distributions can be explained
simultaneously with the standard coupled-channels framework.
This fact might be related to the large diffuseness problem in sub-barrier
fusion, in which dynamical effects such as couplings to deep-inelastic
scattering are a promising origin
\cite{NBD04,DHN04,MHD07}.
It is still an open problem to perform the coupled-channels
calculations with such dynamical effects and explain the difference of
the shape between the fusion and the quasi-elastic barrier distributions
for the $^{16}$O$+^{144}$Sm reaction.
\begin{acknowledgments}
This work was partly supported by The 21st Century Center of
Excellence Program ``Exploring New Science by Bridging
Particle-Matter Hierarchy'' of Tohoku University
and partly by Monbukagakusho Scholarship
and Grant-in-Aid for Scientific Research under
the program number 19740115 from the Japanese Ministry of
Education, Culture, Sports, Science and Technology.
\end{acknowledgments}
\section{Introduction}
The Calogero-Moser models \cite{models}\ are completely-integrable,
Hamiltonian systems describing (non-relativistic) particle dynamics with
pairwise interaction potentials of the form $1/x^2$, $1/\sin^2\!x$,
$1/\sinh^2\!x$
(and in general the Weierstrass $\wp$ functions).
The models are rather generic, which accounts for their importance in
various branches of theoretical physics from solid state physics to
particle physics \cite{ss, pp, other}:
they appear when describing the eigenvalue motion
of certain matrices \cite{evm}; the pole motions of the solitons
of various PDE's are described by the model (with possible constraints)
\cite{poles};
the quantum mechanics \cite{qm} of these models has been connected with the
transmission properties of wires \cite{wires} and Conformal Field Theory
\cite{cft}.
A rich algebraic structure is being uncovered behind the models \cite{alg}.
The successes of the Calogero-Moser systems have
naturally led to an expectation that their ``relativistic'' versions, {\em if
any},
might play similar roles in connection with
integrable relativistic quantum field theories.
Examples of integrable relativistic quantum field theories include the
sine-Gordon model and affine Toda field theories (the latter being constructed
from the various affine Lie algebras).
Thanks to the infinite number of conserved quantities which
characterises the integrability of quantum field theories,
no particle creation and annihilation are allowed in such theories
and their $N$-particle $S$-matrices
are factorised into a product of ${N(N-1)/2}$ two-particle $S$-matrices.
The expectation that an integrable relativistic field theory
might equivalently and simply be described in terms of some integrable
``relativistic'' particle dynamics was speculated by Ruijsenaars in \cite{RS2}
and appears more explicitly in \cite{RS1}, where Ruijsenaars and Schneider
describe the motivation lying behind the discovery of their model.
Here the model was proposed as a ``relativistic'' (or one-parameter $c$, the
velocity of light) generalisation of the Calogero-Moser model.
(The model is variously referred to as the ``relativistic''
Calogero-Moser model or Ruijsenaars-Schneider model. For reasons
we later give, we prefer the latter nomenclature.)
Our aim in the following note is to further explicate these models
and in particular the role of ``relativistic invariance''.
The viewpoint described below
is that the Ruijsenaars-Schneider system is an important
and rather generic integrable system, but to describe it as expressing
``relativistic particle dynamics'' is quite misleading.
The importance of the Ruijsenaars-Schneider system cannot be overestimated:
it arises as a particular form of eigenvalue motion in much the same way
as the Calogero-Moser model does, and this eigenvalue motion is relevant in
many physical settings. Just as the Calogero-Moser model is related to
particular solutions of PDE's, the Ruijsenaars-Schneider model is also
connected with particular soliton solutions of, for example, the KdV,
mKdV and sine-Gordon equations.
It has been connected with the gauged WZW model \cite{GN}.
A rich algebraic structure is also being uncovered \cite{relag}
for the model and
spin-generalisations \cite{RSspin} of the Ruijsenaars-Schneider model are known,
paralleling\footnote{In this context we note that a Hamiltonian formulation
for these spin-generalisations is still lacking.}
the spin-generalisations \cite{CMspin} of the Calogero-Moser model. The
solitons of the $a_n$ affine Toda field theories with imaginary
coupling constant have been related \cite{BH} to these
spin-generalisations extending the sine-Gordon soliton and
Ruijsenaars-Schneider correspondence mentioned above \cite{BB}.
But clearly if the same model is related to the relativistically invariant
sine-Gordon equation and also the relativistically
noninvariant KdV equation (and others), the simple notion
of ``relativistic particle dynamics'' needs clarification.
The first difficulty one usually encounters when seeking
to describe ``relativistic particle dynamics'' is how any theory with a
single time can be compatible with causality. Any interaction Hamiltonian
or Lagrangian depending on the coordinates and momenta of the other particles
in a single time formulation is by
definition `action-at-a-distance'. The time evolution of the
positions and momenta is determined by the
positions and momenta of the other particles {\em at the same time}.
In order for this to happen each particle must be able to `know' the
coordinates and momenta of the other particles
{\em instantly}. This obviously breaks Einstein's causality.
One possible way to circumvent the above difficulty is of course to adopt an
interaction potential of zero range, namely the delta function
potential.
In two and higher space dimensions the delta
function potential is too singular to be treated properly \cite{Kem},
but as is well known in quantum mechanics the delta
function potential in one space dimension can easily be handled.
In fact in this case relativistic many particle theory can be
properly formulated \cite{BNF} and the particle coordinates and times
obey the Lorentz transformation and together with
the generators of space and time
translations and boost satisfy the Poincar\'e algebra.
However, with any long range interaction $f(q)$ and a single time formalism
the incompatibility of `action-at-a-distance' with Einstein's
causality remains.
Actually the Ruijsenaars-Schneider models have several ``times''
corresponding to different commuting flows $H_{j}$,
\begin{equation}
(q(t_1,t_2,\ldots t_l),\theta(t_1,t_2,\ldots t_l))=
\exp\left(\sum_{j=1}\sp{l}t_j H_{j}\right) (q(0),\theta(0)),
\label{evolution}
\end{equation}
and the solutions of the PDE's mentioned above require the evolution
to be determined with respect to each of these times.
In particular, when the flows $H_1$ and $H_{-1}$ are both present
and so $q_j=q_j(t,x)$,
the theory exhibits a Poincar\'e invariance, but as we shall
argue the theory is not relativistically invariant in the sense
suggested by the ``non-relativistic'' limit given by Ruijsenaars
and Schneider. Indeed the presence of two ``times'' or flows
means we are not dealing with a traditional notion of relativistic
dynamics and the standard ``no-go'' theorems \cite{SM}
are correspondingly avoided.
Because the coordinates $q(t,x)$ and $\theta(t,x)$ of the
Poincar\'e invariant
Ruijsenaars-Schneider model are parameterized by Minkowski space it
may be thought that what we have here is some, albeit unusual, field
theory. We shall show however that the solutions $q(t,x)$, $\theta(t,x)$ of
the Ruijsenaars-Schneider\ model do not describe the dynamical time-evolution typical of
field theory and are more akin to those of a topological field theory
in the sense that they do not possess dynamical degrees of freedom.
The Note is organised as follows. In section two some salient features
of the Ruijsenaars-Schneider\ model are briefly reviewed to set the stage and notation.
We view the Ruijsenaars-Schneider\ model as describing the motion of eigenvalues
of matrices of a certain type, a simple generalisation of the
Calogero-Moser situation.
Then the connection with the $N$-soliton solutions of various soliton
equations (KdV and sine-Gordon, etc) is briefly mentioned.
In section three the nature of the ``relativistic invariance'' of the
Ruijsenaars-Schneider\ model is clarified starting with its ``non-relativistic'' limit.
The many ``times'' formulation and the Poincar\'e invariance of the
theory is also discussed.
Section four discusses the field theory aspects of the Ruijsenaars-Schneider\ model.
In section five we dwell upon the possible connection between
integrable quantum field theories with exact factorisable S-matrices and
the Ruijsenaars-Schneider\ model. The uncertainty principle of quantum theory plays an
important role here.
The final section is for summary and discussion.
Throughout we will try to use the notation of
Ruijsenaars and Schneider \cite{RS1} or Ruijsenaars \cite{RS3}
as far as possible.
\section{The Ruijsenaars-Schneider Model}
\setcounter{equation}{0}
In this section we first review the salient features of the
Ruijsenaars-Schneider model to fix the notation; the details
may be found in \cite{RS1, RS3}.
Having done this we next review how the model arises when describing
the eigenvalue motion of a particular (possibly partial) differential
matrix equation. This is our perspective on the models, and others may
differ here. Theorems pertaining to these eigenvalue motions may be found in
\cite{RS4}.
We conclude with the connection between this model and
soliton equations.
\subsection{Salient Features}
The dynamical variables of the Ruijsenaars-Schneider theory are the
``rapidity'' $\theta_j$ and its canonically conjugate ``position''
$q_j$, satisfying the following Poisson bracket relations:
\begin{equation}
\{q_j,q_k\}=\{\theta_j,\theta_k\}=0,\quad \{q_j,\theta_k\}=\delta_{jk},
\quad j,k=1,\ldots,N.
\label{dynvar}
\end{equation}
We see from (\ref{dynvar}) that if the ``rapidity'' $\theta_j$
is taken to be dimensionless, then $q_j$ has the dimensions of action;
the product of any two canonical variables has the dimensions of action.
The Hamiltonian $H$, the ``space-translation'' generator $P$ and
``boost'' generator $B$ are given by
\begin{eqnarray}
H&=&mc^2\sum_{j=1}^N\cosh\theta_j\prod_{k\neq j}f
\left(\frac{q_j-q_k}{A}\right),\quad
\label{ham}\\
P&=&mc\sum_{j=1}^N\sinh\theta_j\prod_{k\neq j}f
\left(\frac{q_j-q_k}{A}\right),\quad
\label{tran}\\
B&=&-{1\over c}\sum_{j=1}^N q_j,
\label{boost}
\end{eqnarray}
where $c$ is the velocity of light and $A$ is a constant having the
dimension of the action (see section three for more detail).
They satisfy the following relations
\begin{equation}
\{H,P\}=0,\quad \{H,B\}=P,\quad \{P,B\}=H/c^2,
\label{Poin}
\end{equation}
provided $f^2(z)$ equals $\lambda+\mu\wp(z)$, including its
trigonometric, hyperbolic and rational degenerate cases.
These are the relations that the generators of
the two-dimensional Poincar\'e algebra should satisfy.
It is an added bonus that this choice of the function $f$ also
ensures the existence of $N$ independent, Poisson commuting conserved
quantities, and so the Ruijsenaars-Schneider model is
completely integrable.
Typical of the conserved quantities constructed are $H_{\pm1}$
where
\begin{equation}
H_{\pm1} = mc^2\sum_{j=1}^N e^{\pm \theta_j}\ \prod^N_{k\not= j}
f\left(\frac{q_j-q_k}{A}\right),
\label{eq:cons}
\end{equation}
and so $H=(H_1 +H_{-1})/2$ and $P=(H_1 -H_{-1})/2$ in the above.
Contrary to \cite{RS1, RS3} we have emphasised the appearance of
the dimensionful parameter $A$ necessary\footnote{
In \cite{RS3} Ruijsenaars chooses to work with the
variables $\bar q_j = mc\, q_j$ and $\bar \theta_j= \theta_j/mc$. In this case
a dimensionful length scale $A/mc=L$ must appear in the functions
$(2.22)$ of that reference.}
to define the theory.
The Lagrangians associated with these systems are rather unusual and have
some interesting features. The `Lagrangian' associated with (say) $H_{+1}$ is
\begin{equation}
{\cal L}=\sum_{j=1}^N \dot q_{j} \left(
\ln\frac{\dot q_{j}}{mc\sp2} -1 -\ln \prod^N_{k\not= j}
f\left(\frac{q_j-q_k}{A}\right)\right),
\label{entlag}
\end{equation}
and we remark that the first
term on the right here behaves as an \lq entropy\rq.
For the remainder of this section we will set $A=m=c=1$, but will reinstate
these constants at later junctures in our discussion.
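The algebra (\ref{Poin}) is easily checked numerically. The following sketch
(our own illustration: the hyperbolic case $f^2(z)=1+\alpha^2/\sinh^2z$ with
$N=3$ and $m=c=A=1$, Poisson brackets evaluated by central differences)
verifies the three relations at an arbitrarily chosen phase-space point:
\begin{verbatim}
import numpy as np

m = c = A = 1.0
alpha, N = 0.8, 3

def f(z):
    return np.sqrt(1.0 + alpha**2 / np.sinh(z)**2)

def prodf(q, j):
    return np.prod([f((q[j] - q[k]) / A) for k in range(N) if k != j])

def H(q, th): return m*c**2 * sum(np.cosh(th[j])*prodf(q, j) for j in range(N))
def P(q, th): return m*c    * sum(np.sinh(th[j])*prodf(q, j) for j in range(N))
def B(q, th): return -q.sum() / c

def pb(F, G, q, th, h=1e-5):
    """{F,G} with {q_j, theta_k} = delta_jk, by central differences."""
    tot = 0.0
    for j in range(N):
        dq = np.zeros(N); dq[j] = h
        dt = np.zeros(N); dt[j] = h
        Fq = (F(q+dq, th) - F(q-dq, th)) / (2*h)
        Ft = (F(q, th+dt) - F(q, th-dt)) / (2*h)
        Gq = (G(q+dq, th) - G(q-dq, th)) / (2*h)
        Gt = (G(q, th+dt) - G(q, th-dt)) / (2*h)
        tot += Fq*Gt - Ft*Gq
    return tot

q  = np.array([-2.0, 0.3, 1.9])
th = np.array([ 0.4, -0.2, 0.7])
print(pb(H, P, q, th))                    # ~ 0
print(pb(H, B, q, th) - P(q, th))         # ~ 0
print(pb(P, B, q, th) - H(q, th)/c**2)    # ~ 0
\end{verbatim}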
\subsection{Eigenvalue Motion and the Ruijsenaars-Schneider model}
The Ruijsenaars-Schneider theory and its generalisations
may be viewed as describing the
motion of the eigenvalues of matrices of a certain type.
For example, let $V$ be a real, symmetric, positive-definite
$N\times N$ matrix whose `time' dependence satisfies
\begin{equation}
\partial V= \Lambda\, V +V\, \Lambda,
\label{eq:def}
\end{equation}
where $\Lambda$ is a constant matrix.
As we shall now review,
the eigenvalue motion corresponding to (\ref{eq:def})
leads to a mechanical system that is directly
analogous to the linear motions associated with the Calogero-Moser
model. Here the constancy of $\Lambda$ plays the same role as the constants
of motion in the Calogero-Moser situation.
The Ruijsenaars-Schneider model arises when $\partial V$ is further assumed
to be of a specific form; this restriction is directly analogous to the
constraint on the angular momentum made for the Calogero-Moser model.
We will later give examples of such
$N\times N$ matrices satisfying (\ref{eq:def}) that are to be
found in connection with the $N$-soliton solutions of some soliton theories.
Let $V$ be diagonalised by the orthogonal matrix $U$ and set
\[
Q=UVU\sp{-1}=\mathop{\rm\textstyle{diag}}\nolimits(\exp(q_{1}),...,\exp(q_{N})),
\quad\quad
M=\partial U U\sp{-1},
\]
where $M$ is an anti-symmetric matrix $M=-M^t$.
Then upon setting $L=U\Lambda U\sp{-1}$ we obtain the Lax equation
\begin{equation}
\partial L=[M,L],
\quad\quad
\partial Q=[M,Q]+U\partial V U\sp{-1}=[M,Q]+L\,Q+Q\,L.
\label{laxeq}
\end{equation}
From this it is easy to obtain
\begin{equation}
L_{jj}=(1/2)\partial q_{j} \label{eq:const}
\end{equation}
and (for $j\neq k$)
\[
M_{jk}=\left(\frac{Q_{j}+Q_{k}} {Q_{j}-Q_{k}}
\right)L_{jk}=\coth((q_{j}-q_{k})/2)L_{jk}.
\]
Substituting these into the Lax equation produces
(with $\dot q_{j}=\partial q_{j}$)
the equations of motion:
\begin{equation}
\dot L_{jj}={1\over2}\ddot q_{j}
=2\sum_{k\neq j}\coth((q_{j}-q_{k})/2)L_{jk}L_{kj},
\label{eq:rsma}
\end{equation}
\begin{equation}
\begin{array}{l}
\dot L_{jk}=
\frac{1}{2}\coth((q_{j}-q_{k})/2)(\dot q_{k}-\dot q_{j})L_{jk}\\
\quad\quad\quad+\sum_{l\neq j,k}
(\coth((q_{j}-q_{l})/2)-\coth((q_{l}-q_{k})/2))
L_{jl}L_{lk},\quad (j\neq k).
\end{array}
\label{eq:rsmb}
\end{equation}
As shown in \cite{BH} these are
the spin-generalised Ruijsenaars-Schneider equations \cite{RSspin}\
with certain constraints.
The (non-spin) model of Ruijsenaars-Schneider now results when
$\dot{V}$ may be expressed as
\begin{equation}
\dot V_{jk}=e_je_k,\quad j,k=1,\ldots, N,
\label{vdege}
\end{equation}
for some real vector $e$ ($e_j$ being its $j$-th component).
Then with $\tilde{e}=Ue$ we find
\[
L_{jk}=\,\frac{\tilde{e}_{j}\tilde{e}_{k}}{\exp(q_{j})+\exp(q_{k})}.
\]
Since we know the diagonal elements of $L$ explicitly in terms of the
$q_{j}$ we have
\begin{equation}
L_{jk}=\frac{\sqrt{\dot q_{j} \dot q_{k}}}{2\cosh((q_{j}-q_{k})/2)}.
\label{laxexp}
\end{equation}
This may then be substituted into (\ref{eq:rsma}) to give
\begin{equation}
\ddot q_{j}=2\sum_{k\neq j}\frac{\dot q_{j}\dot q_{k}}{\sinh(q_{j}-q_{k})}.
\label{eq:rseqm}
\end{equation}
These are the equations of motion for (either $H_{\pm1}$)
\begin{equation}
H_{\pm1} = \sum_{j=1}^N e^{\pm \theta_j}\ \prod^N_{k\not= j}
\coth\biggl({\frac{q_j-q_k}{2} }\biggr) ,
\label{eq:rsham}
\end{equation}
with conjugate variables $q_j$, $\theta_j$,
satisfying the canonical Poisson bracket relations (\ref{dynvar}).
In this case (\ref{eq:rsmb}) is then identically satisfied.
Now $H=(H_1 +H_{-1})/2$ and $P=(H_1 -H_{-1})/2$ are particular cases of
(\ref{ham}) and (\ref{tran}). Thus the hyperbolic Ruijsenaars-Schneider
model may be identified with the eigenvalue motion just described.
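The identification is easily tested numerically. A minimal $N=2$ sketch (our
own check, with separating initial data so that the $\dot q_j$ never vanish)
integrates (\ref{eq:rseqm}) and verifies that $H_{\pm1}$ of (\ref{eq:rsham})
are conserved once $e^{\pm\theta_j}$ is eliminated via
$\dot q_j=\partial H_{+1}/\partial\theta_j$; for $N=2$ this gives
$H_{+1}=\dot q_1+\dot q_2$ and
$H_{-1}=\coth^2((q_1-q_2)/2)\,(1/\dot q_1+1/\dot q_2)$.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    q1, q2, v1, v2 = y
    a = 2.0 * v1 * v2 / np.sinh(q1 - q2)
    return [v1, v2, a, -a]

def H_minus1(y):
    q1, q2, v1, v2 = y
    C2 = 1.0 / np.tanh((q1 - q2) / 2.0)**2
    return C2 * (1.0/v1 + 1.0/v2)

y0 = np.array([-1.0, 1.0, -0.8, 1.2])       # separating pair
sol = solve_ivp(rhs, (0.0, 5.0), y0, rtol=1e-10, atol=1e-12)
yT = sol.y[:, -1]
print(H_minus1(yT) - H_minus1(y0))          # ~ 0
print((yT[2] + yT[3]) - (y0[2] + y0[3]))    # H_{+1} = sum of qdot, ~ 0
\end{verbatim}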
Other (possibly difference \cite{differ}) matrix equations correspond to the
different functions $f$ appearing in the Ruijsenaars-Schneider model.
Further, if $L$ is
the Lax matrix associated with the Ruijsenaars-Schneider theory above, then
each of the quantities $H_k=(1/k)\,{\rm tr}\,L\sp{k}$
is conserved and $\{H_k,H_l\}=0$; these give the conserved
quantities associated with the model. Upon setting
\begin{equation}{\cal H}_k=(H_k+H_{-k})/2,\quad \quad {\cal P}_k=(H_k-H_{-k})/2
,\quad \quad {\cal B}= -\sum_j\sp{N} q_j,
\label{hkb}
\end{equation}
we have
\begin{equation}
\{{\cal H}_k,{\cal P}_k\}=0,\quad \quad \{{\cal H}_k,{\cal B}\}={\cal P}_k,
\quad \quad \{{\cal P}_k,{\cal B}\}={\cal H}_k.
\label{poink}
\end{equation}
For any $k$ this has the form of the two dimensional Poincar\'e algebra.
Also note from $\sum_j\ddot q_j=0$ that ${\cal B}$ evolves linearly with
respect to the $H_1$ flow.
\subsection{Connection with $N$-soliton Solutions}
The Ruijsenaars-Schneider theory appears in the study of $N$-soliton
solutions of equations whose tau functions have the form
\begin{equation}
\tau=\sum_{\epsilon}\exp
\left(\sum_{j<k}
\epsilon_{j}\epsilon_{k}B_{jk}+
\sum_{j}\epsilon_{j}\zeta_{j}(t,x)\right).
\label{eq:taufn}
\end{equation}
In the above the $\epsilon$ indicates a summation over all possible
combinations of $\epsilon_{j}$ taking the values $0$ or $1$, and the
indices $j$ and $k$ take values in $\{1,...,N\}$. The
expression (\ref{eq:taufn})
is a rather generic form of the soliton tau function for an integrable PDE,
the precise nature of $B_{jk}$ and $\zeta_{j}$ depending on the
particular PDE being considered. It may be viewed as a degeneration
of the theta function solutions of the PDE given via algebraic geometry in
which the $\epsilon_{j}$'s run over all of the integers.
Now in appropriate circumstances this tau function can
be written in terms of determinants.
Thus for the Sine-Gordon equation we have
\[
e^{i\beta\phi}=\frac{\det\hspace{0.05in}(1-V)} {\det\hspace{0.05in}(1+V)},
\]
while for the KdV equation
$$\dot u-u u\sp\prime+u\sp{\prime\prime\prime}=0$$
we have $u=-2(\ln\tau)\sp{\prime\prime}$ where
$$\tau={\det\hspace{0.05in}(1+V)}.$$
In both cases the matrix has the form
\begin{equation}
V_{jk}=\frac{\sqrt{X_{j}X_{k}}}{\mu_{j}+\mu_{k}},
\label{eq:vdef}
\end{equation}
where
\begin{equation}
X_{j}=2\, a_{j}\exp\left(\xi_j(t,x)\right)
\label{eq:Xdef}
\end{equation}
and
\begin{equation}
\xi_j(t,x)=\xi_j(0)+\mu_j\sp3\, t-\mu_j\, x,\quad\quad(KdV),
\label{eq:KdVdef}
\end{equation}
\begin{equation}
\xi_j(t,x)=\xi_j(0)+\mu_j\sp{-1}x_- +\mu_j\, x_+,\quad\quad(SG).
\label{eq:SGdef}
\end{equation}
For the $x$ flow of the KdV equation and either of the SG flows
corresponding to the
light cone coordinates $x_\pm$, the matrix equation (\ref{eq:def})
is satisfied and the Ruijsenaars-Schneider
theory \rref{eq:rsham} ensues. For the SG equation $\mu_j$ is related to a
rapidity.
For other soliton equations that may be expressed in terms of matrices
of the form (\ref{eq:vdef}) and (\ref{eq:Xdef}) the
\lq times\rq\ linear in $\mu_j\sp{\pm1}$ yield the
Ruijsenaars-Schneider theory (\ref{ham}).
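As a concrete illustration, the KdV two-soliton tau function built from
(\ref{eq:vdef})--(\ref{eq:KdVdef}) can be evaluated directly; the $\mu_j$ and
phase constants below are hypothetical, and the amplitudes $a_j$ have been
absorbed into $\xi_j(0)$.
\begin{verbatim}
import numpy as np

mu  = np.array([1.0, 1.6])     # soliton parameters (hypothetical)
xi0 = np.array([0.0, 2.0])     # phase constants (a_j absorbed)

def tau(t, x):
    xi = xi0 + mu**3 * t - mu * x              # KdV phases, Eq. (eq:KdVdef)
    X = 2.0 * np.exp(xi)
    V = np.sqrt(np.outer(X, X)) / (mu[:, None] + mu[None, :])
    return np.linalg.det(np.eye(len(mu)) + V)

# u = -2 (ln tau)'' by central differences on an x-grid at fixed t:
x = np.linspace(-20.0, 20.0, 2001)
h = x[1] - x[0]
lt = np.log([tau(0.0, xv) for xv in x])
u = -2.0 * np.gradient(np.gradient(lt, h), h)
\end{verbatim}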
\section{Relativistic Invariance}
\setcounter{equation}{0}
We wish now to examine the ``relativistic invariance'' of the
theories presented by Ruijsenaars and Schneider as \lq{\em a class
of finite-dimensional integrable systems that may be viewed as
relativistic generalizations of the Calogero-Moser systems}\rq.
In the first part of this section we argue that the Ruijsenaars-Schneider
theory is not
relativistically invariant in the natural variables suggested by
this description. This is why we believe the description of
Ruijsenaars-Schneider models as \lq relativistic Calogero-Moser models\rq\
is misleading.
Indeed, the ``non-relativistic'' limit
of these models requires an explicit scaling of the dimensionful
coupling constant $A$ needed to define these theories,
and it is unclear why this should be described as a ``non-relativistic'' limit.
Rather, the relativistic invariance of the theories, the one we feel was
intended by Ruijsenaars and Schneider, is more subtle.
We shall go on in the latter subsection to investigate this, but note
that this relativistic invariance
does not yield relativistically invariant particle {\it dynamics}.
At the outset it is instructive to see in what sense the
Ruijsenaars-Schneider models yield the corresponding Calogero-Moser models
as non-relativistic limits.
For the sake of both ease and concreteness consider
$$
f\sp2\left(\frac{q_j-q_k}{A}\right) = 1+
\frac{\alpha\sp2}{\sinh\sp2\left(\frac{q_j-q_k}{A}\right)};
$$
similar results hold for the other potentials.
(Here $A$ is to be identified with $2/\mu$ in (4.12) of \cite{RS1}.)
Under the following scalings (which preserve the Poisson bracket relations
for the new variables $\bar q_j$ and $\bar\theta_j$)
\begin{equation}
\theta_j= \frac{\bar\theta_j}{c},\quad
q_j= c\, \bar q_j,\quad
\alpha=\frac{v}{c},\quad
A=c\, A^\prime
\end{equation}
we find
\begin{equation}
H_{nr}=\lim_{c\rightarrow\infty}\left(H-N m c\sp2\right)
=\frac{m}{2}\sum_{j=1}^N{\bar\theta_j}\sp2 +
\sum_{j<k}\frac{m {v}\sp2 }
{\sinh\sp2\left(\frac{\bar q_j-\bar q_k}{A^\prime}\right)}.
\end{equation}
Upon using the identification
\begin{equation}
q_j=x_jmc\cosh\theta_j,\quad p_j=mc\sinh\theta_j,\quad j=1,\ldots,N.
\label{mincoord}
\end{equation}
where now
\begin{equation}
\{x_j,x_k\}=\{p_j,p_k\}=0,\quad \{x_j,p_k\}=\delta_{jk},
\quad j,k=1,\ldots,N,
\label{xpvar}
\end{equation}
Ruijsenaars and Schneider then express this as
\begin{equation}
H_{nr} =\frac{1}{2}\sum_{j=1}^N\frac{p_j\sp2}{m} +
\sum_{j<k}\frac{m {v}\sp2 }
{\sinh\sp2\left( (x_j-x_k)/L\right)},
\end{equation}
which is the Hamiltonian of an appropriate Calogero-Moser model.
(Here we write $A^\prime=mL$, $L$ being a constant having the
dimension of length. The constant $v$ has the dimension of the velocity.)
Similarly they obtain
\begin{eqnarray}
P_{nr}&=&m\sum_{j=1}^N \bar\theta_j=\sum_{j=1}^N p_j,\\
B_{nr}&=&-{m}\sum_{j=1}^N x_j.
\end{eqnarray}
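The limit just taken can be checked symbolically for a single pair term of
$H$ (a sketch; here $\theta$ and $q$ denote the barred, scaled variables and
$q/A$ stands for $(\bar q_j-\bar q_k)/A^\prime$). Note that the resulting
$1/2$ in front of the potential is per ordered pair, so that summing over
$j\neq k$ reproduces the $\sum_{j<k}$ coefficient above.
\begin{verbatim}
import sympy as sp

c, m, v, A, th, q = sp.symbols('c m v A theta q', positive=True)

f = sp.sqrt(1 + (v/c)**2 / sp.sinh(q/A)**2)   # one factor of the product
term = m*c**2 * sp.cosh(th/c) * f             # one j-term of H, single pair

# Expected: m*theta**2/2 + m*v**2/(2*sinh(q/A)**2), up to rearrangement.
print(sp.simplify(sp.limit(term - m*c**2, c, sp.oo)))
\end{verbatim}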
As Ruijsenaars and Schneider remark, this limit has required
scaling the coupling constants of the theory.
Indeed, however one takes this limit, one cannot avoid\footnote{
It may at first appear that the $\beta$ scaling given in \cite{RS1}
avoids the scaling of the parameter $\mu$, which plays the role of
$1/A$ here. This is not really the case, for the $q_j$ variables
must also be scaled to preserve the Poisson bracket relations; the
three different scalings given in \cite{RS1, RS3} are identical.}
scaling the dimensionful \lq coupling constant\rq\ $A$.
Certainly this analysis shows that the Ruijsenaars-Schneider models
reduce to the Calogero-Moser models in a particular scaling limit,
but it is not clear that this should be described physically as a
``non-relativistic'' limit. Only by (infinitely) shifting the
Hamiltonian do the generators of the Poincar\'e algebra reduce to the
Galilei generators and, as we shall now show, the Ruijsenaars-Schneider model
is not relativistically invariant in the naive sense one would expect
for a theory described as a ``relativistic generalisation'' of the
Calogero-Moser model. It seems altogether better to describe the
Ruijsenaars-Schneider theory as a one-parameter extension of the
Calogero-Moser models.
Now Einstein's special relativity simply states that an `event' is a point
in {\em Minkowski space}.
The essential point is that special relativity is more than
a closed Poincar\'e algebra (like \rref{Poin}): one also needs the
Minkowski space upon which it acts via the inhomogeneous Lorentz (Poincar\'e)
transformation
\begin{equation}
\pmatrix{t^\prime_0\cr x^\prime_0\cr}=\pmatrix{\cosh\alpha &\sinh\alpha\cr
\sinh\alpha &\cosh\alpha\cr}
\pmatrix{t_0\cr x_0\cr}+ \pmatrix{a\cr b\cr}.
\label{pointr}
\end{equation}
For relativistically invariant particle {\em
dynamics} one further needs {\em dynamical variables} directly related with
the {\em Minkowski} positions and momenta.
Now by describing their models as \lq relativistic generalisations\rq\
of the Calogero-Moser system, one is naturally led to expect
that the $q_j$ or the $x_j$, arising in the ``non-relativistic limit" above,
are possible Minkowski space variables.
Indeed if we wish the Hamiltonian (\ref{ham}) to be
space translation invariant --it is manifestly time-translation invariant
since the Hamiltonian $H$ does not contain the time explicitly--
we must identify the $q_j$ as the Minkowski space variables
since (\ref{ham}) depends only on their differences. Let us see that neither
$q_j$ nor $x_j$ is a possible Minkowski space variable.
To this end we record the following actions
of the ``space-translation'' generator $P$ and ``boost" generator $B$
on $q_j$ and $\theta_j$:
\begin{equation}
\delta_Pq_j=\{q_j,P\}=mc\cosh\theta_j\prod_{k\neq j}f(q_j-q_k),
\quad \delta_P\theta_j=\{\theta_j,P\}\neq0,
\label{sptract}
\end{equation}
\begin{equation}
\delta_B\theta_j=\{\theta_j,B\}={1\over c},\quad\quad
\delta_Bq_j=\{q_j,B\}=0.
\label{boos}
\end{equation}
These imply
\begin{equation}
\delta_Px_j=\prod_{k\neq j}f(q_j-q_k)-x_j\tanh\theta_j\{\theta_j,P\},
\quad \delta_Pp_j\neq0,
\label{sptrxact}
\end{equation}
and that the finite transformations under ``boosts" are
\begin{equation}
\theta^\prime_j=\theta_j+{\alpha\over c},\quad q_j^\prime=q_j,\quad {\rm or}
\quad
x^\prime_j=x_j{\cosh\theta_j\over{\cosh(\theta_j+{\alpha\over c})}}.
\label{finboos}
\end{equation}
(The last relation follows at once from (\ref{mincoord}), since $q_j$ is
boost-invariant while $\theta_j\to\theta_j+\alpha/c$.)
Now we see from \rref{sptract} and \rref{sptrxact} that neither
$q_j$ nor $x_j$ transform as the coordinates of
the Minkowski space under a space translation --in fact they are changed by
amounts depending on the particle positions and momenta.
Further, although the rapidities have the correct transformation
\rref{finboos}, that of the
Minkowski positions is very different from the ordinary Lorentz boost.
We conclude therefore that the theory is not relativistically invariant
in the naive sense suggested by the ``non-relativistic" limit
given by Ruijsenaars and Schneider.
Of course the details of the above verification for the non-invariance
under the inhomogeneous Lorentz transformation have depended on our
identification of the Minkowski coordinates and momenta, but without
giving these explicitly the Ruijsenaars-Schneider theory cannot be said to
describe relativistic dynamics.
We have argued that the Hamiltonian dynamics of the
Ruijsenaars-Schneider theory is not invariant under
Einstein's special theory of relativity in the naive sense
suggested by the ``non-relativistic" limit
given by Ruijsenaars and Schneider. As such we believe the
description of these models as ``relativistic Calogero-Moser"
systems is thoroughly misleading. Indeed, the ``non-relativistic" limit
of these models requires an explicit scaling of the dimensionful
coupling constant $A$ above, and it is unclear why this should be
described as a ``non-relativistic" limit at all. It seems far more
sensible to view the models as one-parameter generalisations of
the Calogero-Moser systems.
\subsection{Many ``times'' and Poincar\'e Invariance}
It remains to explain in what sense Ruijsenaars-Schneider theory
evidences Poincar\'e invariance. For such an invariance we
require several ``times'' and their corresponding flows $H_k$. These
times will be our coordinates. Now the dynamical variables evolve
according to (\ref{evolution})
\begin{equation}
(q(t_1,t_2,\ldots t_l),\theta(t_1,t_2,\ldots t_l))=
\exp\left(\sum_{j=1}\sp{l}t_j H_{j}\right) (q(0),\theta(0)),
\end{equation}
and because we have several times we are not really dealing with
dynamics. Thus using our description
(\ref{eq:vdef},\ref{eq:Xdef},\ref{eq:KdVdef})
we see the solitons of the KdV equation evolve according to
$H_1$ and $H_3$ and we have $q_j=q_j(t_1,t_3)$. Similarly the solitons
of the SG equation evolve according to $H_1$ and $H_{-1}$ and we have
$q_j=q_j(t_{-1},t_1)$. In the further restricted setting when we
are dealing with flows $H_k$ and $H_{-k}$ it is possible to consider the
associated Poincar\'e algebra (\ref{poink}). This is what distinguishes
between the various soliton equations: although we may associate the
Ruijsenaars-Schneider Hamiltonian (\ref{ham})
with solitons of each of the KdV, mKdV and SG equations for example,
only the SG equation has a second flow that yields an associated
Poincar\'e algebra. It remains to check that the ``boost" does indeed behave
correctly. Of course we always have that
$$
e\sp{\alpha {\cal B}}
e\sp{ t_k {\cal H}_k -t_{-k} {\cal P}_k}
e\sp{-\alpha {\cal B}}
=e\sp{ t_k\sp\prime {\cal H}_k -t_{-k}\sp\prime {\cal P}_k},
$$
where
$$
\pmatrix{t^\prime_k\cr t^\prime_{-k}\cr}=\pmatrix{\cosh\alpha &\sinh\alpha\cr
\sinh\alpha &\cosh\alpha\cr}
\pmatrix{t_k\cr t_{-k}\cr},
$$
but when we further have that $e\sp{\alpha {\cal B}}q_j(0)=q_j(0)$
(i.e. when $q_j$ behaves as a Lorentz scalar) we see that
\begin{equation}
e\sp{\alpha {\cal B}}q_j(t_k,t_{-k})=q_j(t^\prime_k,t^\prime_{-k})
\label{nponik}
\end{equation}
and we have an action of the Poincar\'e algebra on our coordinates.
Using (\ref{finboos}) we see for example that this is true for
the $H_{\pm 1}$ flows for the SG equation. It is in this
sense that the Ruijsenaars-Schneider theory is said to evidence
Poincar\'e invariance, but this is very different from
relativistic particle dynamics.
Let us further consider the SG example where $(t_1,t_{-1})=(t,x)$.
Here we have
\begin{equation}
[q(t,x), \theta(t,x)]_j=[\exp(tH-xP)(q(0),\theta(0))]_j,\quad j=1,\ldots,N,
\label{rseq}
\end{equation}
in which $H$ and $P$ are given by (\ref{ham}) and (\ref{tran}), respectively.
In this very specific setting,
because the $q_j$ behave as Lorentz scalars, we may
define a ``trajectory" via
$$
q_j( t, x_j(t))=0.
$$
Ruijsenaars and Schneider show that this specifies $x_j(t)$ for all time
and (\ref{nponik}) shows these trajectories are Lorentz invariant.
(Indeed we could have set $q_j$ to equal any constant with a similar
result; the choice $q_j=0$ is motivated by the fact that asymptotically
these correspond to the peaks of the solitons.)
Now although these ``trajectories'' are relativistically invariant
we again emphasise that they have not been presented as
relativistically invariant {\em dynamics}.
Before closing this section a further comment is in order.
We have seen that the two ``times'', $t_1$ and $t_{-1}$, or $t$ and $x$,
are necessary for the Poincar\'e invariance of the Ruijsenaars-Schneider\ model.
So, logically both of these times should be carried
over to its non-relativistic limit, the Calogero-Moser models.
Under the $x$ evolution we simply have
$$
x_j(t,x)=x_j(t)+x,\quad\quad p_j(t,x)=p_j(t).$$
In the physical interpretation of the Calogero-Moser models
the presence of this additional ``time'' $x$ is both redundant and rather
confusing.
This is another reason why we believe that the description of the Ruijsenaars-Schneider\
model as ``relativistic Calogero-Moser models'' is misleading.
\section{Ruijsenaars-Schneider Theory and Field Theory}
\setcounter{equation}{0}
In the previous section we have seen that when one considers
the $H_{\pm1}$ (or equivalently the $H$ and $P$) flows associated
with for example the SG equation, the $q_j(t,x)$ given by
(\ref{rseq}) behave as Lorentz scalars. One might naively be
tempted to think these describe an $N$-component scalar field in the
$1+1$ dimensional Minkowski space $(t,x)$, and similarly
that $\theta(t,x)$ are {\em dynamical} fields of some $1+1$
dimensional theory. We will now argue that this is not really the case
and discuss the physical content of the solutions $q(t,x)$,
$\theta(t,x)$ of the Ruijsenaars-Schneider\ model.
An ordinary field variable, say $\phi(t,x)$, describes a dynamical
system with infinitely many degrees of freedom (one associated to each
point $x$ of space). At equal times these degrees of freedom are independent
of each other and this is expressed by the
Poisson bracket (or commutation) relation
$$
\{\phi(t,x),\phi(t,y)\}=0,\quad ([\phi(t,x),\phi(t,y)]=0).
$$
In other words, in an initial value problem ($t=0$), the initial values
$\phi(0,x)$ can be chosen arbitrarily.
On the other hand, as is clear from (\ref{rseq}) $q(0,x)$ and $\theta(0,x)$
are severely constrained. They are the solutions of
\begin{eqnarray}
{\partial\over{\partial x}}q_j(0,x)&=&\{q_j(0,x),P\},\nonumber\\
{\partial\over{\partial x}}\theta_j(0,x)&=&\{\theta_j(0,x),P\}, \quad t=0,\quad
-\infty<x<\infty,\label{constreq}
\end{eqnarray}
with the condition $q_j(0,0)=q_j(0)$ and $\theta_j(0,0)=\theta_j(0)$.
It is obvious that such constraints can never be imposed on a relativistic field
variable $\phi(0,x)$ without breaking causality.
Further, the ``time-evolutions" of $q_j(t,x)$ and
$\theta_j(t,x)$ are also very different from those of a relativistic field.
At time $t$, $q_j(t,x)$ and
$\theta_j(t,x)$ are solely determined by the ``initial data"
$\{q_k(0,x),\theta_k(0,x)\}$, $k=1,\ldots,N$, depending only on the {\em same} $x$,
since they are solutions of
\begin{eqnarray}
{\partial\over{\partial t}}q_j(t,x)&=&\{q_j(t,x),H\},\nonumber\\
{\partial\over{\partial t}}\theta_j(t,x)&=&\{\theta_j(t,x),H\},\label{timeveq}
\end{eqnarray}
with the initial value $q_j(0,x)$ and $\theta_j(0,x)$. This is in
marked contrast with a dynamical relativistic field,
in which $\phi(t,x)$ depends on the initial data
$\phi(0,y)$ within the past light-cone, i.e., $x-ct\leq y\leq x+ct$.
Indeed, given the $2N$ initial conditions
$q_j(0,0)=q_j(0)$ and $\theta_j(0,0)=\theta_j(0)$ at any one point,
the solutions $q_j(t,x)$, $j=1,\ldots,N$ of Ruijsenaars-Schneider\ models are then
specified ``globally". The properties we have just described show that
the solutions $q(t,x)$, $\theta(t,x)$ of the Ruijsenaars-Schneider\ model
are not describing the dynamical time-evolution typical of
field theory. Indeed this lack of ``dynamics" bears many of the
hallmarks of a topological field theory:
although we cannot as yet make this precise we conclude the section
with a Lax pair encoding the evolution with respect to the
various flows.
{\em
Let $V$ be an $N\times N$ diagonalisable matrix such that
\begin{equation}
\partial_\pm V= \Lambda_\pm\, V +V\, \Lambda_\pm\quad\quad
[\Lambda_+,\Lambda_-]=0,
\end{equation}
and $\Lambda_\pm$ are constant.
Then with $Q=UVU\sp{-1}=\mathop{\rm\textstyle{diag}}\nolimits(\exp(q_{1}),...,\exp(q_{N}))$,
$M_\pm=\partial_\pm U U\sp{-1}$ and $L_\pm=U\Lambda_\pm U\sp{-1}$
we have
\begin{equation}
[D_+,D_-]=0
\end{equation}
where
\begin{equation}
D_\pm =\partial_\pm+\left(
\begin{array}{cc}
-M_\pm-L_\pm&\mu\sp{\pm1} Q\\
0&L_\pm-M_\pm
\end{array}
\right),
\label{dplusmi}
\end{equation}
and $\mu$ is a spectral parameter.
}
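For orientation, we remark that $[D_+,D_-]=0$ is the familiar zero-curvature
condition: writing $D_\pm=\partial_\pm+A_\pm$, with $A_\pm$ the block
matrices appearing in \rref{dplusmi}, it unpacks as
\begin{equation}
\partial_+ A_- - \partial_- A_+ + [A_+,A_-]=0,
\end{equation}
whose entries, order by order in the spectral parameter $\mu$, encode the
evolution $\partial_\pm V=\Lambda_\pm\, V+V\,\Lambda_\pm$ together with
compatibility conditions on $M_\pm$ and $L_\pm$.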
\section{Uncertainty Principle}
\setcounter{equation}{0}
Let us now examine the possibility of `reducing' an integrable
relativistic quantum field theory with factorisable $S$-matrices
to a collection of fixed particle number quantum mechanical systems;
this was mentioned as a means of motivation at the outset of the
work of Ruijsenaars and Schneider.
The known exact factorisable $S$-matrices of, for example,
sine-Gordon theory \cite{Zam}\
and affine Toda field theory \cite{BCDSa, CM} have been obtained as
solutions of the Yang-Baxter equation and/or bootstrap equation satisfying
analyticity, unitarity and crossing symmetry.
Now in a crossing symmetric quantum field theory a field operator
$\phi_j$ annihilates particles of species $j$ and creates their
anti-particles. Therefore any interaction term in the Lagrangian
of a crossing symmetric field theory changes the particle numbers.
This is in sharp contrast with non-relativistic quantum field
theories (for example, the non-linear Schr\"odinger theory) in which
the interaction term $(\bar\psi\psi)^2$ is manifestly particle number
preserving:
$\psi$ annihilates a particle and $\bar\psi$ creates a particle.
\bigskip
The absence of particle production is a hallmark of an integrable
classical field theory, resting on its infinite number of conservation
laws. In relativistic quantum field theory, by contrast, this
property is only guaranteed
between the two asymptotic states at $t=-\infty$
and $t=+\infty$ \cite{Lan}.
In other words, the results of measuring any classically conserved quantity
over a finite time interval will fluctuate because of the
uncertainty principle of the quantum theory.
In particular, the particle numbers will not be constant over time
due to the various virtual processes caused by the above mentioned
particle number non-preserving interactions.
Various field theoretical calculations of the S-matrices and other
quantities \cite{BCDSb}\ in affine Toda field theory show this fact
explicitly.
Thus we arrive at the conclusion that a `reduction' of a solvable
relativistic quantum field theory to a collection of
fixed particle number (relativistic) quantum mechanical systems is impossible.
\section{Summary and Discussion}
\setcounter{equation}{0}
We have discussed various aspects of the Ruijsenaars-Schneider\ model.
In particular we have argued that these models are most naturally
viewed as a one-parameter generalisation of the Calogero-Moser
models which should not be described as a ``relativistic"
generalisation: the model is not in fact ``relativistically invariant"
in the sense dictated by the ``non-relativistic'' limit.
Further we have compared the
many (compatible) ``times'' formulation --in which certain models are
Poincar\'e invariant-- with standard field theory.
In this context the Ruijsenaars-Schneider\ model does not describe a dynamical field
theory. This is entirely natural in the soliton setting that gives rise
to the model, for the Ruijsenaars-Schneider\ equations simply describe a {\it single}
solution to the associated soliton-bearing PDE in an analogous manner to the
inverse scattering transform.
We have also discussed constraints that the uncertainty principle
places on any possible linkage
between integrable quantum field theories with exact factorisable
S-matrices and integrable particle dynamics.
In spite of the difficulties related to the ``relativistic''
interpretation, we again emphasise the importance of these models,
an importance we believe stems from the natural matrix equations
associated with the models. The Ruijsenaars-Schneider\ equations in this setting are not only
generic but useful. The work on Ruijsenaars-Schneider\ models with
spin degrees of freedom is still in its infancy and we would
like the connections between such models and the affine Toda field theories
to be pursued both algebraically and physically.
\section*{Acknowledgments}
We thank L.~O'Raifeartaigh and Y.~Munakata for useful discussion.
This work is partially funded by the Royal Society and the JSPS (Japan Society
for the Promotion of Science) under the Anglo-Japanese joint research project.
We thank JSPS and RS for financial support.
H.W.B. thanks YITP and RIMS, Kyoto University for hospitality.
R.S. thanks Dept. Math. Univ. Durham for hospitality.
\section{Introduction}
ATLAS (A Toroidal LHC ApparatuS)~\cite{ATLAS} is one of the four major experiments at the forthcoming LHC (Large Hadron Collider), in which protons will collide at a center-of-mass energy of 14 TeV. It consists of three main sub-systems: the Inner Detector (ID), the Calorimetry system (electromagnetic and hadronic calorimeters) and the Muon Spectrometer (MS). It uses a superconducting magnet system with a central solenoid around the inner detector and large air-core toroid magnets for the muon spectrometer. Fig.~\ref{ATLAS} shows the overall detector layout.\\
The commissioning of the ATLAS detector with physics data started while the detector was still being mechanically and electrically completed,
by collecting cosmic rays with the parts of the detector that were becoming available. Global cosmic-ray runs with the complete detector and
different magnetic field configurations are now being recorded. The full system was ready to collect data during the three days in which a single
beam of the LHC at the injection energy of 450 GeV circulated through ATLAS.\\
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.6\textwidth]{atlas.jpg}
\caption{\sl Schematic view of the ATLAS detector. The dimensions of the detector are 25 m in height and 44 m in length. The overall weight of the detector is approximately 7000 tons.}
\label{ATLAS}
\end{center}
\end{figure}
In addition to putting in place the trigger and data acquisition chains, commissioning of the full software chain is a main goal. This is important not only to ensure that the reconstruction, monitoring and simulation chains are ready to deal with LHC collision data, but also to understand the detector performance in view of achieving the physics requirements. Furthermore, the collected data have been used to validate and improve the ATLAS simulation by comparing the results obtained from real data with those from dedicated simulations.\\
\section{Reconstruction chain}
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.35\textwidth]{SoftwareChain2.png}
\caption{\sl Schematic of the software chain.}
\label{SoftwareChain}
\end{center}
\end{figure}
The commissioning period is being used to put in place the full detector operation chain, i.e. from the LVL1 trigger and data acquisition to the analysis in the Grid Tier-2 computing centers. The software plays an important role in this chain.\\
The software chain is shown in Fig.~\ref{SoftwareChain}. Cosmic rays and events recorded during LHC single beam operations are reconstructed using the full ATLAS software chain, with specific modifications to account for the lack of synchronization of these kind of events with the readout clock (even during single beam operations due to the not yet operational RF capture) and the fact that particles do not come from the center of the detector. The reconstruction and monitoring algorithms have been continuously running online in the ATLAS control room to provide online event displays and histograms monitoring the data quality during detector operations. Event displays in which a cosmic ray track is reconstructed is shown in Fig.~\ref{ED2}. One can see hits in the trigger and precision muon spectrometer chambers, energy deposited in the calorimeters and hits in the inner detector. Examples of the type of histograms produced to check the data quality are number of tracks reconstructed, energy of calorimeter cells, hit occupancies, synchronization between the different sub-detectors, etc. Some examples are shown in Fig.~\ref{Monitoring}. In the left plot, the occupancy in the SemiConductor Tracker (SCT) modules is shown. The right one shows the difference of the $\theta$ track parameter measured in the inner detector and the muon spectrometer as a function of the event number. It can be seen that both sub-detectors are synchronized.\\
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.4\textwidth]{ED3.png}
\caption{\sl Event display showing a cosmic ray crossing the ATLAS barrel, recorded during ATLAS combined cosmic run 90272 with the full magnetic field. A combined (inner detector and muon spectrometer) track is reconstructed in this event.}
\label{ED2}
\end{center}
\end{figure}
The High Level Trigger algorithms have been providing data streams adequate for different purposes, such as alignment and calibration tasks running at the CERN Analysis Facility (CAF). The first offline data processing takes place with a latency of less than one hour at the Tier-0, and further re-processings with new software versions and updated conditions are done at the Grid Tier-1 centers. At the end of the chain, Event Summary Data (ESD), Monitoring histograms, Combined Ntuples (CBNT), Analysis Data Objects (AOD) and TAG files to allow for a selection of events are produced.\\
\begin{figure}[h!]
\begin{center}
\resizebox{0.57\textwidth}{!}{
\includegraphics[width=0.27\textwidth]{SCThitmapNEWb.png}
\includegraphics[width=0.30\textwidth]{IDMS_Ievent.png}
}\\
\caption{\sl SCT modules occupancy (left). Difference of the $\theta$ track parameter measured by the inner detector and muon spectrometer as a function of the event number (right).}
\label{Monitoring}
\end{center}
\end{figure}
\section{Simulation chain}
In addition to the cosmic rays and single beam data taken by ATLAS in the pit, simulated data have also been made available. This is important to check the reconstruction software and allows for data/MC comparisons in terms of detector response, efficiencies, etc. Beam gas and beam halo events have been simulated at 5 TeV. However, as already mentioned, the energy of the LHC proton beam that circulated through ATLAS was 450 GeV. Consequently, data/MC comparisons have only been made for cosmic ray events.\\
\begin{figure}[!h]
\begin{center}
\includegraphics[width=1.0\textwidth]{Simulation2.png}
\caption{\sl Schematic of the simulation chain.}
\label{Simulation}
\end{center}
\end{figure}
The simulation chain is shown in Fig.~\ref{Simulation}. A Monte Carlo generator was used for simulating muons from cosmic ray events, based on measurements of the differential vertical muon cross section and on analytical calculations extrapolated to low muon energy~\cite{MC}. Single muons are generated at the surface.\\
The simulation toolkit Geant4 was used in order to simulate the passage of particles through the detector, giving rise to energy depositions in the detector. In order to reduce the simulation time, only those events pointing to the ATLAS detector are passed to the Geant4 simulation, and only those events that have energy depositions in a given volume --in this case, in the Transition Radiation Tracker (TRT) volume-- are sent to the next step. The final step consists in emulating the electronics response (the so-called digitization process) in order to end up with simulated raw data.\\
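For concreteness, the event-filtering logic of this chain can be summarised
in a few lines of Python; all function names below are illustrative
placeholders and do not correspond to actual ATLAS software interfaces.
\begin{verbatim}
import random

# Illustrative stand-ins for the generator / Geant4 / digitization steps.
def points_to_detector(muon):
    return abs(muon["impact_parameter"]) < 25.0   # crude geometric envelope

def run_geant4(muon):                             # energy depositions
    return {"TRT_energy": random.random() * muon["energy"]}

def deposits_energy_in(hits, volume, threshold=0.1):
    return hits[volume + "_energy"] > threshold

def digitize(hits):                               # electronics emulation
    return {"raw": hits}

def simulate_cosmics(generated_muons):
    raw_events = []
    for muon in generated_muons:                  # generated at the surface
        if not points_to_detector(muon):          # cheap pre-filter
            continue
        hits = run_geant4(muon)                   # full simulation
        if not deposits_energy_in(hits, "TRT"):   # TRT-volume requirement
            continue
        raw_events.append(digitize(hits))         # simulated raw data
    return raw_events
\end{verbatim}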
\section{Data Analysis}
Cosmic rays have allowed us to study the ATLAS detector in terms of efficiencies, resolutions, channel integrity and alignment and calibrations. Some examples of these studies are shown in Fig.~\ref{DetectorStudies}.
On the left plot, the energy response and track length measured in the hadronic calorimeter (TileCal) are shown. The $\eta$ dependence that is observed is well understood in terms of the variation of the path length traversed by the muons in the calorimeter.\\
\begin{figure}[!h]
\begin{center}
\resizebox{0.6\textwidth}{!}{
\includegraphics[width=0.3\textwidth]{CaloEtaDep.png}
\includegraphics[width=0.3\textwidth]{diffTrackPar_IDMuonTHETA.png}
}\\
\caption{\sl $\eta$ dependence in the hadronic calorimeter response (left). Difference of the $\theta$ parameter measured by the inner detector and muon spectrometer (right).}
\label{DetectorStudies}
\end{center}
\end{figure}
Furthermore, cosmic ray runs have allowed for studies of the performance of combined algorithms. The ATLAS experiment will identify and measure muons in the muon spectrometer (in the region $|\eta|<$2.7). However, this is not sufficient: because of various acceptance gaps in the muon spectrometer and the degradation of the momentum resolution for tracks with low momenta, algorithms that combine the information from the different ATLAS sub-detectors are essential.
Fig.~\ref{ED2} shows a cosmic ray event in which a combined track (inner detector and muon spectrometer) is reconstructed. The combined tracker first matches tracks of the inner detector with those reconstructed in the muon system and then performs a global $\chi^2$ fit using all hits.\\
The difference between the $\theta$ track parameter measured in these two sub-detectors is shown in the right plot of Fig.~\ref{DetectorStudies}. The mean of these distributions gives an idea of the relative alignment between them. The distributions are more centered for MC, as expected. In addition, the momentum reconstructed in the muon spectrometer and in the inner detector is compared in Fig.~\ref{Momentum} for tracks reconstructed in the top (left plot) and bottom (right plot) parts of the muon spectrometer. The difference between the momenta reconstructed in the two sub-detectors corresponds to the energy deposited in the calorimeters.
\begin{figure}[!h]
\begin{center}
\includegraphics[height=0.22\textwidth]{MomentumIDMSb.png}
\caption{\sl Momentum reconstructed in the muon spectrometer and in the inner detector for tracks reconstructed in the top (left plot) and bottom (right plot) part of the muon spectrometer.}
\label{Momentum}
\end{center}
\end{figure}
Data taken during the LHC single beam period have also been analysed. Fig.~\ref{Eta} (left) shows a single beam (halo-like muons) event. A comparison of the $\eta$ track parameter measured by the inner detector in cosmic rays and single beam runs is shown in the right plot of Fig.~\ref{Eta}. Cosmic-ray tracks are mainly vertical while single beam tracks are more horizontal.\\
\begin{figure}[!h]
\begin{center}
\resizebox{0.5\textwidth}{!}{
\includegraphics[width=0.55\textwidth]{ED6.png}
\includegraphics[height=0.4\textwidth]{Eta_CosmicsBeam.png}
}\\
\caption{\sl Left: single beam event (halo-like muons). Right: track parameter $\eta$ measured by the inner detector for cosmic rays and single beam events.}
\label{Eta}
\end{center}
\end{figure}
\section{Conclusions}
The complete software chain is being commissioned making use of the different types of data taken by ATLAS, i.e.
cosmic rays and LHC single beam data. All these data have allowed us to study the ATLAS detector in terms of efficiencies, resolutions, channel integrity, and alignment and calibration corrections. They have also allowed us to test and optimize the reconstruction of the different sub-systems as well as the muon combined performance algorithms, such as combined tracking tools and different muon identification algorithms based on measurements in the inner detector and muon spectrometer or calorimeters.\\
\begin{acknowledgments}
The commissioning work described here requires many things to be working, from the detectors systems through to the offline software. I would like to acknowledge the huge contribution from the whole ATLAS collaboration.
\end{acknowledgments}
\section{Introduction}
\IEEEPARstart{W}{ireless} power transfer has received considerable attention in recent years. Radio frequency (RF) simultaneous wireless information and power transfer (SWIPT) was proposed as a technique to transmit information and harvest energy by converting energy from an electromagnetic field into the electrical domain \cite{RFSWIPT1,RFSWIPT2}. SWIPT is considered a future-generation energy transfer technology in wireless communication networks. However, besides the RF spectrum scarcity, RF energy harvesting suffers from relatively low efficiency and major technical problems related to the transmitting and receiving circuits \cite{RFSWIPT3}. Additional challenges are imposed by the electromagnetic safety and health concerns raised over high power RF applications (references are within \cite{RFSWIPT3}). SWIPT can equally be a source of interference to data transmission, and of RF pollution.\newline
\indent One alternative to the use of electromagnetic radiation to harvest energy is lightwaves emitted by light emitting diodes (LEDs) and laser sources. Using light beams, it is possible to simultaneously perform a transfer of energy and efficiently deliver data streams. Simultaneous lightwave information and power transfer (SLIPT) can provide significant performance gains compared to RF-based SWIPT, taking advantage of the license-free optical wireless technology \cite{Georges1}. Lightwave energy harvesting can also be a complementary technology to visible light communication (VLC), as proposed in \cite{VLCRFSWIPT,SLIPT,SLIPTVLC}. In a dual-hop VLC/RF configuration, Rakia \textit{et al.} proposed harvesting energy from the VLC hop by extracting the direct current (DC) component of the receiver illuminated by an LED \cite{VLCRFSWIPT}. The harvested energy is then harnessed to re-transmit the information over the RF link. The authors of \cite{SLIPT} demonstrated the use of a low-cost Silicon solar cell to decode low-frequency VLC signals and harvest optical energy over a short propagation distance of 40 cm. In \cite{SLIPTVLC}, the authors proposed the use of VLC systems with energy harvesting capabilities for night gathering events. Diamantoulakis \textit{et al.} proposed strategies to enhance the efficiency of, and optimize the trade-off between, the communication and energy harvesting functions for indoor applications through visible and infrared wireless communications.\newline
\begin{figure}[htp]
\centering
\includegraphics[width=3.5in]{img/SLIPTTechniques.png}
\caption{Different SLIPT techniques: (a) time switching, (b) power splitting, and (c) spatial splitting.}
\label{SLIPTTech}
\end{figure}
\begin{figure*}[!t]
\centering
\includegraphics[width=6in]{img/SLIPTUnderwater.png}
\caption{Illustration of the use of self-powered IoUT devices in an underwater environment.}
\label{UnderwaterSLIPT}
\end{figure*}
\indent Motivated by the low cost of optical communication/energy harvesting circuit components compared to their RF counterparts, SLIPT can equally be a cost-effective solution for energy-constrained wireless systems, including remote sensors and autonomous self-powered devices. SLIPT can also be very promising for applications in RF-sensitive environments such as medical facilities, smart houses, and aerospace \cite{Georges2}.\newline
Another potential application of SLIPT is powering Internet of underwater things (IoUT) devices. This is motivated by the high demands for underwater communication systems due to the on-going expansion of human activities in underwater environments such as marine life monitoring, pollution control/tracking, marine current power grid connections, underwater exploration, scientific data collection, undersea earthquakes monitoring, and offshore oil field exploration/monitoring. \newline
Wireless transmission under water can be achieved through radio, acoustic, or optical waves.
Traditionally, acoustic communication has been used for underwater applications for more than 50 years and can cover long ranges up to several kilometers. The typical frequencies associated with this communication type are 10 Hz to 1 MHz. However, it is well known that this technology suffers from a very small available bandwidth and large latencies due to the low propagation speed. RF waves suffer from high attenuation in sea-water and can propagate over long distances only at extra low frequencies (30-300 Hz). This requires large antennas and high transmission powers, which makes them unappealing for most practical purposes. Compared to acoustic and RF communications, underwater optical wireless communication (UWOC) through ocean water can exploit a large bandwidth and support the transmission of high data amounts, of the order of several Gbit/s. SLIPT, as a complementary technology to UWOC, can provide continuous connectivity and wireless powering for devices and sensors in difficult-access locations. \newline
\indent In this paper, we present the different concepts of optical SWIPT, or SLIPT. We then provide experimental demonstrations of time switching SLIPT in an underwater environment for IoUT applications. We further discuss the open problems and propose key solutions for the deployment of underwater SLIPT-based devices.
\section{SLIPT System Design and Concept}
Different possible techniques in various domains including, time, power, and space can be adopted to transmit information and harvest energy.
\subsection{Time Switching}
In a time switching configuration, the receiver, which is possibly a low-cost solar cell, switches between the energy harvesting mode and the information decoding mode, better known as the photovoltaic and the photoconductive modes, respectively. Both SLIPT functions are performed over two different time slots $t_{1}$ and $t_{2}$, as can be seen in Fig.~\ref{SLIPTTech} (a). The quantity of harvested energy is ruled by the conversion efficiency of the solar cell. With the advances in the development of high-speed light sources, the maximum transmission rate that could be achieved is mainly restricted by the bandwidth of the solar cell. Synchronizing the photovoltaic mode and the photoconductive mode is crucial and can be done via hardware programming. The switching function can be fulfilled via a low-power relay.
\begin{figure*}[htp!]
\centering
\includegraphics[width=6.5in]{img/UnderwaterSetup.png}
\caption{ Schematic illustrating the experimental setup of time switching underwater SLIPT and its block diagram program.}
\label{UnderwaterSetup}
\end{figure*}
\subsection{Power Splitting}
Within the power splitting approach, the receiving terminal is simultaneously charged while decoding information carried by the incident light beam, as can be seen in Fig.~\ref{SLIPTTech} (b). A key device needed for this configuration is the power splitter, which splits the incident power into $\alpha$ and $(1-\alpha)$ quantities. The $\alpha P_{R}$ power portion is used to harvest energy, while the $(1-\alpha)P_{R}$ portion is used to decode the received signal. The power splitting component can be either a passive beam splitter, which splits the incident beam from a light source into two or more beams, with (un)evenly fixed distributed power portions. It can be also a splitter with variable splitting ratios, which can potentially increase the system complexity. Differently from the time switching approach, using power splitting, the simultaneous energy and power transfer is fulfilled. With this method, it is also possible to achieve higher transmission rates because the decoding can be performed via a high-speed photodiode (PD).
\subsection{Spatial Splitting }
The spatial splitting approach is applied on a configuration involving multiple transmitters and multiple receivers with information decoding and energy harvesting capabilities, as depicted in Fig.~\ref{SLIPTTech} (c). Each transmitter can transfer data or energy, and each receiver can harvest energy from multiple transmitters. Time switching can be applied within this configuration where the same receiver can act as an ``energy harvester'' and ``signal decoder'' over different time slots.
\section{Demonstrations}
Multiple theoretical and experimental SLIPT-related studies, in free space and underwater media, have been reported in the literature. Wang \textit{et al.} proposed a novel design of a solar panel as a photo-receiver \cite{Demonstration1}. The authors of \cite{Demonstration2} established a VLC link and used an organic solar panel as a receiver. Earlier demonstrations also involved the use of a 5 cm$^2$ solar panel as a receiver for an underwater communication link over a 7-m-long water tank \cite{Demonstration3}. A Gallium Arsenide (GaAs) solar cell was further used to perform a 0.5 Gb/s transmission over a 2-m-long free space link \cite{Demonstration4}. Here, we demonstrate two communication and energy harvesting scenarios in underwater media using two devices that can be charged through light beams emitted from a source fixed on a boat or on an autonomous underwater vehicle (AUV), a configuration that could be used in real-life marine environmental monitoring and scientific data collection applications, as depicted in Fig.~\ref{UnderwaterSLIPT}. In a first experiment, we charge the battery of a submerged module with temperature and turbidity sensors, and transmit commands using a single laser. The temperature sensor is then used to monitor the variable temperature of a water tank. In a second demonstration, we report charging the capacitor of an IoUT device equipped with a camera and a low-power laser for real-time video streaming.
\subsection{Self-Powered Sensor Module}
The experimental setup of the first demonstration is depicted in Fig.~\ref{UnderwaterSetup}. The transmitter is composed of an Arduino Mega with a laser driver connected to a 430 nm laser diode (LD), and is fixed outside a 1.5-m-long water tank. The receiver, a self-powered sensor platform formed by a $55\times70$ mm solar cell, with a 3-dB bandwidth of 30 kHz in photovoltaic mode, and an electric circuitry, is placed inside the water tank. The state of the solar cell is controlled via a low-power relay, which changes the connection of the solar cell to the circuit. The solar cell can then have two possible states:
\begin{itemize}
\item It directly delivers power to a battery or a super-capacitor, while its charge is monitored by a wake-up circuit.
\item It is reverse-voltage-biased and the output current passes through a transimpedance amplifier and a comparator. Both circuits are implemented by a programmable system-on-chip (PSoC). The main circuit is connected to a low-power microcontroller that receives the signal and processes the data (saving the data or executing the commands).
\end{itemize}
We should stress that in some of the previously demonstrated SLIPT circuits, such as the one in \cite{Demonstration3}, the harvested energy and the data rate are directly associated with the frequency of the electrical signal due to the coupling capacitor and inductor. Moreover, the feasibility of incorporating a maximum power point tracking (MPPT) circuit for the solar cells within the existing circuit methodology has not been demonstrated before, to the best of our knowledge. Through our circuit design, in the energy harvesting mode, we allow the implementation of any MPPT circuit without affecting the data transmission, as both solar cell modes are independent. We also note that one of the major challenges for the deployment of UWOC systems is fulfilling the pointing, acquisition and tracking (PAT) requirements. Using the solar cell as a photo-detector can relax the strict PAT requirements.
As depicted in Fig.~\ref{UnderwaterSetup}, the module wakes up upon receiving a light beam on the solar panel. The battery voltage $V_{B}$ is then measured. If $V_{B}<V_{th}$, where $V_{th}$ is a threshold voltage of 3.6 V for our module, the sensors start measuring turbidity and/or temperature. The collected data are saved on a secure digital (SD) card fixed on the module, which then enters a sleep mode. If $V_{B}\ge V_{th}$, the solar panel switches to receiver mode and receives commands, which include switching on/off a particular sensor and sending/re-transmitting the data saved on the memory card. Upon executing all the needed commands, the solar panel switches to energy harvesting mode and, when a full battery charge is reached, the module enters the sleep mode. The full charge of the 840 mWh module battery takes approximately 124 minutes using the blue laser, and the throughput reached when the solar panel acts as an information receiver is 500 kbit/s. The temperature sensor is then switched on to measure the temperature inside the water tank over a time window of more than two hours. The water temperature is controlled using two chillers fixed on the two sides of the tank. The temperature evolution as a function of time is shown in Fig.~\ref{UnderwaterTemp}.
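The wake-up logic just described amounts to a simple state machine. The
following sketch (Python, illustrative only) summarises it; apart from the
quoted 3.6 V threshold, all names are hypothetical and the real firmware
runs on the microcontroller/PSoC described above.
\begin{verbatim}
V_TH = 3.6  # battery threshold voltage quoted for this module (volts)

def on_wake_up(module):
    # Called whenever light on the solar panel wakes the module.
    if module.battery_voltage() < V_TH:
        module.sample_sensors()          # turbidity and/or temperature
        module.save_to_sd_card()
        module.sleep()
        return
    module.set_panel_mode("receiver")    # photoconductive mode
    for command in module.receive_commands():
        module.execute(command)          # e.g. toggle a sensor, resend data
    module.set_panel_mode("harvester")   # photovoltaic mode
    module.wait_until_fully_charged()
    module.sleep()
\end{verbatim}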
\begin{figure}[htp!]
\centering
\includegraphics[width=3.5in]{img/Temperature.png}
\caption{Temperature evolution of the water tank as a function of time.}
\label{UnderwaterTemp}
\end{figure}
\subsection{Self-Powered Underwater Camera}
The second demonstration involves an IoUT device equipped with an analog camera for underwater live video streaming, as shown in Fig.~\ref{Camera} (a). The IoUT device is formed by an analog front-end PSoC circuit, powered by a 5 F super-capacitor, which can be charged via a solar panel (similar to the $55\times70$ mm cell used to perform the first demonstration). A low-power red laser is also connected to the circuit for video transmission. The device is fixed at the bottom of a vertical tank with sea water.
Using an LED source, as seen in Fig.~\ref{Camera} (b), that is fixed 30 cm away from the solar panel, the full super-capacitor charge takes approximately 1 h 30 min. Once fully charged, the device is used to establish a one-minute-long real-time streaming of a video captured by the analog camera, as seen in Fig.~\ref{Camera} (c).\newline
The real-life deployment of the self-powered device in the Red Sea (location known as Abu Gisha island) is shown in Fig.~\ref{Camera} (d), where strong water movement significantly shook the device. The use of the large-area solar panel to harvest energy and decode information eased the PAT requirements compared to state-of-the-art UWOC systems based on limited active area detectors. However, water motion significantly affected the pointing of the device's laser towards the detector.
\begin{figure}[htp!]
\centering
\includegraphics[width=3.5in]{img/Deploy.png}
\caption{(a) Self-powered underwater camera. Photograph of the module (b) being charged by an LED source, (c) transmitting information (video streaming) with a red laser, and (d) deployed in a coral reef in the Red Sea.}
\label{Camera}
\end{figure}
\section{Open Problems}
There are different challenges associated with the various SLIPT techniques, related either to the hardware, such as the bandwidth of the solar cell and the battery lifetime, or to the propagation effects over wireless underwater media. Here we discuss the open problems of the SLIPT technology and propose future research directions with the objective of coping with deployment challenges.
\subsection{Hardware Challenges}
One of the main challenges for SLIPT is to increase the transmission rate of optical communication links. The problem lies in the bandwidth of the receiving terminal, which is limited by the decoding bandwidth of the solar cell, mainly for time switching SLIPT configurations. The bandwidth of commercially available solar cells is usually restricted to a few tens of kHz. Using advanced modulation formats such as M-quadrature amplitude modulation-orthogonal frequency division multiplexing (M-QAM OFDM) can potentially scale the transmission capacity by several orders of magnitude. However, implementing such techniques requires the use of a digital-to-analog converter (DAC) at the transmitter and an analog-to-digital converter (ADC) at the receiver, which comes at a major cost: battery energy consumption. Limitations on data transmission rates can be alleviated in a power splitting SLIPT approach, where a high-bandwidth PD can be used to decode the information signals instead of the solar detector, at the expense of increased pointing errors, since the detection areas of commercially available high-bandwidth PDs are limited to only a few tens of mm$^2$, due to the limit imposed by the resistor-capacitor (RC) time constant \cite{RCTime}; this results in a strict angle of view that requires maintaining a perfect system alignment. Using high-speed PDs can also generate additional complexity for the power splitter, which should be adapted to deliver sufficient power to decode the signals.\newline
Additional challenges are related to the lifetime of the IoUT device's battery, which depends on the enabled features and the information transmission rate. To provide an idea of the impact of enabled features and data throughput on the energy consumption levels of IoUT devices, we collected in Table \ref{CurrentConsumption} the current consumption of a fully awake device for various energy sources with different characteristics, as well as data throughput levels (data decoded by the solar cell in the photoconductive mode).
\begin{table}[!t]
\renewcommand{\arraystretch}{1.3}
\caption{Current consumption for different enabled features}
\label{CurrentConsumption}
\centering
\begin{tabular}{|l|l|l|l|}
\hline
\rowcolor[gray]{0.8}
Source&\thead{Current \\Consumption}&Throughput & Enabled Features\\
\hline
3.7 V&102 mA&500 Kbit/s&Wi-Fi, Bluetooth\\
\hline
3.7 V&36 mA&500 Kbit/s&IoT with clock at 10MHz\\
\hline
3.7 V&11 mA&115.2 Kbit/s&\thead{SoC with microcontroller\\ at 3MHz}\\
\hline
5 V&110 mA&-&Video streaming\\
\hline
3.7 V&7 mA&-&Sensing and saving data\\
\hline
5 V&236 mA&500 Kbit/s&\thead{Video streaming,\\ Wi-Fi and Bluetooth}\\
\hline
\end{tabular}
\end{table}
\subsection{Propagation Effects}
When propagating through the water, the intensity of a light beam decays exponentially along the propagation direction $z$, from the initial intensity $I_{0}$, following Beer's law, expressed as $I=I_{0}\exp(-\alpha z)$, with $\alpha$ being the attenuation (extinction) coefficient. $\alpha$ is obtained by summing the contributions of two main phenomena, absorption and scattering, with respective coefficients $a$ and $b$, so that $\alpha=a+b$. Beams propagating through the water can also be subject to turbulence, which is due to random temperature inhomogeneities, salinity variations or air bubbles. Temperature fluctuations and salinity variations result in rapid changes in the refractive index of the water, while air bubbles partially or completely block the light beam. Statistical models to estimate the impact of turbulence on the underwater link under several conditions exist in the literature \cite{UWOCModels}. When designing a link to simultaneously transfer power and information in an underwater channel, attenuation as well as turbulence effects should be taken into account to ensure the delivery of a sufficient amount of power to perform the two SLIPT functions. \newline
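As a numerical illustration of Beer's law, the short sketch below (Python)
evaluates the fraction of intensity surviving a given path; the attenuation
coefficients are indicative values often quoted in the UWOC literature for
blue-green light and should not be read as measurements of our test tank.
\begin{verbatim}
import numpy as np

# Indicative attenuation coefficients alpha (1/m) for blue-green light.
alpha = {"pure sea water": 0.056, "clear ocean": 0.15,
         "coastal water": 0.305, "turbid harbor": 2.17}

z = 1.5  # propagation distance in metres, as in the first demonstration
for water, a in alpha.items():
    print(water, "I/I0 =", round(float(np.exp(-a * z)), 3))
\end{verbatim}
Even over 1.5 m, harbor-like turbidity removes most of the optical power,
which directly constrains both the harvested energy and the link budget.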
A possible way to reduce the impact of turbulence is to use multiple wavelengths for the information transfer and energy harvesting functions. The use of multiple wavelengths can provide a diversity gain over a harsh underwater environment, if different copies of the same signal are encoded over distinct carrier wavelengths that are affected differently by turbulence-induced distortions \cite{Colors}. For example, light attenuation in clear seawater is minimum in the blue-green region, while longer wavelengths are more effective to mitigate the effect of turbulence. In clear water, the use of multiple wavelengths to carry independent data streams can considerably scale the transmission capacity. Taking the example of a two-wavelength system, one wavelength can be used to transfer energy while the other can carry the data streams, ensuring continuous connectivity of the device. The two wavelengths can also be used to transmit two independent data streams and charge the battery at the same time, if a power splitting approach is adopted.
\subsection{Beam Divergence}
While propagating through an unguided medium, a light beam tends to diverge, leading to an increase of its radius. Losses due to beam divergence can be denoted as geometrical attenuation, which scales with the propagation distance and is related to the laser or LED source used at the transmitter (including the collimation system, if used) as well as to the operating wavelength. Taking beam divergence into account is crucial for SLIPT systems: the power reaching the receiver should be sufficient to fulfill the two SLIPT functions.
\subsection{Path Obstructions}
\indent An obstructed propagation path is another limiting factor for underwater SLIPT, which can be overcome by harnessing the enhanced scattering of ultraviolet (UV) light to establish non-line-of-sight (NLoS) connections. Nonetheless, this requires solar-blind solar cells to harvest energy from UV light. This technique should also be carefully studied to avoid UV-exposure health issues.
\section{Conclusion}
Throughout this article, we provided an overview of the SLIPT technology. SLIPT is a key technology for green energy transfer that can exploit different degrees of freedom, including time, power, and space. We presented two underwater experimental demonstrations of time switching SLIPT. In a first proof-of-concept test, we charged the battery of a submerged module using a blue laser and successfully transmitted commands at a rate of 500 kbit/s through a 1.5 m underwater link. We also collected data using the self-powered temperature sensor. In the second experiment, we reported transmitting commands and charging the capacitor of a device equipped with a low-power red laser and an analog camera. SLIPT is still a largely unexplored field and requires deeper research efforts to overcome several major technical issues and channel-related challenges before its wide-scale deployment.
\section*{Acknowledgment}
This work was supported by funding from King Abdullah University of Science and Technology. The authors would like to thank the Red Sea Research Center and Coastal \& Marine Resources Core Lab (CMOR) for helping in the testing, and deployment of the prototypes.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\section{Introduction}
The concept of uncertainty, advocated initially by Heisenberg \cite{Heisenberg1927}, is one of the most peculiar features in quantum theory.
Much study has been devoted to a proper understanding of uncertainty, demonstrating that it has two aspects: {\it preparation uncertainty} and {\it measurement uncertainty} \cite{Busch_quantummeasurement}.
Loosely speaking, for a pair of observables which do not commute, the former describes the fact that there is no state for which their individual measurements simultaneously output definite values, while the latter expresses the impossibility of performing their joint measurement.
Although there have been considered several mathematical representations of them, {\it preparation uncertainty relations} ({\it PURs}) and {\it measurement uncertainty relations} ({\it MURs}) respectively \cite{PhysRev.34.163,PhysRevLett.60.2447,BUSCH2007155,doi:10.1063/1.4871444,PhysRevA.67.042105}, entropic uncertainty relations \cite{UncertaintyRelationsForInformationEntropy,PhysRevLett.50.631,PhysRevLett.60.1103,10.2307/25051432,PhysRevLett.112.050401} have the advantages of their compatibility with information theory and independence from the structure of the sample spaces. They indeed have been applied to the field of quantum information in various ways \cite{RevModPhys.89.015002}.
On the other hand, the two kinds of uncertainty have also been investigated in physical theories broader than quantum theory, called {\it generalized probabilistic theories} ({\it GPTs}) \cite{Gudder_stochastic,hardy2001quantum,PhysRevA.75.032304,PhysRevA.84.012311,BARNUM20113,1751-8121-47-32-323001}. For example, there has been research in GPTs on both types of uncertainty \cite{PhysRevA.101.052104} and on the joint measurability of observables \cite{PhysRevLett.103.230402,Busch_2013,PhysRevA.89.022123,PhysRevA.94.042108,PhysRevA.98.012133}, which is related to MURs.
In \cite{takakura2020uncertainty}, several formulations of two types of uncertainty were generalized to GPTs, and it was revealed quantitatively that there are close relations between them not only in quantum theory \cite{doi:10.1063/1.3614503} but also in a class of GPTs.
However, although the notion of entropy has been introduced in GPTs \cite{1367-2630-12-3-033024,Short_2010,KIMURA2010175,EPTCS195.4,1367-2630-19-4-043025,Takakura_2019}, insights of entropic uncertainty relations in GPTs are still missing.
In the present paper, entropic uncertainty relations are studied in a class of GPTs investigated in the previous work \cite{takakura2020uncertainty}: GPTs satisfying {\it transitivity} and {\it self-duality} with respect to a certain inner product.
They include finite dimensional classical and quantum theories, and thus can be regarded as generalizations of them. In those theories, we obtain an entropic inequality related with PURs in a simple way via the Landau-Pollak-type relations \cite{Uffink_PhD,PhysRevA.71.052325,PhysRevA.76.062108}. We also prove an entropic relation similar to the quantum MUR by Buscemi {\it et al.} \cite{PhysRevLett.112.050401} with their formulations generalized to those GPTs.
Our results manifest that the structures of entropic PURs and MURs in quantum theory are indeed more universal ones.
Moreover, they can be considered as an entropic counterpart of \cite{takakura2020uncertainty}: if there exist entropic PURs giving certain bounds of uncertainty, then entropic MURs also exist and can be formulated in terms of the same bounds as PURs.
We also present, as an illustration, concrete expressions of our entropic relations in a specific class of GPTs called the {\it regular polygon theories} \cite{1367-2630-13-6-063024}.
This paper is organized as follows. In section \ref{sec:GPTs}, we give a short survey of GPTs including an introduction to the regular polygon theories. Section \ref{sec:main section} is the main part of this paper, in which entropic uncertainty relations in a certain class of GPTs are presented. We conclude the present work and give brief discussions in section \ref{sec:conclusion}.
\section{GPTs}
\label{sec:GPTs}
GPTs are the most general physical theories reflecting intuitively the notion of physical experiments: to prepare a state, to conduct a measurement, and to observe a probability distribution. In this section, a brief survey of GPTs is shown according mainly to \cite{1751-8121-47-32-323001,takakura2020uncertainty,KIMURA2010175,EPTCS195.4,kimura2010physical}.
\subsection{Fundamentals}
\label{subsec:fundamentals}
Any GPT is associated with the notion of {\it states} and {\it effects}. In this paper, a compact convex set $\Omega$ in $V\equiv\mathbb{R}^{N+1}$ with $\mathrm{dim}\mathit{aff}(\Omega)=N$ describes the set of all states in a GPT, which we call the {\it state space} of the theory. We assume in this paper that GPTs are finite dimensional ($N<\infty$), and $\mathit{aff}(\Omega)$ does not include the origin $O$ of $V$ ($O\notin\mathit{aff}(\Omega)$). We note that the notion of probability mixture of states is reflected by the convex structure of $\Omega$.
The extreme elements of $\Omega$ are called {\it pure states}, and we denote the set of all pure states by $\Omega^{\mathrm{ext}}=\{\omega^{\mathrm{ext}}_{\lambda}\}_{\lambda\in\Lambda}$.
The other elements of $\Omega$ are called {\it mixed states}.
For a GPT with its state space $\Omega$, we define the {\it effect space} of the theory as $\mathcal{E}(\Omega)=\{e\in V^{*}\mid e(\omega)\in[0, 1]\ \mbox{for all}\ \omega\in\Omega\}$, where $V^{*}$ is the dual space of $V$,
and call its elements {\it effects}.
Remark that we follow the {\it no-restriction hypothesis} \cite{PhysRevA.81.062348} in this paper, and we sometimes denote $\mathcal{E}(\Omega)$ simply by $\mathcal{E}$. Introducing the {\it unit effect} $u$ as $u\in\mathcal{E}(\Omega)$ satisfying $u(\omega)=1$ for all $\omega\in\Omega$, a {\it measurement} or {\it observable} on some sample space $X$ is defined by a set of effects $\{e_{x}\}_{x\in X}$ such that $\sum_{x\in X}e_{x}=u$. In this paper, we assume that every measurement has finite outcomes (i.e. the sample space $X$ is finite) and does not include the zero effect, and the trivial measurement $\{u\}$ is not considered. Two measurements $A=\{a_{x}\}_{x\in X}$ and $B=\{b_{y}\}_{y\in Y}$ are called {\it jointly measurable} or {\it compatible} if there exists a joint measurement $C=\{c_{xy}\}_{(x, y)\in X\times Y}$ such that its marginals satisfy $\sum_{y\in Y}c_{xy}=a_{x}$ and $\sum_{x\in X}c_{xy}=b_{y}$ for all $x\in X, y\in Y$. If $A$ and $B$ are not jointly measurable, then they are called {\it incompatible}. We say that two GPTs are equivalent if their state spaces $\Omega_{1}$ and $\Omega_{2}$ satisfy $\psi(\Omega_{1})=\Omega_{2}$ for a linear bijection $\psi$ on $V$.
In that case, because $\mathcal{E}(\Omega_{2})=\mathcal{E}(\Omega_{1})\circ\psi^{-1}$ holds, we can see the covariance (equivalence) of physical predictions.
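As a simple illustration of joint measurability, note that in a classical theory, where $\Omega$ is a simplex with pure states $\{\delta_{i}\}$, any two measurements $A=\{a_{x}\}$ and $B=\{b_{y}\}$ are compatible: setting
\[
c_{xy}(\delta_{i}):=a_{x}(\delta_{i})\,b_{y}(\delta_{i})
\]
on the pure states and extending affinely defines a valid joint measurement, since $\sum_{y}c_{xy}(\delta_{i})=a_{x}(\delta_{i})$ and $\sum_{x}c_{xy}(\delta_{i})=b_{y}(\delta_{i})$. Incompatibility is therefore a genuinely non-classical phenomenon.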
For a state space $\Omega$, the {\it positive cone} $V_{+}(\Omega)$ (or simply $V_{+}$) generated by $\Omega$ is defined as the set of all unnormalized states, that is, $V_{+}:=\{v\in V\mid v=k\omega, \omega\in\Omega, k\ge0\}$. We can also define the {\it dual cone} $V^{*}_{+}(\Omega)$ (or simply $V^{*}_{+}$) as the set of all unnormalized effects: $V^{*}_{+}:=\{f\in V^{*}\mid f(v)\ge0, \ ^\forall v\in V_{+}\}$.
A half-line $E\subset V_{+}$ (respectively $E\subset V_{+}^{*}$) is called an {\it extremal ray} of $V_{+}$ (respectively $V_{+}^{*}$) if $l=m+n$ with $l\in E$ and $m, n\in V_{+}$ (respectively $m, n\in V_{+}^{*}$) implies $m, n\in E$. We call effects on extremal rays of $V_{+}^{*}$ {\it indecomposable}, while it is easy to see that the half-lines $\{x\in V\mid x=k\omega^{\mathrm{ext}}_{\lambda}, k>0\}$ generated by the pure states $\Omega^{\mathrm{ext}}=\{\omega^{\mathrm{ext}}_{\lambda}\}_{\lambda\in \Lambda}$ are the extremal rays of $V_{+}$. It is known that there exist pure and indecomposable effects, and we denote by $\mathcal{E}^{\mathrm{ext}}(\Omega)$ (or simply $\mathcal{E}^{\mathrm{ext}}$) the set of all pure and indecomposable effects. They can be thought of as a generalization of rank-1 projections in finite dimensional quantum theories (see \cite{takakura2020uncertainty}).
\subsection{Additional notions}
\label{subsec:additional notions}
Let $\Omega\subset V$ be a state space.
A linear bijection $T\colon V\to V$ is called a {\it state automorphism} on $\Omega$ if it satisfies $T(\Omega)=\Omega$, and we denote by $G(\Omega)$ (or simply $G$) the set of all state automorphisms on $\Omega$.
States $\omega_{1}, \omega_{2}\in\Omega$ are called {\it physically equivalent} if there exists $T\in G$ satisfying $T\omega_{1}=\omega_{2}$.
We say that $\Omega$ is {\it transitive} if all pure states are physically equivalent, i.e. for an arbitrary pair of pure states $\omega^{\mathrm{ext}}_{i}, \omega^{\mathrm{ext}}_{j}\in\Omega^{\mathrm{ext}}$ there exists $T\in G$ such that $T\omega^{\mathrm{ext}}_{i}=\omega^{\mathrm{ext}}_{j}$.
When $\Omega$ is transitive, we can define the {\it maximally mixed state} $\omega_{M}\in\Omega$ as a unique state satisfying $T\omega_{M}=\omega_{M}$ for all $T\in G$ \cite{Davies_compactconvex}.
There exists a useful inner product $\langle\cdot ,\cdot\rangle_{G}$ on $V$, with respect to which all elements of $G$ are orthogonal transformations on $V$.
That is,
\[
\ang{Tx, Ty}_{G}=\ang{x, y}_{G}\quad\ ^{\forall}x, y\in V
\]
holds for all $T\in G$. In fact, $\ang{\cdot, \cdot}_{G}$ can be constructed as
\[
\ang{x, y}_{G}=\int_{G}(x, y)d\mu\quad (x, y\in V)
\]
by means of the two-sided invariant Haar measure $\mu$ on $G$ and a reference inner product $(\cdot,\cdot)$ on $V$ (such as the standard Euclidean inner product on $V$).
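As an illustration of this construction (our sketch, not part of the original argument), note that for a finite group $G$ the Haar integral reduces to a uniform average over the group elements. The following Python snippet, assuming that $G$ is the dihedral symmetry group of a regular polygon acting on the first two coordinates of $V=\mathbb{R}^{3}$, averages an arbitrary reference inner product and verifies that every $T\in G$ is orthogonal with respect to the result:
\begin{verbatim}
import numpy as np

def dihedral_group(n):
    # Rotations and reflections of the regular n-gon in the x-y plane,
    # acting on R^3 and fixing the z-axis.
    mats = []
    for k in range(n):
        c, s = np.cos(2*np.pi*k/n), np.sin(2*np.pi*k/n)
        mats.append(np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]]))  # rotation
        mats.append(np.array([[c, s, 0], [s, -c, 0], [0, 0, 1]]))  # reflection
    return mats

G = dihedral_group(5)

# Arbitrary positive-definite reference inner product (x, y) = x^T M0 y.
rng = np.random.default_rng(0)
B = rng.normal(size=(3, 3))
M0 = B.T @ B + 3*np.eye(3)

# The Haar integral becomes a group average: <x, y>_G = x^T MG y.
MG = sum(T.T @ M0 @ T for T in G) / len(G)

# Every T in G is an orthogonal transformation w.r.t. <., .>_G.
assert all(np.allclose(T.T @ MG @ T, MG) for T in G)
\end{verbatim}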
When $\Omega$ is transitive, we can prove that all pure states are of equal norm with respect to $\langle\cdot,\cdot\rangle_{G}$\ :
\begin{align}
\label{eq:equal norm}
\|\omega_{\lambda}^{\mathrm{ext}}\|_{G}=\sqrt{\alpha} \quad ^\forall \omega_{\lambda}^{\mathrm{ext}}\in\Omega^{\mathrm{ext}},
\end{align}
where $\|\cdot\|_{G}:=\langle\cdot,\cdot\rangle_{G}^{1/2}$ and $\alpha$ is a positive number.
For the positive cone $V_{+}$ generated by $\Omega$ and an inner product $(\cdot,\cdot)$ on $V$, the {\it internal dual cone} $V_{+(\cdot,\cdot)}^{*int}$ relative to $(\cdot,\cdot)$ is defined as $V_{+(\cdot,\cdot )}^{*int}:=\{w\in V\mid (w, v)\ge0\ ^{\forall}v\in V_{+}\}$, and the cone $V_{+}$ is called {\it self-dual} if $V_{+}=V_{+(\cdot ,\cdot)}^{*int}$ for some inner product $(\cdot,\cdot)$ on $V$. Note that by virtue of the Riesz representation theorem $V_{+(\cdot,\cdot)}^{*int}$ can be regarded as the dual cone $V^{*}_{+}$, i.e. the set of all unnormalized effects. Thus, the self-duality of $V_{+}$ means that (unnormalized) states can be identified with (unnormalized) effects. Let us assume that $\Omega$ is transitive and $V_{+}$ is self-dual with the self-dualizing inner product being $\langle\cdot,\cdot\rangle_{G}$. In this case, \eqref{eq:equal norm} holds, and we can prove that $\alpha\mathcal{E}^{\mathrm{ext}}=\Omega^{\mathrm{ext}}$, that is,
\begin{align}
\label{def:pure and indecomp effect}
e_{\lambda}^{\mathrm{ext}}:=\frac{\omega_{\lambda}^{\mathrm{ext}}}{\alpha}
\end{align}
gives a pure and indecomposable effect for any $\omega_{\lambda}^{\mathrm{ext}}\in\Omega^{\mathrm{ext}}$ because the extreme rays of $V_{+}=V_{+\langle\ ,\ \rangle_{G}}^{*int}$ are generated by $\Omega^{\mathrm{ext}}$ and $\omega_{\lambda}^{\mathrm{ext}}$ is a (unique) state satisfying $\langle e_{\lambda}^{\mathrm{ext}}, \omega_{\lambda}^{\mathrm{ext}}\rangle_{G}=1$ \cite{KIMURA2010175}. In \cite{takakura2020uncertainty}, it was demonstrated that the state spaces of finite dimensional classical and quantum theories satisfy both transitivity and self-duality with respect to the inner product $\langle\cdot,\cdot\rangle_{G}$. A proposition about self-duality with respect to $\langle\cdot,\cdot\rangle_{G}$ was also shown in \cite{takakura2020uncertainty}.
\begin{prop}[Proposition 2.3.2 in \cite{takakura2020uncertainty}]
\label{prop_self-duality}
Let $\Omega$ be transitive with $|\Omega^{\mathrm{ext}}|<\infty$ and $V_+$ be self-dual with respect to some inner product. There exists a linear bijection $\Xi\colon V\to V$ such that $\Omega':=\Xi\Omega$ is transitive and the generating positive cone $V'_{+}$ is self-dual with respect to $\langle\cdot,\cdot\rangle_{G(\Omega')}$, i.e.
$V^{'}_+ = V_{+\langle\cdot ,\cdot \rangle_{G(\Omega')}}^{'*int}$.
\end{prop}
Proposition \ref{prop_self-duality} demonstrates that
when $\Omega$ is transitive and $\Omega^{\mathrm{ext}}$ is finite, we can find another expression $\Omega'$ of the theory which is transitive and whose positive cone is self-dual with respect to $\langle\cdot ,\cdot \rangle_{G(\Omega')}$.
\subsection{Examples of GPTs: regular polygon theories}
\label{subsec:polygons}
The {\it regular polygon theories} are GPTs whose state spaces are the regular polygons in $V\equiv\mathbb{R}^{3}$, and if a state space is the regular polygon with $n$ sides ($n\ge3$), then we denote it by $\Omega_{n}$. As shown in \cite{1367-2630-13-6-063024}, $\Omega_{n}$ is given by the convex hull of its pure states (its vertices)
\begin{align}
\label{def:polygon pure state0}
\Omega^{\mathrm{ext}}_{n}=\{\omega_{n}^{\mathrm{ext}}(i)\}_{i=0}
^{n-1},
\end{align}
where
\begin{align}
\label{def:polygon pure state}
\omega_{n}^{\mathrm{ext}}(i)=
\left(
\begin{array}{c}
r_{n}\cos({\frac{2\pi i}{n}})\\
r_{n}\sin({\frac{2\pi i}{n}})\\
1
\end{array}
\right)\quad\mbox{with}\quad r_{n}=\sqrt{\frac{1}{\cos({\frac{\pi}{n}})}}.
\end{align}
The corresponding effect space $\mathcal{E}(\Omega_{n})$ is given by $V^{*int}_{+(\ ,\ )_{E}}\cap\{u-V^{*int}_{+(\ ,\ )_{E}}\}$ in terms of the dual cone $V^{*}_{+}(\Omega_{n})=V^{*int}_{+(\ ,\ )_{E}}$ represented by the standard Euclidean inner product $(\ ,\ )_{E}$ of $V$. $V^{*}_{+}(\Omega_{n})=V^{*int}_{+(\ ,\ )_{E}}$ is generated by the pure and indecomposable effects
\begin{align}
\label{def:polygon pure effect0}
\mathcal{E}^{\mathrm{ext}}(\Omega_{n})=\{e_{n}^{\mathrm{ext}}(i)\}_{i=0}
^{n-1},
\end{align}
where
\begin{equation}
\label{def:polygon pure effect}
\begin{aligned}
&e_{n}^{\mathrm{ext}}(i)=\frac{1}{2}
\left(
\begin{array}{c}
r_{n}\cos({\frac{(2i-1)\pi}{n}})\\
r_{n}\sin({\frac{(2i-1)\pi}{n}})\\
1
\end{array}
\right)\ \ (n:\mbox{even})\ ;\\
&e_{n}^{\mathrm{ext}}(i)=\frac{1}{1+r_{n}^{2}}
\left(
\begin{array}{c}
r_{n}\cos({\frac{2i\pi}{n}})\\
r_{n}\sin({\frac{2i\pi}{n}})\\
1
\end{array}
\right)\ \ (n:\mbox{odd}).
\end{aligned}
\end{equation}
We can also consider the case when $n=\infty$ in \eqref{def:polygon pure state} - \eqref{def:polygon pure effect}. The state space $\Omega_{\infty}$ is a disc with its pure states and pure and indecomposable effects being
\begin{equation}
\label{def:disc pure state0}
\Omega_{\infty}^{\mathrm{ext}}=\{\omega_{\infty}^{\mathrm{ext}}(\theta)\}_{\theta\in[0, 2\pi)}
\end{equation}
and
\begin{equation}
\label{def:disc pure effect0}
\mathcal{E}^{\mathrm{ext}}(\Omega_{\infty})=\{e_{\infty}^{\mathrm{ext}}(\theta)\}_{\theta\in[0, 2\pi)},
\end{equation}
where
\begin{align}
\label{def:disc pure state and effect}
\omega_{\infty}^{\mathrm{ext}}(\theta)=
\left(
\begin{array}{c}
\cos\theta\\
\sin\theta\\
1
\end{array}
\right)\quad\mbox{and}\quad
e_{\infty}^{\mathrm{ext}}(\theta)=\frac{1}{2}
\left(
\begin{array}{c}
\cos\theta\\
\sin\theta\\
1
\end{array}
\right)
\end{align}
respectively.
For $n=3, 4, \cdots, \infty$, it can be shown that $\Omega_{n}$ is transitive with respect to $G(\Omega_{n})$, and the standard Euclidean inner product $(\cdot,\cdot)_{E}$ is indeed the inner product $\langle\cdot,\cdot\rangle_{G(\Omega_{n})}$ invariant under any $T\in G(\Omega_{n})$. Moreover, we can see from \eqref{def:polygon pure state0} - \eqref{def:disc pure state and effect} that $V_{+}(\Omega_{n})$ is self-dual with respect to $(\cdot ,\cdot )_{E}=\langle\cdot ,\cdot \rangle_{G(\Omega_{n})}$ when $n$ is odd or $\infty$, whereas $V_{+}(\Omega_{n})$ is merely isomorphic to $V_{+ \langle\cdot ,\cdot \rangle_{G(\Omega_{n})}}^{*int}$ when $n$ is even (in this case, $V_{+}(\Omega_{n})$ is called {\it weakly self-dual} \cite{barnum2012teleportation,Barnum2013}). We note that the cases when $n=3$ and $n=\infty$ correspond to a classical trit system and a qubit system restricted to real coefficients respectively.
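As a quick numerical check of these statements (our sketch, using the explicit coordinates \eqref{def:polygon pure state} and \eqref{def:polygon pure effect}), the following Python code verifies that every pure and indecomposable effect gives probabilities in $[0, 1]$ on all pure states, and that the rescaling \eqref{def:pure and indecomp effect} with $\alpha=1+r_{n}^{2}$ holds in the self-dual (odd) case:
\begin{verbatim}
import numpy as np

def polygon_states(n):
    r = np.sqrt(1/np.cos(np.pi/n))
    return [np.array([r*np.cos(2*np.pi*i/n), r*np.sin(2*np.pi*i/n), 1.0])
            for i in range(n)]

def polygon_effects(n):
    r = np.sqrt(1/np.cos(np.pi/n))
    if n % 2 == 0:   # even case: prefactor 1/2, angles (2i - 1)pi/n
        return [0.5*np.array([r*np.cos((2*i-1)*np.pi/n),
                              r*np.sin((2*i-1)*np.pi/n), 1.0])
                for i in range(n)]
    # odd case: prefactor 1/(1 + r^2), angles 2i*pi/n
    return [np.array([r*np.cos(2*np.pi*i/n),
                      r*np.sin(2*np.pi*i/n), 1.0])/(1 + r**2)
            for i in range(n)]

for n in (3, 4, 5, 8, 9):
    states, effects = polygon_states(n), polygon_effects(n)
    # Effects evaluated with the Euclidean inner product give probabilities.
    probs = np.array([[e @ w for w in states] for e in effects])
    assert probs.min() > -1e-12 and probs.max() < 1 + 1e-12
    if n % 2 == 1:   # self-dual case: alpha * effect = pure state
        r2 = 1/np.cos(np.pi/n)
        assert all(np.allclose((1 + r2)*e, w)
                   for e, w in zip(effects, states))
\end{verbatim}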
\section{Entropic Uncertainty Relations in a class of GPTs}
\label{sec:main section}
In this section, we present our main results on two types of entropic uncertainty in a certain class of GPTs. While our results reproduce entropic uncertainty relations obtained in finite dimensional quantum theories, they indicate that similar relations hold also in a broader class of physical theories. We also demonstrate entropic uncertainty relations in the regular polygon theories as an illustration of our results.
\subsection{Entropic PURs}
\label{subsec:entropic PURs}
In quantum theory, it is known that we cannot prepare a state on which individual measurements of position and momentum observables, for example, take simultaneously definite values \cite{Kennard1927}. This type of uncertainty and its quantifications are called {\it preparation uncertainty} and {\it preparation uncertainty relations} ({\it PURs}) respectively.
In order to give general descriptions of uncertainty in GPTs, the notion of ideal measurements has to be introduced. Considering that projection-valued measures (PVMs), whose effects are sums of rank-1 projections, give ideal measurements in finite dimensional quantum theories \cite{Busch_quantummeasurement}, we call a measurement $\{e_{x}\}_{x\in X}$ in some GPT {\it ideal} \cite{takakura2020uncertainty} if for any $x\in X$ there exists a finite set of pure and indecomposable effects $\{e^{\mathrm{ext}}_{i_{x}}\}_{i_{x}}$ such that
\begin{align}
\label{def:ideal measurement}
e_{x}=\sum_{i_{x}}e^{\mathrm{ext}}_{i_{x}} \quad\mbox{or}\quad e_{x}=u-\sum_{i_{x}}e^{\mathrm{ext}}_{i_{x}}.
\end{align}
It is easy to check that measurements satisfying \eqref{def:ideal measurement} reduce to PVMs in finite dimensional quantum theories.
Let us consider a GPT with its state space $\Omega$, and two ideal measurements $A=\{a_{x}\}_{x\in X}$ and $B=\{b_{y}\}_{y\in Y}$ on $\Omega$. For the probability distribution $\{a_{x}(\omega)\}_{x}$ obtained in the measurement of $A$ on a state $\omega\in\Omega$ (and similarly for $\{b_{y}(\omega)\}_{y}$), its Shannon entropy is defined as
\begin{align}
\label{def:Shannon entropy}
H\left(\{a_{x}(\omega)\}_{x}\right)=-\sum_{x\in X}a_{x}(\omega)\log{a_{x}(\omega)}.
\end{align}
One way to obtain an entropic PUR is to consider the Landau-Pollak-type relations \cite{Uffink_PhD,PhysRevA.71.052325,PhysRevA.76.062108}:
\begin{align}
\label{def:L-P UR}
\max_{x\in X}a_{x}(\omega)+\max_{y\in Y}b_{y}(\omega)\le \gamma_{A,B}\quad\ ^\forall\omega\in\Omega
\end{align}
with a constant $\gamma_{A,B}\in(0, 2]$. Remark that relations of the form \eqref{def:L-P UR} can always be found for any pair of measurements. It is known \cite{PhysRevLett.60.1103, inequalities1988} that $\max_{x\in X}a_{x}(\omega)$ is related to $H\left(\{a_{x}(\omega)\}_{x}\right)$ by
\[
\exp\left[-H\left(\{a_{x}(\omega)\}_{x}\right)\right]\le\max_{x\in X}a_{x}(\omega),
\]
and thus we can observe from \eqref{def:L-P UR}
\begin{align*}
\exp\left[-H\left(\{a_{x}(\omega)\}_{x}\right)\right]+\exp\left[-H\left(\{b_{y}(\omega)\}_{y}\right)\right]\le \gamma_{A, B}.
\end{align*}
Considering that
\begin{align*}
&\exp\left[-H\left(\{a_{x}(\omega)\}_{x}\right)\right]+\exp\left[-H\left(\{b_{y}(\omega)\}_{y}\right)\right]\\
&\qquad\qquad\qquad\qquad\qquad\qquad\ge 2\exp\left[\frac{-H\left(\{a_{x}(\omega)\}_{x}\right)-H\left(\{b_{y}(\omega)\}_{y}\right)}{2}\right]
\end{align*}
holds, we can finally obtain an entropic relation
\begin{align}
\label{eq:entropic PUR via L-P}
H\left(\{a_{x}(\omega)\}_{x}\right)+H\left(\{b_{y}(\omega)\}_{y}\right)\ge -2\log\frac{\gamma_{A, B}}{2}\quad\ ^\forall\omega\in\Omega.
\end{align}
If $\gamma_{A, B}<2$, then \eqref{eq:entropic PUR via L-P} gives an entropic PUR because it indicates that it is impossible to prepare a state which makes both $H\left(\{a_{x}(\omega)\}_{x}\right)$ and $H\left(\{b_{y}(\omega)\}_{y}\right)$ zero, that is, there is no state preparation on which $A$ and $B$ take simultaneously definite values (note that \eqref{def:L-P UR} also gives a PUR if $\gamma_{A,B}<2$). In a finite dimensional quantum theory with its state space $\Omega_{\mathrm{QT}}$, it can be shown that
\begin{align}
\label{eq:quantum LP}
\max_{x}a_{x}(\omega)+\max_{y}b_{y}(\omega)\le1+\max_{x, y}|\braket{a_{x}|b_{y}}|\quad\ ^\forall\omega\in\Omega_{\mathrm{QT}},
\end{align}
where $A=\{\ketbra{a_{x}}{a_{x}}\}_{x}$ and $B=\{\ketbra{b_{y}}{b_{y}}\}_{y}$ are rank-1 PVMs. In that case, \eqref{eq:entropic PUR via L-P} can be rewritten as
\begin{align}
\label{eq:Deutsch ent PUR}
H\left(\{a_{x}(\omega)\}_{x}\right)+H\left(\{b_{y}(\omega)\}_{y}\right)\ge2\log\frac{2}{1+\underset{x, y}{\max}|\braket{a_{x}|b_{y}}|}\quad\ ^\forall\omega\in\Omega_{\mathrm{QT}},
\end{align}
which is the entropic PUR proved by Deutsch \cite{PhysRevLett.50.631}. There have been studies to find a better bound \cite{PhysRevLett.60.1103} or generalization \cite{10.2307/25051432} of \eqref{eq:Deutsch ent PUR}.
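As a simple numerical illustration of \eqref{eq:Deutsch ent PUR} (our sketch, not from the cited works), consider the $X$ and $Z$ measurements on a single qubit, for which $\max_{x, y}|\braket{a_{x}|b_{y}}|=1/\sqrt{2}$. A grid search over pure states on the Bloch sphere confirms that the entropy sum stays above the Deutsch bound (entropies here are in bits):
\begin{verbatim}
import numpy as np

def H(probs):
    p = np.asarray(probs)
    p = p[p > 0]                        # 0*log(0) = 0 convention
    return -np.sum(p * np.log2(p))

c = 1/np.sqrt(2)                        # max overlap for X and Z
bound = 2*np.log2(2/(1 + c))            # Deutsch bound, approx 0.457 bits

best = np.inf
for th in np.linspace(0, np.pi, 400):
    for ph in np.linspace(0, 2*np.pi, 400):
        px = (1 + np.sin(th)*np.cos(ph))/2   # X-outcome probability
        pz = (1 + np.cos(th))/2              # Z-outcome probability
        best = min(best, H([px, 1 - px]) + H([pz, 1 - pz]))

print(best, bound)          # minimum approx 1 bit, well above the bound
assert best >= bound - 1e-9
\end{verbatim}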
\subsection{Entropic MURs}
\label{subsec:entropic MURs}
When considering two measurements, they are not always jointly measurable \cite{PhysRevA.94.042108}. Their incompatibility is represented quantitatively by {\it measurement uncertainty relations} ({\it MURs}) in terms of {\it measurement error}, which describes the difference between the ideal, original measurement and their {\it approximate joint measurement} \cite{Busch_2013,PhysRevA.89.022123}.
Let $\Omega$ be a state space which is transitive and satisfies $V_{+}(\Omega)\equiv V_{+}=V^{*int}_{+\langle\cdot,\cdot\rangle_{G}}$; we hereafter denote the inner product $\langle\cdot,\cdot\rangle_{G}$ simply by $\langle\cdot,\cdot\rangle$. Then, because of the self-duality of $V_{+}$, we can in the following identify effects with elements of $V_{+}$. Measurement error can be defined in terms of entropy in the same way as in the quantum case treated by Buscemi {\it et al.} \cite{PhysRevLett.112.050401}. Let $E=\{e_{x}\}_{x\in X}$ be an ideal measurement in the GPT, as defined in \eqref{def:ideal measurement}, and let $\mathcal{M}=\{m_{\hat{x}}\}_{\hat{x}\in \hat{X}}$ be a measurement. Since it was demonstrated in \cite{takakura2020uncertainty} that
\begin{align}
\label{eq:eigenstate}
\left\langle e_{x'}, \frac{e_{x}}{\langle u, e_{x}\rangle}\right\rangle=\delta_{x'x}
\end{align}
holds for all $x, x'\in X$, and
\begin{equation}
\begin{aligned}
\omega_{M}=u&=\sum_{x}e_{x}\\
&=\sum_{x}\langle u, e_{x}\rangle \frac{e_{x}}{\langle u, e_{x}\rangle}
\end{aligned}
\end{equation}
holds, the joint probability distribution
\begin{equation}
\label{def:joint dist}
\{p(x, \hat{x})\}_{x, \hat{x}}=\{\langle e_{x}, m_{\hat{x}}\rangle\}_{x, \hat{x}}=\left\{\langle u, e_{x}\rangle\left\langle\frac{e_{x}}{\langle u, e_{x}\rangle}, m_{\hat{x}}\right\rangle\right\}_{x, \hat{x}}
\end{equation}
is considered to be obtained in the measurement of $\mathcal{M}$ on the eigenstates $\{e_{x}/\langle u, e_{x} \rangle\}_{x}$ of $E$ (see \eqref{eq:eigenstate}) with the initial distribution
\begin{align}
\label{eq:initial dist}
\left\{
p(x)
\right\}_{x}=
\left\{
\langle u, e_{x}\rangle
\right\}_{x}.
\end{align}
In fact, as shown in \cite{PhysRevLett.112.050401}, the conditional entropy
\begin{equation}
\label{def:entropic MN}
\begin{aligned}
\mathsf{N}(\mathcal{M};E):
&=H(E|\mathcal{M})\\
&=\sum_{\hat{x}}p(\hat{x})H\left(\{p(x|\hat{x})\}_{x}\right)\\
&=\sum_{\hat{x}}\ang{u, m_{\hat{x}}}H\left(\left\{\ang{e_{x}, \frac{m_{\hat{x}}}{\ang{u, m_{\hat{x}}}}}\right\}_{x}\right)
\end{aligned}
\end{equation}
calculated via \eqref{def:joint dist} describes how inaccurately the actual measurement $\mathcal{M}$ can estimate the input eigenstates of the ideal measurement $E$. Strictly speaking, if we consider measuring $\mathcal{M}$ on $e_{x}/\langle u, e_{x}\rangle$ and estimating the input state from the output probability distribution
\[
\{p(\hat{x}|x)\}_{\hat{x}}=\left\{\left\langle m_{\hat{x}}, \frac{e_{x}}{\langle u, e_x\rangle}\right\rangle\right\}_{\hat{x}}
\]
by means of a guessing function $f:\hat{X}\to X$, then the error probability $p_{\mathrm{error}}^{f}(x)$ is given by
\[
p_{\mathrm{error}}^{f}(x)=1-\sum_{\hat{x}: f(\hat{x})=x}p(\hat{x}|x)=\sum_{\hat{x}: f(\hat{x})\neq x}p(\hat{x}|x).
\]
When similar procedures are conducted for all $x\in X$ with the probability distribution $\{p(x)\}_{x}$ in \eqref{eq:initial dist}, the total error probability $p_{\mathrm{error}}^{f}$ is
\begin{align}
p_{\mathrm{error}}^{f}=\sum_{x}p(x)\ p_{\mathrm{error}}^{f}(x)=\sum_{x\in X}\sum_{\hat{x}: f(\hat{x})\neq x}p(x, \hat{x}),
\end{align}
and it was shown in \cite{PhysRevLett.112.050401} that
\[
\min_{f}p_{\mathrm{error}}^{f}\to0\quad\iff\quad\mathsf{N}(\mathcal{M};E)=H(E|\mathcal{M})\to0.
\]
We can conclude from the consideration above that the entropic quantity \eqref{def:entropic MN} represents the difference between the ideal measurement $E$ and the actually performed measurement $\mathcal{M}$, and thus we can define their entropic measurement error as \eqref{def:entropic MN}.
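The following Python sketch (our illustration) evaluates the entropic measurement error \eqref{def:entropic MN} in the regular pentagon theory, where the relevant inner product is the Euclidean one. We take $E$ to be an ideal binary measurement and, as a hypothetical example, $\mathcal{M}$ a noisy mixture of $E$ with a coin flip:
\begin{verbatim}
import numpy as np

def H(probs):
    p = np.asarray(probs)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

n = 5                                     # odd n: self-dual case
r2 = 1/np.cos(np.pi/n)
e0 = np.array([np.sqrt(r2), 0.0, 1.0])/(1 + r2)   # pure effect e_n^ext(0)
u = np.array([0.0, 0.0, 1.0])             # unit effect

E = [e0, u - e0]                          # ideal binary measurement
lam = 0.3                                 # noise level (our choice)
M = [(1 - lam)*f + lam*0.5*u for f in E]  # noisy version; still sums to u

def noise(M, E, u):
    # Entropic measurement error N(M;E) = H(E|M).
    total = 0.0
    for m in M:
        pm = u @ m                        # p(x_hat)
        total += pm * H([f @ (m/pm) for f in E])   # p(x | x_hat)
    return total

print(noise(E, E, u))    # 0: the ideal measurement estimates itself exactly
print(noise(M, E, u))    # > 0 once noise is added
\end{verbatim}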
We are now in a position to derive an entropic relation similar to that of \cite{PhysRevLett.112.050401} with the generalized entropic measurement error \eqref{def:entropic MN}. We continue focusing on a GPT with its state space $\Omega$ being transitive and $V_{+}$ being self-dual with respect to the inner product $\langle\cdot,\cdot\rangle_{G}\equiv\langle\cdot,\cdot\rangle$, that is, $V_{+}=V^{*int}_{+\langle\cdot,\cdot\rangle}$. Let $A=\{a_{x}\}_{x\in X}$ and $B=\{b_{y}\}_{y\in Y}$ be a pair of ideal measurements defined in \eqref{def:ideal measurement}, and consider their approximate joint measurement $\mathcal{M}=\{m_{\hat{x}\hat{y}}\}_{(\hat{x},\hat{y})\in X\times Y}$ and its marginals
\begin{align*}
&\mathcal{M}^{A}=\{m_{\hat{x}}\}_{\hat{x}\in X}\quad\mbox{with}\quad m_{\hat{x}}=\sum_{\hat{y}\in Y}m_{\hat{x}\hat{y}}\\
&\mathcal{M}^{B}=\{m_{\hat{y}}\}_{\hat{y}\in Y}\quad\mbox{with}\quad m_{\hat{y}}=\sum_{\hat{x}\in X}m_{\hat{x}\hat{y}}.
\end{align*}
We can prove the following theorem.
\begin{thm}
\label{thm:entropic MUR}
Suppose that $\Omega$ is a transitive state space with its positive cone $V_{+}$ being self-dual with respect to $\langle\cdot,\cdot\rangle_{G}\equiv\langle\cdot,\cdot\rangle$, $A=\{a_{x}\}_{x}$ and $B=\{b_{y}\}_{y}$ are ideal measurements on $\Omega$, and $\mathcal{M}$ is an arbitrary approximate joint measurement of $(A, B)$ with its marginals $\mathcal{M}^{A}$ and $\mathcal{M}^{B}$.
If there exists a relation
\begin{align*}
H\left(\{a_{x}(\omega)\}_{x}\right)+H\left(\{b_{y}(\omega)\}_{y}\right)\ge \Gamma_{A, B}\quad\ ^\forall\omega\in\Omega
\end{align*}
with a constant $\Gamma_{A, B}$, then it also holds that
\begin{align*}
\mathsf{N}(\mathcal{M}^{A};A)+\mathsf{N}(\mathcal{M}^{B};B)\ge \Gamma_{A, B}.
\end{align*}
\end{thm}
\begin{pf}
Since for every $\hat{x}\in X$ and $\hat{y}\in Y$ $\omega_{\hat{x}\hat{y}}:=m_{\hat{x}\hat{y}}/\langle u, m_{\hat{x}\hat{y}}\rangle$ is a state, it holds that
\[
H\left(\{a_{x}(\omega_{\hat{x}\hat{y}})\}_{x}\right)+H\left(\{b_{y}(\omega_{\hat{x}\hat{y}})\}_{y}\right)\ge \Gamma_{A, B}
\]
for all $\hat{x}\in X$ and $\hat{y}\in Y$. Therefore, taking into consideration that $\langle u, m_{\hat{x}\hat{y}}\rangle\ge0$ for all $\hat{x}, \hat{y}$ and $\sum_{\hat{x}\hat{y}}\langle u, m_{\hat{x}\hat{y}}\rangle=1$, we obtain
\[
\sum_{\hat{x}\in X}\sum_{\hat{y}\in Y}\langle u, m_{\hat{x}\hat{y}}\rangle\left[H\left(\{a_{x}(\omega_{\hat{x}\hat{y}})\}_{x}\right)+H\left(\{b_{y}(\omega_{\hat{x}\hat{y}})\}_{y}\right)\right]\ge \Gamma_{A, B},
\]
or equivalently (see \eqref{def:entropic MN})
\begin{align}
\label{eq:proof1}
H(A\mid\mathcal{M})+H(B\mid\mathcal{M})\ge \Gamma_{A, B}.
\end{align}
Because of the nonnegativity of (classical) conditional mutual information \cite{Cover:2006:EIT:1146355}, it holds that
\[
H(A\mid\mathcal{M}^{A})\ge H(A\mid\mathcal{M})\quad\mbox{and}\quad H(B\mid\mathcal{M}^{B})\ge H(B\mid\mathcal{M}),
\]
which proves the theorem together with \eqref{eq:proof1}.\qed
\end{pf}
Theorem \ref{thm:entropic MUR} is a generalization of the quantum result \cite{PhysRevLett.112.050401} to a class of GPTs. In fact, when we consider a finite dimensional quantum theory and a pair of rank-1 PVMs $A=\{\ketbra{a_{x}}{a_{x}}\}_{x}$ and $B=\{\ketbra{b_{y}}{b_{y}}\}_{y}$, our theorem reduces to the one in \cite{PhysRevLett.112.050401} with the quantum bound $\Gamma_{A, B}=-2\log\max_{x, y}|\braket{a_{x}|b_{y}}|$ by Maassen and Uffink \cite{PhysRevLett.60.1103}. Theorem \ref{thm:entropic MUR} demonstrates that if there is an entropic PUR, i.e. $\Gamma_{A, B}>0$, then there is also an entropic MUR which shows that we cannot make both $\mathsf{N}(\mathcal{M}^{A};A)$ and $\mathsf{N}(\mathcal{M}^{B};B)$ vanish.
\subsection{Examples: entropic uncertainty in regular polygon theories}
\label{subsec:eg UR}
In this part, we restrict ourselves to the regular polygon theories. Although self-duality with respect to $\langle\cdot,\cdot\rangle_{G}$ holds only for regular polygons with an odd number of sides, we can introduce the entropic measurement error \eqref{def:entropic MN} in the same way and prove the same theorem also for even-sided regular polygon theories. This is because, as shown in \cite{takakura2020uncertainty}, effects can be regarded as elements of $V_{+}$ and \eqref{eq:eigenstate} still holds in even-sided regular polygon theories by means of a suitable parameterization. We restate this fact as another theorem.
\begin{thm}
\label{thm:entropic MUR polygon}
Theorem \ref{thm:entropic MUR} also holds for the regular polygon theories.
That is, for a regular polygon theory with its state space $\Omega_{n}$ ($n=3, 4, \cdots, \infty$), ideal measurements $A=\{a_{x}\}_{x}$ and $B=\{b_{y}\}_{y}$ on $\Omega_{n}$, and an arbitrary approximate joint measurement $\mathcal{M}$ of $(A, B)$ with its marginals $\mathcal{M}^{A}$ and $\mathcal{M}^{B}$,
if there exists a relation
\begin{align*}
H\left(\{a_{x}(\omega)\}_{x}\right)+H\left(\{b_{y}(\omega)\}_{y}\right)\ge \Gamma_{A, B}(n)\quad\ ^\forall\omega\in\Omega_{n},
\end{align*}
then
\begin{align*}
\mathsf{N}(\mathcal{M}^{A};A)+\mathsf{N}(\mathcal{M}^{B};B)\ge \Gamma_{A, B}(n).
\end{align*}
\end{thm}
In the following, we give a concrete value of $\Gamma_{A, B}(n)$ in Theorem \ref{thm:entropic MUR polygon} in the way introduced in subsection \ref{subsec:entropic PURs}.
Let us focus on the state space $\Omega_{n}$. Any nontrivial ideal measurement is of the form $\{e_{n}^{\mathrm{ext}}(i), u-e_{n}^{\mathrm{ext}}(i)\}$ (see \eqref{def:polygon pure effect} and \eqref{def:disc pure state and effect}). Thus, if we consider a pair of ideal measurements $A$ and $B$, then we can suppose that they are binary: $A=A^{i}\equiv\{a^{i}_{0}, a^{i}_{1}\}$ and $B=B^{j}\equiv\{b^{j}_{0}, b^{j}_{1}\}$ with $a^{i}_{0}=e_{n}^{\mathrm{ext}}(i)$ and $b^{j}_{0}=e_{n}^{\mathrm{ext}}(j)$ for $i, j\in\{0, 1, \cdots, n-1\}$ (or $i, j\in[0, 2\pi)$ when $n=\infty$). On the other hand, it holds that
\begin{equation}
\begin{aligned}
\label{eq:bound for states}
\max_{x=0, 1}a^{i}_{x}(\omega)+\max_{y=0, 1}b^{j}_{y}(\omega)
&\le\sup_{\omega\in\Omega_{n}}\max_{(x, y)\in\{0, 1\}^{2}}[(a^{i}_{x}+b^{j}_{y})(\omega)]\\
&=\max_{\omega\in\Omega_{n}^{\mathrm{ext}}}\max_{(x, y)\in\{0, 1\}^{2}}[(a^{i}_{x}+b^{j}_{y})(\omega)]
\end{aligned}
\end{equation}
because $\Omega_{n}$ is a compact set and any state can be represented as a convex combination of pure states. Therefore, if we let $\omega_{n}^{\mathrm{ext}}(k)$ be a pure state (\eqref{def:polygon pure state} and \eqref{def:disc pure state and effect}), then the value
\begin{align}
\label{eq:bound for pure state1}
\gamma_{A^{i}, B^{j}}:=\max_{k}\max_{(x, y)\in\{0, 1\}^{2}}[(a^{i}_{x}+b^{j}_{y})(\omega_{n}^{\mathrm{ext}}(k))]
\end{align}
gives a Landau-Pollak-type relation
\begin{equation}
\label{eq:LP bound}
\max_{x=0, 1}a^{i}_{x}(\omega)+\max_{y=0, 1}b^{j}_{y}(\omega)\le\gamma_{A^{i}, B^{j}}\quad\ ^\forall\omega\in\Omega_{n},
\end{equation}
from which we can derive the entropic relations
\begin{align}
H\left(\{a_{x}(\omega)\}_{x}\right)+H\left(\{b_{y}(\omega)\}_{y}\right)\ge -2\log\frac{\gamma_{A^{i}, B^{j}}}{2}\quad\ ^\forall\omega\in\Omega_{n},
\end{align}
and
\begin{align}
\mathsf{N}(\mathcal{M}^{A};A)+\mathsf{N}(\mathcal{M}^{B};B)\ge-2\log\frac{\gamma_{A^{i}, B^{j}}}{2}.
\end{align}
\begin{table}[p]
\centering
\caption{The value $(a^{i}_{x}+b^{j}_{y})(\omega_{n}^{\mathrm{ext}}(k))$ when $n$ is even.}
\begin{tabular}{|c||c|}
\hline
$x=0, y=0$ & $1+r_{n}^{2}\cos\left[\frac{\theta_{i}+\theta_{j}}{2}-\phi_{k}\right]\cos\left[\frac{\theta_{i}-\theta_{j}}{2}\right]$ \rule[-3.5mm]{0mm}{10.5mm} \\ \hline
$x=1, y=0$ & $1+r_{n}^{2}\sin\left[\frac{\theta_{i}+\theta_{j}}{2}-\phi_{k}\right]\sin\left[\frac{\theta_{i}-\theta_{j}}{2}\right]$ \rule[-3.5mm]{0mm}{10.5mm} \\ \hline
$x=0, y=1$ & ($i \longleftrightarrow j$ in the case of $x=1, y=0$) \rule[-3.5mm]{0mm}{10.5mm} \\ \hline
$x=1, y=1$ & $1-r_{n}^{2}\cos\left[\frac{\theta_{i}+\theta_{j}}{2}-\phi_{k}\right]\cos\left[\frac{\theta_{i}-\theta_{j}}{2}\right]$\rule[-3.5mm]{0mm}{10.5mm} \\ \hline
\multicolumn{2}{|c|}{$\theta_{i}\equiv\frac{2i-1}{n}\pi,\quad\theta_{j}\equiv\frac{2j-1}{n}\pi,\quad\phi_{k}\equiv\frac{2k}{n}\pi$} \rule[-3.5mm]{-1.3mm}{10.5mm} \\ \hline
\end{tabular}
\label{table:even}
\end{table}
\begin{table}[p]
\centering
\caption{The value $(a^{i}_{x}+b^{j}_{y})(\omega_{n}^{\mathrm{ext}}(k))$ when $n$ is odd.}
\begin{tabular}{|c||c|}
\hline
$x=0, y=0$ & $\frac{2}{1+r_{n}^{2}}+\frac{2r_{n}^{2}}{1+r_{n}^{2}}\cos\left[\frac{\theta_{i}+\theta_{j}}{2}-\phi_{k}\right]\cos\left[\frac{\theta_{i}-\theta_{j}}{2}\right]$ \rule[-3.5mm]{0mm}{10.5mm} \\ \hline
$x=1, y=0$ & $1+\frac{2r_{n}^{2}}{1+r_{n}^{2}}\sin\left[\frac{\theta_{i}+\theta_{j}}{2}-\phi_{k}\right]\sin\left[\frac{\theta_{i}-\theta_{j}}{2}\right]$ \rule[-3.5mm]{0mm}{10.5mm} \\ \hline
$x=0, y=1$ & ($i \longleftrightarrow j$ in the case of $x=1, y=0$) \rule[-3.5mm]{0mm}{10.5mm} \\ \hline
$x=1, y=1$ & $\frac{2r_{n}^{2}}{1+r_{n}^{2}}-\frac{2r_{n}^{2}}{1+r_{n}^{2}}\cos\left[\frac{\theta_{i}+\theta_{j}}{2}-\phi_{k}\right]\cos\left[\frac{\theta_{i}-\theta_{j}}{2}\right]$ \rule[-3.5mm]{0mm}{10.5mm} \\ \hline
\multicolumn{2}{|c|}{$\theta_{i}\equiv\frac{2i}{n}\pi,\quad\theta_{j}\equiv\frac{2j}{n}\pi,\quad\phi_{k}\equiv\frac{2k}{n}\pi$} \rule[-3.5mm]{-1.3mm}{10.5mm} \\ \hline
\end{tabular}
\label{table:odd}
\end{table}
\begin{table}[p]
\centering
\caption{The value $(a^{i}_{x}+b^{j}_{y})(\omega_{n}^{\mathrm{ext}}(k))$ when $n$ is $\infty$.}
\begin{tabular}{|c||c|}
\hline
$x=0, y=0$ & $1+\cos\left[\frac{\theta_{i}+\theta_{j}}{2}-\phi_{k}\right]\cos\left[\frac{\theta_{i}-\theta_{j}}{2}\right]$ \rule[-3.5mm]{0mm}{10.5mm} \\ \hline
$x=1, y=0$ & $1+\sin\left[\frac{\theta_{i}+\theta_{j}}{2}-\phi_{k}\right]\sin\left[\frac{\theta_{i}-\theta_{j}}{2}\right]$ \rule[-3.5mm]{0mm}{10.5mm} \\ \hline
$x=0, y=1$ & ($i \longleftrightarrow j$ in the case of $x=1, y=0$) \rule[-3.5mm]{0mm}{10.5mm} \\ \hline
$x=1, y=1$ & $1-\cos\left[\frac{\theta_{i}+\theta_{j}}{2}-\phi_{k}\right]\cos\left[\frac{\theta_{i}-\theta_{j}}{2}\right]$ \rule[-3.5mm]{0mm}{10.5mm}\\ \hline
\multicolumn{2}{|c|}{$\theta_{i}\equiv i,\quad\theta_{j}\equiv j,\quad\phi_{k}\equiv k$} \rule[-3.5mm]{-1.3mm}{10.5mm} \\ \hline
\end{tabular}
\label{table:infty}
\end{table}
Tables \ref{table:even}--\ref{table:infty} show the value of $(a^{i}_{x}+b^{j}_{y})(\omega_{n}^{\mathrm{ext}}(k))$ in terms of the angles $\theta_{i}$, $\theta_{j}$, and $\phi_{k}$ between the $x$-axis and the effects $a^{i}_{0}=e_{n}^{\mathrm{ext}}(i)$, $b^{j}_{0}=e_{n}^{\mathrm{ext}}(j)$, and the state $\omega_{n}^{\mathrm{ext}}(k)$ respectively when viewed from the $z$-axis (see \eqref{def:polygon pure state0} - \eqref{def:disc pure state and effect}). As an illustration, let us consider the case when $n$ is a multiple of $4$ and $\theta_{i}-\theta_{j}=\frac{\pi}{2}$. From Table \ref{table:even}, we can calculate $\gamma_{A^{i}, B^{j}}$ and the $\phi_{k}$ which gives the maximum in \eqref{eq:bound for pure state1}:
\begin{align}
\label{eq:LP in polygon 1}
\gamma_{A^{i}, B^{j}}=1+\frac{r_{n}^{2}}{\sqrt{2}}\qquad\left(\phi_{k}=\theta_{i}-\frac{\pi}{4}\right)
\end{align}
when $n\equiv 4\ (\mbox{mod}\ 8)$, and
\begin{align}
\label{eq:LP in polygon 2}
\gamma_{A^{i}, B^{j}}=1+\frac{1}{\sqrt{2}}\qquad\left(\phi_{k}=\theta_{i}-\frac{\pi}{4}\pm\frac{\pi}{n}\right)
\end{align}
when $n\equiv 0\ (\mbox{mod}\ 8)$. \eqref{eq:LP in polygon 1} and \eqref{eq:LP in polygon 2} are consistent with the case when $n=\infty$:
\begin{align}
\label{eq:LP in polygon 3}
\gamma_{A^{i}, B^{j}}=1+\frac{1}{\sqrt{2}}\qquad\left(\phi_{k}=\theta_{i}-\frac{\pi}{4}\right).
\end{align}
\eqref{eq:LP in polygon 3} for $n=\infty$ can be regarded as corresponding to the quantum result \eqref{eq:quantum LP} with $A$ and $B$ being, for example, the $X$ and $Z$ measurements on a single qubit respectively. Note that $\gamma_{A^{i}, B^{j}}$ can also be used to evaluate the nonlocality of the theory via its degree of incompatibility \cite{takakura2020uncertainty}.
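These closed forms can be checked numerically (our sketch) by evaluating \eqref{eq:bound for pure state1} directly from the coordinates of the polygon states and effects. For instance, $n=12\equiv4\ (\mbox{mod}\ 8)$ and $n=16\equiv0\ (\mbox{mod}\ 8)$ with $\theta_{i}-\theta_{j}=\pi/2$ reproduce \eqref{eq:LP in polygon 1} and \eqref{eq:LP in polygon 2}:
\begin{verbatim}
import numpy as np

def gamma(n, i, j):
    # Numerical gamma_{A^i, B^j} from Eq. (bound for pure state1), even n.
    r = np.sqrt(1/np.cos(np.pi/n))
    w = lambda k: np.array([r*np.cos(2*np.pi*k/n),
                            r*np.sin(2*np.pi*k/n), 1.0])
    e = lambda m: 0.5*np.array([r*np.cos((2*m-1)*np.pi/n),
                                r*np.sin((2*m-1)*np.pi/n), 1.0])
    u = np.array([0.0, 0.0, 1.0])
    A, B = [e(i), u - e(i)], [e(j), u - e(j)]
    return max((a + b) @ w(k) for a in A for b in B for k in range(n))

# theta_i - theta_j = pi/2 corresponds to i - j = n/4:
r2 = 1/np.cos(np.pi/12)
print(gamma(12, 3, 0), 1 + r2/np.sqrt(2))   # both approx 1.732
print(gamma(16, 4, 0), 1 + 1/np.sqrt(2))    # both approx 1.707
\end{verbatim}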
\section{Conclusion and Discussion}
\label{sec:conclusion}
Overall, we examined entropic PURs and MURs in GPTs with transitivity and self-duality with respect to a specific inner product and in the regular polygon theories.
We proved entropic relations analogous to the PURs and MURs of quantum theory in these GPTs, with the Landau-Pollak-type relations and the entropic measurement error generalized respectively.
This demonstrates that the entropic behavior of the two kinds of uncertainty seen in quantum theory is also observed in a broader class of physical theories, and is thus a more universal phenomenon.
It is easy to obtain similar results if more than two measurements are considered.
We also gave concrete calculations of our results in the regular polygon theories.
The resulting theorems (Theorem \ref{thm:entropic MUR} and Theorem \ref{thm:entropic MUR polygon}) can be considered as entropic expressions of the ones in \cite{takakura2020uncertainty}.
Our theorems demonstrate in an entropic way that MURs are implied by PURs and that both of them can be evaluated by the same bound. We note, similarly to \cite{takakura2020uncertainty}, that while the quantum results in \cite{PhysRevLett.112.050401} were based on the ``ricochet'' property of maximally entangled states, our theorems were obtained without considering entanglement or even composite systems. This may indicate that some characteristics of quantum theory can be obtained without entanglement.
Although there are studies suggesting that our assumptions on theories are satisfied in the presence of several ``physical'' requirements \cite{1367-2630-19-4-043025,Barnum_2014,PhysRevLett.108.130401}, future work will need to investigate whether our theorems still hold in GPTs with weakened assumptions.
Giving better bounds for our inequalities and finding information-theoretic applications of our results are also problems for future work.
\section*{Acknowledgment}
TM acknowledges financial support from JSPS (KAKENHI Grant Number 20K03732).
\bibliographystyle{hieeetr}
\section{1. Introduction}
\begin{table}[!b]
\caption{Networks Terminology and Notation}
\centering
\small
\begin{tabular}{| l | l | l |}
\hline
Term & Notation & Description \\
\hline
\hline
adjacency matrix & $A$ & A square matrix whose elements $A_{ij}$ have a value\\
& & different from 0 if there is an edge from some node $i$\\
& & to some node $j$. $A_{ij} = 1$ if the link is a simple \\
& & connection (unweighted graph). $A_{ij} = w_{ij}$ when the \\
& & link is assigned some kind of weight (weighted graphs).\\
& & If the graph is undirected (links connect nodes\\
& & symmetrically), $A$ is symmetric.\\
& &\\
degree & $k_i$& The number of nodes a node $i$ is connected to\\
& &\\
in-degree & $k_i^{\text{in}}$ &In a directed network, the number of incoming edges to\\
& & a node $i$\\
& &\\
out-degree & $k_i^{\text{out}}$ & In a directed network, the number of outgoing edges \\
& & emanating from a node $i$ \\
& &\\
weight & $w_{ij}$ & In a weighted network, weight assigned to an edge from\\
& & some node $i$ to some node $j$\\
& &\\
strength & $s_i = \sum^{k_i}_{j = 1} w_{ij}$ & The sum of weights attached to ties belonging to some\\
& & node $i$ \\
& &\\
Erd\H{o}s-R\'{e}nyi & $G(n, p)$ & A random graph of $n$ nodes in which each pair of \\
random graph& & nodes is connected with probability $p$ independently \\
& & of all other pairs\\
& &\\
Call Detail Records &CDRs & Digital records of the attributes of a certain instance of a \\
& & telecommunication transaction (such as the start time or \\
& & duration of a call), but not the content.\\
\hline
\end{tabular}
\end{table}
Humans interact with each other both online and in-person, forming and dissolving social ties throughout our lives. The flexible architecture of networks or graphs makes them a useful paradigm for modeling these complex relationships at the individual, group, and population levels. Social network nodes typically represent individuals, and edges the connections between individuals, such as friendships, sexual contacts, or cell phone calls. Social networks have been shown to have a direct impact on public health \cite{Chris,Chris2,Fowl,Fowl2,Good}. For example, a recent study examined the social networks of households in Malegaon, India, finding that households that refuse to have their children vaccinated against polio have a disproportionate number of social ties to other vaccine-reluctant and vaccine-refusing households \cite{Polio}. Several studies have now successfully modeled the spread of epidemics through various populations, finding that different network structures have an effect on the potential efficacy of an intervention \cite{Banerjee, Valente, VanderWeele}. Studies have also leveraged network properties to target highly connected individuals in public health interventions \cite{Kim2}. The structure of connections in contact networks has also been shown to affect statistical power in cluster randomized trials \cite{Banerjee, Staples}. Additionally, new classes of connectivity-informed study designs for cluster randomized trials have been proposed recently, and these designs appear to simultaneously improve public health impact and detect intervention effects \cite{Banerjee, Kim, Harling}.
There is also accumulating evidence that the habits of our friends influence our own behavior, such as the uptake of smoking or lifestyle choices that can lead to obesity \cite{Chris, Chris2, Fowl, Fowl2}. Moreover, electronic billing records have been used to study patient-physician interaction networks to learn about structural properties of these networks and how these properties are associated with the quality and cost of health care \cite{Landon, Kim, Sima}.
Network structure can be studied at different scales ranging from local to global. Microscopic (local) structures include one or a few nodes, macroscopic (global) structures involve most to all nodes, and mesoscopic structures lie between the microscopic and macroscopic scales. It has been shown that the different structures are not independent \cite{Fort}. Specifically, several microscopic mechanisms are known to give rise to microscopic, mesoscopic, and macroscopic structure \cite{Bianconi, Fort, Kumpula}. For example, triadic closure, the process of getting to know a friend of a friend, can generate network communities \cite{Fort, Kumpula, Porter}. The term community here refers to a group of nodes that are densely connected to one another but only sparsely connected to the rest of the network. Community structure is of particular interest because most social networks have meaningful community structure that is related to their function. Communities also arise from humans forming tightly-knit groups through shared interests and similar characteristics, and they play an important role in the spread of disease and information \cite{Chris, Chris2, Fort}.
Social network data has traditionally been collected from surveys, mostly capturing small, static network snapshots at one point in time \cite{Faust}.
Dozens of different metrics have been created to quantify and study the structure of these simple networks. However, with the recent availability of increasingly rich, complex network data, limitations of these metrics have become increasingly clear. For example, betweenness centrality, the number or proportion of all pairwise shortest paths in a network that pass through a specified node, is used quite broadly but becomes much more computationally demanding as the size of the network increases and, even more importantly, it is unclear how meaningful this metric is in very large social networks. Another example of a widely used metric is the clustering coefficient, which is defined as the fraction of paths of length two in the network that are closed, i.e., groups of three nodes where ``the friend of my friend is also my friend'' \cite{Watts}.
The clustering coefficient has subsequently been extended to weighted and directed networks \cite{Saramaki, Tore}.
For the classic Erd\H{o}s-R\'{e}nyi random graph, the local clustering coefficient (the average clustering coefficient taken across all nodes in the network) asymptotically tends to $p$ where $p$ is the probability of forming a tie between any two nodes in the network \cite{Lecture}.
Most social networks are more clustered than corresponding random networks \cite{Newman3, Newman}. This observation is expected since people are more likely to become friends with others whom they meet through their current friends. While an expression has been derived for the mean of the local clustering coefficient, an expression for the variance has not been presented. Thus, classification of a given value for clustering as either high or low, and whether that value is statistically significant, is not currently possible and its value cannot be compared across networks.
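As a quick illustration of this point (a minimal sketch using the networkx library), the average local clustering coefficient of simulated Erd\H{o}s-R\'{e}nyi graphs concentrates near $p$:
\begin{verbatim}
import networkx as nx
import numpy as np

n, p = 1000, 0.02
vals = [nx.average_clustering(nx.gnp_random_graph(n, p, seed=s))
        for s in range(20)]
print(np.mean(vals), p)   # the simulated average is close to p
\end{verbatim}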
The rest of this paper is organized as follows. In Section 2, we introduce the microscopic metric known as edge overlap and define extensions of edge overlap for weighted and directed networks. We then present two closed-form expressions for the mean and variance of each version of edge overlap for the Erd\H{o}s-R\'{e}nyi random graph and its weighted and directed counterparts. We then demonstrate the accuracy of our mean and variance approximations through simulation. Finally, we apply our results to empirical social network data and quantify the difference in the observed average overlap to the value expected for a corresponding random graph. We present the results of our data analysis in Section 3 and discuss our conclusions in Section 4. Supplementary material is contained in Appendices A, B, C and D.
\section{2. Methods}
\subsection{2.1 Overlap Extensions}
A central microscopic metric, which captures the effect of triadic closure and is related to the clustering coefficient, is edge overlap, the proportion of common friends two connected individuals share. In mathematical terms, the overlap between two connected individuals $i$ and $j$ is defined as
\begin{equation}\label{eq:1}
o_{ij} = \frac{n_{ij}}{(k_i -1) + (k_j -1) - n_{ij}}
\end{equation}
where $n_{ij}$ is the number of common neighbors of nodes $i$ and $j$, and $k_i$ $(k_j)$ denotes the degree, or number of connections, node $i$ $(j)$ has. Note that the tie between nodes $i$ and $j$ is not included in the calculation; overlap for the edge $(i, j)$ is defined only where $A_{ij} = 1$ and $k_i + k_j > 2$. Currently, edge overlap is only defined for simple networks in which edges are both unweighted and undirected \cite{Onnela}. Moreover, expressions for the mean and variance of edge overlap do not yet exist, making it hard to carry out statistical comparisons of this metric across networks, in particular networks of different sizes.
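As a minimal sketch (our illustration, assuming the networkx library), Eq. \eqref{eq:1} can be computed directly from a graph:
\begin{verbatim}
import networkx as nx

def edge_overlap(G, i, j):
    # Eq. (1): proportion of common neighbors of the connected pair (i, j).
    ni = set(G.neighbors(i)) - {j}
    nj = set(G.neighbors(j)) - {i}
    common = len(ni & nj)
    denom = len(ni) + len(nj) - common   # (k_i - 1) + (k_j - 1) - n_ij
    return common / denom if denom > 0 else float('nan')

G = nx.Graph([(1, 2), (1, 3), (2, 3), (2, 4), (3, 4), (1, 5)])
print(edge_overlap(G, 2, 3))   # common neighbors {1, 4}: 2/(2 + 2 - 2) = 1.0
\end{verbatim}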
In a weighted network, each edge has a weight assigned to it. We define weighted overlap in Eq. \eqref{eq:2} as the proportion of total weight associated with ties to common friends nodes $i$ and $j$ share, and denote it $o^W_{ij}$:
\begin{eqnarray}\label{eq:2}
o^W_{ij} = \frac{\sum^{n_{ij}}_{k=1}(w_{ik} + w_{jk})}{s_i + s_j - 2w_{ij}}.
\end{eqnarray}
Here, $n_{ij}$ is the number of common neighbors of nodes $i$ and $j$, $w_{ij}$ denotes the weight associated with the tie between nodes $i$ and $j$, and $s_i$ $(s_j)$ denotes the strength of node $i$ $(j)$. According to the definition, the common friends of two connected individuals are first identified, the weights associated with these edges are summed together, and this sum is then divided by the combined strengths of the two nodes excluding the tie that connects them. The last step is intended to ensure consistency with the original version of edge overlap, i.e., the weight of the tie between the two individuals being considered is not included in the calculation of $o^W_{ij} $. Also, the metric is only defined for $w_{ij} > 0$ and for $s_i + s_j > 2w_{ij}$.
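A corresponding sketch for Eq. \eqref{eq:2} (again our illustration, with weights stored as networkx edge attributes):
\begin{verbatim}
import networkx as nx

def weighted_overlap(G, i, j):
    # Eq. (2): share of combined strength on ties to common neighbors.
    common = (set(G.neighbors(i)) - {j}) & (set(G.neighbors(j)) - {i})
    num = sum(G[i][k]['weight'] + G[j][k]['weight'] for k in common)
    s_i = sum(G[i][k]['weight'] for k in G.neighbors(i))
    s_j = sum(G[j][k]['weight'] for k in G.neighbors(j))
    denom = s_i + s_j - 2*G[i][j]['weight']
    return num / denom if denom > 0 else float('nan')

G = nx.Graph()
G.add_weighted_edges_from([(1, 2, 3), (1, 3, 1), (2, 3, 2),
                           (2, 4, 1), (3, 4, 4)])
print(weighted_overlap(G, 2, 3))   # (3 + 1) + (1 + 4) = 9 over 6 + 7 - 4 = 9
\end{verbatim}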
In a directed network, each edge has a direction associated with it. Thus, ties between nodes can be reciprocated, meaning that there can be an edge pointing from node $i$ to $j$ and another edge pointing from $j$ to $i$. For directed networks, the concept of a `common friend' of two individuals is ambiguous due to the directionality associated with the ties. We define a common friend as a node that creates a directed path of length two between the two nodes either from $i$ to $j$, $j$ to $i$, or both. Defining a common friend in this manner allows information to flow between $i$ and $j$ via a neighbor of both $i$ and $j$. To illustrate this, let $i$ and $j$ be the two connected individuals of interest, and $k$ a potential common friend. If there is a directed edge from $i$ to $k$ and a directed edge from $k$ to $j$, then there is a path of length two from $i$ to $j$ through $k$, and $k$ is considered a common friend. Using this criterion, we define directed overlap in Eq. \eqref{eq:3} as the proportion of paths of length two between two connected individuals, and denote it $o^D_{ij}$:
\begin{eqnarray}\label{eq:3}
o^D_{ij} = \frac{ \sum^{n}_{k = 1} (A_{ik}A_{kj} + A_{jk}A_{ki}) }{\text{min}(k_i^{\text{in}}, k_j^{\text{out}}) + \text{min}(k_j^{\text{in}}, k_i^{\text{out}} ) - 1}.
\end{eqnarray}
Here, $A_{ij}$ is the $(i,j)$ element of the directed adjacency matrix, $k_i^{\text{in}}$ $(k_j^{\text{in}})$ denotes the in-degree of node $i$ $(j)$, $k_i^{\text{out}}$ $(k_j^{\text{out}})$ denotes the out-degree of node $i$ $(j)$, and min$(\cdot,\cdot)$ the minimum of the two arguments. We consider each edge separately, even in the case of unreciprocated edges, and again, the tie between nodes $i$ and $j$ is not included in the calculation. The metric is only defined if $\text{min}(k_i^{\text{in}}, k_j^{\text{out}}) + \text{min}(k_j^{\text{in}}, k_i^{\text{out}}) > 1$.
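Eq. \eqref{eq:3} can be sketched in the same style (our illustration):
\begin{verbatim}
import networkx as nx

def directed_overlap(G, i, j):
    # Eq. (3): proportion of directed paths of length two between i and j.
    num = sum((G.has_edge(i, k) and G.has_edge(k, j)) +
              (G.has_edge(j, k) and G.has_edge(k, i))
              for k in G.nodes if k not in (i, j))
    denom = (min(G.in_degree(i), G.out_degree(j)) +
             min(G.in_degree(j), G.out_degree(i)) - 1)
    return num / denom if denom > 0 else float('nan')

G = nx.DiGraph([(1, 2), (2, 3), (3, 1), (1, 3), (3, 2)])
print(directed_overlap(G, 2, 3))   # one path 3 -> 1 -> 2, so 1/(2 + 1 - 1)
\end{verbatim}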
\begin{figure}[t]
\vspace{-25pt}
\centering
\includegraphics[scale = 0.6]{networks_diagram.pdf}
\vspace{-75pt}
\captionsetup{width=\textwidth}
\caption{Schematics of edge overlap for (a) an unweighted network, (b) weighted network, and (c) directed network. Nodes are labeled with letters and weights are labeled with numbers.}
\end{figure}
\subsection{2.2 Erd\H{o}s-R\'{e}nyi Random Graph Models}
With the extensions of edge overlap defined above, one can easily compute the mean overlap (simple or weighted or directed) across all edges in the network. However, in order to make meaningful comparisons, such as to learn whether the observed value is small or large for the given network, or whether it represents a statistically significant deviation from what might be expected to occur at random, one needs to consider suitable null models and derive both the expected value and the variance of overlap under these null models. The Erd\H{o}s-R\'{e}nyi random graph model, often denoted $G(n,p)$, is the simplest model for generating random graphs \cite{Erdos}. In this model, graphs are created by considering ${n \choose 2}$ distinct pairs of $n$ nodes and connecting each pair with probability $p$ independently of all other dyads (node pairs). The random process can therefore be thought of as a series of Bernoulli trials or coin flips. Suppose we have a coin that lands on heads with probability $p$. If the coin flip results in heads, the two nodes are connected, otherwise, they are not. Note that here the number of edges is not fixed, but rather the probability of creating an edge.
\par The weighted random graph (WRG) is the weighted counterpart of the canonical Erd\H{o}s-R\'{e}nyi random graph \cite{Diego}. In this case, a network of $n$ nodes is generated by selecting each pair of nodes and carrying out a series of independent Bernoulli trials for each pair with success probability $p$. This process is continued until the first failure is encountered, and every success preceding the failure adds a unit weight to the tie. Note that if the first Bernoulli trial is a failure, the two nodes will not be connected. We can again relate this to the tossing of a coin. If the coin lands on heads with probability $p$, the weight associated with an edge is given by the number of heads flipped until the first tails appears, and therefore tie weights are distributed according to the geometric distribution. This process is repeated for every distinct pair of nodes in the network.
\par The directed random graph is the directed version of the Erd\H{o}s-R\'{e}nyi random graph, and it is generated in a very similar manner as its canonical counterpart. For two nodes $i$ and $j$, in a network of $n$ nodes, an edge pointing from $i$ to $j$ is created with probability $p$ and, likewise, an edge pointing from $j$ to $i$ is also connected independently with probability $p$ \cite{Erdos, Erdos2, Ballobas}. In this case, in the coin analog of the model, we flip a coin twice for each pair of nodes, one flip for each direction. This process is repeated for every pair of nodes in the network.
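A sketch of the WRG and directed constructions (our illustration, using numpy and networkx):
\begin{verbatim}
import numpy as np
import networkx as nx

def weighted_random_graph(n, p, seed=None):
    # WRG: each dyad's weight is the number of heads (prob. p) flipped
    # before the first tails; weight 0 means no edge.
    rng = np.random.default_rng(seed)
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            w = rng.geometric(1 - p) - 1   # successes before first failure
            if w > 0:
                G.add_edge(i, j, weight=w)
    return G

G = weighted_random_graph(200, 0.05, seed=1)
weights = [d['weight'] for _, _, d in G.edges(data=True)]
print(np.mean(weights))   # conditional on w > 0, the mean is 1/(1 - p)

# Directed counterpart: each ordered pair is connected independently.
D = nx.gnp_random_graph(200, 0.05, seed=1, directed=True)
\end{verbatim}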
\subsection{2.3 Erd\H{o}s-R\'{e}nyi Overlap}
In order to perform inference about overlap, i.e., to compare point estimates of overlap across networks, we need to know the mean and variance of each version of overlap under the null model in question. To fix our notation, we will let uppercase letters stand for random variables: $K_i$ denotes the degree of node $i$, $N_{ij}$ the number of common neighbors of nodes $i$ and $j$, $S_i$ the strength of node $i$, $W_{ij}$ the weight of the edge connecting nodes $i$ and $j$, $K^{in}_i$ the in-degree of node $i$, $K^{out}_i$ the out-degree of node $i$, and $A_{ij}$ the adjacency matrix element, where a nonzero (positive) value represents the existence of an edge between nodes $i$ and $j$ (binary in the case of unweighted graphs).
For the Erd\H{o}s-R\'{e}nyi random graph, a given node is connected to each of the remaining $n-1$ nodes with probability $p$, and its resulting degree can thus be viewed as a sum of independent Bernoulli trials. Therefore, as is well known, $K_i \sim$ binomial$(n-1, p)$, which can be approximated by a Poisson$(np)$ distribution for large $n$. For any pair of (connected) nodes, the probability of both nodes being connected to the same neighboring node, meaning that they have a common neighbor, is $p^2$ as each edge occurs independently of any others. Moreover, the total number of possible common friends two nodes can have is $n - 2$. Thus, $N_{ij} \sim$ binomial$(n-2, p^2)$, which can similarly be approximated by a Poisson$(np^2)$ random variable for large $n$. With these definitions, the numerator of edge overlap is a Poisson random variable, and the denominator is the difference of two Poisson random variables, known as a Skellam random variable \cite{Skellam}. In this case, the denominator is a Skellam random variable with mean $2np - 2 - np^2$. We can now view overlap as a random variable as in Eqn. \eqref{eq:unweightedrv}.
\begin{equation}\label{eq:unweightedrv}
O_{ij} = \frac{N_{ij}}{(K_i -1) + (K_j -1) - N_{ij}}
\end{equation}
Edge overlap is a ratio of two dependent random variables since the maximum number of possible common friends is bounded by the min$(K_i, K_j)$. This dependency increases the difficulty of deriving exact expressions for the mean and variance of overlap. However, despite this dependence, we can approximate both the mean and variance in two different ways. The first approach observes the weakness of the dependence between the numerator and denominator and simply ignores it, defining the ratio as a function of independent random variables. Approximations for the mean and variance of the ratio are then derived using Taylor expansions of the function about the means of the random variables \cite{Kendall, Johnson}. This results in Eqs. \eqref{eq:unmean} and \eqref{eq:unvar} (for details, see Appendix A.1.).
\begin{eqnarray}
\mathbb{E}[O_{ij}] &=& \frac{p}{2-p}\label{eq:unmean} \\[15pt]
\text{Var}(O_{ij}) &=& \frac{np^2}{(2np - 2 - np^2)^2} + \frac{n^2p^4(2np - 2 + np^2)}{(2np - 2 - np^2)^4}.\label{eq:unvar}
\end{eqnarray}
Our second approach incorporates results from \cite{Oxford}, where the local clustering coefficient for an Erd\H{o}s-R\'{e}nyi random graph is also written as a ratio of dependent random variables with the intention of deriving its distribution. The dependency is eliminated by replacing the random variable in the denominator with its expectation, and this approximation turns the denominator into a constant. Thus, the distribution of the clustering coefficient is approximated by a scaled version of the random variable in the numerator. It is subsequently shown that this is a good approximation for the actual distribution. We adopt the same approach here, and approximate the distribution of edge overlap by replacing the denominator with its expectation. We then derive the mean and variance of $O_{ij}$ using the distributional properties of the numerator. This results in the expressions in Eqs. \eqref{eq:unmean2} and \eqref{eq:unvar2} (for details, see Appendix B.1.):
\begin{eqnarray}
\mathbb{E}[O_{ij}] &=& \frac{p}{2-p}\label{eq:unmean2} \\[15pt]
\text{Var}(O_{ij}) &=& \frac{np^2}{(2np - 2 - np^2)^2}.\label{eq:unvar2}
\end{eqnarray}
Note that the expressions for the mean, Eqs. \eqref{eq:unmean} and \eqref{eq:unmean2}, are equivalent, but the expressions for the variance, Eqs. \eqref{eq:unvar} and \eqref{eq:unvar2} differ, with the expression for Eq. \eqref{eq:unvar2} corresponding to the first term of Eq. \eqref{eq:unvar}.
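A small simulation (our sketch) confirms that the simulated mean overlap of Erd\H{o}s-R\'{e}nyi graphs is close to \eqref{eq:unmean}:
\begin{verbatim}
import numpy as np
import networkx as nx

def mean_overlap(G):
    vals = []
    for i, j in G.edges():
        ni = set(G.neighbors(i)) - {j}
        nj = set(G.neighbors(j)) - {i}
        common = len(ni & nj)
        denom = len(ni) + len(nj) - common
        if denom > 0:
            vals.append(common / denom)
    return np.mean(vals)

n, p = 1000, 0.05
sims = [mean_overlap(nx.gnp_random_graph(n, p, seed=s)) for s in range(10)]
print(np.mean(sims), p/(2 - p))   # both close to 0.026
\end{verbatim}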
We use the same two approaches for the weighted and directed cases. For the weighted Erd\H{o}s-R\'{e}nyi random graph (WRG), we first define the distributions of $W_{ij}$ and $S_i$. Given how WRGs are constructed (as given above), the tie weights follow a geometric distribution, such that if an edge is placed between a pair of nodes with probability $p$, tie weight distribution will be $W_{ij} \sim$ geometric$(1-p)$. It then follows that node strength $S_i$ is a sum of geometric random variables, i.e., is the sum of the weights of the ties that are adjacent to the given node, leading to $S_i \sim$ negative binomial$(n-1, 1-p)$ \cite{Diego}.
For the first approach, the numerator can be written as $\sum^{N_{ij}}_{k=1} (W_{ik} + W_{jk})$, where $N_{ij}$ is again the number of common neighbors of nodes $i$ and $j$, and is distributed as in the unweighted Erd\H{o}s-R\'{e}nyi random graph. Thus, the numerator is a sum of geometric random variables, where the number of summed variables is itself a random variable. Moreover, we must have $W_{ik} > 0$ and $W_{jk} > 0$ since a common neighbor of two nodes can only exist if both nodes are attached to the node in question (the common neighbor). To address this constraint, each of the random variables must first be transformed into zero-truncated geometric random variables, and their mean and variance altered correspondingly. We can now write weighted overlap as a random variable as in Eqn. \eqref{eq:weightedrv}.
\begin{eqnarray}\label{eq:weightedrv}
O^W_{ij} = \frac{\sum^{N_{ij}}_{k=1}(W_{ik} + W_{jk})}{S_i + S_j - 2W_{ij}}.
\end{eqnarray}
Now hierarchical models can be used to find the mean and variance of the numerator, and these results combined with the mean and variance values of the denominator can be used to derive the expressions in Eqs. \eqref{eq:wmean} and \eqref{eq:wvar} (for details, see Appendix A.2.):
\begin{eqnarray}
\mathbb{E}[O^W_{ij}] &=& p \label{eq:wmean}\\[15pt]
\text{Var}(O^W_{ij}) &=& \frac{p+1}{n}.\label{eq:wvar}
\end{eqnarray}
The second approach again replaces the denominator with its expectation. The mean and variance derivations are then straightforward and result in the expressions in Eqs. \eqref{eq:wmean2} and \eqref{eq:wvar2}. Again, the expressions for the mean are equivalent for both approaches, and the variance expressions are quite similar (for details, see Appendix B.2.):
\begin{eqnarray}
\mathbb{E}[O^W_{ij}] &=& p \label{eq:wmean2}\\[15pt]
\text{Var}(O^W_{ij}) &=& \frac{np^2(p+2)}{2(np-1)^2}.\label{eq:wvar2}
\end{eqnarray}
The derivations for the directed Erd\H{o}s-R\'{e}nyi random graph are more complicated and do not have a closed form due to the minimum expressions in the denominator. Focusing on the numerator, each of the $A_{ik}A_{kj}$ and $A_{jk}A_{ki}$ terms is equal to one if and only if both adjacency matrix values are equal to 1, which happens with probability $p^2$ since each edge is independent. Thus, each of the terms is a Bernoulli$(p^2)$ random variable, and the numerator consists of a sum of $2n$ independent Bernoulli random variables, meaning it is a binomial$(2n, p^2)$ random variable, which we will again approximate with a Poisson$(2np^2)$ random variable. The denominator includes the minimum of two identically distributed random variables $K^{in}_i$ and $K^{out}_i$. Due to the definition given above, the in and out degrees of nodes $i$ and $j$ cannot equal 0, making them zero-truncated binomial$(n-1, p)$ random variables, which will also be approximated as zero-truncated Poisson$(np)$ random variables since $n$ is assumed to be large. We can now write directed overlap as a random variable as in Eqn. \eqref{eq:directedrv}.
\begin{eqnarray}\label{eq:directedrv}
O^D_{ij} = \frac{ \sum^{n}_{k = 1} (A_{ik}A_{kj} + A_{jk}A_{ki}) }{\text{min}(K_i^{\text{in}}, K_j^{\text{out}}) + \text{min}(K_j^{\text{in}}, K_i^{\text{out}} ) - 1}.
\end{eqnarray}
The mean and variance of the denominator can now be calculated and used to derive the expressions in Eqs. \eqref{eq:dmean} and \eqref{eq:dvar} \cite{Kendall} (for details, see Appendix A.3.):
\begin{equation}
\mathbb{E}[O^D_{ij}] = \frac{np^2}{e^{-2np}\sum^{(n-1)}_{k=1}\left[\sum^{(n-1)}_{j=k} \frac{(np)^j}{j!} \right]^2 - 0.5} \label{eq:dmean}\\[15pt]
\end{equation}
\begin{equation}
\text{Var}(O^D_{ij}) = \frac{2n^2p^4}{(2e^{-2np}\sum^{(n-1)}_{k=1}\left[\sum^{(n-1)}_{j=k} \frac{(np)^j}{j!} \right]^2 - 1)^2} + \frac{\frac{32n^3p^5e^{np}}{e^{np} - 1} \left[ 1 - \frac{np}{e^{np}-1}\right]}{(2e^{-2np}\sum^{(n-1)}_{k=1}\left[\sum_{j=k}^{(n-1)} \frac{(np)^j}{j!} \right]^2 - 1)^2}.\label{eq:dvar}
\end{equation}
The second approach again replaces the denominator with its expectation, and the mean and variance derivations result in the expressions in Eqs. \eqref{eq:dmean2} and \eqref{eq:dvar2} (for details, see Appendix B.3.). Again, the expressions for the mean are equivalent for both approaches, but note that the expression for the variance using the second approach in Eq. \eqref{eq:dvar2} is equivalent to the first term of the variance resulting from the first approach in Eq. \eqref{eq:dvar}.
\begin{eqnarray}
\mathbb{E}[O^D_{ij}] &=& \frac{np^2}{e^{-2np}\sum^{(n-1)}_{k=1}\left[\sum^{(n-1)}_{j=k} \frac{(np)^j}{j!} \right]^2 - 0.5} \label{eq:dmean2}\\[15pt]
\text{Var}(O^D_{ij}) &=& \frac{2n^2p^4}{(2e^{-2np}\sum^{(n-1)}_{k=1}\left[\sum^{(n-1)}_{j=k} \frac{(np)^j}{j!} \right]^2 - 1)^2} \label{eq:dvar2}
\end{eqnarray}
\subsection{2.4 Simulation Studies}
We conducted simulation studies to evaluate the accuracy of the proposed mean and variance expressions for each version of Erd\H{o}s-R\'{e}nyi edge overlap. We simulated 5,000 realizations of networks with $n =1,000$ nodes for various values of $p \in (0,1)$. The mean and variance of edge overlap was calculated for each network realization, and those values subsequently averaged over all simulations. We considered values of $p > 1/n$, such that the resulting average degree $np > 1$, which ensures (asymptotically) that the graphs have non-vanishing largest connected components.
Figure \ref{fig:sims} displays the simulation results and accuracy of our approximations. The top row contains the results for the mean unweighted overlap (Figure \ref{fig:sims}a), mean weighted overlap (Figure \ref{fig:sims}b) and mean directed overlap (Figure \ref{fig:sims}c). In each plot, the red dots represent the simulated results, black lines represent the theoretical values using the first approach and blue lines the second approach. Note that each expression for average overlap is equivalent for the two approaches, making only the black lines visible. The bottom row of panels shows the results for the variance of unweighted overlap (Figure \ref{fig:sims}d), weighted overlap (Figure \ref{fig:sims}e) and directed overlap (Figure \ref{fig:sims}f). In each plot, black lines represent the theoretical values using the first approach, blue lines the second approach, and the red dots the simulated values.
For each version of overlap, our theoretical approximations of the mean closely match the simulations, with the unweighted case providing the best fit for all values of $np$. The approximations of the variance are overall not as accurate, with the quality of the fit depending on the value of $np$. In the unweighted case (Figure \ref{fig:sims}d), both theoretical approaches match the simulated values for average degree $np \geq 10$ until about $np = 100$. The first approximation then deviates from the simulated values, followed by the second approach deviating from them when $np \approx 300$. In the weighted case (Figure \ref{fig:sims}e) the first approximation is more accurate than the second for average degree less than or equal to about 30. The approaches are then equally precise until the average degree is approximately 170; after this point, the second approximation is closer to the simulated values. In the directed case (Figure \ref{fig:sims}f) the two approximations are equivalent and closely match the simulated values until the average degree reaches about 10. After that point, approach two is more accurate. Furthermore, in all cases, both approximations systematically overestimate variability. We stress that this overestimation leads to inflated standard errors and thus to conservative hypothesis tests, which is preferable to the opposite situation, i.e., having deflated standard errors and anti-conservative tests.
\subsection{2.5 Data Analysis}
As an application of our derivations to analysis of empirical social networks, we used social network data collected in 2006 from 75 villages housed in 5 districts in rural southern Karnataka, India, all within 3 hours driving distance from Bangalore (Figure \ref{fig:india}) \cite{Banerjee}. The data were collected as part of a study that examined how participation in a microfinance program diffuses through social networks. First, a baseline survey was conducted in all 75 villages. The survey consisted of a village questionnaire, a full census that collected data on all households in the villages, and a detailed follow-up survey fielded to a subsample of individuals. The village questionnaire collected data on village leadership, the presence of pre-existing non-governmental organizations (NGOs) and savings self-help groups and various geographical features of the area. The household census gathered demographic information, GPS coordinates of each household and data on a variety of amenities for every household in each village (roof type, latrine type, and access to electric power). The individual surveys were administered to a random sample of villagers in each village and were stratified by religion and geographic sub-location. Over half of the households in each stratification were sampled, yielding a sample of about 46$\%$ of all households per village. The individual questionnaire asked for information including age, sub-caste, education, language, native home, and occupation of the person. Additionally, the survey included social network data along 12 dimensions: friends or relatives who visit the respondent's home, friends or relatives the respondent visits, any kin in the village, nonrelatives with whom the respondent socializes, those who the respondent receives medical advice from, who the respondent goes to pray with, from whom the respondent would borrow money, to whom the respondent would lend money, from whom the respondent would borrow or to whom the respondent would lend material goods,
\begin{figure}[!t]
\centering
\vspace{5pt}
\includegraphics[scale = 0.4]{sims-all.png}
\captionsetup{width=\textwidth}
\caption{Simulation results for the mean (top row) and variance (bottom row) of each type of Erd\H{o}s-R\'{e}nyi overlap. The first column corresponds to the unweighted Erd\H{o}s-R\'{e}nyi overlap, the second column to the weighted Erd\H{o}s-R\'{e}nyi overlap and the third to the directed Erd\H{o}s-R\'{e}nyi overlap case. The top row panels (a), (b) and (c) plot the average overlap on the $y$-axis and average degree ($np$) on the $x$-axis. The red dots represent values from the simulations, the black line represents the theoretical outcome using approach 1 and the blue line represents the theoretical outcome using approach 2. Note that the blue lines are completely covered by the black lines since the values for average overlap are the same for both approaches. The bottom row panels (d), (e) and (f) plot the variance of edge overlap on the $y$-axis and average degree ($np$) on the $x$-axis. In each plot, the red dots represent values from the simulations, the black line represents the theoretical outcome using approach 1 and the blue line represents the theoretical outcome using approach 2.}
\label{fig:sims}
\end{figure}
\noindent from whom the respondent gets advice, and to whom the respondent gives advice.
\begin{figure}[!th]
\caption{A map of the districts of Karnataka, India. The five districts colored in green house all of the villages included in the data set. The districts included are Bangalore, Bangalore Rural, Kolar, Ramanagara and Chikballapura \cite{IndiaData, IndiaR}.}
\label{fig:india}
\centering
\vspace{5pt}
\includegraphics[scale = 0.2]{karn_map.png}
\captionsetup{width=\textwidth}
\end{figure}
The median pairwise distance between villages was 46 km and the number of cross-village ties was minimal, allowing the villages to be regarded as independent networks. The villages were linguistically homogeneous but had variability in caste. Each village contained anywhere from 354 to 1775 residents, with a total population of 69,441 people in the 75 villages combined. The number of edges across all social networks totaled 2,361,745 which included 480 self-loops and 6,402 isolated dyads. The average degree was 6.79 (standard deviation of 4.03), and the average number of connected components was 17.99 per village. Among the respondents for whom covariate data was collected via the individual surveys, 55.4\% were female and 44.6\% were male. The mean age was 39 years with a range of 10 to 99 years. Four different castes were represented: scheduled caste, scheduled tribe, general caste, and OBC (``other backward castes''), with a majority of respondents members of the general and OBC castes ($\approx$ 69.5\%) \cite{Banerjee}.
\begin{table}[htb]
\centering
\vspace{5pt}
\begin{tabular}{|ccl|}
\hline
Label & \hspace{0.75cm} & Type of social interaction \\
\hline
\hline
1 && The respondent borrows money from this individual\\
2 && The respondent gives advice to this individual \\
3 && The respondent helps this individual make a decision \\
4 && The respondent borrows kerosene or rice from this individual\\
5 && The respondent lends kerosene or rice to this individual\\
6 && The respondent lends money to this individual\\
7 && The respondent obtains medical advice from this individual\\
8 && The respondent engages socially with this individual \\
9 && The respondent is related to this individual\\
10 && The respondent goes to temple with this individual\\
11 && The respondent has visited this individual's home\\
12 && The respondent has been invited to this individual's home\\
\hline
\end{tabular}
\caption{The types of social interactions recorded for individuals in each village. }
\label{table:relations}
\end{table}
We first calculated the average unweighted overlap for each type of social relationship (labeled 1-12, see Table \ref{table:relations}) for each village by treating all ties as undirected and by removing all self-loops since they do not contribute to edge overlap (Figure \ref{fig:full_raw}). Then we standardized each average overlap by subtracting the expected mean and dividing by the standard deviation under the null, using the results from the unweighted Erd\H{o}s-R\'{e}nyi overlap derivations (first approach) discussed above (Figure \ref{fig:full_stand} in Appendix C). We stratified edges according to the availability of nodal attributes (since not all villagers completed an individual survey), sex, caste and age. Here we detail our results from stratifying by sex with Figures \ref{fig:sex_raw} and \ref{fig:sex_stand} showing raw and standardized overlap for female-female (F/F), male-male (M/M) and male-female (M/F) ties. For details and figures of stratification by attribute availability, age and caste, see Appendix C.
We next collapsed the twelve unweighted networks into one weighted network. Specifically, the weight of a tie between two individuals corresponds to the number of types of social relationships they are engaged in with each other. For example, if individual $i$ borrows money from, gives advice to and goes to temple with individual $j$, the weight of the (undirected) tie between $i$ and $j$ would be equal to 3. Similar to the unweighted networks, we stratified the weighted networks by nodal attributes, including the presence or absence of attribute information, sex, caste and age. Figure \ref{fig:sex_w} shows the distributions of raw and standardized weighted overlap for F/F, M/M and M/F ties. See Appendix C for figures stratified by attribute availability, caste and age.
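The collapsing step can be written compactly as below; the function assumes that the twelve relationship types are given as symmetric 0/1 adjacency matrices over a common node set, which matches our construction but is otherwise a bookkeeping assumption.
\begin{verbatim}
import numpy as np

def collapse_layers(layers):
    # layers: list of twelve (n, n) 0/1 symmetric adjacency matrices
    W = np.sum(layers, axis=0)  # w_ij = number of relationship types
    np.fill_diagonal(W, 0)      # self-loops do not contribute to overlap
    return W
\end{verbatim}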
\section{3. Results}
Here we detail our observations of the figures in the previous section where overlap is stratified by sex. For explanations about the figures detailing stratification by attribute information, caste and age, see Appendix D. In Figure \ref{fig:sex_raw}, the median average unweighted overlap is the largest for F/F ties, followed by M/F ties and then M/M ties. There is a clear separation in the values of average overlap between F/F and M/M ties with no overlap in values for interaction types 1, 2, 3, 4, 5, 6, 8, and 11. This suggests that women in these villages tend to form `cliques', tighter friendship circles where most individuals interact with each other more regularly and intensely than others in the same setting, much more than men for every type of social interaction. This kind of social development is quite common among females and has been studied in the social sciences \cite{Alison1, Alison2}. However, this trend could also be due to the significant difference in the average degree for males and females across the villages (Figure \ref{fig:sex_degrees} in Appendix C). The degrees of two attached nodes directly affect the value of overlap: it is easier for pairs of nodes with smaller degrees to have a higher value of overlap due to the smaller number of neighbors they need to have in common. The values of average overlap
\begin{figure}[!h]
\vspace{5pt}
\includegraphics[scale = 0.5]{overlap_gender_box_inclusive.pdf}
\captionsetup{width=\textwidth}
\caption{Distribution of average unweighted overlap for each village for each type of social interaction stratified by sex. A female individual is labeled with an `F' and a male individual is labeled with an `M'. We stratified the edges by sex, and labeled an edge between two female individuals as `F/F', an edge between two male individuals as `M/M', and an edge between a female individual and a male individual as `M/F'. The y-axis represents the proportion of average edge overlap and the x-axis represents the type of social interaction.}
\label{fig:sex_raw}
\end{figure}
\begin{figure}[!b]
\includegraphics[scale = 0.5]{overlap_gender_box_inclusive_stand.pdf}
\captionsetup{width=\textwidth}
\caption{Distribution of standardized unweighted overlap for each village for each type of social interaction stratified by sex. A female individual is labeled with an `F' and a male individual is labeled with an `M'. We stratified the edges by sex, and labeled an edge between two female individuals as `F/F', an edge between two male individuals as `M/M', and an edge between a female individual and a male individual as `M/F'. The y-axis represents the standardized value, also known as the Z-score, and the x-axis represents the type of social interaction.}
\label{fig:sex_stand}
\end{figure}
\noindent for the M/F ties are closer to the values for F/F ties than M/M ties and their distributions tend to have smaller variance compared to the other types of ties. This suggests that individuals who have mixed-sex social ties typically have more friends in common than individuals who are part of a M/M social tie. Interestingly, when the average overlap values are standardized, which effectively adjusts for differences in average degree, M/F and M/M ties have much more similar values and are still well below the F/F ties values. The only exceptions are for interaction types 9 and 10 where the F/F and M/M ties have comparable values. All values are significantly higher than expected under the null, which is not surprising.
\par Figure \ref{fig:sex_w} shows that when ties are aggregated across interaction types, the values of average weighted overlap for F/F and M/F ties are very similar. The distribution for F/F ties has larger values and more variation, but its median is almost equivalent to that of the M/F ties distribution. It can also be seen that the values for average weighted overlap are much smaller for M/M ties; in fact there is no overlap in values between the M/M ties and the F/F and M/F ties. This again points to females having the tendency to create social `cliques' more often than males. This trend is also seen when all values are standardized (Figure \ref{fig:sex_w}b). Again, all values are significantly higher than expected for each type of tie, as we would expect from Figure \ref{fig:sex_stand} above.
\begin{figure}[!t]
\begin{subfigure}{.45\textwidth}
\centering
\vspace{5pt}
\includegraphics[width=1.05\textwidth]{woverlap_gender_box_inclusive.pdf}
\caption*{(a)}
\label{fig:sex_raw_w}
\end{subfigure}
\qquad
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[width=1.05\textwidth]{woverlap_stand_gender_box_inclusive.pdf}
\caption*{(b)}
\label{fig:sex_stand_w}
\end{subfigure}
\caption{Distribution of average weighted overlap (a) and standardized weighted overlap (b) stratified by sex. A female individual is labeled with an `F' and a male individual is labeled with an `M'. We stratified the edges by sex, and labeled an edge between two female individuals as `F/F', an edge between two male individuals as `M/M', and an edge between a female individual and a male individual as `M/F'. The y-axis in (a) represents the proportion of average weighted edge overlap, and the y-axis in (b) represents the standardized value, also known as the Z-score.}
\label{fig:sex_w}
\end{figure}
\section{4. Conclusions and Discussion}
In this paper we introduced extensions of edge overlap for weighted and directed networks. We also used the classic Erd\H{o}s-R\'{e}nyi random graph and its weighted and directed counterparts to define a null model and derive approximations for the expected mean and variance of edge overlap for each type of graph. Edge overlap can be standardized using these approximations allowing its comparison across networks of different size. We used these approximations in a data analysis of the social networks of 75 villages in rural India. We found that overall, the average proportion of overlap was much higher than expected under the null for each type of social interaction, especially when the social activity was going to temple together.
We also found that there is a marked difference in the amount of overlap between female-female ties and male-male ties, with female-female ties consistently achieving much higher values of overlap. This could be a consequence of two types of mechanisms: the difference in average degrees between males and females, and the tendency of women to form friendship `cliques' with other women much more frequently than men form the same types of friendship circles with other men. We found that in this case, men have a significantly higher degree than women across all networks. Whichever mechanism is at work here, this structural information could lead to an alternative method of eliciting social network data to optimize diffusion or intervention strategies based on the type of tie.
While our work generalizes a central microscopic network metric, making it more broadly applicable, there are limitations to our work. The Erd\H{o}s-R\'{e}nyi random graph model is a simple and somewhat naive null model in the context of social networks. This model does not preserve the degree distribution and is relatively easy to reject. An alternative would be to derive these expressions for the configuration model, which does preserve the degree distribution. However, deriving the mean and variance under the configuration model null model would be considerably more difficult. Another limitation of our mean and variance approximations is that they ignore the correlations present among the random variables in the overlap expressions. In each version of overlap, the number of common neighbors is constrained by the degree of the edge-sharing nodes, making the numerator dependent upon the denominator. While our approximations are quite precise for the majority of values of mean degree, they could be improved if these correlations were also included.
\section{Acknowledgments}
We thank Banerjee et al. for making the India data set publicly available.
\begin{filecontents}{authors.bib}
@article{Skellam,
author = "Skellam, J.G.",
journal = {Journal of the Royal Statistical Society},
title = {The Frequency Distribution of the Difference Between Two Poisson Variates Belonging to Different Populations},
volume = {109},
year = {1946},
}
@misc{Lecture,
author = "Reinert, G.",
title = {Probability and Statistics for Network Analysis},
institution = {University of Oxford},
howpublished = {University Lecture},
year = {2012},
}
@article{Kim2,
author = {Kim, D. and A. Hwong and D. Stafford and D. Hughes and A. O'Malley and J. Fowler and N. Christakis},
journal = {The Lancet},
title = {Social network targeting to maximise population behaviour change: a cluster randomised controlled trial},
volume = {386},
year = 2015,
}
@article{Erdos,
author = "Erd\H{o}s, P. and A. R\'{e}nyi ",
journal = {Publicationes Mathematicae},
title = {On random graphs I.},
volume = {6},
pages = {290-297},
year = {1959},
}
@article{Erdos2,
author = "Erd\H{o}s, P. and A. R\'{e}nyi ",
journal = {Publ. Math. Inst. Hung. Acad. Sci.},
title = {On the evolution of random graphs},
volume = {5},
year = {1960},
}
@article{Polio,
author = {Onnela, J-P. and B. Landon and AL Kahn and D. Ahmed and H. Verma and A. O'Malley and S. Bahl and R. Sutter and N. Christakis},
journal = {Social Science and Medicine},
title = {Polio vaccine hesitancy in the networks and neighborhoods of Malegaon, India},
year = {2016},
}
@article{Staples,
author = {Staples, P. and E. Ogburn and J-P. Onnela},
journal = {Scientific Reports},
title = {Incorporating contact network structure in cluster randomized trials},
volume = {5},
year = {2015},
}
@report{Kim,
author = {Kim, D. and AJ O'Malley and J-P. Onnela},
title = {The Social Geography of American Medicine},
institution = {Harvard T.H. Chan School of Public Health},
year = {2016},
}
@article{Bianconi,
author = "Bianconi, G. and R. Darst and J. Iacovacci and S. Fortunato",
journal = {Physical Review E},
title = {Triadic closure as a basic generating mechanism of communities in complex networks},
volume = {90},
year = {2014},
}
@article{Diego,
author = "Garlaschelli, D.",
journal = {New Journal of Physics},
title = {The weighted random graph model},
volume = {11},
year = {2009},
}
@article{Tore,
author = "Tore, O.",
journal = {Social Networks},
title = {Triadic closure in two-mode networks: Redefining the global and local clustering coefficients},
volume = {35},
pages = {159-167},
year = {2013},
}
@article{Newman3,
author = "Newman, M.",
journal = {Phys. Rev. E},
title = {Properties of highly clustered networks},
volume = {68},
year = {2003},
}
@article{Porter,
author = "Porter, M. and Jukka-Pekka Onnela and Peter J. Mucha",
journal = {Notices of the AMS},
title = {Communities in Networks},
volume = {56},
pages = {1082 - 1166},
year = {2009},
}
@article{Handcock2,
author = "Handcock, M. and K. Gile",
title = {Modeling Social Networks from Sampled Data},
journal = {AMS},
year = {2008},
}
@article{Handcock,
author = "Handcock, M. and K. Gile",
title = {Modeling Social Networks with Sampled or Missing Data},
year = {2007},
}
@article{Sima,
author = "Sima, C. and K. Panageas and G. Heller and D. Schrag",
title = {Analytical Strategies for Characterizing Chemotherapy Diffusion with Patient-Level Population-Based Data},
journal = {Appl Health Econ Health Policy},
volume = {8},
pages = {37-51},
year = {2010},
}
@article{Valente,
author = "Valente, T.",
title = {Network Models and Methods for Studying the Diffusion of Innovations},
journal = {Models and Methods in Social Network Analysis},
pages = {98-116},
year = {2005},
}
@article{Banerjee,
author = "Banerjee, A. and A. Chandrasekhar and E. Duflo and M. Jackson",
title = {The Diffusion of Microfinance},
journal = {Science},
volume = {341},
year = {2013},
}
@article{VanderWeele,
author = "VanderWeele, T.",
title = {Sensitivity Analysis for Contagion Effects in Social Networks},
journal = {Sociological Methods and Research},
volume = {40},
pages = {240-255},
year = {2011},
}
@article{Landon,
author = "Landon, B. and N. Keating and M. Barnett and J-P. Onnela and S. Paul and A. O'Malley and T. Keegan and N. Christakis",
title = {Variation in Patient-Sharing Networks of Physicians Across the United States},
journal = {JAMA},
volume = {308},
pages = {265-273},
year = {2012},
}
@article{NewmanEpi,
author = "Newman, M.E.J.",
title = {Properties of highly clustered networks},
journal = {Physical Review E},
volume = {63},
year = {2003},
}
@article{Bu,
author = "Bu, Y. and S. Gregory and H. Mills",
title = {Efficient local behavioral change strategies to reduce the spread of epidemics in networks},
journal = {Physical Review E},
volume = {88},
year = {2013},
}
@article{Centola,
author = "Centola, D. and M. Macy and V. Eguiluz",
title = {Cascade Dynamics of Multiplex Propagation},
journal = {Physica A},
volume = {374},
pages = {449-456},
year = {2007},
}
@misc{IndiaData,
author = {Hijmans, R.},
year = {2009},
title = {Global Administrative Areas: Boundaries Without Limits},
note = {Accessed: 2016-05-12}
}
@misc{IndiaR,
author = {Mukerjee, P.},
year = {2013},
title = {Vizualyse},
note = {Accessed: 2016-05-12}
}
@article{Newman2,
author = "Newman, M.E.J.",
title = {Communities, modules and large-scale structure in networks},
journal = {Nature Physics},
volume = {8},
pages = {25-31},
year = {2012},
}
@article{Watts,
author = "Watts, D.J. and S.H. Strogatz",
title = {Collective dynamics of `small-world' networks},
journal = {Nature},
volume = {393},
pages = {440-442},
year = {1998},
}
@book{Newman,
author = "Newman, M.E.J.",
title = {Networks: An Introduction},
publisher = {Oxford University Press},
year = {2010},
}
@book{Faust,
author = "Wasserman, S. and K. Faust",
title = {Social network analysis: Methods and applications},
publisher = {Cambridge University Press},
year = {1994},
}
@book{Ballobas,
author = "Bollob\'{a}s, B.",
title = {Random Graphs},
publisher = {Academic Press},
year = {1985},
}
@book{Kendall,
author = "Stuart, A. and K. Ord",
title = {Kendall's Advanced Theory of Statistics: v.1},
publisher = {Wiley-Blackwell},
year = {1998},
}
@book{Johnson,
author = "Elandt-Johnson, R. and N. Johnson",
title = {Survival Models and Data Analysis},
publisher = {John Wiley \& Sons},
year = {1998},
}
@phdthesis{Oxford,
title = {Motif Counts, Clustering Coefficients and Vertex Degrees in Models of Random Networks},
school = {Oxford University},
author = {Lin, K.},
year = {2016},
type = {{PhD} dissertation},
}
@phdthesis{Alison1,
author = "Hwong, A. and J.P. Onnela and D. Kim and D. Stafford and D. Hughes and N. Christakis",
title = {Not Created Equal: Sex Differences in the Network-Based Diffusion of Public Health Interventions},
school = {Harvard T.H. Chan School of Public Health},
year = {2016},
}
@phdthesis{Alison2,
author = "Hwong, A. and P. Staples and J.P. Onnela",
title = {Simulating Network-Based Public Health Interventions in Low-Resource Settings},
school = {Harvard T.H. Chan School of Public Health},
year = {2016},
}
@article{Chris,
author = "Christakis, N. A. and J. H. Fowler",
title = {The spread of obesity in a large social network over 32 years},
journal = {N. Engl. J. Med.},
volume = {357},
pages = {370-379},
year = {2007},
}
@article{Chris2,
author = "Christakis, N. A. and J. H. Fowler",
title = {The collective dynamics of smoking in a large social network},
journal = {N. Engl. J. Med.},
volume = {358},
pages = {2249-2258},
year = {2008},
}
@article{Fowl,
author = "Fowler, J. H. and N. A. Christakis",
title = {Dynamic spread of happiness in a large scale network: longitudinal analysis over 20 years in the Framingham Heart Study},
journal = {BMJ},
volume = {337},
year = {2008},
}
@article{Fowl2,
author = "Fowler, J. H. and N. A. Christakis",
title = {Estimating peer effects on health in social networks},
journal = {J. Health Econ.},
volume = {27},
pages = {1400-1405},
year = {2008},
}
@article{Onnela,
author = "Onnela, J-P. and J. Saramaki and J. Hyvonen and G. Szabo and D. Lazer and K. Kaski and J. Kertesz and A-L. Barabasi",
title = {Structure and tie strengths in mobile communication networks},
journal = {PNAS},
volume = {104},
pages = {7332-7336},
year = {2007},
}
@article{Kumpula,
author = "Kumpula, J. and J-P. Onnela and J. Saramaki and K. Kaski and J. Kertesz",
title = {Emergence of Communities in Weighted Networks},
journal = {Physical Review Letters},
volume = {99},
year = {2007},
}
@article{Saramaki,
author = "Saramaki, J. and M. Kivela, J-P. Onnela and K. Kaski and J. Kertesz",
title = {Generalizations of the clustering coefficient to weighted complex networks},
journal = {Physical Review E},
year = {2007},
}
@article{JPO,
author = "Onnela, J-P. and J. Saramaki and J. Hoyvonen and G. Szabo and M. Argollo de Menezes and K. Kaski and A-L. Barabasi and J. Kertesz",
title = {Analysis of large-scale weighted network of one-to-one human communication},
journal = {New Journal of Physics},
volume = {9},
year = {2007},
}
@article{Milgram,
author = "Milgram, S.",
title = {The Small-World Problem},
journal = {Psychology Today},
volume = {1},
pages = {61-67},
year = {1967},
}
@article{Granovetter,
author = "Granovetter, M.",
title = "The Strength of Weak Ties",
journal = {American Journal of Sociology},
volume = {78},
pages = {1360-1380},
year = {1973},
}
@article{Good,
author = "Goodreau, S. and J. Kitts and M. Morris",
title = "Birds of a Feather, or Friend of a Friend? Using Exponential Random Graph Models to Investigate Adolescent Social Networks",
journal = {EPL},
volume = {87},
year = {2009},
}
@report{Harling,
author = {Harling, G. and J-P. Onnela},
title = {Impact of degree truncation on the spread of a contagious process on networks},
institution = {Harvard T.H. Chan School of Public Health},
year = {2016},
}
@article{Fort,
author = "Fortunato, S.",
title = {Community detection in graphs},
journal = {Physics Reports},
volume = {486},
pages = {75-174},
year = {2010},
}
\end{filecontents}
\bibliographystyle{abbrv}
|
1,116,691,499,807 | arxiv | \section{Introduction}
The distribution of satellite galaxies around the Milky Way (MW) is highly anisotropic: they align in a narrow plane perpendicular to the Galactic disc \citep{LyndenBell1976,Kroupa2005}.
Many globular clusters and streams in the MW halo are part of the same structure, which has been termed the vast polar structure \citep[VPOS, ][]{Pawlowski2012a}.
While our knowledge of fainter objects is affected by the uneven sky coverage of surveys in which they are detected, the 11 brightest ('classical') satellites are generally believed to be a less biased distribution. Proper motion measurements reveal that eight of them are consistent with co-orbiting in the VPOS \citep{Pawlowski2013b}.
This phase-space correlation of the MW satellites is difficult to reconcile with expectations based on the current standard model of cosmology. Dark matter sub-halos in cosmological simulations do not show the observed degree of coherence. Claims of consistency with simulations \citep[e.g.][]{DOnghia2008,Li2008,Libeskind2009,Deason2011,Lovell2011,Wang2013,Bahl2014} have been found not to hold in view of additional observational data \citep{Metz2009,Pawlowski2012b,Pawlowski2013b} or to be based on flawed analyses \citep{Ibata2014,Pawlowski2012b,Pawlowski2014b}.
The possibility that the environment of a host affects the chance to find correlated sub-halo planes has not yet been investigated. The MW is part of the Local Group (LG), together with M31. The discovery of an apparently rotating satellite plane around M31 \citep{Ibata2013} and of two planes containing almost all isolated dwarf galaxies in the LG, the dominant one of which aligns with the Magellanic Stream, which in turn is part of the VPOS \citep{Pawlowski2012a,Pawlowski2013a}, indicates that the nearby environment is possibly related to the satellite structures \citep[see also][]{Pawlowski2014}.
The high-resolution cold dark matter simulations of the 'Exploring the Local Volume in Simulations' (ELVIS) project \citep{GarrisonKimmel2014} offer an opportunity to test whether satellite planes are more likely to be present around paired hosts. Half of its 48 host halos are in a paired configuration, while the other half are isolated but matched in mass.
We use this dataset to test whether the probability to find VPOS-like satellite planes among the 11 most-massive satellites is different for paired and isolated hosts and whether being part of a paired group affects the distribution of the 11 to 99 most-massive satellites.
Sect. \ref{sect:method} summarizes the simulations, sample selection and analysis, Sect. \ref{sect:results} presents our results, and conclusions are drawn in Sect. \ref{sect:conclusion}.
\section{Method}
\label{sect:method}
The ELVIS suite \citep{GarrisonKimmel2014} is a set of cosmological zoom simulations focussing on 12 pairs of main halos with masses, separations and relative velocities similar to those of the MW and M31. A control sample of 24 isolated halos matching the paired ones in mass has also been simulated. The simulations are dissipationless ('dark-matter-only'), based on WMAP-7 cosmological parameters \citep{Larson2011} and complete for sub-halo masses down to $\approx 10^7 M_{\sun}$, thus resolving objects comparable to the classical MW satellites.
We use the publicly available present day ($z = 0$) halo catalogues. For the case of paired hosts, both sub-halo systems are analysed in the same way. To be comparable to the MW satellite system, only sub-halos between 15 and 260\,kpc from the center of their host are considered.
They are ranked by stellar mass, as determined by abundance matching (AM) applied to the maximum mass $M_{\mathrm{peak}}$\ they had over their history. While different AM prescriptions, such as the \citet{Behroozi2013} model or the preferred model by \citet{GarrisonKimmel2014}, differ in the stellar mass assigned to a given $M_{\mathrm{peak}}$, they preserve the $M_{\mathrm{peak}}$-ranking of satellites. As we select the highest-ranked satellites (the absolute mass is not part of the selection) the different prescriptions result in the same selection.
For the comparison to the observed VPOS the analysis accounts for the obscuration of satellites by the MW. All sub-halos within $\pm 11.5^{\circ}$\ from an obscuring disc (corresponding to 20\% of the sky) are ignored. For each of the 48 sub-halo systems 100 realisations are generated by drawing the top 11 sub-halos from outside the obscured area of different randomly oriented obscuring discs.
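The construction of one realisation can be sketched as follows; the positions are assumed to be given relative to the host centre and to be already restricted to the 15--260\,kpc distance range, while the random disc normal and the $11.5^{\circ}$ latitude cut follow the procedure described above.
\begin{verbatim}
import numpy as np

def top11_outside_disc(pos, mpeak, rng, b_lim=11.5):
    # pos: (m, 3) sub-halo positions [kpc]; mpeak: (m,) peak masses
    v = rng.normal(size=3)
    v /= np.linalg.norm(v)               # random obscuring-disc normal
    b = np.degrees(np.arcsin(np.abs(pos @ v)
                             / np.linalg.norm(pos, axis=1)))
    keep = np.nonzero(b >= b_lim)[0]     # outside the obscured band
    return keep[np.argsort(mpeak[keep])[::-1][:11]]  # top 11 by M_peak
\end{verbatim}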
For each realisation a plane is fitted to the 11 sub-halos. The method is identical to the one applied to the observed satellite positions \citep{Pawlowski2013a} and the Millennium-II simulation \citep{Pawlowski2014b}. It finds the principal axes of the satellite distribution by determining the eigenvectors of the moment-of-inertia tensor constructed using non-mass-weighted positions. The orientation of the best-fit plane is described by its normal vector, and the following parameters describing the shape of the distribution are determined, as illustrated in the sketch after the list (parameters measured for the MW satellites using positions compiled by \citet{McConnachie2012} are given in brackets):
\begin{itemize}
\item $r_{\mathrm{per}}$, the root-mean-square (RMS) height of the sub-halos perpendicular to the plane ($r_{\mathrm{per}}^{\mathrm{obs}} = 19.6$\,kpc).
\item $r_{\mathrm{par}}$, the RMS radius of the sub-halos projected into (parallel to) the best-fit plane measured from the center of their host ($r_{\mathrm{par}}^{\mathrm{obs}} = 129.5$\, kpc).
\item $c/a$\ and $b/a$, the short- and intermediate-to-long RMS axis ratios, respectively ($(c/a)^{\mathrm{obs}} = 0.182$\ and $(b/a)^{\mathrm{obs}} = 0.508$).
\end{itemize}
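A minimal sketch of this plane fit, assuming positions relative to the host centre and a best-fit plane passing through it, is:
\begin{verbatim}
import numpy as np

def plane_fit_parameters(pos):
    # pos: (11, 3) sub-halo positions relative to the host [kpc]
    T = pos.T @ pos                   # non-mass-weighted inertia tensor
    evals, evecs = np.linalg.eigh(T)  # eigenvalues in ascending order
    normal = evecs[:, 0]              # shortest axis = plane normal
    d_per = pos @ normal              # offsets perpendicular to the plane
    r_per = np.sqrt(np.mean(d_per ** 2))  # RMS height
    r_par = np.sqrt(np.mean((pos ** 2).sum(axis=1)
                            - d_per ** 2))  # RMS radius in the plane
    a, b, c = np.sqrt(evals[::-1] / len(pos))  # RMS long/interm./short axes
    return normal, r_per, r_par, c / a, b / a
\end{verbatim}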
Eight of the 11 MW satellites have orbital poles which align with the normal of the VPOS, indicating that these satellites co-orbit within the structure. We consider this essential property of the VPOS by measuring the following two parameters, computed as in the sketch after the list, and comparing them to those determined for the observed MW satellites using the same method as \citet{Pawlowski2013b}:
\begin{itemize}
\item $\Delta_{\mathrm{std}}$, the spherical standard deviation of the eight most-concentrated orbital poles which measures their concentration ($\Delta_{\mathrm{std}}^{\mathrm{obs}} = 29.3^{\circ}$).
\item $\theta_{\mathrm{VPOS}}$, the angle between the average direction of the eight most-concentrated orbital poles and the normal defining the best-fit plane, which measures the alignment with the plane ($\theta_{\mathrm{VPOS}}^{\mathrm{obs}} = 18.9^{\circ}$).
\end{itemize}
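Both quantities can be computed from unit orbital-pole vectors as sketched below; the exhaustive search over all 8-out-of-11 subsets is our reading of `most-concentrated' and is used here only for illustration.
\begin{verbatim}
import numpy as np
from itertools import combinations

def spherical_std(poles):
    # poles: (k, 3) array of unit orbital-pole vectors
    mean_dir = poles.sum(axis=0)
    mean_dir /= np.linalg.norm(mean_dir)
    ang = np.degrees(np.arccos(np.clip(poles @ mean_dir, -1.0, 1.0)))
    return np.sqrt(np.mean(ang ** 2)), mean_dir

def delta_std_and_theta(poles, normal, k=8):
    # minimise the spherical standard deviation over all k-subsets
    best = min((spherical_std(poles[list(s)])
                for s in combinations(range(len(poles)), k)),
               key=lambda t: t[0])
    delta_std, mean_dir = best
    theta = np.degrees(np.arccos(np.clip(abs(mean_dir @ normal),
                                         0.0, 1.0)))
    return delta_std, theta
\end{verbatim}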
The 48 hosts are split up into two classes: 20 paired and 24 isolated halos. Like \citet{GarrisonKimmel2014} we exclude the halos Serena \& Venus and Siegfried \& Roy from the paired sample because unlike the LG these contain a third massive halo at a distance of about 1\,Mpc. In addition, for each realisation a randomized system having the same radial distribution as the simulated systems is generated by rotating each sub-halo into a random direction before applying the obscuration cut. These show which properties are to be expected for isotropic systems.
\section{Results}
\label{sect:results}
\subsection{Searching VPOS-analogues}
\label{sect:VPOS}
\begin{figure*}
\centering
\includegraphics[width=80mm]{fig1a.eps}
\includegraphics[width=80mm]{fig1b.eps}
\includegraphics[width=80mm]{fig1c.eps}
\includegraphics[width=80mm]{fig1d.eps}
\caption{
Median (symbols) and maximum and minimum values (error bars) for the disc-fit parameters determined from 100 random obscuring disc realisations for each of the paired and isolated hosts. The blue dot gives the parameters determined from the 11 most-luminous MW satellites. Models within the areas marked by the dashed lines reproduce these VPOS properties. The green contours contain 50, 90 and 95\% of all randomized realizations.
}
\label{fig:scatter}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=80mm]{fig2a.eps}
\includegraphics[width=80mm]{fig2b.eps}
\includegraphics[width=80mm]{fig2c.eps}
\includegraphics[width=80mm]{fig2d.eps}
\caption{
Cumulative distributions of $r_{\mathrm{per}}$, $c/a$, $\Delta_{\mathrm{std}}$\ and $\theta_{\mathrm{VPOS}}$. The blue dashed line indicates the respective value measured for the VPOS.
Each halo system contributes 100 realisations with randomly oriented obscuring discs to the cumulative curves, which smooths the distributions, but the different realisations for one host are not independent. Therefore 100 Kolmogorov--Smirnov tests have been performed for combinations containing only one realisation per host and the median parameters of these tests have been determined. They show that with the current number of hosts it is not possible to rule out the null hypotheses that the paired, isolated and randomized distributions have been drawn from the same parent distribution.
}
\label{fig:cumulative}
\end{figure*}
To be similarly correlated in phase-space as the observed MW satellites, a simulated system has to have at least as extreme plane-fit parameters, which results in the criteria compiled in Table \ref{tab:VPOSresults}. It also lists which fractions of realisations fulfil the criteria. The distribution of the plane-fit parameters are shown in Figures \ref{fig:scatter} and \ref{fig:cumulative}.
As apparent from Fig. \ref{fig:scatter}, the scatter of the plane-fit parameters for individual hosts is comparable to the scatter of the medians of these properties and to the scatter present among the randomized systems. The sub-halos in the different realisations are not naturally distributed in a way comparable to the observed VPOS. The simulated systems tend to be slightly closer to the extreme VPOS parameters than the randomized ones, but the measured VPOS parameters are more extreme than all median values and than almost all individual realisations (see Table \ref{tab:median} for the median values and extrema for each host).
\subsubsection{Absolute and relative shape}
To be at least as flattened as the VPOS in \textit{absolute} dimension, sub-halo systems must have $r_{\mathrm{per}} \leq r_{\mathrm{per}}^{\mathrm{obs}}$ (Criterion 1 in Table \ref{tab:VPOSresults}). Less than 1\% of the paired or isolated realisations fulfil this criterion. It is apparent from the cumulative distribution of $r_{\mathrm{per}}$\ (Fig. \ref{fig:cumulative}) that sub-halo systems with $r_{\mathrm{per}} \lesssim 35$\,kpc are slightly less frequent among paired than isolated systems. Afterwards, the curve for isolated systems rises more slowly than that of the paired ones, so the average $r_{\mathrm{per}}$\ is slightly smaller for paired than for isolated systems. The cumulative distribution of the randomized systems is similar to the paired one but offset by about 7\,kpc to larger values.
Most systems fulfil criterion 2 ($r_{\mathrm{par}} \geq r_{\mathrm{par}}^{\mathrm{obs}}$) and are thus sufficiently radially extended to be comparable to the VPOS. More narrow planes (small $r_{\mathrm{per}}$) tend to be found for more radially concentrated (small $r_{\mathrm{par}}$) satellite distributions. This can be best seen from the diagonal shape of the contours for the randomized systems and the distribution of the median values for individual hosts in the first panel of Fig. \ref{fig:scatter}.
Criteria 3 and 4 in Table \ref{tab:VPOSresults} use the axis ratios to compare the \textit{relative} shape of the sub-halo systems with that of the MW satellites. The results for $c/a$\ are similar to those for $r_{\mathrm{per}}$, both properties scatter around a linear relation (Fig. \ref{fig:scatter}), and also the cumulative distributions show the same behaviour (Fig. \ref{fig:cumulative}). Again the measured flattening of the VPOS is not naturally found among the sub-halo systems, only 0.2\% of the paired and 1.9\% of the isolated realisations have $(c/a) \leq (c/a)^{\mathrm{obs}}$. The distribution and scatter in $c/a$\ and $b/a$\ (Fig. \ref{fig:scatter}) is similar among paired, isolated, and randomized systems, but the latter tend to have slightly larger $c/a$\ than the simulated sub-halo systems.
\subsubsection{Orbital pole concentration and alignment}
In addition to being similarly flattened, the sub-halo systems also need to have at least as concentrated orbital poles ($\Delta_{\mathrm{std}} \leq \Delta_{\mathrm{std}}^{\mathrm{obs}}$, criterion 5 in Table \ref{tab:VPOSresults}) which are at least as well aligned with the normal to the best-fit plane ($\theta_{\mathrm{VPOS}} \leq \theta_{\mathrm{VPOS}}^{\mathrm{obs}}$, criterion 6) to be similar to the observed VPOS.
Like for the flattening criteria, paired and isolated systems do not naturally fulfil the $\Delta_{\mathrm{std}}$\ criterion ($\lesssim 1.5$\%, see Table \ref{tab:VPOSresults}). Paired systems are a bit more likely to have the most-concentrated poles ($\Delta_{\mathrm{std}} \lesssim 30^{\circ}$), but in general the isolated systems tend to have slightly smaller $\Delta_{\mathrm{std}}$ (Fig. \ref{fig:cumulative}). Randomized systems have $\approx 5^\circ$\ larger $\Delta_{\mathrm{std}}$\ on average, but again the general behaviour (Fig. \ref{fig:cumulative}) and scatter (Fig. \ref{fig:scatter}) are similar.
The cumulative distributions of $\theta_{\mathrm{VPOS}}$\ for paired and isolated systems (last panel Fig. \ref{fig:cumulative}) are almost identical. There is no indication that the existence of a neighbouring main halo affects the orbital alignment of sub-halos with their preferred plane.
Realisations fulfilling $\theta_{\mathrm{VPOS}} \leq \theta_{\mathrm{VPOS}}^{\mathrm{obs}} = 18.9^{\circ}$\ are almost twice as likely in simulated systems as in the randomized ones (17 versus 10\%, see Table \ref{tab:VPOSresults}).
\subsubsection{Combined Criteria}
Only if a plane of sub-halos \textit{simultaneously} meets the different criteria defining the VPOS properties can it be said to reproduce the observed situation. The two essential properties of the VPOS are its narrow extent (measured with $r_{\mathrm{per}}$\ or $c/a$) and the alignment of the orbital poles of the satellites (measured with $\Delta_{\mathrm{std}}$). The last panel in Fig. \ref{fig:scatter} plots $\Delta_{\mathrm{std}}$\ against $r_{\mathrm{per}}$\ and shows that planes in simulated and randomized systems tend to have parameters that are significantly larger than those of the observed MW satellite system. Sub-halo systems that are sufficiently flattened are not sufficiently co-orbiting, while those which co-orbit are not sufficiently flattened. None of the 4800 randomized systems reproduce either of the two combined criteria 8 and 9 in Table \ref{tab:VPOSresults}, setting an upper limit on the fraction of such systems of 0.02\%. Likewise, none of the 2400 systems around isolated hosts fulfils the combined criteria (upper limit of 0.04\%). The sub-halo system with median values in $r_{\mathrm{per}}$\ and $c/a$\ coming closest to the observed ones belongs to the isolated host iRomulus, but none of its realisations is simultaneously sufficiently flattened and has sufficiently concentrated orbital poles.
Among the systems around paired hosts only one out of 2000 realisations (0.05\%) fulfils the combined criteria.
An agreement with the VPOS in two properties is thus extremely rare and requires a finely-tuned obscuring disc orientation: in 1 out of 20 hosts only 1 out of 100 randomly oriented obscuring discs produces a sample of sub-halos that shares two properties with the VPOS. The host of this particular realisation is Oates, whose sub-halo system has a relatively low median $r_{\mathrm{per}}$\ of 42\,kpc, and the lowest median $\Delta_{\mathrm{std}}$\ of $38.7^{\circ}$\ of all paired hosts (see Table \ref{tab:median}). Oates is the fourth-lowest-mass paired halo in ELVIS ('virial' mass of $M_{\mathrm{V}} = 1.2 \times 10^{12}\,M_{\sun}$) and has formed only recently (acquired half of its mass at a redshift of $z = 0.62$) \citep{GarrisonKimmel2014}. Furthermore, the realisation does not fulfil criterion 6 simultaneously: the concentrated orbital poles do not align with the plane normal as closely as observed.
Even with significantly relaxed criteria ($r_{\mathrm{per}} \leq 1.5 \times r_{\mathrm{per}}^{\mathrm{obs}}$\ and $\Delta_{\mathrm{std}} \leq 1.5 \times \Delta_{\mathrm{std}}^{\mathrm{obs}}$), the fractions of realisations reproducing these simultaneously remain between 0.5\% (paired) and 2\% (isolated).
\subsection{Dependency of satellite system shape on the number of satellites}
\label{sect:shape}
\begin{figure*}
\centering
\includegraphics[width=80mm]{fig3a.eps}
\includegraphics[width=80mm]{fig3b.eps}
\includegraphics[width=80mm]{fig3c.eps}
\includegraphics[width=80mm]{fig3d.eps}
\caption{The shape of the distribution of the top $N_{\mathrm{subhalo}}$\ sub-halos ranked by $M_{\mathrm{peak}}$\ in steps of 11. Shown are the RMS height $r_{\mathrm{per}}$, the RMS radius $r_{\mathrm{par}}$, and the short- and intermediate-to-long axis ratios $c/a$\ and $b/a$\ for each paired and isolated host (thin lines) and the averages of these two classes and the randomized systems (thick lines). The shaded area marks the range between the maximum and minimum parameters found for 48 randomized satellite systems. The blue dots indicate the parameters measured for the 11 classical MW satellites in the VPOS. The MW system is highly unusual by three out of four measures.
}
\label{fig:shape}
\end{figure*}
We now turn our attention to the dependency of the overall shape of the sub-halo distributions on the number of considered sub-halos. Fig. \ref{fig:shape} plots $r_{\mathrm{per}}$, $r_{\mathrm{par}}$, $c/a$\ and $b/a$\ for the $N_{\mathrm{subhalo}}$ = 11, 22, ..., 99 sub-halos with the largest $M_{\mathrm{peak}}$\ (largest stellar masses according to AM) for each host and the averages for the 20 paired, 24 isolated and 48 randomized systems.
As the number of satellites increases, the average $r_{\mathrm{per}}$\ first rises quickly from about 50--55 to 65--70\,kpc, and then more slowly to about 75\,kpc. The average $r_{\mathrm{per}}$\ of paired hosts remains below that of isolated hosts for all $N_{\mathrm{subhalo}}$, which remains below the average of the randomized systems. Overall the differences are small. The average of the randomized systems is only $\approx 5$\,kpc ($\lesssim 10$\%) larger than that of the paired systems. The difference between the averages for paired and isolated systems becomes smaller for larger $N_{\mathrm{subhalo}}$.
That the average absolute thickness $r_{\mathrm{per}}$\ of paired systems is lower than that of the isolated ones could either indicate that the sub-halos are on average in a more flattened configuration, or that the systems are more radially concentrated on average. The behaviour of the axis ratio $c/a$ hints at the latter explanation. The average $c/a$\ of paired and isolated systems follow the same curve, again rising steeply between $N_{\mathrm{subhalo}} = 11$ and 22 from 0.45 to 0.6 and then approaching a plateau of about 0.7 for large $N_{\mathrm{subhalo}}$\ \citep[see also][]{Wang2013}. The average \textit{relative} thickness of the sub-halo distributions is therefore independent of whether the host is part of a paired group or isolated. For randomized systems $c/a$\ is on average 0.05 to 0.1 larger ($\approx 10$\%), confirming that sub-halo systems are slightly more flattened than isotropic distributions \citep{Zentner2005}.
The average $r_{\mathrm{par}}$\ is largest for small $N_{\mathrm{subhalo}}$, i.e. sub-halos with the largest $M_{\mathrm{peak}}$\ are more radially extended \textit{in the best-fit plane}. However, the effect is minuscule, the average $r_{\mathrm{par}}$\ only changes from 150 to 140\,kpc, and might be due to the decreasing flattening for larger $N_{\mathrm{subhalo}}$\ which causes a larger component of the radial distance to contribute to $r_{\mathrm{per}}$\ instead of $r_{\mathrm{par}}$. The very small difference between paired and isolated systems is not necessarily caused by different environments, but might simply be an effect of low statistics due to the relatively small number of hosts.
The overall behaviour of the randomized and simulated systems is similar in all four parameters, which is also true for the spread in their values. This indicates that an isotropic distribution can be an acceptable zeroth-order approximation for sub-halo systems. The dominating reason why more flattened systems are found for lower $N_{\mathrm{subhalo}}$\ is then probably an effect of the number of sub-halos. An extreme case of only three sub-halos would always result in perfect planes ($r_{\mathrm{per}} = 0$\,kpc and $c/a = 0$). For small $N_{\mathrm{subhalo}}$\ only a few sub-halos are situated at large distances, but these dominate the plane fit. Sub-halos at smaller distances have small offsets from any plane passing close to the center of the host, such that the overall thickness of the distribution tends to be smaller for smaller $N_{\mathrm{subhalo}}$. For larger $N_{\mathrm{subhalo}}$\ more sub-halos will be present at large distances but outside the plane-fit, increasing the measured thickness.
\section{Conclusion}
\label{sect:conclusion}
We have investigated the phase-space distribution of the most-massive sub-halos (ranked by $M_{\mathrm{peak}}$, corresponding to a ranking in stellar mass in AM) around paired and isolated hosts in the ELVIS simulation suite \citep{GarrisonKimmel2014}. If the number of considered sub-halos is small ($N_{\mathrm{subhalo}} \lesssim 20$), the flattening of the sub-halo system depends strongly on $N_{\mathrm{subhalo}}$, with the thickness measures rising for larger $N_{\mathrm{subhalo}}$. This overall behaviour is identical for paired, isolated and randomized (isotropic) systems. The latter have an average flattening offset to slightly larger values, but the scatter among systems of each type is larger than this difference.
We have also compared the phase-space distribution of the 11 top-ranked sub-halos (accounting for obscuration by a galactic disc) with that of the 11 most-massive MW satellites. Paired hosts similar to the LG do not have a higher chance to contain sub-halo distributions which are similarly flattened as the MW satellites in the VPOS. In the analysed simulations, isolated hosts are in fact more likely to have the smallest $r_{\mathrm{per}}$\ and $c/a$, while the corresponding averages are smaller for paired hosts, which might be due to a slightly stronger radial concentration of their sub-halo systems. Paired and isolated systems also show a similar degree of orbital pole alignments.
The low rate of satellite planes that are as strongly correlated as the VPOS found in cosmological simulations such as the Millennium-II \citep{Pawlowski2014b} and the Aquarius simulations \citep{Pawlowski2013b}, is therefore most likely not affected by ignoring the host halo environments. The absence of VPOS-like structures appears to be a natural feature of dissipationless cosmological simulations. In particular, the VPOS cannot satisfactorily be understood as an extreme statistical outlier of the simulated distributions because additional objects align with the structure and more correlated satellite planes have been found in the local Universe \citep[e.g.][]{Kroupa2010,Pawlowski2012a,Ibata2013}.
This emphasizes that the search for an explanation of such structures requires different approaches. Examples for this are the inclusion of gas in cosmological simulations \citep{Khandai2014,Vogelsberger2014} or scenarios which question the association of dwarf galaxies with sub-halos, such as the formation of phase-space correlated populations of tidal dwarf galaxies \citep{Pawlowski2011,Fouquet2012,Hammer2013,Yang2014}.
\acknowledgments
We thank Shea Garrison-Kimmel and the ELVIS collaboration for making their simulations publicly available.
|
1,116,691,499,808 | arxiv | \section{Introduction}
In the era of big data, advanced machine learning techniques enable accurate data analytics for various application domains. This has incentivized commercial access to machine learning models by third-party users, e.g., machine learning as a service provided by data giants, such as Google and Amazon, allows companies to train models on their data and to sell access to these models. Although commercially attractive, these services open the door to data theft and privacy infringements. An example of this is the membership inference attack, by which an adversary attempts to infer whether an individual's data was used for training a machine learning model~\cite{10shokri2017membership}.
In this paper, we propose membership information leakage metrics to investigate the reasons behind the success of membership inference attacks. We use conditional \textbf{mutual information leakage} to measure the amount of information leakage from the trained machine learning model about the presence of an individual in the training dataset. We find an upper bound for this measure of information leakage using Kullback--Leibler divergence between the distribution of the machine learning models in the presence of a particular data record and in the absence of that data record. Following this, we define \textbf{Kullback--Leibler membership information leakage}. Using the Le Cam's inequality~\cite{Yu1997} and the Pinsker's inequality~\cite{massart2007concentration}, we show that this measure bounds the \textbf{probability of success of any adversary trying to determine if a particular data record belongs to the training dataset of a machine learning model}. This provides an information-theoretic interpretation for our choices of membership information leakage metrics.
We use the developed measures of membership information leakage to investigate factors behind the success of membership inference attacks. We first prove that the \textbf{amount of the membership information leakage is a decreasing function of the training dataset size}. This signifies that, by using a larger training dataset, the model is less over-fit to the training dataset and it is therefore harder to distinguish the training data from the rest. This particular result is applicable to general machine learning models ranging from linear regression to deep neural networks as it does not require convexity. By focusing on convex machine learning problems, we investigate other important factors behind the success of membership inference attacks. We prove that \textbf{regularization reduces the amount of membership information leakage}. This can again be attributed to the fact that increasing the importance of the regularization reduces over-fitting; regularization is therefore an important tool for combating membership inference attacks. Then, we define the sensitivity of machine learning models by bounding variations of the model fitness across all the data entries and model parameters. Following this, we prove that \textbf{less membership information is leaked if the training dataset is more sensitive}. This can illustrate that complex models, such as deep neural networks, are more susceptible to membership inference attacks in comparison to simpler models with fewer degrees of freedom.
Finally, we study the effect of additive noise on the success of membership inference attacks by quantifying the amount of decrease in the membership information leakage caused by additive Gaussian noise. We particularly prove that \textbf{membership information leakage reduces by $\mathcal{O}(\log^{1/2}(\delta^{-1})\epsilon^{-1})$ when using $(\epsilon,\delta)$-differentially-private additive Gaussian noises}, following the Gaussian mechanism in~\cite[Theorem~A.1]{dwork2014algorithmic}.
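For reference, the Gaussian mechanism of~\cite[Theorem~A.1]{dwork2014algorithmic} perturbs the trained model as in the sketch below. The calibration of the noise scale to the $\ell_2$-sensitivity of the training map is standard; the sensitivity bound itself must be supplied by the analyst and is treated here as an input.
\begin{verbatim}
import numpy as np

def gaussian_mechanism(theta, sens, eps, delta, rng):
    # sens: l2-sensitivity of the trained model to one data record
    sigma = sens * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return theta + rng.normal(scale=sigma, size=theta.shape)
\end{verbatim}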
\section{Related Work}
Membership inference attacks, a class of adversarial inference algorithms designed to distinguish data used for training a machine learning model, have recently gained much attention~\cite{10shokri2017membership, 18truex2018towards,salem2018ml, sablayrolles2019white}. These attacks have been deployed on various machine learning models; see, e.g., \cite{10shokri2017membership, 17hayes2019logan, chen2019gan, hilprecht2019monte,liu2019socinf, backes2016membership}. The success of the attacks is often attributed to that a machine learning model behaves differently on the training dataset and the test dataset, e.g., it shows higher confidence on the training dataset due to an array of reasons, such as over-fitting.
Many defence mechanisms have been proposed against membership inference attacks. A game-theoretic approach is proposed in~\cite{nasr2018machine}, where a regularization term using the accuracy of membership inference attacks is incorporated when training machine learning models. Others have introduced indistinguishability for membership inference attacks as an estimate of the discrimination of the model on training and test datasets~\cite{yaghini2019disparate}. Alternatively, it has been suggested that we can counter membership inference attacks by reducing over-fitting~\cite{yeom2018privacy}. Membership inference attacks are shown to work better on certain subgroups of the population, e.g., underrepresented minorities, resulting in disparate vulnerability~\cite{yaghini2019disparate}. Furthermore, success of membership inference attack may not predict success of attribute inference attacks with only access to partial view of data records~\cite{zhao2019inferring}. Another approach is to use differentially-private machine learning at the cost of significantly reducing the utility~\cite{12rahman2018membership, leino2019stolen}. However, none of these capture the possibly many factors contributing to the success of membership inference attacks.
This motivates taking a deeper look at the factors behind the success of membership inference attacks using information-theoretic membership information leakage metrics. This is the topic of this paper.
Finally, we would like to point out recent results exploring differential privacy and mutual information, e.g., see~\cite{cuff2016differential,wang2016relation}. Although these results provide important insights into information-theoretic guarantees of differential privacy, they are far from the context of this paper and do not consider membership inference attacks.
\section{Membership Information Leakage}
Consider all possible data records in a universe $\mathcal{U}: =\{(x_i,y_i)\}_{i=1}^{N}\subseteq \mathbb{R}^{p_x}\times\mathbb{R}^{p_y}$ in which $x_i$ and $y_i$ denote inputs and outputs, respectively. Note that the data universe is not necessarily finite; $N$ can be infinite. A machine learning algorithm only has access to $n$ entries from this data universe, denoted by the private training dataset $\mathcal{D}\subseteq \mathcal{U}$; hence, the size of the training dataset is $|\mathcal{D}|=n<N$. Let $(z_i)_{i=1}^N\in\{0,1\}^N$ be such that $z_i=1$ if $(x_i,y_i)\in\mathcal{D}$ and $z_i=0$ otherwise, and let $(z_i)_{i=1}^N$ be selected uniformly at random from $\mathcal{Z}:=\{(z_i)_{i=1}^N\in\{0,1\}^N|\sum_{i=1}^N z_i=n\}$. This implies that any record in the data universe $\mathcal{U}$ is equally likely to be part of the training dataset $\mathcal{D}$. This is a common assumption in machine learning~\cite{anthony2009neural} and membership inference~\cite{salem2018ml}.
Consider a generic supervised machine learning problem with the aim of training a model $\mathfrak{M}(\cdot;\theta):\mathbb{R}^{p_x}\rightarrow \mathbb{R}^{p_y}$ to capture the relationship between inputs and outputs in the training dataset $\mathcal{D}$ by solving the optimization problem in
\begin{align}\label{eqn:ML}
\theta_{\c}^*\in\argmin_{\theta\in\Theta_{\c}} \; f(\theta,\mathcal{D}),
\end{align}
with
\begin{align*}
f(\theta,\mathcal{D}):=\lambda g(\theta)+\frac{1}{|\mathcal{D}|} \sum_{(x,y)\in\mathcal{D}}\ell(\mathfrak{M}(x;\theta),y),
\end{align*}
where $\ell(\mathfrak{M}(x;\theta),y)$ is a loss function capturing the ``closeness'' of the outcome of the trained ML model $\mathfrak{M}(x;\theta)$ to the actual output $y$, $g(\theta)$ is a regularizing term, $\lambda\geq 0$ is a weight balancing between the loss function and the regularizing term, and $\Theta_{\c}\subseteq\mathbb{R}^{p_\theta}$ denotes the set of feasible models. Computers only have a finite precision. Therefore, in practice, we can only compute the optimal model to a finite precision as
\begin{align}\label{eqn:ML_d}
\theta_{\d}^*:=\Pi_{\Theta_{\d}}[\theta_{\c}^*],
\end{align}
where $\Pi_{\Theta_{\d}}(\cdot)$ denotes projection into the finite set $\Theta_{\d}\subset\Theta_{\c}$. The set $\Theta_{\d}$ can, for instance, be the intersection of the set of feasible models $\Theta_{\c}$ and the set of rational numbers modeled by the floating point number representation of the utilized computing unit.
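To make the finite-precision step concrete, the following is a minimal Python sketch of a projection $\Pi_{\Theta_\d}$ onto a finite subset of a compact feasible set; the helper name \texttt{project\_to\_grid}, the grid step, and the box bound are our illustrative assumptions and not part of the formal setup.
\begin{verbatim}
import numpy as np

def project_to_grid(theta_c, step=2.0**-23, box=1e3):
    """Hypothetical projection Pi_{Theta_d}: clip to a compact box and
    round each coordinate to the nearest multiple of `step`, mimicking
    a finite floating-point representation of the trained model."""
    theta = np.clip(theta_c, -box, box)   # keep within the compact Theta_c
    return np.round(theta / step) * step  # nearest point of the finite grid

theta_d = project_to_grid(np.array([0.123456789012345, -7.7777777]))
\end{verbatim}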
For an arbitrary $(x_i,y_i)\in\mathcal{U}$, an adversary is interested in inferring whether $(x_i,y_i)$ belongs to $\mathcal{D}$ or not based on the knowledge of $(x_i,y_i)$ and $\theta^*_\d$. We use conditional mutual information between $\theta^*_\d$ and $z_i$ as a measure of how much information regarding $z_i$, i.e., whether $(x_i,y_i)$ belongs to $\mathcal{D}$ or not, is leaked through $\theta^*_\d$.
\begin{mdframed}[backgroundcolor=black!10,rightline=false,leftline=false,topline=false,bottomline=false,roundcorner=2mm]
\vspace{-.1in}
\begin{definition}[Mutual Membership Information Leakage]
We measure membership information leakage in machine learning by
\begin{align*}
\rho_{\mathrm{MI}}(\theta^*_\d):=&I(\theta^*_\d;z_i|x_i,y_i)\\
=& \mathbb{E}\Bigg\{\log\Bigg(\frac{p(\theta^*_\d,z_i|x_i,y_i)}{
p(\theta^*_\d|x_i,y_i)p(z_i|x_i,y_i)}\Bigg)\Bigg\}.
\end{align*}
Similarly, we can define $\rho_{\mathrm{MI}}(\theta^*_\c)$. Data processing inequality implies that $\rho_{\mathrm{MI}}(\theta^*_\d)\leq \rho_{\mathrm{MI}}(\theta^*_\c)$.
\end{definition}
\end{mdframed}
\begin{remark}[Average vs Worst-Case]
Mutual information relies on expectation. Therefore, if the measure of the information leakage is non-zero, a fraction of the population is vulnerable to membership inference attacks. Membership inference attacks are, however, more effective on samples that are distinct from the others~\cite{yaghini2019disparate}. This model captures those cases so long as the distinct population appears with a non-zero probability. That being said, as the experimental results in Section~\ref{sec:numerical} show, there are always outliers in the success of membership inference attacks. For investigating those cases, we need to use worst-case measures, e.g., the R\'enyi divergence of order infinity. This is left as future work.
\end{remark}
We can rewrite the conditional mutual information as
\begin{align*}
I(\theta^*_\d;z_i|x_i,y_i)
=& \mathbb{E}\Bigg\{\log\Bigg(\frac{p(\theta^*_\d,z_i|x_i,y_i)}{
p(\theta^*_\d|x_i,y_i)p(z_i|x_i,y_i)}\Bigg)\Bigg\}\\
=& \mathbb{E}\Bigg\{\log\Bigg(\frac{p(\theta^*_\d|z_i,x_i,y_i)}{
p(\theta^*_\d|x_i,y_i)}\Bigg)\Bigg\}.
\end{align*}
where $p(\theta^*_\d|x_i,y_i)=p(\theta^*_\d|x_i,y_i,z_i=0)\P\{z_i=0\}
+p(\theta^*_\d|x_i,y_i,z_i=1)\P\{z_i=1\}.$
In what follows, let $p_0(\theta^*_\d)=p(\theta^*_\d|x_i,y_i,z_i=0)$, $p_1(\theta^*_\d)=p(\theta^*_\d|x_i,y_i,z_i=1)$, $\alpha_1=\P\{z_i=1\}=n/N$, and $\alpha_0=\P\{z_i=0\}=1-\alpha_1.$ Therefore,
\begin{align}
I(\theta^*_\d&;z_i|x_i,y_i)\nonumber\\
=& \mathbb{E}\Bigg\{\log\Bigg(\frac{p(\theta^*_\d|z_i,x_i,y_i)}{
\alpha_0p_0(\theta^*_\d)+\alpha_1p_1(\theta^*_\d)}\Bigg)\Bigg\}\nonumber
\\
=&\mathbb{E}\Bigg\{\alpha_0\log\Bigg(\frac{p_0(\theta^*_\d)}{
\alpha_0p_0(\theta^*_\d)+\alpha_1p_1(\theta^*_\d)}\Bigg)\nonumber +\alpha_1\log\Bigg(\frac{p_1(\theta^*_\d)}{
\alpha_0p_0(\theta^*_\d)+\alpha_1p_1(\theta^*_\d)}\Bigg)\Bigg\}\nonumber
\\
=&\mathbb{E}\{\alpha_0D_{\mathrm{KL}}(p_0(\theta^*_\d)||
\alpha_0p_0(\theta^*_\d)+\alpha_1p_1(\theta^*_\d)) +\alpha_1D_{\mathrm{KL}}(p_1(\theta^*_\d)||
\alpha_0p_0(\theta^*_\d)+\alpha_1p_1(\theta^*_\d))\}.
\label{eqn:MI_reduced_KL}
\end{align}
From~\eqref{eqn:MI_reduced_KL}, we can develop an upper bound for $I(\theta^*_\d;z_i|x_i,y_i)$ based on the convexity of the Kullback--Leibler divergence $D_{\mathrm{KL}}(p||q)$ with respect to $q$. This bound is easier to numerically compute and is thus used in our numerical evaluations. Note that
\begin{align*}
D_{\mathrm{KL}}(p_0(\theta^*_\d)||
\alpha_0p_0(\theta^*_\d)+\alpha_1p_1(\theta^*_\d))
\leq &\alpha_0D_{\mathrm{KL}}(p_0(\theta^*_\d)||
p_0(\theta^*_\d))
+\alpha_1D_{\mathrm{KL}}(p_0(\theta^*_\d)||
p_1(\theta^*_\d))\\
=&\alpha_1D_{\mathrm{KL}}(p_0(\theta^*_\d)||
p_1(\theta^*_\d)),
\end{align*}
and
\begin{align*}
D_{\mathrm{KL}}(p_1(\theta^*_\d)||
\alpha_0p_0(\theta^*_\d)+\alpha_1p_1(\theta^*_\d))
\leq &\alpha_0D_{\mathrm{KL}}(p_1(\theta^*_\d)||
p_0(\theta^*_\d))+\alpha_1D_{\mathrm{KL}}(p_1(\theta^*_\d)||
p_1(\theta^*_\d))\\
=&\alpha_0D_{\mathrm{KL}}(p_1(\theta^*_\d)||
p_0(\theta^*_\d)).
\end{align*}
These inequalities imply that
\begin{align*}
I(\theta^*_\d;z_i|x_i,y_i)
\leq &\alpha_0\alpha_1\mathbb{E}\{D_{\mathrm{KL}}(p_0(\theta^*_\d)||
p_1(\theta^*_\d)) +D_{\mathrm{KL}}(p_1(\theta^*_\d)||
p_0(\theta^*_\d))\}\\
\leq &\frac{1}{4}\mathbb{E}\{D_{\mathrm{KL}}(p_0(\theta^*_\d)||
p_1(\theta^*_\d)) +D_{\mathrm{KL}}(p_1(\theta^*_\d)||
p_0(\theta^*_\d))\}.
\end{align*}
This derivation motivates the introduction of another measure for membership information leakage in machine learning using the Kullback--Leibler divergence.
\begin{mdframed}[backgroundcolor=black!10,rightline=false,leftline=false,topline=false,bottomline=false,roundcorner=2mm]
\vspace{-.1in}
\begin{definition}[Kullback--Leibler Leakage]
The Kullback--Leibler information leakage in machine learning is
\begin{align*}
\rho_{\mathrm{KL}}(\theta^*_\d):=&\mathbb{E}\{
D_{\mathrm{KL}}(p(\theta^*_\d|x_i,y_i,z_i=1)\| p(\theta^*_\d|x_i,y_i,z_i=0))\\
&+
D_{\mathrm{KL}}(p(\theta^*_\d|x_i,y_i,z_i=0)\|p(\theta^*_\d|x_i,y_i,z_i=1))\}.
\end{align*}
Again, similarly, we can define $\rho_{\mathrm{KL}}(\theta^*_\c)$.
\end{definition}
\end{mdframed}
We readily get a relationship between $\rho_{\mathrm{KL}}(\theta^*_\d)$ and $\rho_{\mathrm{MI}}(\theta^*_\d)$. This is expressed in the next corollary.
\begin{mdframed}[backgroundcolor=black!10,rightline=false,leftline=false,topline=false,bottomline=false,roundcorner=2mm]
\vspace{-.1in}
\begin{corollary} \label{cor:1} $\rho_{\mathrm{MI}}(\theta^*_\d)\leq \rho_{\mathrm{KL}}(\theta^*_\d)/4$.
\end{corollary}
\end{mdframed}
We can relate the Kullback--Leibler information leakage to the ability of an adversary to infer whether a data point belongs to the training dataset. This is investigated in the following theorem.
\begin{mdframed}[backgroundcolor=black!10,rightline=false,leftline=false,topline=false,bottomline=false,roundcorner=2mm]
\vspace{-.1in}
\begin{theorem} Let $\Psi_{x_i,y_i}:\mathbb{R}^{p_\theta}\rightarrow \{0,1\}$ denote the policy of the adversary for determining whether $(x_i,y_i)$ belongs to the training set based on access to the trained model $\theta^*_\d$. Then
\begin{align*}
\P\{\Psi_{x_i,y_i}(\theta^*_\d)= z_i\}\leq \frac{1}{2}\sqrt{\rho_{\mathrm{KL}}(\theta^*_\d)}.
\end{align*}
\end{theorem}
\end{mdframed}
\begin{proof}
Using Le Cam's inequality~\cite{Yu1997}, we get
\begin{align*}
\inf_{\Psi_{x_i,y_i}}& \P\{\Psi_{x_i,y_i}(\theta^*_\d)\neq z_i\}=1-\nu(p_1(\theta^*_\d),p_0(\theta^*_\d)),
\end{align*}
where $\nu$ is the total variation distance defined as
\begin{align*}
\nu(p_1(\theta^*_\d),p_0(\theta^*_\d)):=\hspace{-.04in}\sup_{\mathcal{A}\in 2^{\Theta_\d}} \hspace{-.04in}|&\mathbb{P}\{\theta^*_\d\in\mathcal{A}|x_i,y_i,z_i=1\}-\mathbb{P}\{\theta^*_\d\in\mathcal{A}|x_i,y_i,z_i=0\}|.
\end{align*}
Hence,
\begin{align*}
\sup_{\Psi_{x_i,y_i}} \P\{\Psi_{x_i,y_i}(\theta^*_\d)=z_i\}
&=\sup_{\Psi_{x_i,y_i}} [1-\P\{\Psi_{x_i,y_i}(\theta^*_\d)\neq z_i\}]\\
&=1-\inf_{\Psi_{x_i,y_i}}\P\{\Psi_{x_i,y_i}(\theta^*_\d)\neq z_i\}\\
&=\nu(p_1(\theta^*_\d),p_0(\theta^*_\d)).
\end{align*}
Note that
\begin{align*}
\mathbb{E}\{\nu(p_1(\theta^*_\d),p_0(\theta^*_\d))\}
\leq & \mathbb{E}\Bigg\{\hspace{-.04in}\sqrt{
\frac{1}{2} D_{\mathrm{KL}}(p_1(\theta^*_\d)||p_0(\theta^*_\d))
}\Bigg\}\\
\leq &\sqrt{\hspace{-.04in}\frac{1}{2}\mathbb{E}\Bigg\{\hspace{-.04in}
D_{\mathrm{KL}}(p_1(\theta^*_\d)||p_0(\theta^*_\d))
\Bigg\}},
\end{align*}
where the first inequality follows from Pinsker's inequality~\cite{massart2007concentration} and the second inequality follows from Jensen's inequality~\cite{cover2012elements} while noting that $x\mapsto\sqrt{x}$ is a concave function. Similarly, we can prove that
\begin{align*}
\mathbb{E}\{\nu(p_1(\theta^*_\d),p_0(\theta^*_\d))\}
\leq &\sqrt{\hspace{-.04in}\frac{1}{2}\mathbb{E}\Bigg\{\hspace{-.04in}
D_{\mathrm{KL}}(p_0(\theta^*_\d)||p_1(\theta^*_\d))
\Bigg\}}.
\end{align*}
Combining these two inequalities results in
\begin{align*}
\mathbb{E}\{\nu(p_1(\theta^*_\d),p_0(\theta^*_\d))\}^2 \hspace{-.03in}
\leq &\frac{1}{2}
\min\{\mathbb{E}\{D_{\mathrm{KL}}(p_0(\theta^*_\d)||p_1(\theta^*_\d))\},\mathbb{E}\{D_{\mathrm{KL}}(p_1(\theta^*_\d)||p_0(\theta^*_\d))\}\}.
\end{align*}
The rest of the proof follows from the fact that
\begin{align*}
&\min\{\mathbb{E}\{D_{\mathrm{KL}}(p_0(\theta^*_\d)||p_1(\theta^*_\d))\},\mathbb{E}\{D_{\mathrm{KL}}(p_1(\theta^*_\d)||p_0(\theta^*_\d))\}\}\\
&\hspace{.2in}\leq (\mathbb{E}\{D_{\mathrm{KL}}(p_0(\theta^*_\d)||p_1(\theta^*_\d))\}+\mathbb{E}\{D_{\mathrm{KL}}(p_1(\theta^*_\d)||p_0(\theta^*_\d))\})/2.
\end{align*}
This concludes the proof.
\end{proof}
\begin{remark}[Black-box vs. White-box]
In both definitions of the membership information leakage, we assume that the adversary has access to the parameters of the trained model $\theta^*_\d$ (i.e., white-box assumption). This is the strongest assumption for an adversary and the amount of the leaked information reduces if we instead let the adversary query the model (i.e., black-box assumption). In fact, the data processing inequality states that
$I(\mathfrak{M}(x_i;\theta^*_\d);z_i|x_i,y_i)\leq I(\theta^*_\d;z_i|x_i,y_i)=\rho_{\mathrm{MI}}(\theta^*_\d).$
We are interested in analyzing this framework as it provides an insight against the worst-case adversary and therefore the mitigation techniques extracted from this analysis would also work against weaker adversaries with more restricted access to the model.
\end{remark}
\begin{remark}[Gaussian Approximation]
Let us present a simple numerical method for computing the Kullback--Leibler information leakage using a Gaussian approximation. To this aim, we approximate $p_1(\theta^*_\c)$ and $p_0(\theta^*_\c)$ by Gaussian density functions $\mathcal{N}(\mu_1^{x_i,y_i},\Sigma_1^{x_i,y_i})$ and $\mathcal{N}(\mu_0^{x_i,y_i},\Sigma_0^{x_i,y_i})$, respectively. The parameters $\mu_0^{x_i,y_i}$, $\mu_1^{x_i,y_i}$, $\Sigma_0^{x_i,y_i}$, and $\Sigma_1^{x_i,y_i}$ are extracted by Monte-Carlo simulation with and without $(x_i,y_i)$ in the training dataset. These distributions are often more complex than Gaussian and are approximated with Gaussian distributions for numerical evaluation. When the underlying distributions are close to Gaussian, the errors in such approximations are inversely proportional to the square root of the number of the Monte Carlo scenarios. Using the Gaussian approximation, we get
\begin{align*}
\varrho_1(x_i,y_i):=&D_{\mathrm{KL}}(p_1(\theta^*_\c)||p_0(\theta^*_\c))\\
=&\frac{1}{2}\Big(\trace((\Sigma_0^{x_i,y_i})^{-1}\Sigma_1^{x_i,y_i})-p_\theta
+(\mu_0^{x_i,y_i}-\mu_1^{x_i,y_i})^\top(\Sigma_0^{x_i,y_i})^{-1}(\mu_0^{x_i,y_i}-\mu_1^{x_i,y_i})
+\ln(\det(\Sigma_0^{x_i,y_i})/\det(\Sigma_1^{x_i,y_i}))\Big).
\end{align*}
We can similarly evaluate the value of $\varrho_2(x_i,y_i):=D_{\mathrm{KL}}(p_0(\theta^*_\c)||p_1(\theta^*_\c)).$
Then, we can approximate $\rho_{\mathrm{KL}}(\theta^*_\c)$ by computing $\varrho_1(x_i,y_i)$ and $\varrho_2(x_i,y_i)$ for a set of data entries $\mathcal{J}\subseteq\mathcal{U}$ and setting $\rho_{\mathrm{KL}}(\theta^*_\c)\approx \frac{1}{|\mathcal{J}|} \sum_{(x,y)\in\mathcal{J}} (\varrho_1(x,y)+\varrho_2(x,y)).$
This enables us to approximately compute the Kullback--Leibler information leakage in machine learning. Interestingly, this is also an approximation method for computing $\rho_{\mathrm{MI}}(\theta^*_\c)$ when approximating $p_1(\theta^*_\c)$ and $p_0(\theta^*_\c)$ by Gaussian density functions~\cite{4218101}. This is because, under the Gaussian approximation, $p(\theta^*_\c|x_i,y_i)$ follows a Gaussian mixture. Furthermore, due to the data processing inequality, $\rho_{\mathrm{MI}}(\theta^*_\d)\leq \rho_{\mathrm{MI}}(\theta^*_\c)$. Therefore, in Section~\ref{sec:numerical}, we use this approximation for numerically computing $\rho_{\mathrm{MI}}(\theta^*_\d)$.
\end{remark}
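As an illustration of this remark, the following Python sketch estimates $\varrho_1+\varrho_2$ for one record via Monte-Carlo simulation and the closed-form Gaussian Kullback--Leibler divergence. It assumes NumPy, a vector-valued $\theta$, and a user-supplied placeholder routine \texttt{train} returning $\theta^*_\c$ for a sampled training set; all names are ours.
\begin{verbatim}
import numpy as np

def gaussian_kl(mu_p, cov_p, mu_q, cov_q):
    """Closed-form D_KL( N(mu_p,cov_p) || N(mu_q,cov_q) )."""
    d = mu_p.size
    iq = np.linalg.inv(cov_q)
    diff = mu_q - mu_p
    return 0.5 * (np.trace(iq @ cov_p) - d + diff @ iq @ diff
                  + np.log(np.linalg.det(cov_q) / np.linalg.det(cov_p)))

def kl_leakage(train, universe, i, n, runs=200, seed=0):
    """Monte-Carlo estimate of rho_1 + rho_2 for record i."""
    rng = np.random.default_rng(seed)
    others = [j for j in range(len(universe)) if j != i]
    with_i, without_i = [], []
    for _ in range(runs):
        idx = rng.choice(others, size=n - 1, replace=False)
        with_i.append(train([universe[j] for j in idx] + [universe[i]]))
        idx = rng.choice(others, size=n, replace=False)
        without_i.append(train([universe[j] for j in idx]))
    m1, c1 = np.mean(with_i, 0), np.cov(np.array(with_i).T)
    m0, c0 = np.mean(without_i, 0), np.cov(np.array(without_i).T)
    return gaussian_kl(m1, c1, m0, c0) + gaussian_kl(m0, c0, m1, c1)
\end{verbatim}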
\section{What Influences Membership Inference?}
One of the most important factors in machine learning is the size of the training dataset. In what follows, we show that the success of the membership inference attack is inversely proportional to the size of the training dataset.
\begin{mdframed}[backgroundcolor=black!10,rightline=false,leftline=false,topline=false,bottomline=false,roundcorner=2mm]
\vspace{-.1in}
\begin{theorem} \label{tho:dataset_size_1} Assume that\vspace{-.15in}
\begin{itemize}
\item[(A1)] $\Theta_{\c}$ is compact, $\Theta_{\d}\subset\Theta_\c$, and $|\Theta_{\d}|<\infty$;\vspace{-.1in}
\item[(A2)] $\{(x_i,y_i)\}_{i=1}^N$ is i.i.d. following distribution $\mathcal{P}$;\vspace{-.1in}
\item[(A3)] $\lambda g(\theta)+\mathbb{E}_{\mathcal{P}}\{\ell(\mathfrak{M}(x;\theta),y)\}$ is continuous and has a unique minimizer;\vspace{-.1in}
\item[(A4)] $\ell(\mathfrak{M}(x;\theta),y)$ is almost surely Lipschitz continuous with Lipschitz constant $L(x,y)$ on $\Theta_{\c}$ with respect to $\mathcal{P}$ and $\mathbb{E}_{\mathcal{P}}\{L(x,y)\}<\infty$.\vspace{-.1in}
\end{itemize}
Then, $\displaystyle \lim_{n,N\rightarrow\infty:n\leq N} \rho_{\mathrm{MI}}(\theta^*_\d)=0$.
\end{theorem}
\end{mdframed}
\begin{proof}
Using Proposition~8.5 in~\cite{Kim2015}, (A1), (A2) and (A4) ensure that $f(\theta;\mathcal{D})$ converges to $\lambda g(\theta)+\linebreak \mathbb{E}_{\mathcal{P}}\{\ell(\mathfrak{M}(x;\theta),y)\}$ almost surely uniformly on $\Theta_{\c}$ as $n\leq N$ tends to infinity. Combining this observation with (A1), (A3) and Theorem~8.2 in~\cite{Kim2015}, we get that $\theta^*_\c$ converges to $\theta'\in\argmin_{\theta\in\Theta_{\c}} \lambda g(\theta)+\mathbb{E}_{\mathcal{P}}\{\ell(\mathfrak{M}(x;\theta),y)\}$ almost surely. Therefore, $\theta^*_\d$ converges to $\Pi_{\Theta_\d}[\theta']$ almost surely. Almost sure convergence implies convergence in probability, which in turn implies convergence in distribution~\cite[p.\,38]{klebaner2005introduction}. Now, the continuity of mutual information on a finite alphabet implies that $\lim_{n,N\rightarrow\infty:n\leq N} \rho_{\mathrm{MI}}(\theta^*_\d)\linebreak = I(\Pi_{\Theta_\d}[\theta'];z_i|x_i,y_i)=0$ because $\Pi_{\Theta_\d}[\theta']$ is deterministic.
\end{proof}
\begin{remark}[Projection vs Discrete Optimization] Instead of using the projection in~\eqref{eqn:ML_d}, we could rewrite the optimization problem for training the machine learning model in~\eqref{eqn:ML} as a discrete optimization problem over decision set $\Theta_\d$. Doing so, we could still prove the result of Theorem~\ref{tho:dataset_size_1} while relying on the properties of sample average approximation in stochastic discrete problems~\cite{kleywegt2002sample}. We opted for the projection-based approach in~\eqref{eqn:ML_d} as it is closer in spirit to the solutions extracted by computers.
\end{remark}
Theorem~\ref{tho:dataset_size_1} states that the \textbf{amount of the membership information leakage is a decreasing function of the size of the training dataset}. This shows that, by using a larger training dataset, the model is less over-fit to the training dataset and it is therefore harder to distinguish the training data. Increasing the dataset size also mitigates over-learning (memorization), another possible factor behind the success of membership inference attacks. This is in line with the observation that over-fitting contributes to the success of membership inference attacks~\cite{yeom2018privacy}. The result of Theorem~\ref{tho:dataset_size_1} does not require convexity of the loss function or even its differentiability. Therefore, it is applicable to different machine learning models ranging from linear regression and support vector machines to neural networks and decision trees. In the next theorem, by focusing on convex smooth machine learning problems, we investigate the effect of other factors, such as regularization, on the success of membership inference attacks.
\begin{mdframed}[backgroundcolor=black!10,rightline=false,leftline=false,topline=false,bottomline=false,roundcorner=2mm]
\vspace{-.1in}
\begin{theorem} \label{tho:2} Assume that\vspace{-.15in}
\begin{itemize}
\item[(A1)]$\Theta_{\c}$ is compact, $\Theta_{\d}\subset\Theta_\c$, and $|\Theta_{\d}|<\infty$;\vspace{-.1in}
\item[(A2)] $\lambda g(\theta)+\mathbb{E}_{\mathcal{P}}\{\ell(\mathfrak{M}(x;\theta),y)\}$ is continuous and finite everywhere;\vspace{-.1in}
\item[(A3)] $\ell(\mathfrak{M}(x;\theta),y)$ is almost surely Lipschitz continuous with Lipschitz constant $L$ on $\Theta_{\c}$;\vspace{-.1in}
\item[(A4)] $g(\theta)$ is strictly convex and $\mathbb{E}\{\ell(\mathfrak{M}(x;\theta),y)\}$ is convex.\vspace{-.1in}
\end{itemize}
Then, \vspace{-.15in}
\begin{align}
\lim_{\lambda\rightarrow\infty} \rho_{\mathrm{MI}}(\theta^*_\d)&=0.
\end{align}
Consider a family of fitness functions $\ell(\mathfrak{M}(x;\theta),y)$ parameterized by the Lipschitz constant $L\in[0,c)$ for some $c>0$, then
\begin{align}
\lim_{L\rightarrow 0} \rho_{\mathrm{MI}}(\theta^*_\d)&=0.
\end{align}
\end{theorem}
\end{mdframed}
\begin{proof}
The Maximum Theorem implies that $\theta^*_\c$ is a continuous function of $\lambda$~\cite[p.\,237]{sundaram1996first}. Thus, $\lim_{\lambda\rightarrow\infty} \theta^*_\c=\bar{\theta}:=\argmin_{\theta\in\Theta_{\c}} g(\theta)$ and, as a result, $\lim_{\lambda\rightarrow\infty} \theta^*_\d=\Pi_{\Theta_\d}[\bar{\theta}]$. Thus, $\lim_{\lambda\rightarrow\infty}I(\theta^*_\d;z_i|x_i,y_i)= I(\Pi_{\Theta_\d}[\bar{\theta}];z_i|x_i,y_i)\linebreak=0$. Again, the Maximum Theorem implies that $\theta^*_\c$ is a continuous function of $L$ over a family of fitness functions $\ell(\mathfrak{M}(x;\theta),y)$ parameterized by the Lipschitz constant $L$. For $L=0$, the fitness function $\ell(\mathfrak{M}(x;\theta),y)$ is independent of $\theta$ because
$0\leq \|\ell(\mathfrak{M}(x;\theta),y)
-\ell(\mathfrak{M}(x;\theta'),y)\|\leq L\|\theta-\theta'\|=0$ for all $\theta,\theta'\in\Theta_{\c}$. Thus $\lim_{L\rightarrow 0} \theta^*_\d=\Pi_{\Theta_\d}[\bar{\theta}]$ and, similarly, $\lim_{L\rightarrow 0}I(\theta^*_\d;z_i|x_i,y_i)=0$.
\end{proof}
Theorem~\ref{tho:2} shows that \textbf{regularization reduces the amount of membership information leakage}. Increasing the importance of the regularization reduces the over-fitting, over-learning, or memorization and is therefore an important tool for combating membership inference attacks. Let us define model sensitivity by
\begin{align*}
S:=\sup_{(x,y)}\sup_{\theta} \left\|\frac{\partial \ell(\mathfrak{M}(x;\theta),y)}{\partial \theta}\right\|.
\end{align*}
Clearly, $L\leq S$. Therefore, Theorem~\ref{tho:2} shows that, \textbf{if the model sensitivity is high, more membership information is potentially leaked}. Hence, complex models, such as deep neural networks, are more susceptible to membership inference attacks in comparison with simpler models with fewer degrees of freedom.
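A minimal sketch for estimating the model sensitivity empirically is given below; sampling finitely many data points and candidate models only yields a lower bound on the suprema, and the helper names are our own illustrative assumptions.
\begin{verbatim}
import numpy as np

def empirical_sensitivity(grad_loss, data, thetas):
    """Lower-bound S = sup_{(x,y)} sup_theta ||d ell/d theta|| by a max
    over finitely many data points and candidate models."""
    return max(np.linalg.norm(grad_loss(x, y, th))
               for (x, y) in data for th in thetas)

# Example: squared loss of linear regression, grad = 2 x (x.theta - y)
grad = lambda x, y, th: 2.0 * x * (x @ th - y)
\end{verbatim}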
\section{Additive Noise for Membership Privacy}
In this section, we explore the use of additive noise, particularly, differential privacy noise, on the amount of the leaked membership information.
\begin{mdframed}[backgroundcolor=black!10,rightline=false,leftline=false,topline=false,bottomline=false,roundcorner=2mm]
\vspace{-.1in}
\begin{theorem} \label{tho:MI_increase_snr} Assume that $w$ is a zero-mean Gaussian variable with unit variance. The mutual membership information leakage $\rho_{\mathrm{MI}}(\theta^*_\c+tw)$ and the Kullback--Leibler membership information leakage $\rho_{\mathrm{KL}}(\theta^*_\c+tw)$ are decreasing functions of $t$. Particularly, $\rho_{\mathrm{MI}}(\theta^*_\c+tw)=\rho_{\mathrm{MI}}(\theta^*_\c)-\mathcal{O}(t)$ and $\rho_{\mathrm{KL}}(\theta^*_\c+tw)=\rho_{\mathrm{KL}}(\theta^*_\c)-\mathcal{O}(t).$
\end{theorem}
\end{mdframed}
\begin{proof} Using de~Bruijn Identity in Terms of Divergences~\cite{valero2017generalization}, we get
\begin{align*}
&\frac{\mathrm{d}D_{\mathrm{KL}}(p_1(\theta^*_\c+tw)\| p_0(\theta^*_\c+tw))}{\mathrm{d}t}
=\hspace{-.02in}-\frac{J(p_1(\theta^*_\c+tw)\| p_0(\theta^*_\c+tw))}{2},
\end{align*}
where $J(p_1(\theta^*_\c+tw)\| p_0(\theta^*_\c+tw))$ is the Fisher divergence defined as
\begin{align*}
J(p_1(\theta^*_\c+tw)\| p_0(\theta^*_\c+tw))
=&\int \Bigg[\nabla_t \log\left(\frac{p_1(\theta^*_\c+tw)}{p_0(\theta^*_\c+tw)}\right)\Bigg]^\top
\Bigg[\nabla_t \log\left(\frac{p_1(\theta^*_\c+tw)}{p_0(\theta^*_\c+tw)}\right)\Bigg] p_0(\theta^*_\c+tw)\mathrm{d}\theta^*_\c.
\end{align*}
Note that, by construction, $J$ is non-negative. Hence,
$D_{\mathrm{KL}}(p_1(\theta^*_\c+tw)\| p_0(\theta^*_\c+tw))
=D_{\mathrm{KL}}(p_1(\theta^*_\c)\| p_0(\theta^*_\c))-c't+\mathcal{O}(t^2),$
and $D_{\mathrm{KL}}(p_0(\theta^*_\c+tw)\| p_1(\theta^*_\c+tw))=D_{\mathrm{KL}}(p_0(\theta^*_\c)\| p_1(\theta^*_\c))-c''t+\mathcal{O}(t^2),$
where
$
c'=(1/2)J(p_1(\theta^*_\c+tw)\| p_0(\theta^*_\c+tw))|_{t=0}
\geq 0$ and $c''=(1/2)J(p_0(\theta^*_\c+tw)\| p_1(\theta^*_\c+tw))|_{t=0}\geq 0.$
Selecting $c_{\mathrm{KL}}=c'+c''$ concludes the proof for the Kullback--Leibler leakage by establishing that $\rho_{\mathrm{KL}}(\theta^*_\c+tw)=\rho_{\mathrm{KL}}(\theta^*_\c)-c_{\mathrm{KL}}t+\mathcal{O}(t^2)$. In light of~\eqref{eqn:MI_reduced_KL}, the proof for the mutual membership information leakage follows the same line of reasoning.
\end{proof}
Theorem~\ref{tho:MI_increase_snr} proves that \textbf{the amount of membership information leakage is increased by reducing the amount of the additive noise}. This is why differential privacy works as a successful defence strategy against membership inference~\cite{10shokri2017membership}, provided that its privacy budget is set small enough. In what follows, we focus on the effect of differential privacy noise on the membership information leakage.
\begin{definition}[Differential Privacy]
Mechanism $\theta^*_\c+w$ with additive noise $w$ is $(\epsilon,\delta)$-differentially private if, for all Lebesgue-measurable sets $\mathcal{W}\subseteq\mathbb{R}^{p_\theta}$,
\begin{align*}
\P\{\theta^*_\c+w\in&\mathcal{W}|(z_i)_1^N\}\leq \exp(\epsilon)\P\{\theta^*_\c+w\in\mathcal{W}|(z'_i)_1^N\}+\delta ,
\end{align*}
where $(z_i)_1^N$ and $(z'_i)_1^N$ are any two vectors in $\{0,1\}^N$ such that $\sum_{i=1}^N z_i=n$, $\sum_{i=1}^N z'_i=n$, and there exists at most one index $j$ for which $z_j\neq z'_j$.
\end{definition}
It has been proved that we can ensure $(\epsilon,\delta)$-differential privacy with Gaussian noise. This is restated in the following proposition.
\begin{proposition} \label{prop:differential_privacy}
Assume that $\Delta \theta^*_\c>0$ is such that $\|\theta^*_\c((z_i)_1^N)-\theta^*_\c((z'_i)_1^N)\|_2\leq \Delta \theta^*_\c$,
where $(z_i)_1^N$ and $(z'_i)_1^N$ are any two vectors in $\{0,1\}^N$ such that $\sum_{i=1}^N z_i=n$, $\sum_{i=1}^N z'_i=n$, and there exists at most one index $j$ for which $z_j\neq z'_j$. Then, the mechanism $\theta^*_\c+w$ is $(\epsilon,\delta)$-differentially private if $w$ is a zero-mean Gaussian noise with standard deviation $\sigma=\sqrt{2\log(1.25/\delta)}\Delta \theta^*_\c/\epsilon$.
\end{proposition}
\begin{proof}
The proof immediately follows from using the Gaussian mechanism for ensuring differential privacy; see, e.g.,~\cite[Theorem~A.1]{dwork2014algorithmic}.
\end{proof}
\begin{mdframed}[backgroundcolor=black!10,rightline=false,leftline=false,topline=false,bottomline=false,roundcorner=2mm]
\vspace{-.1in}
\begin{corollary} \label{cor:dp} Assume that $w$ is selected to ensure $(\epsilon,\delta)$-differential privacy based on Proposition~\ref{prop:differential_privacy}. Then, $\rho_{\mathrm{MI}}(\theta^*_\c+w)=\rho_{\mathrm{MI}}(\theta^*_\c)-\mathcal{O}(\log^{1/2}(\delta^{-1})\epsilon^{-1})$ and $\rho_{\mathrm{KL}}(\theta^*_\c+w)=\rho_{\mathrm{KL}}(\theta^*_\c)-\mathcal{O}(\log^{1/2}(\delta^{-1})\epsilon^{-1})$.
\end{corollary}
\end{mdframed}
Again, noting the data processing inequality, we get $\rho_{\mathrm{MI}}(\theta^*_\d)\leq \rho_{\mathrm{MI}}(\theta^*_\c)$. Therefore, we expect $\rho_{\mathrm{MI}}(\theta^*_\d)$ to follow the trend in Corollary~\ref{cor:dp}.
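For concreteness, a minimal Python sketch of the Gaussian mechanism of Proposition~\ref{prop:differential_privacy} follows; it assumes NumPy, the function name is ours, and we recall that this calibration of $\sigma$ is stated for $\epsilon<1$ in~\cite[Theorem~A.1]{dwork2014algorithmic}.
\begin{verbatim}
import numpy as np

def gaussian_mechanism(theta_c, sensitivity, eps, delta, seed=None):
    """Release theta_c + w with w ~ N(0, sigma^2 I) and
    sigma = sqrt(2 log(1.25/delta)) * sensitivity / eps."""
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / eps
    return theta_c + rng.normal(0.0, sigma, size=np.shape(theta_c))
\end{verbatim}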
\section{Experimental Validation} \label{sec:numerical}
In this section, we demonstrate the results of the paper numerically using a practical dataset.
\begin{figure}[t]
\centering
\begin{tikzpicture}
\node[] at (0,0) {\includegraphics[width=.45\columnwidth]{LR_versus_n_m_5}};
\node[] at (0,-2.9) {$n$};
\node[rotate=90] at (-3.7,0) {$\mathrm{Adv}$};
\node[rotate=90] at (+3.7,0) {$\rho_{\mathrm{KL}}(\theta^*_\c)$};
\end{tikzpicture}
\vspace{-.1in}
\caption{\label{fig:1} The relationship between adversary's advantage in membership attack and the size of the training dataset for linear regression ($\lambda=0$ and $p_x=5$).
}
\vspace{.1in}
\begin{tikzpicture}
\node[] at (0,0) {\includegraphics[width=.45\columnwidth]{NN_5layer_50nodes_versus_n_m_5}};
\node[] at (0,-2.9) {$n$};
\node[rotate=90] at (-3.7,0) {$\mathrm{Adv}$};
\end{tikzpicture}
\vspace{-.1in}
\caption{\label{fig:11} The relationship between adversary's advantage in membership attack versus the size of the training dataset for neural network ($\lambda=0$ and $p_x=5$).
}
\end{figure}
\begin{figure}[t]
\centering
\begin{tikzpicture}
\node[] at (0,0) {\includegraphics[width=.45\columnwidth]{LR_versus_lambda_m_5}};
\node[] at (0,-2.9) {$\lambda$};
\node[rotate=90] at (-3.7,0) {$\mathrm{Adv}$};
\node[rotate=90] at (+3.7,0) {$\rho_{\mathrm{KL}}(\theta^*_\c)$};
\end{tikzpicture}
\vspace{-.1in}
\caption{\label{fig:3} The relationship between adversary's advantage in membership attack and regularization weight $\lambda$ for linear regression ($n=10$ and $p_x=5$).
}
\vspace{.1in}
\begin{tikzpicture}
\node[] at (0,0) {\includegraphics[width=.45\columnwidth]{LR_versus_p_m_5}};
\node[] at (0,-2.9) {$p_x$};
\node[rotate=90] at (-3.7,0) {$\mathrm{Adv}$};
\node[rotate=90] at (+3.7,0) {$\rho_{\mathrm{KL}}(\theta^*_\c)$};
\end{tikzpicture}
\vspace{-.1in}
\caption{\label{fig:p} The relationship between adversary's advantage in membership attack versus number of features $p_x$ for linear regression ($n=10$ and $\lambda=10^{-6}$).
}
\end{figure}
\begin{figure}[t]
\centering
\begin{tikzpicture}
\node[] at (0,0) {\includegraphics[width=.45\columnwidth]{LR_versus_b_m_5}};
\node[] at (0,-2.9) {$\sigma$};
\node[rotate=90] at (-3.7,0) {$\mathrm{Adv}$};
\node[rotate=90] at (+3.7,0) {$\rho_{\mathrm{KL}}(\theta^*_\c)$};
\end{tikzpicture}
\vspace{-.1in}
\caption{\label{fig:4} The relationship between adversary's advantage in membership attack and the standard deviation of additive noise $\sigma$ for linear regression ($n=30$, $\lambda=0$, and $p_x=5$).
}
\end{figure}
\subsection{Dataset Description}
We use the Adult Dataset available from the UCI Machine Learning Repository~\cite{Dua2019}. This dataset was first used in~\cite{kohavi1996scaling}. The Adult Dataset contains 14 individual attributes, such as age, race, occupation, and relationship status, as inputs and income level (i.e., above or below \$50K per annum) as output. The dataset contains $N=48,842$ instances extracted from the 1994 Census database. We translate all categorical attributes and outputs to integers. We perform feature selection using Principal Component Analysis (PCA) to select the $p_x$ most important features. This greatly improves the numerical stability of the underlying machine learning algorithms. In what follows, we select $p_x=5$ except for one example in which we vary $p_x$ to study its effect on the success of membership inference and membership information leakage.
\subsection{Experiment Setup}
We use linear regression to demonstrate all the results of the paper, namely, Theorems~\ref{tho:dataset_size_1}--\ref{tho:MI_increase_snr}. However, noting that the results of Theorem~\ref{tho:dataset_size_1} also hold for non-convex learning problems, we demonstrate these results also for neural networks with five hidden layers, fifty neurons in each layer, and hyperbolic tangent sigmoid activation functions. When regularization is used, we employ the quadratic regularizer $g(\theta)=\theta^\top \theta$.
For membership inference, we use the threshold-based adversary in~\cite{yeom2018privacy}. To assess the effectiveness of the membership attacks, we also use the membership experiment from~\cite{yeom2018privacy}. Let us describe this experiment briefly. For any $n$, we select $(z_i)_{i=1}^N\in\{0,1\}^N$ uniformly at random such that $\sum_{i=1}^N z_i=n$. We train a model $\theta^*_\d$ based on the training dataset $\mathcal{D}=\{(x_i,y_i)\}_{i:z_i=1}$. We select $b$ with equal probability from $\{0,1\}$. Then, we select a single record from the training dataset $\mathcal{D}=\{(x_i,y_i)\}_{i:z_i=1}$ if $b=1$ or from the remaining data $\mathcal{U}\setminus\mathcal{D}=\{(x_i,y_i)\}_{i:z_i=0}$ if $b=0$. We transmit the selected record to the adversary. The adversary estimates the realization of the random variable $b$, denoted by $\hat{b}\in\{0,1\}$, based on the selected record and the trained model. The adversary's advantage (in comparison with randomly selecting an estimate) is given by
$\mathrm{Adv}:=|2\P\{\hat{b}=b\}-1|. $
We investigate the relationship between this advantage and other factors, such as membership information leakage, training dataset size, regularization, and additive privacy-preserving noise, in the remainder of this section.
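A Python sketch of this membership experiment is given below; the \texttt{train} and \texttt{attack} routines are user-supplied placeholders (we do not fix the particular threshold-based attack of~\cite{yeom2018privacy} here), and all names are ours.
\begin{verbatim}
import numpy as np

def membership_advantage(train, attack, universe, n, trials=1000, seed=1):
    """Monte-Carlo estimate of Adv = |2 P{b_hat = b} - 1| for the
    membership experiment described above."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        perm = rng.permutation(len(universe))
        train_idx, test_idx = perm[:n], perm[n:]
        model = train([universe[j] for j in train_idx])
        b = int(rng.integers(0, 2))
        record = universe[rng.choice(train_idx if b else test_idx)]
        hits += int(attack(record, model) == b)
    return abs(2.0 * hits / trials - 1.0)
\end{verbatim}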
\subsection{Experimental Results}
We first show that the membership information leakage gets smaller by increasing the size of the training dataset, thus validating the prediction of Theorem~\ref{tho:dataset_size_1}. Figure~\ref{fig:1} illustrates the adversary's advantage in membership attack (left axis) and the membership information leakage (right axis) versus the size of the training dataset in the case of linear regression. The box plot shows the adversary's advantage and the solid line shows the membership information leakage. As expected from the theorem, the adversary's advantage in membership attack and the membership information leakage decrease rapidly by increasing the size of the training dataset. Noting that the results of Theorem~\ref{tho:dataset_size_1} also hold for non-convex learning problems, we demonstrate these results also for neural networks. Figure~\ref{fig:11} illustrates the relationship between the adversary's advantage in membership attack and the size of the training dataset for a neural network with five hidden layers with fifty neurons in each layer and hyperbolic tangent sigmoid activation function. The same trend is also visible in this case.
Now, we proceed to validate the prediction of Theorem~\ref{tho:2} regarding the effect of regularization. Figure~\ref{fig:3} shows the adversary's advantage in membership attack (left axis) and the membership information leakage (right axis) versus the regularization weight $\lambda$ for linear regression. Evidently, by increasing the weight of regularization, the adversary's advantage in membership attack and the membership information leakage both decrease. This is intuitive as we expect, by increasing $\lambda$, the trained model to become closer to $\argmin_{\theta\in\Theta_{\c}} g(\theta)$ which is data independent.
An important factor in membership inference success is the number of features. Figure~\ref{fig:p} illustrates the adversary's advantage in membership attack (left axis) and the membership information leakage (right axis) versus the number of features extracted from the PCA, $p_x$, for linear regression. By increasing the number of features, the adversary's advantage in membership attack and the membership information leakage both increase (as more features potentially make the data entries more unique and their effect on the trained model more pronounced).
Finally, we investigate the effect of the additive noise on membership inference. Figure~\ref{fig:4} shows the adversary's advantage in membership attack (left axis) and the membership information leakage (right axis) versus the standard deviation of the additive noise to the model parameters for linear regression. By increasing the magnitude of the noise, the adversary's advantage in membership attack and the membership information leakage both decrease. This is in-line with our theoretical observations from Theorem~\ref{tho:MI_increase_snr}.
\section{Conclusions and Future Work}
We used mutual information and Kullback--Leibler divergence to develop measures for membership information leakage in machine learning. We showed that the amount of the membership information leakage is a decreasing function of the training dataset size and the regularization weight, and an increasing function of the sensitivity of the machine learning model. We also investigated the effect of privacy-preserving additive noise on membership information leakage and the success of membership inference attacks. Future work can focus on further experimental validation of the relationship between the membership information leakage and the success of membership inference attacks for more general machine learning models, e.g., deep neural networks. Another interesting avenue for future research is to use the developed measures of membership information leakage as a regularizer for training machine learning models in order to effectively combat membership inference attacks.
\bibliographystyle{ieeetr}
Quantum Computing has generated renewed interest in the theory and practice of Quantum Mechanics.
This is largely due to the fact that quantum computers can solve certain problems much faster than classical computers \citep{dj, shor, grover}.
Many efforts are being made to realize a scalable quantum computer using techniques such as
trapped ions, optical lattices, diamond-based quantum computers, Bose-Einstein condensate based quantum computers, cavity quantum electrodynamics (CQED)
and nuclear magnetic resonance \citep{brennen, jones, nielson, schmidt, nizovtsev}.
NMR has become an important experimental tool for demonstrating quantum algorithms, simulating quantum systems, and for verifying various tenets of
quantum mechanics \citep{jones2, cory2, linden, chuang, lieven, kavita, ranabir, laflamme, avik2, peng, aram, du}.
There are several theoretical protocols available for orthogonal state discrimination \citep{walgate, ghosh, virmani, chen}. Walgate \textit{et al.} showed that, using local operations and classical
communication (LOCC) multipartite orthogonal states can be distinguished perfectly \citep{walgate}. However, if only a single copy is provided and only LOCC is allowed, quantum states cannot be discriminated
either deterministically or
probabilistically \citep{ghosh}. Estimation of the phase plays an important role in quantum information processing and is a key subroutine of many quantum algorithms.
When the phase estimation is combined with other quantum algorithms, it can be employed to perform certain computational tasks
such as quantum counting, order finding and factorization \citep{shor, brassard}. The Phase Estimation Algorithm has also been utilised in a recent important application in which the
ground state of the Hydrogen molecule has been obtained up to 45-bit accuracy in NMR and up to 20-bit accuracy in photonic systems \citep{lanyon}.
By defining an operator with preferred eigen-values, phase estimation can be used for discrimination of quantum states with certainty \citep{nielson}. It preserves the
state, since measurements on the ancilla qubits do not affect the state. In this paper we describe an algorithm for non-destructive state discrimination using phase estimation alone.
The algorithm described in this paper is scalable and can be used for discriminating any set of orthogonal states (entangled or non-entangled). Earlier non-destructive Bell state discrimination has been
described by Gupta \textit{et al.} \citep{panigrahi} and verified experimentally in
our laboratory by Jharana \textit{et al.} \citep{jharana}. Bell states are specific example of orthogonal entangled states.
The circuit used for Bell state discrimination \citep{panigrahi} is based on parity and phase estimation and
will not be able to discriminate a superposition state which has no definite parity. For example consider a state $|\psi\rangle=\tfrac{1}{\sqrt{2}}(|00\rangle + |01\rangle)$,
which belongs to a set of orthogonal states. Here $|00\rangle$ has parity $0$ and $|01\rangle$ has parity $1$. Hence the above $|\psi\rangle$ does not have a definite parity and cannot be distinguished
from the other members of the set by the method of Gupta \textit{et al.} \citep{panigrahi}.
Sec.\ref{sec:theory} of this paper describes the design of a circuit for non-destructive state discrimination using phase estimation. Sec. \ref{sec:theory} also contains non-destructive discrimination of
special cases such as Bell states and three qubit $GHZ$ states using phase estimation.
Sec.\ref{sec:expt} describes experimental implementation of the algorithm for two qubit states by NMR quantum computer and Sec.\ref{sec:matlab} describes the $Matlab^{\begin{scriptsize}\textregistered\end{scriptsize}}$
simulation of non-destructive discrimination of three qubit $GHZ$ states.
\section{Theory} \label{sec:theory}
For a given eigen-vector $|\phi\rangle$ of a unitary operator $U$, phase estimation circuit with $Controlled$-$U$ operator can be used for finding the eigen-value
of $|\phi\rangle$ \citep{nielson}. Conversely, the reverse of the algorithm, with defined eigen-values, can be used for discriminating eigen-vectors. By logically defining the operators with
preferred eigen-values, the discrimination, as shown here, can be done with certainty.
\subsection{The General Procedure (n-qubit case):} \label{sec:theory_n}
For the $n$ qubit case, the Hilbert space dimension is $2^n$, having $2^n$ independent orthogonal states. Hence we need to design a quantum circuit for
state discrimination for a set of $2^n$ orthogonal quantum states.
Consider a set of $2^n$ orthogonal states $\{\phi_i\}$, where $i=1,2, .... 2^n$.
The main aim of the discrimination circuit is to make direct correlation between the elements of $\{\phi_i\}$ and possible product states of ancilla qubits.
As there are $2^n$ states, we need $n$ ancilla qubits for proper discrimination.
\begin{figure*} \label{fig:circuit_nqbt}
\begin{center}
\includegraphics[width=10cm,height=3.2cm]{circuit_nqbt.jpeg}
\caption{The general circuit for non-destructive Quantum State Discrimination. For discriminating $n$ qubit states it uses $n$ number of
ancilla qubits with $n$ controlled operations. $n$ ancilla qubits are first prepared in the state $|00...0\rangle$. Here H represents Hadamard
transform and the meter represents a measurement of the qubit state. The original state encoded in $n$ qubits is preserved (not destroyed).}
\end{center}
\end{figure*}
The discrimination circuit requires $n$ Controlled Operations. Selecting these $n$ operators $\{U_j\}$ (where $j=1,2,... n$) is the main task in designing the algorithm.
The set $\{U_j\}$ depends on the $2^n$ orthogonal states in such a way that the set of orthogonal vectors forms the eigen-vector set of the operators,
with eigen-values $\pm1$. The sequence of $+1$ and $-1$ in the eigen-values should be defined in a special way, as outlined below.
Let $\{e_j^i\}$ (with $i=1,2,\ldots, 2^n$) be the eigen-value array of $U_j$; it should satisfy the following conditions. \\
\textit{Condition \#1}: The eigen-value arrays $\{e_{j}^i\}$ of all operators $\{U_j\}$ should contain an equal number of $+1$ and $-1$, \\
\textit{Condition \#2}: For the first operator $U_1$, the eigen-value array $\{e_1^i\}$ can be any possible sequence of $+1$ and $-1$ satisfying \textit{Condition \#1}, \\
\textit{Condition \#3}: The restriction on eigen-value arrays starts from $U_{j=2}$ onwards. The eigen-value array $\{e_2^i\}$ of operator $U_2$ should not be equal to $\{e_1^i\}$ or its complement,
while still satisfying \textit{Condition \#1}, \\
\textit{Condition \#4}: Generalizing \textit{Condition \#3}, the eigen-value array $\{e_k^i\}$ of operator $U_k$ should not be equal to any $\{e_m^i\}$ $(m=1,2,\ldots,k-1)$ or its complement. \\
Let $M_j$ be the diagonal matrix formed by eigen-value array $\{e_j^i\}$ of $U_j$. The operator $U_j$ is directly related to $M_j$ by a unitary transformation given by,
\begin{equation}
U_j=V^{-1} \times M_j \times V,
\end{equation}
where V is the matrix formed by the column vectors $\{|\phi_i\rangle\}$,
V = [ $|\phi_1\rangle$ ~~ $|\phi_2\rangle$ ~~ $|\phi_3\rangle$ ..... $|\phi_{2^n}\rangle$].
The circuit diagram for implementation of Phase Estimation Algorithm (PEA) to discriminate orthogonal states using the $Controlled$-$U_j$ operations
such that the original state is preserved for further use in any quantum circuit is shown in Fig.1.
As the eigen-values defined are either $+1$ or $-1$, the final ancilla qubit states will be in product state (without superposition), and hence can be measured with certainty.
It can be shown that the selection of a specific operator set $\{U_j\}$ satisfying the conditions discussed above establishes a direct correlation between the $2^n$ product states of the
ancilla qubits and the elements of $\{|\phi_i\rangle\}$, so that the ancilla measurements discriminate the state.
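A minimal NumPy sketch of this construction is given below; the helper name \texttt{discrimination\_operators} is ours. For the real-valued orthonormal sets considered in this paper, $V$ is orthogonal, so $V^{-1}M_jV$ of Eqn.(1) and $VM_jV^{-1}$ coincide, and both satisfy $U_j|\phi_i\rangle=e_j^i|\phi_i\rangle$.
\begin{verbatim}
import numpy as np

def discrimination_operators(states, eigenvalue_arrays):
    """Build U_j with U_j|phi_i> = e_j^i |phi_i> from an orthonormal
    set {phi_i} (columns of V) and +/-1 eigenvalue arrays {e_j^i}."""
    V = np.column_stack(states)              # V = [phi_1 ... phi_{2^n}]
    Vinv = np.linalg.inv(V)
    ops = [V @ np.diag(e) @ Vinv for e in eigenvalue_arrays]
    for U, e in zip(ops, eigenvalue_arrays): # sanity check of eigen-relations
        for phi, ev in zip(states, e):
            assert np.allclose(U @ phi, ev * np.asarray(phi))
    return ops
\end{verbatim}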
\subsection{Single qubit case:} \label{sec:theory_1}
For a single qubit system, the Hilbert space dimension is 2. So we can discriminate a state from a set of two orthogonal states.
Consider an illustrative example with the orthonormal set as \{ $|\phi_1\rangle = \tfrac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$,
$|\phi_2 \rangle= \tfrac{1}{\sqrt{2}}(|0\rangle - |1\rangle)$ \}. The quantum circuit for this particular case can be designed
by following the general procedure discussed in Sec.\ref{sec:theory_n}.
The V matrix for the given states $\{|\phi_1\rangle, |\phi_2\rangle\}$ is,
\begin{center}
V= $\dfrac{1}{\sqrt{2}}\begin{pmatrix}
1 & 1 \\
1 & -1
\end{pmatrix}. $
\end{center}
According to the rules given in Sec.\ref{sec:theory_n}, $M$ can be either
$\begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix} $ or $\begin{pmatrix}
-1 & 0 \\
0 & 1
\end{pmatrix}. $
For $M$ = $\begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix} $,
\begin{equation}
U=V^{-1}\times M \times V = \begin{pmatrix}
0 & 1 \\
1 & 0
\end{pmatrix} .
\end{equation}
The circuit diagram for this case is identical to Fig.1, having only one work and one ancilla qubit. It can be easily shown that the ancilla qubit measurements are directly correlated with the input states. For the selected $M$, if the given
state is $|\phi_1\rangle$ then the ancilla will be in the state $|0\rangle$, and if the given state is $|\phi_2\rangle$ the ancilla will be in the state $|1\rangle$.
For a general set \{$|\phi_1\rangle =(\alpha|0\rangle + \beta|1\rangle)$, $|\phi_2 \rangle= (\beta|0\rangle -\alpha |1\rangle)$\} (where $\alpha$ and $\beta$ are real numbers satisfying,
$|\alpha|^2+ |\beta|^2 = 1$), operator $U$ for eigenvalue array $\{1, -1\}$ can be shown as, \\
\begin{equation}
U=\begin{pmatrix}
Cos(\theta) & Sin(\theta) \\
Sin(\theta) &-Cos(\theta)
\end{pmatrix},
\end{equation} \\ with $\theta = 2 \times Tan^{-1}(\dfrac{\beta}{\alpha})$.
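As a quick numerical check of (3), the following minimal NumPy sketch (with illustrative values of $\alpha$ and $\beta$; the variable names are ours) verifies that $U$ has $|\phi_1\rangle$ and $|\phi_2\rangle$ as eigen-vectors with eigen-values $+1$ and $-1$.
\begin{verbatim}
import numpy as np

alpha, beta = 0.6, 0.8                       # |alpha|^2 + |beta|^2 = 1
theta = 2.0 * np.arctan(beta / alpha)
U = np.array([[np.cos(theta),  np.sin(theta)],
              [np.sin(theta), -np.cos(theta)]])
phi1 = np.array([alpha,  beta])              # eigen-value +1
phi2 = np.array([beta, -alpha])              # eigen-value -1
assert np.allclose(U @ phi1,  phi1)
assert np.allclose(U @ phi2, -phi2)
\end{verbatim}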
\subsection{Two qubit case:} \label{sec:theory_2}
The Hilbert space dimension of a two qubit system is four. Consider an illustrative example with a set of orthogonal states
\begin{align} \notag
\{|S(\alpha,\beta)\rangle\}=\{ (\alpha|00\rangle + \beta|01\rangle), (\alpha|10\rangle + \beta|11\rangle), \\
(\beta|10\rangle - \alpha|11\rangle), (\beta|00\rangle - \alpha|01\rangle)\},
\end{align}
where $\alpha$ and $\beta$ are real numbers satisfying, $|\alpha|^2+ |\beta|^2 = 1$.
This set is so chosen that the states are (a) orthogonal, (b) not entangled, (c) different from Bell states, (d) without definite parity, and (e) containing
single-superposed-qubits (SSQB) (in this case the second qubit is superposed).
Using the general procedure discussed above, we can select the eigen-value arrays for two operators $U_1$ and $U_2$ as
\begin{equation}
\{e_1\}=\{1,1,-1,-1\}, ~~~~~~~~~~~\{e_2\}=\{1,-1,1,-1\}.
\end{equation}
$U_1$ and $U_2$, the unitary transformation of the diagonal matrices formed by $\{e_1\}$ and $\{e_2\}$ are, \\
\begin{equation}
U_1 = \begin{pmatrix}
Cos(\theta) & Sin(\theta) & 0 & 0 \\
Sin(\theta) & -Cos(\theta) & 0 & 0 \\
0 & 0 & Cos(\theta) & Sin(\theta)\\
0 & 0 & Sin(\theta) & -Cos(\theta)\\
\end{pmatrix},
\end{equation}
\begin{equation}
U_2 = \begin{pmatrix}
Cos(\theta) & Sin(\theta) & 0 & 0\\
Sin(\theta) & -Cos(\theta) & 0 & 0\\
0 & 0 & -Cos(\theta) & -Sin(\theta)\\
0 & 0 & -Sin(\theta) & Cos(\theta)\\
\end{pmatrix},
\end{equation}
where, $\theta = 2 \times Tan^{-1}(\dfrac{\beta}{\alpha})$. \\
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
states & Ancilla-1 & Ancilla-2 \\
\hline
$|\phi_1\rangle$ & $|0\rangle$ & $|0\rangle$ \\
$|\phi_2\rangle$ & $|0\rangle$ & $|1\rangle$ \\
$|\phi_3\rangle$ & $|1\rangle$ & $|0\rangle$ \\
$|\phi_4\rangle$ & $|1\rangle$ & $|1\rangle$ \\
\hline
\end{tabular}
\label{table1}
\caption{State of ancilla qubits for different input states for two qubit orthogonal states.}
\end{center}
\end{table}
The output states of the ancilla qubits run through all possible product states as the input state changes, as listed in Table I. The
quantum circuit for two qubit state discrimination is shown in Fig.2a.
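To illustrate Table I, the following sketch simulates one round of the circuit of Fig.1 with a single ancilla qubit and reproduces the ancilla readouts for the set of Eqn.(8); it reuses the hypothetical \texttt{discrimination\_operators} helper sketched in Sec.\ref{sec:theory_n} and assumes NumPy.
\begin{verbatim}
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def ancilla_bit(U, phi):
    """One phase-estimation round: 0 if U|phi> = +|phi>, 1 if -|phi>."""
    d = U.shape[0]
    state = np.kron(np.array([1.0, 0.0]), phi)      # ancilla |0> (x) |phi>
    state = np.kron(H, np.eye(d)) @ state           # Hadamard on ancilla
    CU = np.block([[np.eye(d), np.zeros((d, d))],   # controlled-U
                   [np.zeros((d, d)), U]])
    state = np.kron(H, np.eye(d)) @ (CU @ state)    # second Hadamard
    return int(round(np.linalg.norm(state[d:])**2)) # P(ancilla = 1)

s = 1 / np.sqrt(2)
phis = [s * np.array(v) for v in
        ([1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, -1], [1, -1, 0, 0])]
U1, U2 = discrimination_operators(phis, [[1, 1, -1, -1], [1, -1, 1, -1]])
print([(ancilla_bit(U1, p), ancilla_bit(U2, p)) for p in phis])
# prints [(0, 0), (0, 1), (1, 0), (1, 1)], i.e., Table I
\end{verbatim}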
\subsubsection{Special case ($\alpha=\beta=\frac{1}{\sqrt{2}}$):} \label{sec:theory_spcl}
The set of orthogonal states are,
\begin{align} \notag
\{|S(\tfrac{1}{\sqrt{2}},\tfrac{1}{\sqrt{2}})\rangle\}= \{|\phi_i\rangle\}
= \{\tfrac{1}{\sqrt{2}}(|00\rangle + |01\rangle),\\\tfrac{1}{\sqrt{2}}(|10\rangle + |11\rangle),
\tfrac{1}{\sqrt{2}}(|10\rangle - |11\rangle),\tfrac{1}{\sqrt{2}}(|00\rangle - |01\rangle)\}.
\end{align}
The operators $U_1$ and $U_2$ can be found by substituting the value of $\theta = \tfrac{\pi}{2}$ in (6) and (7),
\begin{equation}
U_1 =
\begin{pmatrix}
0 & 1 & 0 & 0\\
1 & 0 & 0 & 0\\
0 & 0 & 0 & 1\\
0 & 0 & 1 & 0\\
\end{pmatrix}
and ~~
U_2 =
\begin{pmatrix}
0 & 1 & 0 & 0\\
1 & 0 & 0 & 0\\
0 & 0 & 0 & -1\\
0 & 0 &-1 & 0\\
\end{pmatrix}.
\end{equation}
The quantum circuit for the set in Eqn.(8) is the same as for the general case of any set of two qubit orthogonal states (Fig.2a).
Experimental implementation of this case has been performed using NMR and is described in Sec.\ref{sec:expt}.
\subsection{Bell state discrimination:} \label{sec:theory_Bell}
Bell states are maximally entangled two qubit states (also known as Einstein-Podolsky-Rosen states) \citep{epr}. They play a crucial role in several applications of quantum computation
and quantum information theory. They have been used for teleportation, dense coding and entanglement swapping \citep{bennet1, bennet2, pan, zukowski}. Bell states have also found application in remote state preparation, where
a known state is prepared in a distant laboratory \citep{pati}. Hence, it is of general interest to distinguish Bell states without disturbing them.
The complete set of Bell states is,
\begin{align} \notag
\{|B_i\rangle\} = \{\tfrac{1}{\sqrt{2}}(|0_20_1\rangle + |1_21_1\rangle),\tfrac{1}{\sqrt{2}}(|0_20_1\rangle - |1_21_1\rangle), \\
\tfrac{1}{\sqrt{2}}(|0_21_1\rangle + |1_20_1\rangle),\tfrac{1}{\sqrt{2}}(|0_21_1\rangle - |1_20_1\rangle)\}
\end{align}
Bell states form an orthogonal set. Hence one can design a circuit for Bell state discrimination using only phase estimation.
The circuit diagram is the same as that shown in Fig.2a with different $U_1$ and $U_2$. For the eigen-value arrays $\{e_1\}=\{1,-1,1,-1\}$ and $\{e_2\}=\{-1,1,1,-1\}$,
$U_1$ and $U_2$ are obtained as \\
\begin{equation}
U_1 = \begin{pmatrix}
0 & 0 & 0 & 1\\
0 & 0 & 1 & 0\\
0 & 1 & 0 & 0\\
1 & 0 & 0 & 0\\
\end{pmatrix} and~~ U_2 = \begin{pmatrix}
0 & 0 & 0 & -1\\
0 & 0 & 1 & 0\\
0 & 1 & 0 & 0\\
-1 & 0 & 0 & 0\\
\end{pmatrix}.
\end{equation}
The controlled operators ($C-U_1$ and $C-U_2$) for phase estimation which involves 3-qubit operators can be written as the product of 2-qubit operators as,
\begin{align}
C-U_1 & = C-NOT^3_1 \times C-NOT^3_2 , \\
C-U_2 & = C-\pi^2_1 \times C-NOT^4_1 \times C-NOT^4_2 \times C-\pi^2_1. \notag
\end{align}
Here qubits 1 and 2 are work qubits in which the Bell states are encoded and 3 and 4 are the ancilla qubits.
Here $C-NOT^i_j$ represents C-NOT operation with control on $i^{th}$ qubit and target on $j^{th}$ qubit.
The splitting of the three qubit operators into two qubit operators is needed for the implementation of $C-U_1$ and $C-U_2$; a numerical check follows.
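These product decompositions can be checked numerically; a minimal NumPy sketch acting on the two work qubits only (with the $C$-$\pi$ gate between them represented by the controlled-phase matrix \texttt{CZ}) is given below.
\begin{verbatim}
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
CZ = np.diag([1, 1, 1, -1])          # C-pi between the two work qubits

U1 = np.kron(X, X)                   # action of C-U_1 on the work register
U2 = CZ @ np.kron(X, X) @ CZ         # action of C-U_2 on the work register

assert np.allclose(U1, np.fliplr(np.eye(4)))
assert np.allclose(U2, np.array([[ 0, 0, 0, -1],
                                 [ 0, 0, 1,  0],
                                 [ 0, 1, 0,  0],
                                 [-1, 0, 0,  0]]))
\end{verbatim}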
There already exists an algorithm for non-destructive discrimination of Bell state by Gupta \textit{et al.} \citep{panigrahi}, which has also been experimentally implemented in NMR by Jharana \textit{et al.} \citep{jharana}.
The circuit of Gupta \textit{et al.} \citep{panigrahi} is based on parity and phase measurement and will fail for a superposed state which has no definite parity.
However, the present circuit for non-destructive discrimination of Bell states using phase estimation is similar to Gupta's circuit, with the parity measurement replaced by a
modified phase estimation.
\subsection{$GHZ$ state discrimination:} \label{sec:theory_ghz}
$GHZ$ states are maximally entangled multi qubit states \citep{ghz}. GHZ states have been used in several quantum algorithms such as
quantum secret sharing, controlled dense coding and quantum key distribution \citep{hao, xiao, ying}.
These algorithms make use of entanglement and hence it is important to discriminate $GHZ$ states by preserving their entanglement.
All $n$ qubit $GHZ$ states form an orthogonal set (without definite parity). Hence a circuit can be designed for discriminating general $n$ qubit $GHZ$ states using only phase estimation.
Consider the case of three qubit $GHZ$ states, which are
\begin{align}
\notag
\{|G_i\rangle\} = \{ \tfrac{1}{\sqrt{2}}(|000\rangle + |111\rangle), \tfrac{1}{\sqrt{2}}(|000\rangle - |111\rangle),\\ \notag
\tfrac{1}{\sqrt{2}}(|001\rangle + |110\rangle), \tfrac{1}{\sqrt{2}}(|001\rangle - |110\rangle),\\ \notag
\tfrac{1}{\sqrt{2}}(|010\rangle + |101\rangle), \tfrac{1}{\sqrt{2}}(|010\rangle - |101\rangle), \\
\tfrac{1}{\sqrt{2}}(|011\rangle + |100\rangle), \tfrac{1}{\sqrt{2}}(|011\rangle - |100\rangle)\}
\end{align}
Here we need three ancilla qubits and have to implement three controlled operators for state discrimination. Verification of the NMR experiment to discriminate
such states has been
carried out here using $Matlab^{\begin{scriptsize}\textregistered\end{scriptsize}}$ and the parameters of a four qubit NMR system, as described in Sec.\ref{sec:matlab}.
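A minimal sketch of the corresponding operator construction is given below, reusing the hypothetical \texttt{discrimination\_operators} helper of Sec.\ref{sec:theory_n}; the displayed eigen-value arrays are one admissible choice satisfying Conditions \#1--\#4, not necessarily the one used in our simulation.
\begin{verbatim}
import numpy as np

s = 1 / np.sqrt(2)
ghz = []                              # the eight states of Eqn.(13), in order
for k in range(4):                    # |b> +/- |NOT b>, b = 000, 001, 010, 011
    for sign in (+1, -1):
        v = np.zeros(8)
        v[k], v[7 - k] = s, sign * s
        ghz.append(v)

e = [[1,  1,  1,  1, -1, -1, -1, -1],   # one admissible choice of
     [1,  1, -1, -1,  1,  1, -1, -1],   # eigen-value arrays satisfying
     [1, -1,  1, -1,  1, -1,  1, -1]]   # Conditions #1-#4
U1, U2, U3 = discrimination_operators(ghz, e)
# The three ancilla bits (1 - e_j^i)/2 then spell out the index i in binary.
\end{verbatim}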
\section{Experimental Implementation by NMR} \label{sec:expt}
\subsection{Non-Destructive Discrimination of two qubit orthogonal states:} \label{sec:expt_2}
Experimental implementation of the quantum state discrimination(QSD) algorithm has been performed here for 2 qubit case for the orthogonal set (Eqn.7).
\begin{figure*} \label{fig:circuit_2qbt_splt}
\begin{center}
\includegraphics[width=8.9cm,height=4.3cm]{circuit_2qbt_splt.jpeg}
\caption{(a) Two qubit State discrimination circuit for Experimental implementation in three qubit NMR quantum computer, (b) and (c) are splitting of the circuit-(a)
into two circuits with single ancilla measurements.}
\end{center}
\end{figure*}
The discrimination circuit diagram shown in Fig.2a needs a 4 qubit system. As the two ancilla qubits are independent of each other,
following \citep{jharana}, one can split the experiment into two measurements with a single ancilla qubit (Fig.2b and 2c).
The NMR implementation of the discrimination algorithm starts with (i) preparation of the pseudo-pure state, followed by (ii) creation of the input state and
(iii) quantum phase estimation with the operators $\{U_j\}$. Finally, the measurement on the ancilla qubit yields the result.
The experiment has been carried out at 300K in $11.7 T$ field in a Bruker $AV500$ spectrometer using a triple resonance QXI probe.
The system chosen for the implementation of the discrimination algorithm is Carbon-13 labeled Dibromo-fluoro methane $^{13}CHFBr_2$, where $^1H$, $^{19}F$ and $^{13}C$ act as the three qubits \citep{vanedrsypen, avik3}.
The $^1H$, $^{19}F$ and $^{13}C$ resonance frequencies at this field are 500, 470 and 125 MHz, respectively. The scalar couplings between the spins are:
$J_{HC}= 224.5 Hz$, $J_{HF} = 49.7 Hz $ and $J_{FC} = - 310.9 Hz$(Fig.3).
The NMR Hamiltonian for a three qubit weakly coupled spin system is \citep{ernst},
\begin{align}
H = \displaystyle\sum\limits_{i=1}^3\nu_iI_z^i+ \displaystyle\sum\limits_{i<j=1}^3J_{ij}I_z^iI_z^j,
\end{align}
where $\nu_i$ are the Larmor frequencies and the $J_{ij}$ are the scalar couplings. The starting point of any algorithm in an
NMR quantum information processor is the equilibrium density matrix,
which under high temperature and high field approximation is in a highly mixed state represented by\citep{avik3},
\begin{align} \notag
\rho_{eq} ~~~~\propto ~~~~\gamma_H I_z^H + \gamma_C I_z^C +\gamma_F I_z^F \\
= \gamma_H(I_z^H +0.94I_z^F +0.25I_z^C).
\end{align}
There are several methods for creating pseudo pure states (PPS) in NMR from the equilibrium state \citep{cory1, gershenfeld, cory2, sallt}. We have utilized the spatial averaging technique \citep{cory2} for creating
pseudo pure states, as described in \citep{avik3}.
The spectra for the equilibrium state and the $|000\rangle$ PPS are shown in Fig.3.
\begin{figure*} \label{fig:spctras_eq_pps_new}
\begin{tabular}{cc}
\subfigure{\includegraphics[width=4.5cm,height=2.7cm]{sample1.jpeg}}
\subfigure{\includegraphics[width=10cm,height=3cm]{spctras_eq_pps_new1.jpeg}}
\end{tabular}~\\
\caption{The three qubit NMR sample used for experimental implementation. The nuclear spins $^1H$, $^{19}F$ and $^{13}C$ are
used as the three qubits. (a) Equilibrium spectra of proton, carbon and fluorine, (b) spectra corresponding to the created $|000\rangle$ pseudo pure state.
These spectra are obtained by using a $90^o$ measuring pulse on each spin.}
\end{figure*}
For the phase estimation algorithm, the proton spin, due to its high sensitivity, has been utilized as the ancilla qubit; the two qubit states
to be discriminated are encoded in the carbon and fluorine spins.
As the measurements are performed only on ancilla qubit, we record only proton spectra for non-destructive discrimination of the state of carbon and fluorine.
The state of the ancilla qubit can be identified by the relative phase of the spectra. We set the phase such that a positive peak indicates that the proton
was initially in state $|0\rangle$.
\subsubsection*{Implementation of $Controlled$-$U_1$ and $U_2$:}
For the set of orthogonal states given in Eqn.(9), the operators $U_1$ and $U_2$ are given in Eqn.(8). Let $H_1$ and $H_2$ be the effective Hamiltonians
for the $Controlled$-$U_1$ and $Controlled$-$U_2$ propagators, such that \\
$Controlled$-$U_1=\exp(i H_1)$, \\
$Controlled$-$U_2=\exp(i H_2)$,\\
where $H_1$ and $H_2$, in terms of product operators \citep{ernst}, are obtained as \\
$H_1=(\dfrac{\pi}{4} I - \dfrac{\pi}{2} I^1_z - \dfrac{\pi}{2} I^3_x + \pi I^1_z I^3_x)$, \\
$H_2=(\dfrac{\pi}{4} I - \dfrac{\pi}{2} I^1_z- \pi I^2_z I^3_x + 2 \pi I^1_z I^2_z I^3_x)$.\\
Since the various terms in $H_1$ and $H_2$ commute with each other, one can write,
\begin{align}\notag
Controlled-U_1 = \exp(i H_1) & \\\notag
= \exp(i(\dfrac{\pi}{4} I - \dfrac{\pi}{2} I^1_z -& \dfrac{\pi}{2} I^3_x + \pi I^1_z I^3_x)) \\\notag
= \exp(i\dfrac{\pi}{4} I) \times \exp( -i \dfrac{\pi}{2} I^1_z) \times & \exp(-i \dfrac{\pi}{2} I^3_x) \times \exp(i\pi I^1_z I^3_x), \\\notag
Controlled-U_2 =\exp(i H_2) &\\ \notag
= \exp(i(\dfrac{\pi}{4} I - \dfrac{\pi}{2} I^1_z - &\pi I^2_z I^3_x + 2 \pi I^1_z I^2_z I^3_x)) \\ \notag
= \exp(i\dfrac{\pi}{4} I)\times \exp(-i&\dfrac{\pi}{2} I^1_z) \times \exp(-i\pi I^2_z I^3_x)\\
&~~~ \times \exp(i2\pi I^1_z I^2_z I^3_x).
\end{align}
As the decomposed terms commute with each other, these propagators can be easily implemented in NMR (Fig.4). Single-spin operators such as $I_x$ and $I_y$ are implemented using r.f.\ pulses.
The $I_z$ operator is implemented using a composite $z$-rotation pulse sequence ($(\frac{\pi}{2})_{-x}(\frac{\pi}{2})_y(\frac{\pi}{2})_x$) \citep{ml,sorensen}.
Two-spin product terms such as $I^i_z I^j_x$ are implemented using scalar-coupling Hamiltonian evolution sandwiched between two $(\frac{\pi}{2})_y$ pulses on spin $j$ \citep{avik3}.
The three-spin product operator terms are implemented using cascades of two-spin operator evolutions (\textit{Tseng et al.} \citep{tseng}).
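As a quick numerical check of the composite $z$-rotation identity quoted above, the following sketch verifies that the pulse sandwich $(\frac{\pi}{2})_{-x}(\frac{\pi}{2})_y(\frac{\pi}{2})_x$ (read left to right in time, so the propagators compose right to left) equals a $(\frac{\pi}{2})_z$ rotation on a single spin; this is a check of the algebra only, not of the spectrometer implementation.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

Ix = np.array([[0, 1], [1, 0]]) / 2
Iy = np.array([[0, -1j], [1j, 0]]) / 2
Iz = np.diag([0.5, -0.5]).astype(complex)

def pulse(op, angle):
    return expm(-1j * angle * op)       # rotation propagator exp(-i*angle*op)

composite = pulse(Ix, np.pi/2) @ pulse(Iy, np.pi/2) @ pulse(Ix, -np.pi/2)
assert np.allclose(composite, pulse(Iz, np.pi/2))
\end{verbatim}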
\begin{figure*} \label{fig:u1u2}
\begin{tabular}{cc}
\centering
\subfigure[$Controlled$-$U_1$]{\includegraphics[width=4cm,height=3cm]{u1.jpeg}}
\subfigure[$Controlled$-$U_2$]{\includegraphics[width=10cm,height=3cm]{u2.jpeg}}
\end{tabular}
\caption{The pulse sequences for the $Controlled$-$U_j$ operators for the two-qubit orthogonal states shown in (14). (Here narrow pulses indicate ($\tfrac{\pi}{2}$) pulses and broad pulses
indicate $\pi$ pulses, with the phase given above each pulse.)}
\end{figure*}
\begin{figure*} \label{fig:finalspectra}
\includegraphics[width=15cm,height=3.5cm]{spectra.jpeg}
\caption{Ancilla (proton spin) spectra of the final state of the two-qubit state discrimination algorithm for the states $(i) ~~ |\phi_1\rangle$, $(ii) ~~ |\phi_2\rangle$, $(iii) ~~ |\phi_3\rangle$,
$(iv) ~~ |\phi_4\rangle$. $A_1$ and $A_2$ are the results of the two measurements on the single ancilla qubit (Fig.4b and 4c, respectively); these are two experiments with the same ancilla qubit.
These spectra are obtained with a $90^o$ measuring pulse on the ancilla (proton) qubit at the end of the pulse sequence.}
\end{figure*}
The experimental results are shown in Fig.5. The proton spectra show the state of the ancilla qubit, which in turn can be used for
discrimination of the two-qubit state encoded in the carbon and fluorine spins. A positive peak in Fig.5 means the ancilla is in the state $|0\rangle$, and a negative peak indicates that the
ancilla qubit is in the state $|1\rangle$. Thus the spectra in Fig.5 indicate that $(i),(ii),(iii),(iv)$ are respectively $|\phi_1\rangle,|\phi_2\rangle,|\phi_3\rangle, |\phi_4\rangle$ (Table.I).
To compute the fidelity of the experiment, complete density matrix tomography has been carried out (Fig.6).
\begin{figure*} \label{fig:tomo}
\includegraphics[width=14cm,height=16.3cm]{tomo.jpeg}
\caption{Density matrix tomography of the initial and final states of the QSD circuit. The first qubit is the ancilla. It is evident that the states of the $2^{nd}$ and $3^{rd}$ qubits are preserved.
(Here $1 \rightarrow |000\rangle$, $2 \rightarrow |001\rangle$, $3 \rightarrow |010\rangle$, $4 \rightarrow |011\rangle$, $5 \rightarrow |100\rangle$, $6 \rightarrow |101\rangle$,
$7 \rightarrow |110\rangle$, $8 \rightarrow |111\rangle$.)}
\end{figure*}
The experimental results are in agreement with Table.I, with an \textquoteleft average absolute deviation\textquoteright ~\citep{avik} of 4.0\% and a \textquoteleft maximum absolute deviation\textquoteright ~\citep{avik} of 7.2\%,
providing the desired discrimination.
\section{Three qubit $GHZ$ state Discrimination using $Matlab^{\textregistered}$ Simulation:} \label{sec:matlab}
Non-destructive discrimination of the three-qubit maximally entangled ($GHZ$) states in NMR, using only the phase estimation algorithm as described in Sec.\ref{sec:theory},
has also been performed using a $Matlab^{\begin{scriptsize}\textregistered\end{scriptsize}}$ simulation. This
simulation verifies the principle involved but does not include any decoherence or pulse-imperfection effects.
The three-qubit $GHZ$ states form a set $\{|G_i\rangle\}$, given by eqn.(13), which can be re-expressed as
\begin{align}
\{|G_i\rangle\} = \{ |\phi_1\rangle, |\phi_2\rangle, |\phi_3\rangle, \dots, |\phi_8\rangle \}.
\end{align}
The discrimination of a three-qubit $GHZ$ state using phase estimation requires three work qubits and three ancillas. We divide the six-qubit quantum circuit into three circuits,
each with three work qubits and a single ancilla. There are several possible eigenvalue sets which satisfy the conditions discussed in Sec.\ref{sec:theory}.
Consider one such set,
\begin{align}
\notag
\{e_1\}&=\{1,-1,1,-1,1,-1,1,-1\}, \\
\{e_2\}&=\{-1,1,1,-1,1,-1,-1,1\}, \\\notag
\{e_3\}&=\{-1,1,1,-1,-1,1,1,-1\}.
\end{align}
For this eigenvalue set (18), the $Controlled$-$U_j$ operators can be written as
\begin{align}
\notag
Controlled-U_1 = C-NOT^a_1 &\times C-NOT^a_2 \times C-NOT^a_3, \\ \notag
Controlled-U_2 = C-\pi^2_3 \times C&-NOT^a_1 \times C-NOT^a_2 \\ \notag
& \times C-NOT^a_3 \times C- \pi^2_3, \\ \notag
Controlled-U_3 = C-\pi^1_3 \times C&-NOT^a_1 \times C-NOT^a_2 \\
\times C-NOT&^a_3 \times C- \pi^1_3.
\end{align}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
state & Measurement-1 & Measurement-2 & Measurement-3 \\
\hline
$|\phi_1\rangle$ & $|0\rangle$ & $|1\rangle$ & $|1\rangle$\\
$|\phi_2\rangle$ & $|1\rangle$ & $|0\rangle$ & $|0\rangle$\\
$|\phi_3\rangle$ & $|0\rangle$ & $|0\rangle$ & $|0\rangle$\\
$|\phi_4\rangle$ & $|1\rangle$ & $|1\rangle$ & $|1\rangle$\\
$|\phi_5\rangle$ & $|0\rangle$ & $|0\rangle$ & $|1\rangle$\\
$|\phi_6\rangle$ & $|1\rangle$ & $|1\rangle$ & $|0\rangle$\\
$|\phi_7\rangle$ & $|0\rangle$ & $|1\rangle$ & $|0\rangle$\\
$|\phi_8\rangle$ & $|1\rangle$ & $|0\rangle$ & $|1\rangle$\\
\hline
\end{tabular}
\label{table1}
\caption{State of ancilla qubits for different input states of Eqn.(13) and (17).}
\end{center}
\end{table}
Splitting of the four-qubit operators into two-qubit operators is needed for the experimental implementation (Fig.7).
Here 1, 2 and 3 are the work qubits, in which the $GHZ$ state is encoded, and \textquoteleft a\textquoteright ~is the ancilla qubit.
The results of the ancilla qubit measurements are tabulated in Table.II.
~\\
\begin{figure*} \label{fig:u1u2u3}
\centering
\includegraphics[width=21cm,height=13cm,angle=90]{ghz_u1u2u3_1.jpeg}
\caption{Pulse sequences for the controlled operators in $GHZ$ state discrimination. (Here narrow pulses indicate ($\tfrac{\pi}{2}$) pulses and broad pulses
indicate $\pi$ pulses, with the phase given above each pulse.)}
\end{figure*}
The NMR simulation has been carried out using the parameters of a well-known four-qubit system, crotonic acid with all carbons labelled by $^{13}C$ (Fig.8) \citep{laflamme}.
The density matrix tomography of the
$Matlab^{\begin{scriptsize}\textregistered\end{scriptsize}}$ experiment for a few selected $GHZ$ states ($|\phi_1\rangle $, $|\phi_4\rangle$ and $|\phi_7\rangle$)
is shown in Fig.10. This confirms that the method of phase estimation discussed in Sec.\ref{sec:theory} can be used
for discrimination of $GHZ$ states without destroying them.
\begin{figure*}[h] \label{fig:crotonic}
\begin{center}
\includegraphics[width=6cm,height=3.7cm]{crotonicacid1.jpeg}
\end{center}
\caption{The chemical structure, the chemical shifts and the spin-spin couplings of $^{13}$C-labelled crotonic acid. The four $^{13}C$ spins act as the four qubits \citep{laflamme}.}
\end{figure*}
\begin{figure*} \label{fig:ghztomo}
\begin{tabular}{cccc}
~~ & First Experiment & Second Experiment & Third Experiment \\~\\
$(i)$ & $|0\rangle_a (\tfrac{1}{\sqrt{2}}(|000\rangle+|111\rangle))$ & $|1\rangle_a (\tfrac{1}{\sqrt{2}}(|000\rangle+|111\rangle))$ & $|1\rangle_a (\tfrac{1}{\sqrt{2}}(|000\rangle+|111\rangle))$ \\
~ &\includegraphics[width=4.2cm,height=4cm]{ghz11.jpg} & \includegraphics[width=4.2cm,height=4cm]{ghz12.jpg} & \includegraphics[width=4.2cm,height=4cm]{ghz13.jpg} \\~\\~\\
$(ii)$ & $|1\rangle_a (\tfrac{1}{\sqrt{2}}(|001\rangle-|110\rangle))$ & $|1\rangle_a (\tfrac{1}{\sqrt{2}}(|001\rangle-|110\rangle))$ & $|1\rangle_a (\tfrac{1}{\sqrt{2}}(|001\rangle-|110\rangle))$ \\
~ & \includegraphics[width=4.2cm,height=4cm]{ghz41.jpg} & \includegraphics[width=4.2cm,height=4cm]{ghz42.jpg} & \includegraphics[width=4.2cm,height=4cm]{ghz43.jpg} \\~\\~\\
\end{tabular}
\caption{ $Matlab^{\textregistered}$ simulation results for $GHZ$ state discrimination. The simulated spectra are shown for the $GHZ$ states $|\phi_1\rangle $ and $|\phi_4\rangle$.
It is evident from the final density matrices that the $GHZ$ states are preserved.
(Here $1 \rightarrow |0000\rangle$, $2 \rightarrow |0001\rangle$, $3 \rightarrow |0010\rangle$, $4 \rightarrow |0011\rangle$, $5 \rightarrow |0100\rangle$, $6 \rightarrow |0101\rangle$,
$7 \rightarrow |0110\rangle$, $8 \rightarrow |0111\rangle$, $9 \rightarrow |1000\rangle$, $10 \rightarrow |1001\rangle$, $11 \rightarrow |1010\rangle$, $12 \rightarrow |1011\rangle$, $13 \rightarrow
|1100\rangle$, $14 \rightarrow |1101\rangle$, $15 \rightarrow |1110\rangle$, $16 \rightarrow |1111\rangle$. The first qubit is the ancilla.)}
\end{figure*}
\section*{Conclusion}
A general scalable method for non-destructive quantum state discrimination of a set of orthogonal states using the quantum phase estimation algorithm has been
described, and experimentally implemented for a two-qubit case by NMR. As the direct measurements are performed only on the
ancilla, the discriminated states are preserved. The generalization of the algorithm is
illustrated by the discrimination of $GHZ$ states using a $Matlab^{\begin{scriptsize}\textregistered\end{scriptsize}}$ simulation.
~\\~\\
\bibliographystyle{unsrtnat}
\section{Introduction}
Random graphs are mathematical models commonly used to study real-world networks such as the World-Wide Web, social, financial, neural, and biological networks.
Many real-world networks exhibit the following two properties:
\begin{itemize}
\item \emph{The small-world property:} distances within the network are very small in comparison to the number of nodes. By ``small'' we mean that distances are at most of the order of an iterated logarithm.
Some real-world networks are even \emph{ultra-small,} meaning that the distances are at most a double logarithm.
\item \emph{The scale-free property:} the number of connections per node behaves statistically like a power law. This implies that the variation is typically very high.
\end{itemize}
An example of a random graph model with these properties is the \emph{Norros-Reittu random graph} \cite{norros2006} (see Figure \ref{FigureNorros}). This model produces a random graph $G=(V,E)$ on a fixed set of vertices $V$, but with a random edge set $E\subset V\times V$ as follows: Every vertex $x\in V$ is assigned an i.i.d.\ random weight $W_x>0$. Conditioned on the weights of its end-vertices, the edge $\{x,y\}$ is present in $E$ with probability $p_{xy}=1-\exp(-W_xW_y/\mathcal{N})$, independently of the status of other possible edges (here $\mathcal{N}$ is a normalizing constant). See \cite{RGN} for more results on inhomogeneous random graphs.
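A minimal sketch of sampling this model could read as follows; taking $\mathcal{N}$ to be the total weight is one common convention and is an assumption here, as other normalisations appear in the literature.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, tau = 200, 2.5
W = rng.pareto(tau - 1, size=n) + 1     # i.i.d. weights, P(W > w) = w^(-(tau-1))
N = W.sum()                             # normalizing constant (assumed convention)

edges = [(x, y) for x in range(n) for y in range(x + 1, n)
         if rng.random() < 1 - np.exp(-W[x] * W[y] / N)]
\end{verbatim}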
These two properties are important, but many real-life networks, such as social networks, have other features as well that influence the structure and formation of networks:
\begin{itemize}
\item \emph{Geometric clustering:} in social networks this manifests itself because people who are geographically close to each other are more likely to know each other, giving rise to formation of locally concentrated clusters within the network.
\item \emph{Hierarchies:} again in social networks, the more `important' people are, the more likely they are to know other important people, even if those people might be far away, giving rise to hierarchies within the network.
\end{itemize}
A well-known model that has geometric clustering and the connections over long distances required for the existence of hierarchies is \emph{long-range percolation} (LRP, see Figure \ref{FigureLRP}) \cite{Benjamini, BergerTransience,biskup2004, HeyHofSak08, Schulman}. LRP is a percolation model that produces random subgraphs of the graph $(\mathbb{Z}^d, \mathbb{Z}^d \times \mathbb{Z}^d)$ wherein an edge $\{x,y\}\in\mathbb{Z}^d \times \mathbb{Z}^d$ is (independently) \emph{retained} with probability $p_{xy} \propto \lambda/|x-y|^\alpha$ for some positive constants $\lambda$ and $\alpha$, and removed otherwise. Thus, the connection probabilities are monotonically decreasing in $\alpha$, and increasing in $\lambda$. For many choices of $d$ and $\alpha$, LRP has a \emph{percolation phase transition} in $\lambda$, meaning that there exists $\lambda_c(d, \alpha) \in (0,\infty)$ such that when $\lambda > \lambda_c$ there exists an infinite cluster almost surely, whereas when $\lambda < \lambda_c$, all clusters are almost surely finite. When $\alpha\in (d,2d)$, this model has the clustering property, as well as something akin to the small-world property \cite{biskup2004}. It is, however, clearly not scale-free, since the decay of the degree distribution is faster than exponential.
Various models have been introduced in the recent years that combine three or four of the network properties described above. We mention, for instance, the models introduced by Aiello \emph{et al.}\ \cite{aiello2008}, Flaxman, Frieze, and Vera \cite{flaxman2006}, and Jacob and M\"orters \cite{jacob2013}.
In this paper we consider another model that has all four properties: \emph{scale-free percolation} (SFP, also known as \emph{heterogeneous long-range percolation}). SFP interpolates between long-range percolation and the Norros-Reittu random graph (see Figure \ref{FigureScalefree}). SFP was introduced by Deijfen, van der Hofstad, and Hooghiemstra in \cite{DeijfenScaleFree}. We start with a formal definition of the model.
\begin{definition}[Scale-free percolation]\label{SFPDef}
Consider the graph $(\mathbb{Z}^d, \mathbb{Z}^d \times \mathbb{Z}^d)$ for some fixed $d\geq1$. Assign to each vertex $x\in\mathbb{Z}^d$ an i.i.d.\ weight $W_x$, where the weights follow a power-law distribution with parameter $\tau-1$:
\begin{equation*}\label{powerlawDistribution}
\mathbb{P}(W_x>w)= w^{-(\tau-1)}L(w), \qquad w > 0,
\end{equation*}
where $L$ is a slowly-varying function (i.e., $L(wa)/L(w)\rightarrow1$ for all $a>0$ as $w\rightarrow\infty$, so the law of $W_x$ is $(\tau-1)$-regularly varying).
Conditionally on the weights, an edge $\{x,y\}\in\mathbb{Z}^d\times\mathbb{Z}^d$ is retained independently of all other edges with probability
\begin{equation*}
p_{xy}=1-\exp\left(-\lambda \frac{W_xW_y}{\vert x-y\vert^\alpha}\right),
\end{equation*}
where $\vert x\vert=\| x \|_1$ and $\lambda, \alpha>0$ are positive constants of the model.\footnote{We choose to work with the $\ell_1$-norm because it is a practical metric, but defining SFP with respect to any $\ell_p$-norm with $p \in [1,\infty]$ gives qualitatively similar results.} The edge is removed otherwise. We call retained edges \emph{open}, and removed edges \emph{closed.}
We denote the joint probability measure of edge occupation and weights by $\P_{(\lambda,W)}$ (where the subscript $W$ refers to the \emph{law} of the weights, not the actual values) and write just $\P$ if the parameters are clear from the context.
\end{definition}
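For concreteness, a minimal sketch of sampling SFP on a finite box (using the pure power law $\mathbb{P}(W>w)=w^{-(\tau-1)}$ for $w\geq 1$, i.e., $L\equiv 1$) could read as follows; setting all weights to a constant recovers a long-range percolation sample.
\begin{verbatim}
import numpy as np
from itertools import combinations, product

rng = np.random.default_rng(1)
d, m = 2, 20
alpha, tau, lam = 3.9, 1.95, 0.1

sites = list(product(range(m), repeat=d))
W = {v: rng.pareto(tau - 1) + 1 for v in sites}    # P(W > w) = w^(-(tau-1))

def p_open(x, y):
    dist = sum(abs(a - b) for a, b in zip(x, y))   # l1-norm
    return 1.0 - np.exp(-lam * W[x] * W[y] / dist ** alpha)

edges = [(x, y) for x, y in combinations(sites, 2)
         if rng.random() < p_open(x, y)]
\end{verbatim}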
\begin{figure}
\begin{subfigure}{.48\linewidth}
\includegraphics[keepaspectratio = True,width = 0.99\linewidth]{inhomogeneous.pdf}
\caption{Norros-Reittu random graph for $\tau = 1.95$}\label{FigureNorros}
\end{subfigure}
\begin{subfigure}{.48\linewidth}
\includegraphics[keepaspectratio = True,width = 0.99\linewidth]{longrange1.pdf}
\caption{Long-range percolation for $\alpha=3.9, \lambda = 0.9$}\label{FigureLRP}
\end{subfigure}
\begin{subfigure}{.96\linewidth}
\includegraphics[keepaspectratio = True,width = 0.99\linewidth]{scalefreebreed.pdf}
\caption{Scale-free percolation for $\alpha = 3.9, \tau=1.95, \lambda = 0.1$}\label{FigureScalefree}
\end{subfigure}
\caption{Simulations of the Norros-Reittu random graph (A), long-range percolation (B), and scale-free percolation. The size of the vertices is drawn proportionally to their weights.}
\end{figure}
Before we proceed with our results, let us briefly summarize some important features of SFP, as proved by Deijfen, van der Hofstad, and Hooghiemstra \cite{DeijfenScaleFree}, and by Deprez, Hazra, and W\"uthrich \cite{DeprezInhomogeneous}.
It turns out that the following parameter is frequently useful to describe the behaviour of SFP concisely:
\begin{equation}
\gamma:=\frac{\alpha(\tau-1)}{d}.
\end{equation}
Like long-range percolation, SFP on $\mathbb{Z}^d$ with parameter $\alpha$ and i.i.d.\ vertex weights whose law $W$ is $(\tau-1)$-regularly varying has a percolation phase-transition in $\lambda$ at
\begin{equation}
\lambda_c = \lambda_c(d,\alpha, W)
:=\inf\big\{\lambda>0\,\big|\,\text{there exists an infinite cluster }\mathcal C_\infty\big\}.
\end{equation}
This phase transition is non-trivial, except when $d\ge1$ and $\gamma<2$, in which case $\lambda_c=0$, and when $d=1$, $\gamma>2$, and $\alpha>2d$, in which case $\lambda_c=\infty$ \cite{DeijfenScaleFree}.
In the regime where SFP percolates, the infinite cluster $\mathcal{C}_\infty$ is almost surely unique \cite{Gandolfi}. Deprez \emph{et al.}\ show that the percolation density of SFP is continuous when $\alpha\in(d,2d)$: at $\lambda=\lambda_c$ there is no infinite cluster almost surely~\cite{DeprezInhomogeneous}.
By the choice of the power-law distribution, this model is scale-free. Indeed, the degrees $D$ follow a power-law of the form
\[ \mathbb{P}(D>s)=s^{-\gamma}\ell(s)\]
for some slowly varying function $\ell(s)$ \cite{DeijfenScaleFree}.
This shows that the model behaves differently from long-range percolation. Many real-world networks are believed to have infinite-variance degree distributions. SFP has infinite-variance degrees when $\gamma<2$, and in this regime it locally behaves like an ultra-small world \citep{DeijfenScaleFree}.
Under the assumption that the weights are bounded away from 0, the probability that an edge is open in scale-free percolation with parameters $\alpha, \tau$ and $\lambda$ stochastically dominates the probability that an edge is open in long-range percolation with parameters $\alpha$ and some $\lambda'>0$. Deprez \emph{et al.}\ \cite{DeprezInhomogeneous} use this domination to show that SFP locally has the small-world and clustering properties when $\alpha\in(d,2d)$, analogous to long-range percolation \cite{biskup2004}.
\section{Main results}
\subsection*{Distances within the infinite percolation cluster}
Given a graph $G = (V,E)$, the \emph{graph distance on $G$} between any $x,y \in V$ is defined as
\[
d_{G}(x,y) = \# \text{\emph{ edges in $E$ on a shortest path from $x$ to $y$,}}
\]
with the conventions that $d_G(x,x) =0$ and $d_G(x,y) =\infty$ if $x$ and $y$ are not in the same connected component of the graph. We define the \emph{diameter} of $G$ as the maximal distance between two vertices in $G=(V,E)$, i.e., $\mathrm{diam}\, G = \max_{x,y \in V} d_{G}(x,y)$.
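On finite samples these quantities can be computed directly; a minimal breadth-first-search sketch for $d_G$ (from which the diameter follows by maximizing over pairs of vertices):
\begin{verbatim}
from collections import deque

def graph_distance(adj, x, y):
    # adj maps each vertex to its set of neighbours
    if x == y:
        return 0
    dist, queue = {x: 0}, deque([x])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                if w == y:
                    return dist[v] + 1
                dist[w] = dist[v] + 1
                queue.append(w)
    return float('inf')   # x and y lie in different components
\end{verbatim}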
The infinite random subgraph $\mathcal{C}$ of $(\mathbb{Z}^d, \mathbb{Z}^d \times \mathbb{Z}^d)$ corresponding to the infinite component of supercritical SFP thus naturally produces a random metric on $\mathbb{Z}^d$. We write $d_\mathcal{C}$ for this metric. We write $x\wedge y$ for the minimum of $x$ and $y$.
Our first result is the proof of a conjecture by Deijfen \emph{et al.}\ \cite{DeijfenScaleFree}.
\begin{theorem}[Finite diameter in the infinite-degree cases]\label{thmGraphDiameter}
Consider SFP on $\mathbb{Z}^d$ with $d \ge 1$, $\lambda > 0$, and with i.i.d.\ vertex weights whose law $W$ satisfies for some $\tau >1$ and some $c >0$,
\begin{equation}\label{lowerBoundWeights}
\mathbb{P}(W\geq w)\geq cw^{-(\tau-1)}\wedge 1,\qquad \text{ for all } w>0.
\end{equation}
Then $\mathrm{diam}\, \mathcal{C} = 2$ almost surely when $\gamma \le 1$, and $\mathrm{diam}\, \mathcal{C} \le \lceil d/(d-\alpha)\rceil$ almost surely when $\alpha < d$.
\end{theorem}
Note that \eqref{lowerBoundWeights} implies $\P(W < c^{1/(\tau-1)}) =0$, thus the weights are bounded away from $0$.
See Figure \ref{phasesDistances} for an overview of the graph distances in which we combine the results of the present paper and those of \cite{DeijfenScaleFree,DeprezInhomogeneous}. Theorem \ref{thmGraphDiameter} thus complements the characterization of distances. Our proof for the case $\alpha<d$ is based on the proof of a similar result for long-range percolation with $\alpha<d$ by Benjamini, Kesten, Peres, and Schramm \cite{Benjamini}.
For the Norros-Reittu random graph a similar result to Theorem \ref{thmGraphDiameter} is known: van den Esker \emph{et al.}\ \cite{Esker} prove that when the weights are distributed as an infinite-mean power-law, then the diameter of the graph is almost surely $2$ or $3$ (more precise results are obtained under extra conditions).
\subsection*{Transience and recurrence}
Graph distances are one way of characterizing the geometry of a graph. Another way of doing this is by studying the behaviour of random walk on the graph.
The notions of \emph{transience} and \emph{recurrence} are particularly relevant:
\begin{definition}[Random walk, transience and recurrence]
A simple random walk on a locally finite graph $G=(V,E)$ is a sequence $(X_n)_{n=0}^\infty$ with $X_0 \in V$ where $X_{n+1}$ is chosen uniformly at random from the ``neighbours'' of $X_n$, i.e., $$X_{n+1} \in \{x \in V \, : \, \{x, X_n\} \in E\},$$ independently of $X_0,\dots,X_{n-1}$. A graph is called \emph{recurrent} if for every $X_0$ a random walk returns almost surely to its starting point $X_0$. A graph is called \emph{transient} if it is not recurrent.
\end{definition}
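On a finite sample one can probe this definition empirically; the sketch below simulates the walk, truncated at a fixed horizon, so it can at best suggest recurrent or transient behaviour rather than prove it.
\begin{verbatim}
import random

def returns_to_start(adj, x0, max_steps=10 ** 5):
    # simple random walk from x0; True if it revisits x0 within max_steps
    x = x0
    for _ in range(max_steps):
        x = random.choice(tuple(adj[x]))
        if x == x0:
            return True
    return False
\end{verbatim}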
We prove the following two theorems, the results of which are summarized in the phase diagram in Figure \ref{phasesRandomWalk}.
\begin{figure}[ht!!]
\centering
\begin{subfigure}{\linewidth}
\centering
\includegraphics[keepaspectratio,height = .4\textheight]{phasetransitionDistances.pdf}
\caption{Overview of graph distances, combined results of Theorem \ref{thmGraphDiameter}, \cite{DeijfenScaleFree} and \cite{DeprezInhomogeneous}.
By the notation $d_{\mathcal{C}}(x,y)\lesssim f(x,y)$ we mean that there exists a constant $c>0$, such that ${\lim_{|x-y|\rightarrow\infty}\mathbb{P}(d_{\mathcal{C}}(x,y)\leq c f(x,y))=1}$. For $\gamma\in (1,2)$ and $\alpha>d$ stronger bounds have been proved~\cite[Theorem 5.1, 5.3]{DeijfenScaleFree}.
}\label{phasesDistances}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[keepaspectratio,height = .4\textheight]{phasetransitionRW.pdf}
\caption{Recurrent vs.\ transient. Results of Theorem \ref{transientRandomThm} and \ref{recurrentRandomWalk2D}}\label{phasesRandomWalk}
\end{subfigure}
\caption{Phase diagrams. Transitions in $\gamma$ and $\alpha$. }
\end{figure}
\begin{theorem}[Transience in $d\geq 1$]\label{transientRandomThm}
Consider SFP on $\mathbb{Z}^d$ with $d \ge 1$, i.i.d.\ vertex weights whose law $W$ satisfies \eqref{lowerBoundWeights}, either $1 < \gamma < 2$ or $d < \alpha < 2d$, or both, and $\lambda > \lambda_c(d, \alpha, W)$.
Then the infinite cluster of SFP is transient almost surely.
\end{theorem}
Recall P\'olya's theorem, which states that the lattice $\mathbb{Z}^d$ with nearest-neighbour edges is recurrent if and only if $d\in\{1,2\}$, and transient otherwise. Therefore, transience in these dimensions shows a dramatic difference from regular lattices. Berger \cite{BergerTransience} proved for LRP that the random walk is transient in one or two dimensions if and only if $\alpha\in(d,2d)$. For SFP the result is stronger: for any $\alpha>d$, there exists $\tau>1$ such that the infinite cluster is transient.
\begin{theorem}[Recurrence in two dimensions]\label{recurrentRandomWalk2D}
Consider SFP on $\mathbb{Z}^d$ with $d =2$, i.i.d.\ vertex weights whose law $W$ satisfies
\begin{equation}\label{upperBoundWeights}
\mathbb{P}(W\geq w)\leq cw^{-(\tau-1)}, \qquad \text{for all }w\geq 0,
\end{equation}
for some $\tau >1$ and $c >0$, $\alpha > 4$, and $\lambda > \lambda_c(2, \alpha, W)$, and such that either $\tau >2$ or $\gamma >2$, or both. Then the infinite percolation cluster is recurrent $\mathbb{P}_{(\lambda, W)}$-almost surely.
\end{theorem}
Note that, as mentioned before, in dimension 1 when $\gamma>2$ and $\alpha>2$ there is no infinite cluster almost surely \cite{DeijfenScaleFree}, so in this case a random walk is trivially recurrent. We therefore give a full characterization of recurrence and transience of SFP in dimension one and two, while for $d\geq3$ we only characterize it when $\alpha<2d$ or $\gamma<2$.
For nearest-neighbour percolation in dimension $d\geq3$ it is known that the infinite cluster is transient \cite{KestenRandomWalk}.
It would be interesting to verify whether this is true for other percolation models on $\mathbb{Z}^d$, in particular for scale-free percolation or long-range percolation.
\subsection*{Geometric clustering and hierarchies}
We show that SFP has the geometric clustering property not only for $\alpha\in(d,2d)$ as shown by Deprez \emph{et al.}\ \cite[Theorem 6]{DeprezInhomogeneous}, but also when $1< \gamma<2$. Moreover, these clusters can be organized in a hierarchical structure, a phenomenon that is also present in some real-life networks (see for example \cite[Chapter 13]{carrington2005models} or \cite[Chapter 9]{barabasi}). These hierarchical structures are not only present in finite boxes, they extend throughout $\mathbb{Z}^d$. Indeed, the infinite component of SFP contains an infinite subgraph exhibiting a prescribed hierarchy. We introduce the notion of a \emph{hierarchically clustered tree}.
\begin{definition}[Hierarchically clustered trees]\label{def:hct}
Fix $m\geq 1$ and $x\in\mathbb{Z}^d$. Let $\mathcal{Q}_m(x):=x+([0, m-1]^d\cap \mathbb{Z}^d)$. Consider the set of trees $\mathcal{T}_{x,m}$ of all unrooted, connected, cycle-free subgraphs of $(\mathcal{Q}_m(x), \mathcal{Q}_m(x) \times \mathcal{Q}_m(x))$ (i.e., trees on $\mathcal{Q}_m(x)$), where each vertex $v$ in such a tree is endowed with a weight $W_v \in \mathbb{R}$.
Fix $\rho\in(0,1]$ and $K>0$. We call an element $T \in \mathcal{T}_{x,1}$ an \emph{$(x,1,\rho,K)$-hierarchically clustered tree} if $T = (\{x\}, \varnothing, \{W_x\})$ (i.e., $T$ is the isolated vertex $x$ with a weight). For $m \ge 2$, we call an element $T \in \mathcal{T}_{x,m}$ an \emph{$(x,m,\rho,K)$-hierarchically clustered tree} if
the following four properties hold:
\begin{enumerate}
\item\emph{[Positive density]} $T$ contains at least a fraction $\rho$ of all the vertices in the box $\mathcal{Q}_m(x)$:
\[
|V|>\rho m^d.
\]
\item\emph{[Ultra-small world]} $T$ is an ultra-small world in the sense that
\[
\mathrm{diam}\left(T\right)\leq K \;\max\{1, \log\log m\}.
\]
\item\emph{[Ordered weights]} If we root $T$ at its maximum-weight vertex, then, for any vertex in the tree, the weights decrease step-by-step along the path from the root to that vertex.
\item\emph{[Spatial clustering]} If we remove any given edge from $T$, then there exists an $m'\le m$ (depending on $T$ and the removed edge) such that the two trees $T'_1=(V'_1, E'_1, W'_1)$ and $T'_2=(V'_2, E'_2, W'_2)$ that remain satisfy
\begin{enumerate}
\item at least one (say $T'_1$) is an $(x', m',\rho,K)$-hierarchically clustered tree for some $x'\in \mathcal{Q}_m(x)$, and
\item the other (say $T'_2$) has its vertex set $V'_2$ disjoint with the box on which $T'_1$ is defined:
\[
\mathcal{Q}_{m'}(x')\cap V'_2 = \varnothing.
\]
\end{enumerate}
\end{enumerate}
\end{definition}
Note that condition $(1)$ together with condition $(2)$ implies that there exists $K'>0$, such that for all $m\geq 1$
\[
\mathrm{diam}\left(T\right) <K'\log\log\left|V_{T}\right|,
\]
so hierarchically clustered trees combine a topological and a spatial version of the ultra-small world property.
\begin{theorem}[Hierarchically clustered trees]\label{thm:trees}
Consider SFP on $\mathbb{Z}^d$ with $d \ge 1$, i.i.d.\ vertex weights whose law $W$ satisfies \eqref{lowerBoundWeights}, with $1<\gamma<2$, and any $\lambda >0$.
Let $\mathcal{S}_m$ denote the SFP configuration \emph{inside} the cube $[0,m-1]^d$.
There exist $\xi >0$, a density $0 < \rho \le 1$, $K>0$, and a constant $m_0>0$, such that
\begin{enumerate}
\item for all $m\geq m_0$,
\[
\mathbb{P}(\mathcal{S}_m\text{ contains a }(0, m, \rho,K)\text{-hierarchically clustered tree})\geq 1-\exp(-\rho m^\xi), \text{ and}
\]
\item the infinite component $\mathcal{C}_\infty$ contains a.s.\ an infinite, connected, cycle-free subgraph $\mathcal{T}_\infty$ such that if we remove any given edge from $\mathcal{T}_\infty$, a finite and an infinite connected component remain, and there exist $x\in\mathbb{Z}^d$ and $m\geq 1$ such that the finite connected component is an $(x, m, \rho,K)$-hierarchically clustered tree.
\end{enumerate}
\end{theorem}
\begin{figure}[t!]
\centering
\includegraphics[width = 0.8\linewidth, height=0.2\textheight]{scalefree1d.pdf}
\caption{A simulation of scale-free percolation in $d=1$. The vertex-height in the figure depends on the weight (logarithmically). \mbox{$\alpha=2, \tau=1.95, \lambda = 0.1$}. }\label{FigureHierarchical}
\end{figure}
\subsection*{Related results and open questions}\label{SectionOpenQuestions}
\subsubsection*{Graph distance}
This paper, combined with \cite{DeijfenScaleFree, DeprezInhomogeneous}, gives bounds on the graph distance for every value in the parameter space, but the picture is not yet complete. We were not able to prove a non-trivial lower bound on the diameter in the regime where $\gamma<2$ and $d > \alpha$, and it is not clear to us that the upper bound is sharp. In the regime $\alpha\in(d,2d)$ with $\gamma>2$, there is a gap in the bounds on graph distances, since there the best known bounds are \cite{DeijfenScaleFree,DeprezInhomogeneous}:
\[
\lim_{|x|\rightarrow \infty} \mathbb{P}\left(c\log|x|\leq d_\mathcal{C}(0,x)\leq c^{-1}(\log|x|)^{\log(2)/\log(2d/\alpha)}\right)=1, \qquad \text{ for some } c>0.
\]
What is the right asymptotics of $d_\mathcal{C}(0,x)$ in this regime?
\subsubsection*{Hierarchical structure}
In Section \ref{SectionClusters} below we determine that the bound on $\xi$ in Theorem \ref{thm:trees} is ${\xi<\min \{d(2-\gamma)/(\tau+1),\tfrac{d}{2}(\tau+2-\sqrt{(\tau+2)^2-4(2-\gamma)})\}}$.
Biskup \cite{biskup2004} shows a result rather similar to Theorem \ref{thm:trees} on the clustering density for long-range percolation when $\alpha\in(d,2d)$, where $\xi <2d-\alpha$. The corresponding range for $\xi$ for scale-free percolation would be $\xi <d(2-\gamma)$. It might be possible to extend Theorem \ref{thm:trees} to hold for this regime of $\xi$.
\subsubsection*{Scale-free percolation on the torus.}
Scale-free percolation is defined as a model on the infinite lattice $\mathbb{Z}^d$.
A challenging question is the study of scale-free percolation, and in particular its critical behaviour, on the finite torus. Working on the torus keeps the translation invariance and provides the opportunity to compare the model to its non-spatial counterparts, such as the Norros-Reittu random graph \cite{norros2006}.
Scale free percolation on finite boxes is strongly related to geometric variants of the Norros-Reittu model or the Chung-Lu model.
For example, Bringmann, Keusch, and Lengler \cite{BringmannGIRG} introduce \emph{geometric inhomogenous random graphs}, which generalise a certain class of hyperbolic random graphs, and which could be described as ``continuous SFP on the torus''. Indeed, they do not use a grid, but place the points randomly. In a fairly general setup, where, contrary to our model, the connection probability does not need to approach 1 as $W_xW_y/|x-y|^\alpha$ goes to infinity, these authors prove that such graphs are ultra-small \cite{BringmannDistances}. Moreover, they claim that their results also carry over to finite boxes. Because of this more general setup, it would be interesting (but possibly not straightforward) to see whether in their setting hierarchically clustered trees are also present.
\subsection*{Organization}
The proofs of the main results partly rely on a number of elementary properties of the vertex weights. We begin by proving these properties in Section~\ref{SectionPreliminary}. In Section~\ref{SectionDistance} we prove the boundedness of the graph distance for $\alpha< d$ and $\gamma\leq 1$. In Section~\ref{SectionRW} we prove the random walk results, and in Section~\ref{SectionClusters} we prove Theorem \ref{thm:trees}
on hierarchical clustering.
\section{Preliminaries: properties of the vertex weights}\label{SectionPreliminary}
We start by introducing some basic notation and definitions.
Given two percolation configurations $\omega, \omega' \in \{0,1\}^{\mathbb{Z}^d \times \mathbb{Z}^d}$, we write $\omega' \succcurlyeq \omega$ if $\omega'(e) =1$ when $\omega(e)=1$ for all $e \in \mathbb{Z}^d \times \mathbb{Z}^d$, i.e., all edges that are open in $\omega$ are also open in $\omega'$. We say that an event $A$ is \emph{increasing} if $\omega \in A$ implies $\omega' \in A$ for all $\omega' \succcurlyeq \omega$.
Given two random variables $X$ and $Y$, we say that $Y$ \emph{stochastically dominates} $X$ if for every $x \in \mathbb{R}$ the inequality $\P(X > x) \le \P(Y >x)$ holds, and we write $X \preceq_d Y$.
\begin{lemma}[Stochastic domination for SFP]\label{obs:increasing} Let $W$ and $W'$ be random variables such that $W' \preceq_d W$. For any increasing event $A$,
\begin{equation}\label{e:coupling}
\P_{(\lambda,W)}(A) \ge \P_{(\lambda,W')}(A).
\end{equation}
\end{lemma}
This lemma can be proved with a straightforward coupling argument that we leave to the reader: couple the weights so that $W'_x \le W_x$ for every $x \in \mathbb{Z}^d$, and use one uniform random variable per edge to decide its status under both weight configurations; since the connection probabilities are increasing in the weights, every edge that is open in the $W'$-configuration is then also open in the $W$-configuration.
\medskip
We commonly use Lemma \ref{obs:increasing} to simplify the law of $W$: If the law of $W$ satisfies \eqref{lowerBoundWeights} and the law of $W'$ satisfies
\begin{equation}\label{e:Wprime}
\P(W' \ge w) = c w^{-(\tau-1)}, \qquad \text{ for all } w \ge c^{1/(\tau-1)},
\end{equation}
with the same constant $c$ as in \eqref{lowerBoundWeights}, then \eqref{e:coupling} holds.
The upcoming lemmas allow us to construct a coarse-graining argument in the proofs of Theorems \ref{transientRandomThm} and \ref{thm:trees}.
\begin{lemma}\label{powerLawMultiplyScalar}
Let $W$ be a random variable with law given by \eqref{e:Wprime}. Let $W''$ be a random variable with law given by
\begin{align}\label{standardPowerLaw}
\mathbb{P}(W''\geq w)&=w^{-(\tau-1)},\qquad \text{ for all }w\geq 1.
\end{align}
Then, for $y\geq c^{1/(\tau-1)}$, the conditional law of $W$ given $\{W\geq y\}$
is the same as
the law of $yW''$, i.e., $\mathbb{P}(W\geq x \mid W\geq y) = \P(y W'' \ge x)$.
\end{lemma}
\proof
For $x\geq y$
\[
\mathbb{P}(W\geq x \mid W\geq y) = \left(\frac{y}{x}\right)^{\tau-1}=\mathbb{P}\left(W''\geq\frac{x}{y}\right)=\mathbb{P}(yW'' \ge x).\qed
\]
\medskip
\begin{lemma}\label{maximumWeight}
Let $\{W_i\}_{i=1}^\infty$ be an i.i.d.\ sequence of random variables with law given by \eqref{e:Wprime}. Then, for all $n\ge1$ and all $K_2\geq K_1\geq c^{1/(\tau-1)}$,
\[
\mathbb{P}\left(\max_{i=1,...,n}W_{i}\leq K_2 \, \Big| \, W_{i}\geq K_1 \text{ for }i=1,\dots ,n \right)\leq \exp\left(-n\left(\frac{K_1}{K_2}\right)^{\tau-1}\right).
\]
\end{lemma}
\medskip
\proof
Using that the weights are i.i.d., that $K_2\geq K_1$, and that $1-x\leq \exp(-x)$, we can bound the left-hand side by
\[
\left(1-\left(\frac{K_2}{K_1}\right)^{-(\tau-1)}\right)^n
\leq \exp\left(-n\left(\frac{K_1}{K_2}\right)^{\tau-1}\right). \qed
\]
\medskip
\begin{lemma}\label{transientLemmaBigDegrees}
Fix an integer $d\ge1$ and $\alpha \in (0,\infty)$ such that $\gamma=\alpha(\tau-1)/d<2$. Assign to each vertex in $[0,N-1]^d \subset \mathbb{Z}^d$ an i.i.d.\ random variable with law satisfying \eqref{lowerBoundWeights}. Let $E_{N,\beta}$ be the event that the box $[0,N-1]^d$ contains at least $\log N$ vertices with weight larger than $\beta N^{\alpha/2}$. Then, for all $\beta>0$,
\[
\mathbb{P}\left(E_{N,\beta}\right)\longrightarrow 1,\qquad \text{ as }N\rightarrow\infty.
\]
\end{lemma}
\proof
Let $Y$ denote the number of vertices in $[0,N-1]^d$ with weight exceeding $\beta N^{\alpha/2}$. By \eqref{lowerBoundWeights} and independence of the weights we have $X \preceq_d Y$, where
$X\sim\text{Bin}(N^d,c\left(\beta N^{\alpha/2}\right)^{-(\tau-1)})$.
Note that since $\gamma < 2$ we have $\mathbb{E}[X] = c \beta^{-(\tau-1)} N^{d(1-\gamma/2)} \gg \log N$ and Var$(X) \ll \mathbb{E}[X]^2$. It follows by the Paley-Zygmund inequality, $\P(X > \theta\,\mathbb{E}[X]) \ge (1-\theta)^2\,\mathbb{E}[X]^2/\mathbb{E}[X^2]$ for $\theta\in(0,1)$, that (when $N$ is sufficiently large),
\[
\P(E_{N, \beta}) \ge \P(X \ge \log N) \ge \frac{(\mathbb{E}[X] - \log N)^2}{\mathrm{Var}(X) + \mathbb{E}[X]^2}
\longrightarrow 1. \qed
\]
\medskip
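A quick Monte Carlo sanity check of Lemma \ref{transientLemmaBigDegrees} (with illustrative parameters giving $\gamma=1.5<2$; only the binomial lower bound $X$ from the proof is simulated):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
d, alpha, tau, beta, c = 2, 3.0, 2.0, 1.0, 1.0   # gamma = alpha*(tau-1)/d = 1.5
for N in [10, 100, 1000]:
    p = min(1.0, c * (beta * N ** (alpha / 2)) ** (-(tau - 1)))
    X = rng.binomial(N ** d, p, size=20000)      # heavy vertices in the N-box
    print(N, (X >= np.log(N)).mean())            # tends to 1 as N grows
\end{verbatim}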
We call any set that is a translate of $[0,N-1]^d \subset \mathbb{Z}^d$ an \emph{$N$-box.} We say that two $N$-boxes $\mathcal{Q}_1=v_1+[0,N-1]^d$ and $\mathcal{Q}_2=v_2+[0,N-1]^d$ are ``$k$ boxes away from each other'' if $\vert v_1-v_2\vert=kN$ (where we recall that $|\cdot|$ denotes the $\ell_1$-norm).
\begin{lemma}\label{maxDistance} Let $d \ge 1$ and $k \ge 1$.
Consider two $N$-boxes $\mathcal{Q}_1$ and $\mathcal{Q}_2$ that are $k$ boxes away from each other. For arbitrary $u_1\in \mathcal{Q}_1$ and $u_2\in \mathcal{Q}_2$,
\[
\vert u_1-u_2\vert\leq 3dkN.
\]
\end{lemma}
\proof
Let $v_1$ and $v_2$ be such that $\mathcal{Q}_1=v_1+[0,N-1]^d$ and $\mathcal{Q}_2=v_2+[0,N-1]^d$.
Applying the triangle inequality twice, one obtains
\[
\vert u_1 -u_2\vert
\leq \vert v_1 -v_2\vert+\vert u_1 -v_1\vert + \vert v_2 -u_2\vert
\leq kN+2dN
\leq 3dkN.\qed
\]
\medskip
\begin{lemma}\label{transientLemmaConnectivity}
Fix $N \in \mathbb{N}$ and let $\mathcal{Q}_1$ and $\mathcal{Q}_2$ be two $N$-boxes that are $k$ boxes away from each other, such that $\mathcal{Q}_1 = Nv_1 + [0,N-1]^d$ and $\mathcal{Q}_2 = Nv_2 + [0,N-1]^d$ with $v_1, v_2 \in \mathbb{Z}^d$. Let $\beta>0$ be given, let the weights $\{W_x\}_{x \in \mathbb{Z}^d}$ be i.i.d.\ according to a law satisfying \eqref{e:Wprime}, and let $\{W'_x\}_{x \in \mathbb{Z}^d}$ be i.i.d.\ with law \eqref{standardPowerLaw}.
For $i=1,2$ write
\[
u_i = \argmax_{u \in \mathcal{Q}_i} W_u.
\]
Then
\[
\mathbb{P}_{(\lambda,W)}\left(\{u_1,u_2\}\text{ is open}\,\big|\,W_{u_1},W_{u_2}\geq \beta N^{\alpha/2}\right)
\geq
\mathbb{P}_{(\lambda\beta^2(3d)^{-\alpha},W')}\Big(\{v_1,v_2\}\text{ is open}\Big).
\]
\end{lemma}
\proof
Let $U \sim $ Unif$[0,1]$ denote a standard uniform random variable with cdf $\P(U < x) =x$ for $x \in [0,1]$. Then, by Definition \ref{SFPDef},
\begin{multline*}
\mathbb{P}_{(\lambda,W)}\left(\{u_1,u_2\}\text{ is open}\,\big|\,W_{u_1},W_{u_2}\geq \beta N^{\alpha/2}\right)\\
= \P^*\left(U < 1-\exp\left(-\lambda \frac{W_1 W_2}{|u_1 -u_2|^\alpha}\right) \mid W_1, W_2 \ge \beta N^{\alpha/2}\right),
\end{multline*}
where the probability measure $\P^*$ on the right-hand side is with respect to $W_1$ and $W_2$, which are i.i.d.\ with the same law as the elements of $\{W_x\}_{x \in \mathbb{Z}^d}$, and an independent random variable $U \sim$ Unif$[0,1]$.
Using Lemmas \ref{powerLawMultiplyScalar} and \ref{maxDistance} we bound the right-hand side from below by
\[
\P^{**}\left(U < 1-\exp\left(-\lambda \beta^2 (3d)^{-\alpha} \frac{W'_1 W'_2}{k^\alpha}\right)\right),
\]
where the probability measure $\P^{**}$ is with respect to $W'_1$ and $W'_2$ which are i.i.d.\ with the same law as the elements of $\{W'_x\}_{x \in \mathbb{Z}^d}$ and an independent random variable $U \sim$ Unif$[0,1]$.
On the other hand, since $|v_1 -v_2|=k$, by Definition \ref{SFPDef} we also have
\[
\P_{(\lambda \beta^2 (3d)^{-\alpha},W')}(\{v_1,v_2\} \text{ is open})
= \P^{**}\left(U < 1-\exp\left(-\lambda \beta^2 (3d)^{-\alpha} \frac{W'_1 W'_2}{k^\alpha}\right)\right).
\]
The claim thus follows. \qed
\medskip
\section{Distances in the infinite degree case: proof of Theorem \ref{thmGraphDiameter}}\label{SectionDistance}
\proof[Proof of Theorem \ref{thmGraphDiameter}(1)]
[\emph{The case $\gamma\leq 1$}] By translation invariance of the model, it suffices to show $\P(d_{\mathcal{C}}(0,x)\leq 2)=1$.
Since we assumed that the law of $W$ satisfies \eqref{lowerBoundWeights}, there exists a $c>0$ such that $W_x \ge c^{1/(\tau-1)}$ for all $x \in \mathbb{Z}^d$ almost surely.
Fix $x\in\mathbb{Z}^d$. For $k\geq1$, let $\mathcal{Q}_k$ denote the box centred at $x/2$ with sides of length $l_k:=2^k|x|$ and let $\mathcal{A}_k :=\mathcal{Q}_k \setminus \mathcal{Q}_{k-1}$ with $\mathcal{A}_1 :=\mathcal{Q}_1$. Note that there are $(2^d-1)|x|^d2^{d(k-1)}$ vertices in $\mathcal{A}_k$.
\begin{figure}[hbt]
\centering
\includegraphics[keepaspectratio,width = 8cm]{diameter.pdf}
\caption{Construction for the proof of Theorem \ref{thmGraphDiameter} for $d=2$.}\label{diameterFigure}
\end{figure}
We prove that, for every $k$, the probability that the maximal-weight vertex of $\mathcal{A}_k$ is connected to both $0$ and $x$ is bounded below by a positive constant that does not depend on $k$, and let the result follow by Borel-Cantelli.
For each $k \in \mathbb{N}$, let $v_k$ be the vertex in $\mathcal{A}_k$ with maximal weight and let
$E_k$ be the event
that $v_k$ is connected by an open edge to both 0 and $x$. Let $a_k := 2^{\frac{dk}{\tau-1}}$ and denote
$F_k :=\{W_{v_k}\geq a_k\}$, which is an increasing event.
Using Lemma \ref{obs:increasing} and Lemma \ref{maximumWeight} with $K_1=c^{1/(\tau-1)}$ and $K_2=2^{\frac{dk}{\tau-1}}$, we can bound
\begin{equation}\label{Fk-bound}
\begin{split}
\P_{(\lambda, W)}(F_k) & \ge \P_{(\lambda, W')}(F_k)
\geq 1- \exp\left({-c(2^d-1)|x|^d2^{d(k-1)}2^{-\frac{dk}{\tau-1}(\tau-1)}}\right)\\
&=1-\exp\left(-\frac{c(2^d-1)|x|^d}{2^d}\right),
\end{split}
\end{equation}
where the measure $\P_{(\lambda, W')}$ refers to a model where all weights are distributed as in \eqref{e:Wprime}.
The right hand side of \eqref{Fk-bound} is bounded below by some $\delta > 0$ uniformly in $k$.
Observe that $|v_k|,|v_k-x|\leq dl_k$ and recall that $\tau>1$ and $\gamma\leq 1$. Write $\varepsilon = c^{1/(\tau-1)}$. We can bound the probabilities on the events $E_k$ by conditioning on $F_k$ as follows:
\begin{align*}
\mathbb{P}_{(\lambda, W)}(E_k \mid F_k) \ge \mathbb{P}_{(\lambda, W')}(E_k \mid F_k) \ge \mathbb{P}_{(\lambda, a_k)}(E_k)
&\geq \left(1-\exp\left(-\frac{\lambda \varepsilon a_k }{(dl_k)^\alpha}\right)\right)^2\\
&=\left(1-\exp\left(-\frac{\lambda \varepsilon}{(d|x|)^\alpha}2^{dk/(\tau-1)-k\alpha}\right)\right)^2\\
&\geq \frac{1}{4}\left(\left(\frac{\lambda \varepsilon}{(d|x|)^\alpha}2^{dk(1/(\tau-1)-\alpha/d)}\right)^2\wedge1\right)\\
&= \left(\left(\frac{\lambda \varepsilon}{2(d|x|)^{\alpha}}\right)^2\left(4^{d(1-\gamma)/(\tau-1)}\right)^k\right)\wedge\frac{1}{4} \\
&\ge \left(\left(\frac{\lambda \varepsilon}{2(d|x|)^{\alpha}}\right)^2\right)\wedge\frac{1}{4} =: \eta.
\end{align*}
Since this bound is independent of $k$ and of the weights $\{W_x\}_{x \in \mathbb{Z}^d}$, it follows that $\P_{(\lambda,W)}(E_k \mid F_k) \ge \eta$, and therefore,
\begin{equation*}
\P_{(\lambda,W)}(E_k) = \P_{(\lambda,W)}(E_k \mid F_k)\;\P_{(\lambda, W)} (F_k)\geq \eta \, \delta >0.
\end{equation*}
Observe that the events $E_k$ are independent of each other; hence we obtain the result for $\gamma\leq 1$ by the second Borel--Cantelli lemma. \qed
\medskip
\proof [Proof of Theorem \ref{thmGraphDiameter}(2)]
[\emph{The case $\alpha <d$}]
By translation invariance it again suffices to show that
\[\P_{(\lambda,W)}(d_{\mathcal{C}}(0,x) \le \lceil d /(d-\alpha)\rceil) =1\]
for all $x \in \mathbb{Z}^d$. Recall that the assumption \eqref{lowerBoundWeights} on the law of $W$ implies that $W \ge c^{1/(\tau-1)}$ almost surely. Note that $\{ d_{\mathcal{C}}(0,x) \le \lceil d /(d-\alpha)\rceil \}$ is an increasing event. Hence by Lemma~\ref{obs:increasing},
\[
\P_{(\lambda,W)}(d_{\mathcal{C}}(0,x) \le \lceil d /(d-\alpha)\rceil) \ge \P_{(\lambda,c^{1/(\tau-1)})}(d_{\mathcal{C}}(0,x) \le \lceil d /(d-\alpha)\rceil).
\]
Observe that SFP with constant vertex weights is equivalent to long-range percolation with the same $d$ and $\alpha$ and some possibly different parameter $\lambda'$.
Benjamini \emph{et al.\ }\cite[Example 6.1]{Benjamini} show that the diameter of the infinite cluster in long-range percolation with $\alpha < d$ for any $\lambda>0$ is equal to $\lceil d/(d-\alpha)\rceil$ almost surely. Our claim about SFP therefore follows. \qed
\section{Transience vs.\ recurrence}\label{SectionRW}
\subsection*{Transience proof}
The proof of Theorem \ref{transientRandomThm} is inspired by Berger's proof of transience for long-range percolation \cite[Theorem 1.4(II)]{BergerTransience}. We use in particular a multiscale ansatz which goes back to the work of Newman and Schulman \cite{newman1986} for long-range percolation.
\medskip
\textbf{The case $1< \gamma<2$.}
In view of Lemma \ref{obs:increasing}, we may assume \eqref{e:Wprime} rather than \eqref{lowerBoundWeights} without loss of generality.
We show that the infinite cluster of SFP almost surely contains a transient subgraph. The proof has two steps:
\begin{enumerate}
\item We first assume that $\lambda$ is large enough. With small probability we remove some vertices from the graph, independently of each other. Then we use a multiscale ansatz: we group vertices into finite boxes, and call boxes `good' or `bad' according to the weights and edge structure \emph{inside} the box. We iterate this process by considering larger boxes, which we call good or bad according to the number of good boxes in them and the edges between vertices in those boxes. This will imply transience for large values of $\lambda$.
\item To couple the original model (for \emph{any} $\lambda>\lambda_c$) to the model of the first step, we use a coarse-graining argument: We `zoom out' by considering large boxes of vertices and only considering the vertices with maximum weight in the boxes. We show that, with high probability, the weights of these vertices are so high that the graph restricted to these vertices dominates a graph as described in the first step.
\end{enumerate}
We use \cite[Lemma 2.7]{BergerTransience}, which describes a sufficient structure for a graph to be transient. To this end, we introduce the notion of a ``renormalized graph'':
We start with some notation. Given a graph $G=(V,E)$ and a sequence $\{C_n\}_{n=1}^\infty$ let $V_l(j_l,\dots,j_1)$ with $l \in \mathbb{N}$ and $j_n \in \{1,\dots,C_n\}$ be a subset of the vertex set $V$. Now let for $l \ge m$
\[
V_l(j_l,\dots,j_m) = \bigcup_{j_{m-1} =1}^{C_{m-1}} \dotsm \bigcup_{j_1=1 }^{C_1} V_l(j_l,\dots,j_1).
\]
We call the sets $V_l(j_l,\dots,j_m)$ \emph{bags}, and the numbers $C_n$ \emph{bag sizes.}
\begin{definition}
We say that the graph $G=(V,E)$ is \emph{renormalized for the sequence} $\{C_n\}_{n=1}^\infty$ if we can construct an infinite sequence of graphs such that the vertices of the \emph{$l$-th stage graph} are labelled by $V_{l}(j_l, \dots, j_1)$ for all $j_n \in \{1,\dots, C_n\}$, and such that for every $l \ge m >2$, every $j_l,\dots,j_{m+1}$, and all pairs of distinct $u_m, w_m \in \{1,\dots,C_m\}$ and $u_{m-1}, w_{m-1} \in \{1, \dots, C_{m-1}\}$ there is an edge in $G$ between a vertex in $V_l(j_l, \dots, j_{m+1}, u_m, u_{m-1})$ and a vertex in $V_l( j_l, \dots, j_{m+1}, w_m, w_{m-1}).$
\end{definition}
The underlying intuition is that every $n$-stage bag contains $C_n$ $(n-1)$-stage bags, each of which again contains $C_{n-1}$ $(n-2)$-stage bags. Every pair of $(n-2)$-stage bags in an $n$-stage bag is connected by an edge between a vertex in one bag and a vertex in the other (see Figure \ref{renormalizedGraph}).
\begin{figure}
\centering
\includegraphics[keepaspectratio,width = \textwidth]{renormalizedGraph.pdf}
\caption{$n$ and $(n-1)$ stage bag of a renormalized graph}\label{renormalizedGraph}
\end{figure}
\begin{lemma}[Berger, {\cite[Lemma 2.7]{BergerTransience}}]\label{renormalizedTransient}
A graph renormalized for the sequence $C_n$ is transient if $\sum_{n=1}^\infty C_n^{-1}<\infty$.
\end{lemma}
The lemma follows from the proof of \cite[Lemma 2.7]{BergerTransience}.
\begin{proposition}\label{lemmaTransientLargeLambda}
Consider scale-free percolation with $\gamma<2$ and weight distribution satisfying \eqref{standardPowerLaw}.
Independently of this, perform an i.i.d.\ Bernoulli site percolation on the vertices of $\mathbb{Z}^d$, colouring a vertex ``green'' with probability $\mu \in (0,1]$.
Then the subgraph of the infinite scale-free percolation cluster that is induced by the green vertices has a (unique) infinite component $\mathcal{C}_{\lambda,\mu}$.
There exist $\mu_0<1$ and $\lambda_0 > 0$ such that $\mathcal{C}_{\lambda,\mu}$ is transient for $\mu\geq \mu_0$ and $\lambda\geq\lambda_0$ almost surely.
\end{proposition}
The proof exploits a multiscale technique. Indeed, we proceed by showing that $\mathcal{C}_{\lambda,\mu}$ contains a renormalized subgraph that is transient. Therefore, $\mathcal{C}_{\lambda,\mu}$ is also transient.
\proof[Proof of Proposition \ref{lemmaTransientLargeLambda}]
For all $n \in \mathbb{N}$, let
\[
D_n := 2(n+1)^2, \qquad C_n := (n+1)^{2d},
\]
and
\[
u_n := d^{\alpha/2}(n+2)^{d(2-\gamma)/2}2^{(n+2)\alpha/2}((n+3)!)^\alpha.
\]
We partition the lattice $\mathbb{Z}^d$ into disjoint boxes of side length $D_1$, so that each such box contains $D_1^d$ vertices, and call these the \emph{1-stage} boxes. (By convention we call vertices of $\mathbb{Z}^d$ the \emph{0-stage} boxes.) We view these boxes as the vertices of a renormalized lattice. Now cover the lattice again, grouping together $(D_2)^d$ 1-stage boxes to form \emph{2-stage} boxes with sides of length $D_2$. Continue in this fashion, so that the \emph{$n$-stage} boxes form a covering of $\mathbb{Z}^d$ by translates of $[0,\prod_{k=1}^n D_k-1]^d$.
We call a 0-stage box ``good'' if the vertex associated with it is green.
For every stage $i \geq 1$, we define rules for a box to be ``good'' or ``bad'', depending only on the weights $W_x$ and the edges of $\mathcal{C}$ inside the box. This implies that disjoint boxes are good or bad independently of each other.
A 1-stage box is good if it contains at least $C_1$ good 0-stage boxes and one of the vertices in these boxes has weight at least $u_1$. For each good $1$-stage box, take the maximum-weight vertex, which has weight at least $u_1$, and call it \emph{1-dominant.}
For $n\geq 2,$ say that an $n$-stage box $\mathcal{Q}$ is good if the following three conditions are satisfied:
\begin{enumerate}
\item[(E)] At least $C_n$ of the $(n-1)$-stage boxes in $\mathcal{Q}$ are good.
\item[(F)] For any good $(n-1)$-stage box $\mathcal{Q}' \subset \mathcal{Q}$, the $(n-2)$-dominant vertices in $\mathcal{Q}'$ form a \emph{clique} (i.e., every two $(n-2)$-dominant vertices in $\mathcal{Q}'$ are connected by an edge in $\mathcal{C}$).
\item[(G)] There is an $(n-1)$-dominant vertex in one of its good $(n-1)$-stage boxes, with weight at least $u_{n}$.
\end{enumerate}
For each good $n$-stage box, choose the maximum weight vertex and call it the \emph{$n$-dominant} vertex if its weight is at least $u_{n}$. (A vertex may be dominant for different values of $n$.)
\begin{figure}
\centering
\includegraphics[width = 0.8\textwidth]{renormalizationTransientColor2.pdf}
\centering
\caption{Sketch of the renormalization in Theorem \ref{transientRandomThm} in $d=1$ for \\$D_n=4, D_{n-1}=3, D_{n-2}=2, C_n=3, C_{n-1}=2, C_{n-2}=1$. \\
`Good' boxes are marked with a solid line, `bad' boxes have a dashed line.
}
\label{renormalizationTransient}
\end{figure}
See Figure \ref{renormalizationTransient} for a sketch of this definition.
Note that by construction, the subgraph of $\mathcal{C}$ induced by the vertices that are in a good $n$-stage box for every $n \ge 0$ is a graph renormalized by a sequence of bag sizes $\{C_n\}$ that satisfies the transience condition of Lemma \ref{renormalizedTransient}. Our aim is therefore to show that almost surely such a subgraph exists.
Define $E_n(v), F_n(v)$ and $G_n(v)$ to be the events that conditions (E), (F) and (G) hold for the $n$-stage box containing the vertex $v$. To simplify notation, define $E_n:=E_n(0),F_n:=F_n(0)$ and $G_n:=G_n(0)$. We write $L_n(v)$ and $L_n$ for the events that the $n$-th stage boxes containing $v$ and $0$, respectively, are good. By translation invariance it is sufficient to show that
\[\mathbb{P}\Bigg(\bigcap_{n=1}^\infty L_n\Bigg)>0.\]
The events $L_n$ are positively correlated, hence it is sufficient to show that \[\prod_{n=1}^\infty \mathbb{P}(L_n)>0.\]
We bound
\begin{equation}\label{EqLnTrans}
\mathbb{P}(L_n^c)\leq\mathbb{P}(E_n^c)+\mathbb{P}(F_n^c \mid E_n)+\mathbb{P}(G_n^c \mid E_n).
\end{equation}
First, we give an upper bound for $\mathbb{P}(F_n^c \mid E_n)$. Recall that we use the $\ell_1$-norm for distance in the definition of the edge-probabilities of SFP. The $\ell_1$-distance between two vertices in the same $n$-stage box is at most
\[
d\prod_{k=1}^nD_k=d2^n((n+1)!)^2.
\]
The probability that two good $(n-2)$-stage boxes are \emph{not} connected by an open edge between its $(n-2)$-dominant vertices (which have weight at least $u_{n-2}$) is therefore at most
\begin{align*}
\exp\left(-\lambda d^{-\alpha}u_{n-2}^2\prod_{k=1}^nD_k^{-\alpha}\right)&=\exp\left(-\lambda d^{-\alpha}\left(d^{\alpha/2}n^{d(2-\gamma)/2}2^{n\alpha/2}[(n+1)!]^\alpha\right)^22^{-n\alpha}((n+1)!)^{-2\alpha}\right)\\
&=\exp(-\lambda n^{d(2-\gamma)}).
\end{align*}
There are
\[
\binom{D^d_nD^d_{n-1}}{2}
<16^d(n+1)^{8d}
\]
pairs of $(n-2)$-stage boxes inside an $n$-stage box, so there can be at most $16^d(n+1)^{8d}$ edges between $(n-2)$-dominant vertices inside a good $n$-stage box.
It follows by taking the union bound that
\begin{equation}\label{upperFc}
\mathbb{P}(F_n^c \mid E_n)\leq\exp\left(2d\log(4)+8d\log(n+1)-\lambda n^{d(2-\gamma)}\right).
\end{equation}
\medskip
We proceed by establishing an upper bound on $\mathbb{P}(G_n^c \mid E_n)$. There exists a constant $c_1>0$ such that
\begin{equation}\label{fractionWeights}
\begin{split}
\frac{u_{n-1}}{u_{n}}& =2^{-\alpha/2}\left(\frac{n+1}{n+2}\right)^{d(2-\gamma)/2}\frac{1}{(n+3)^\alpha}\\
& \geq c_1(n+1)^{-\alpha}.
\end{split}
\end{equation}
Note that any good $n$-stage box contains at least $C_{n}$ $(n-1)$-dominant vertices that all have weight larger than $u_{n-1}$. Using \eqref{fractionWeights}, Lemma \ref{maximumWeight}, and $\gamma=\alpha(\tau-1)/d$, gives for some $c_2>0$ that
\begin{equation}\label{upperGc}
\begin{split}
\mathbb{P}(G_n^c \mid E_n) & \leq \exp\left(-C_{n}\left(\frac{u_{n-1}}{u_{n}}\right)^{\tau-1}\right)\\
& \leq\exp\left(-c_1^{\tau-1}(n+1)^{2d-\alpha(\tau-1)}\right)\\
&\leq \exp\left(-c_2n^{d(2-\gamma)}\right).
\end{split}
\end{equation}
\medskip
The last term we bound is $\mathbb{P}(E_n^c)$. All $(n-1)$-stage boxes are good independently of each other, each with probability $\mathbb{P}(L_{n-1})$. Let $X\sim \text{Bin}(D_n^d,\mathbb{P}(L_{n-1}))$ be binomially distributed, so that $\mathbb{P}(E_n^c)=\mathbb{P}(X<C_n)$.
We use Chernoff's bound: if $X\sim\text{Bin}(m,p)$ and $\theta \in(0,1)$, then $\mathbb{P}(X<(1-\theta)mp)\leq\exp(-{\frac12 \theta^2mp}).$
For our model, this yields
\begin{equation}\label{upperEc}
\begin{split}
\mathbb{P}(E_n^c) & \leq\exp\left(-\frac{1}{2}\left(1-\frac{1}{2\mathbb{P}(L_{n-1})}\right)^2\mathbb{P}(L_{n-1}) D_n^d \right)\\
& \leq \exp\left(-2^{d-3}(2\mathbb{P}(L_{n-1})-1)^2 (n+1)^{2d}\right).
\end{split}
\end{equation}
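As an aside, the Chernoff bound used here is easy to check numerically (illustrative parameters; the empirical tail should stay below the bound):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
m, p, theta = 1000, 0.3, 0.1
X = rng.binomial(m, p, size=200000)
empirical = (X < (1 - theta) * m * p).mean()
print(empirical, np.exp(-0.5 * theta ** 2 * m * p))
\end{verbatim}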
Combining \eqref{EqLnTrans}, \eqref{upperFc}, \eqref{upperGc}, and \eqref{upperEc}, gives that
\begin{equation*}
\begin{split}
\mathbb{P}(L_n^c) & \leq \exp(2d\log(4)+8d\log(n+1)-\lambda n^{d(2-\gamma)}) + \exp\left(-c_2n^{d(2-\gamma)}\right)\\
& \quad +\exp\left(-2^{d-3}(2\mathbb{P}(L_{n-1})-1)^2(n+1)^{2d}\right).
\end{split}
\end{equation*}
If $\lambda$ is large enough (say larger than $\lambda_0$), then there exists $n_0$ such that for all $n\geq n_0$,
\begin{equation}\label{upperEFG}
\mathbb{P}(L_n^c)\leq 2\exp\left(-c_2n^{d(2-\gamma)}\right)+\exp\left(-2^{d-3}(2\mathbb{P}(L_{n-1})-1)^2(n+1)^{2d}\right).
\end{equation}
Define the sequence \[\ell_n:=1-(n+1)^{-3/2}\] and observe that
\begin{equation}\label{prodPositive}
\prod_{n=1}^\infty \ell_n>0.
\end{equation}
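Indeed, since $(n+1)^{-3/2}\leq\tfrac{1}{2}$ for all $n\geq1$ and $\log(1-x)\geq-2x$ for $x\in[0,\tfrac12]$,
\[
\sum_{n=1}^\infty\log \ell_n\geq-2\sum_{n=1}^\infty(n+1)^{-3/2}>-\infty,
\]
so the infinite product converges to a strictly positive limit.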
For any fixed $n_1>n_0$, we can find $\lambda_0>0$ and $\mu_0<1$ such that
$\mathbb{P}(L_{n_1})\geq \ell_{n_1}$, because $L_{n_1}$ depends only on the weights and edges inside a \emph{finite} box.
We now proceed by induction: let $n > n_1$ and assume that $\mathbb{P}(L_{n-1})\geq\ell_{n-1}$. Then $2\mathbb{P}(L_{n-1})-1\geq 2\ell_{n-1}-1\geq 1-\frac{1}{\sqrt{2}}$, so \eqref{upperEFG} can be further bounded by
\begin{align}\label{recursiveBound}
\mathbb{P}(L_n^c)&\leq2\exp\left(-c_2n^{d(2-\gamma)}\right)+\exp\left(-2^{d-3}\left(1-\frac{1}{\sqrt{2}}\right)^2(n+1)^{2d}\right)\nonumber\\
&\leq (n+1)^{-3/2}
=1-\ell_n,
\end{align}
where we choose $n_1$ so large that the last bound in \eqref{recursiveBound} holds for all $n>n_1$; hence $\mathbb{P}(L_n)\geq\ell_n$ for all $n\geq n_1$.
Thus, \eqref{prodPositive}, \eqref{recursiveBound} and the fact that $\mathbb{P}(L_n)>0$ for all $n$ yield that
\begin{equation*}
\prod_{n=1}^\infty\mathbb{P}(L_n)=\prod_{n=1}^{n_1}\mathbb{P}(L_n)\prod_{n=n_1+1}^\infty \mathbb{P}(L_n)\geq \prod_{n=1}^{n_1}\mathbb{P}(L_n)\prod_{n=n_1+1}^\infty \ell_n>0.
\end{equation*}
With probability 1 the graph therefore contains an infinite cluster of good vertices that can be renormalized along the sequence $C_n$. By Lemma \ref{renormalizedTransient} this cluster is transient, and since transience of a subgraph implies transience of the whole graph \cite[Section 9]{LectureNotesPeres}, the claim follows. \qed
\medskip
\textbf{The case $d< \alpha<2d$.}
We need two lemmas from the literature, which serve as the counterparts, for the case $\alpha\in(d,2d)$, of Proposition \ref{lemmaTransientLargeLambda} and Lemma \ref{transientLemmaBigDegrees}.
\begin{lemma}[Deprez, Hazra \& W\"uthrich, {\cite[Lemma 9]{DeprezInhomogeneous} }]\label{clustersizeTransient}Assume $\gamma>1$ and let $\alpha\in(d,2d)$. Choose $\lambda>\lambda_c$ and let $\alpha'\in[\alpha,2d)$. For every $\mu\in[0,1)$ and $\beta>0$ there exists $M_0\geq 1$ such that for all $m\geq M_0$
\[
\mathbb{P}\left(|\mathcal{C}_m|\geq \beta m^{\alpha'/2}\right)\geq \mu,
\]
where $\mathcal{C}_m$ is the largest connected component in $[0,m-1]^d$.
\end{lemma}
Note that \cite[Lemma 9]{DeprezInhomogeneous} is proven for the exact power law distribution of the weights in \eqref{standardPowerLaw}. This is not a problem, since Lemma \ref{obs:increasing} implies that the result extends to a weight distribution satisfying \eqref{lowerBoundWeights} when $c\geq 1$.
For $c<1$ and percolation parameter $\lambda'>0$, we observe that the model is equivalent to the case where $c=1$ and $\lambda=\lambda'c^{2/(\tau-1)}$: if the law of $W$ satisfies \eqref{e:Wprime} for some $c>0$ and $W'$ satisfies $\mathbb{P}(W'>w)=w^{-(\tau-1)}$ for $w \ge 1$, then $W\overset{d}{=}c^{1/(\tau-1)}W'.$ Hence, we can scale the parameters such that $c=1$ and apply Lemma \ref{obs:increasing}.
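As a purely illustrative companion to Lemma \ref{clustersizeTransient} (not used in any proof), the following minimal Python sketch samples SFP on the box $[0,m-1]^2$ with the pure power-law weights of \eqref{standardPowerLaw} and $\ell_1$-distances, and reports the size of the largest connected component; all parameter values are hypothetical, and the quadratic loop over vertex pairs is only feasible for small $m$.
\begin{verbatim}
import numpy as np

def largest_cluster_sfp(m=20, tau=2.5, alpha=3.0, lam=1.0, seed=0):
    # SFP on [0, m-1]^2: i.i.d. weights with P(W > w) = w^{-(tau-1)}, w >= 1,
    # sampled by inversion; a pair {x, y} is open with probability
    # 1 - exp(-lam * W_x * W_y / |x - y|_1^alpha), independently.
    rng = np.random.default_rng(seed)
    coords = np.array([(x, y) for x in range(m) for y in range(m)])
    n = len(coords)
    weights = rng.uniform(size=n) ** (-1.0 / (tau - 1.0))
    parent = list(range(n))            # union-find for connected components
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for i in range(n):
        for j in range(i + 1, n):
            dist = np.abs(coords[i] - coords[j]).sum()   # l1-distance
            p = 1.0 - np.exp(-lam * weights[i] * weights[j] / dist ** alpha)
            if rng.uniform() < p:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj
    sizes = {}
    for i in range(n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values())

if __name__ == "__main__":
    print(largest_cluster_sfp())
\end{verbatim}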
\begin{lemma}[Berger, {\cite[Lemma 2.7]{BergerTransience}}]\label{sitebondlongrangeLemma}
Let $d\geq1$, $\alpha\in(d,2d)$ and $\lambda>0$. Consider the long-range percolation model on $\mathbb{Z}^d$ in which every two vertices, $x$ and $y$, are connected by an open edge with probability \[1-\exp\left(-\frac{\lambda}{|x-y|^\alpha}\right),\] independently of other edges, and every vertex is good with probability $\mu<1$, independently of all other vertices.
There exist $\mu_1<1$ and $\lambda_1$, such that if $\lambda\geq\lambda_1$ and $\mu\geq\mu_1$, the infinite cluster on the good vertices is transient.
\end{lemma}
\begin{lemma}\label{largeConnectivity}
Consider scale-free percolation with weight distribution satisfying \eqref{lowerBoundWeights}.
Let $\mathcal{Q}_1$ and $\mathcal{Q}_2$ be two $N$-boxes that are $k$ boxes away from each other. Let $\beta>0$ be given. Moreover, assume that $\mathcal{Q}_1$ and $\mathcal{Q}_2$ contain connected components $\mathcal{C}_1$ and $\mathcal{C}_2$, respectively, of size at least $\beta N^{\alpha/2}$.
Then
\[
\mathbb{P}\Big(\mathcal{C}_1\text{\emph{ connected by an open edge to }} \mathcal{C}_2
\;\Big|\; |\mathcal{C}_1|,|\mathcal{C}_2|\geq\beta N^{\alpha/2}\Big)
\geq 1-\exp\left(-\frac{\lambda(3d)^{-\alpha}\beta^2c^{2/{(\tau-1})}}{k^\alpha}\right).
\]
\end{lemma}
\proof
Since we assumed \eqref{lowerBoundWeights}, all weights are at least $c^{1/(\tau-1)}$. By Lemma \ref{maxDistance}, we get for any two vertices $u\in\mathcal{C}_1$, $v\in\mathcal{C}_2$ that
\[
\mathbb{P}(\{u,v\}\text{ is closed}) \leq \exp\left(-\lambda(3d)^{-\alpha}c^{2/(\tau-1)}\frac{1}{k^\alpha N^\alpha}\right).
\]
Since both clusters contain at least $\beta N^{\alpha/2}$ vertices, there are at least $\beta^2 N^\alpha$ possible edges. We obtain
\begin{align*}
\mathbb{P}\Big(\mathcal{C}_1 &\text{ not connected by an open edge to } \mathcal{C}_2
\;\Big|\; |\mathcal{C}_1|,|\mathcal{C}_2|\geq\beta N^{\alpha/2}\Big)\\
&\leq \exp\left(-\lambda(3d)^{-\alpha}c^{2/(\tau-1)}\frac{1}{k^\alpha N^\alpha}\right)^{\beta^2 N^{\alpha}}\\
&=\exp\left(-\lambda(3d)^{-\alpha}\beta^2c^{2/(\tau-1)}\frac{1}{k^\alpha}\right). \qed
\end{align*}
Note that this is the $\alpha\in(d,2d)$-counterpart of Lemma \ref{transientLemmaConnectivity}.
\proof[Proof of Theorem \ref{transientRandomThm}]
The previous lemmas readily imply the result for sufficiently large $\lambda$, and we are left to extend this to all $\lambda>\lambda_c$, which we achieve via coarse-graining.
When $\gamma<2$, let $\lambda_0$ and $\mu_0$ be the values that we obtain from Proposition \ref{lemmaTransientLargeLambda}. To apply the proposition for all $\lambda > \lambda_c = 0$, we partition $\mathbb{Z}^d$ into (hyper)cubes of side length $N$ (for some large $N$ to be determined below), which we call $N$-boxes.
In every $N$-box we identify the vertex of maximum weight and call it the dominant vertex. First, we choose $\beta$ large enough so that $\lambda\beta^2(3d)^{-\alpha}>\lambda_0$.
Second, we call those $N$-boxes good that contain a vertex with weight at least $\beta N^{\alpha/2}$. We choose $N$ large enough so that the probability that an $N$-box is good is larger than $\mu_0$, using Lemma \ref{transientLemmaBigDegrees}.
Lemma \ref{transientLemmaConnectivity} implies that the probability that the dominant vertices in two good $N$-boxes, being \emph{$k$ boxes away from each other}, are connected, is bounded from below by $\P_{(\lambda_0, W'')}(\{v_1, v_2\} \text{ is open})$,
where $v_1, v_2 \in \mathbb{Z}^d$ such that $|v_1 -v_2|=k$, and where $W''_1,W''_2$ are i.i.d.\ distributed according to \eqref{standardPowerLaw}. Thus, the status of the edges between dominant vertices in good $N$-boxes stochastically dominates an SFP model on $\mathbb{Z}^d$ with parameters $\alpha$, $\lambda_0$ and weight-law $W''$, combined with a site percolation of intensity $\mu_0$, exactly as described in Proposition \ref{lemmaTransientLargeLambda}.
We now apply Proposition~\ref{lemmaTransientLargeLambda} to obtain the result for the case $\gamma < 2$.
The case $\alpha \in (d,2d)$ is analogous. Let $\lambda_1$ and $\mu_1$ be the values that we obtain from Lemma \ref{sitebondlongrangeLemma}. To apply the lemma, we partition $\mathbb{Z}^d$ into $N$-boxes again. Using Lemma~\ref{largeConnectivity}, choose $\beta$ large enough such that two $N$-boxes that are $k$ boxes away from each other and contain clusters $\mathcal{C}_1$ and $\mathcal{C}_2$ of size at least $\beta N^{\alpha/2}$ are connected by an open edge between $\mathcal{C}_1$ and $\mathcal{C}_2$ with probability at least
$1-\exp\left(-{\lambda_1}/{k^\alpha}\right)$.
Call the $N$-boxes that contain a cluster of size at least $\beta N^{\alpha/2}$ the good boxes. Choose $N$ large enough so that the probability that an $N$-box is good is larger than $\mu_1$, using Lemma \ref{clustersizeTransient}. We thus find that the connectivity between the largest clusters of good $N$-boxes stochastically dominates an LRP model on $\mathbb{Z}^d$ with independent edge probabilities $p_{x,y} = 1 - \exp(-\lambda_1/ |x-y|^{\alpha})$, combined with a site percolation of intensity $\mu_1$. An application of Lemma \ref{sitebondlongrangeLemma} thus yields the result for the case $\alpha \in (d,2d)$.
We conclude that in both cases we found a subgraph of the infinite cluster on which the random walk is transient, and hence the random walk is transient on the infinite cluster itself, cf.\ \cite[Section 9]{LectureNotesPeres}. \qed
\newpage
\subsection*{Recurrence proof}
We verify that we can apply the following lemma.
\begin{lemma}[Berger {\cite[Theorem 3.10]{BergerTransience}}]\label{lemma2Drecurrent}
Let $d=2, \alpha\geq2d = 4$ and let $(P_{i,j})_{i,j\in\mathbb{N}}$ be a family of probabilities, such that
\[
\limsup_{i,j\rightarrow\infty} \frac{P_{i,j}}{(i+j)^{-4}}<\infty.
\]
Consider a shift invariant percolation model on $\mathbb{Z}^2$ in which the bond between $(x_1,y_1)$ and $(x_2,y_2)$ is open with marginal probability $P_{|x_1-x_2|,|y_1-y_2|}$. If there exists an infinite cluster, then this cluster is recurrent.
\end{lemma}
To bound the marginal probabilities we need a bound on the expectation of the product of the weights.
\begin{lemma}\label{upperboundProductWeights}
Assume that the weight-distribution satisfies \eqref{upperBoundWeights}. Let $W_1,W_2$ be two independent copies of the random variable $W$.
If $\tau>2$, there exists a constant $C>0$, such that for $u\geq 1$:
\[
\mathbb{E}\left[\left(W_1W_2/u\right)\wedge1\right]\leq C u^{-1}.
\]
If $\tau\leq 2$, then there exists a constant $C>0$, such that for all sufficiently large $u$,
\[
\mathbb{E}\left[\left(W_1W_2/u\right)\wedge1\right]\leq C \log(u)u^{-(\tau-1)}.
\]
\end{lemma}
\proof
The proof for $\tau>2$ is straightforward. Observe that $\mathbb{E}[W]<\infty$; hence, using $(x\wedge1)\leq x$ and the independence of $W_1$ and $W_2$,
\[\mathbb{E}\left[\left(W_1W_2/u\right)\wedge1\right]\leq \mathbb{E}[W_1W_2]/u=\mathbb{E}[W]^2/u.\]
For $\tau\leq 2$, we prove the claim for weights that satisfy \eqref{e:Wprime} for some $c\geq 1$. Lemma \ref{obs:increasing} then implies that the claim holds for weights that satisfy \eqref{upperBoundWeights}.
Let $H(u)$ denote the distribution function of $W_1W_2$. From \cite[Lemma 4.3]{DeijfenScaleFree} we have for some $C'>0$ that
\[
1-H(u)\leq C'\log(u)u^{-(\tau-1)}.
\]
Integration by parts gives
\[
\int\limits_1^uv\,dH(v) = 1-H(1)-u(1-H(u))+\int\limits_1^u (1-H(v))\,dv\leq \int\limits_1^u (1-H(v))\,dv,
\]
where the inequality holds because $u(1-H(u))\geq u\,\mathbb{P}(W_1>u)\geq u^{2-\tau}\geq 1\geq 1-H(1)$ for $\tau\leq2$ and $u\geq1$. From this
we obtain the result
\[\begin{split}
\mathbb{E}\left[\left(W_1W_2/u\right)\wedge1\right]&=1-H(u)+\frac{1}{u}\int_1^u vdH(v)\\
&\leq 1-H(u)+\frac{1}{u}\int\limits_1^u (1-H(v))dv\\
&\leq C'\log(u)u^{-(\tau-1)}+\frac{C'}{u}\int\limits_1^u\log(v)v^{-(\tau-1)}dv\\
&\leq C'\log(u)u^{-(\tau-1)}+\frac{C''}{u}\log(u)u^{-(\tau-2)}\\
&\leq C \log(u)u^{-(\tau-1)}. \qed
\end{split}
\]
\proof[Proof of Theorem \ref{recurrentRandomWalk2D}]
Observe that the scale-free percolation measure $\mathbb{P}_{(\lambda, W)}$ is indeed shift invariant.
According to Lemma \ref{lemma2Drecurrent}, we need to prove that
\[
\limsup_{k\rightarrow\infty} k^4 \mathbb{P}\left(\{(0,0),(i,j)\}\text{ is open}\right)<\infty
\]
whenever $|i|+|j|=k$. For convenience, we treat only $(i,j)=(k,0)$; the other cases follow analogously.
Lemma \ref{upperboundProductWeights} and the bound $1-\exp(-x)\leq x\wedge1$ give
\begin{align*}
\limsup_{k\rightarrow\infty} k^4 \mathbb{P}\left(\{(0,0),(k,0)\}\text{ is open}\right) &= \limsup_{k\rightarrow\infty} k^4\mathbb{E}\left[1-e^{-\lambda\frac{W_{(k,0)}W_{(0,0)}}{k^\alpha}}\right]\\
&\leq \limsup_{k\rightarrow\infty} k^4\mathbb{E}\left[\left(\lambda\frac{W_{(k,0)}W_{(0,0)}}{k^\alpha}\right)\wedge1\right].
\end{align*}
For $\tau>2$, recall that $\alpha\geq 2d = 4$, and therefore
\begin{equation*}
\limsup_{k\rightarrow\infty} k^4\mathbb{E}\left[\left(\lambda\frac{W_{(k,0)}W_{(0,0)}}{k^\alpha}\right)\wedge1\right] \leq \limsup_{k\rightarrow\infty}C \lambda k^{4-\alpha}\leq C \lambda<\infty.
\end{equation*}
For $\tau\leq 2$ and $\gamma>2$,
\begin{align*}
\limsup_{k\rightarrow\infty} k^4\mathbb{E}\left[\left(\lambda\frac{W_{(k,0)}W_{(0,0)}}{k^\alpha}\right)\wedge1\right]&\leq \limsup_{k\rightarrow\infty}k^4C \lambda^{\tau-1} \log(k^\alpha/\lambda)k^{-\alpha(\tau-1)}\\
&\leq \limsup_{k\rightarrow\infty} C'\log(k)k^{4-\alpha(\tau-1)}\\
&=\limsup_{k\rightarrow\infty} C' \log(k)k^{4-2\gamma}\\
&=0. \qed
\end{align*}
\section{Hierarchical clustering: proof of Theorem \ref{thm:trees}}\label{SectionClusters}
For the proof of Theorem \ref{thm:trees} we largely follow the same strategy as in the proof of Theorem~\ref{transientRandomThm}. First we prove the result for large values of $\lambda$ in an SFP model combined with an i.i.d.\ Bernoulli site percolation, but only along a sequence $\{m_n\}_{n=1}^\infty$ diverging to infinity. Then, we prove the result in this SFP model for all sufficiently large $m$. Lastly, by a coarse-graining argument, we extend the result to hold for all $\lambda>\lambda_c=0$.
We assume the weights satisfy \eqref{e:Wprime}. Lemma \ref{obs:increasing} then implies that the claim holds for weights that satisfy \eqref{lowerBoundWeights}.
We start with stating and proving a proposition in which we consider a site-percolated version of SFP.
\begin{proposition}\label{lemLargestCluster}
Consider SFP on $\mathbb{Z}^d$ with $d\ge1$, $\gamma<2$, and weight distribution that satisfies \eqref{standardPowerLaw}.
Independently of this, perform an i.i.d.\ Bernoulli site percolation on the vertices of $\mathbb{Z}^d$, colouring a vertex ``green'' with probability $\mu \in (0,1]$.
Denote by $\mathcal{C}_{\lambda,\mu}$ the (unique) infinite connected component of the subgraph of the infinite scale-free percolation cluster induced by the green vertices. We call this the site-percolated SFP.
There exist constants $\mu_1 <1,\lambda_1>0,K, \rho>0$ and $n_2 \in \mathbb{N}$, and a sequence $\{m_n\}_{n=1}^\infty$ with $m_n \in \mathbb{N}$, such that for all $\xi$ satisfying
\begin{equation}\label{eqXiDef}
0<\xi<\min\left(\frac{d(2-\gamma)}{\tau+1},\frac{d}{2}\left(\tau+2-\sqrt{(\tau+2)^2-4(2-\gamma)}\right)\right),
\end{equation}
the following hold:
\begin{enumerate}
\item
The probability that the site-percolated SFP configuration with parameters $\mu \ge \mu_1$ and $\lambda \ge \lambda_1$ contains a $(0, m_n, \rho,K)$-hierarchically clustered tree inside the box $[0,m_n-1]^d$ is bounded from below by
\[
1-3\exp\left(-\rho m_n^\xi \right).
\]
\item Site-percolated SFP with $\lambda \ge \lambda_1$ and $\mu \ge \mu_1$ has an infinite component $\mathcal{C}_{\lambda, \mu}$ almost surely. $\mathcal{C}_{\lambda, \mu}$ contains a.s.\ an infinite, connected, cycle-free subgraph $\mathcal{T}_\infty$ such that
removing an arbitrary edge of $\mathcal{T}_\infty$ yields one finite and one infinite connected component, and there exist $x\in\mathbb{Z}^d$ and $m\ge 1$ such that the finite connected component is an $(x,m,\rho,K)$-hierarchically clustered tree.
\end{enumerate}
\end{proposition}
\proof
Let $D_1$ be a large integer to be determined later, and let $\{a_n\}_{n=1}^\infty$, with ${a_n \in (0,1]}$, be a sequence also to be determined later, such that
\begin{equation}\label{constraintsAk1}
\rho:=\prod_{n=1}^\infty a_n > 0.
\end{equation}
Let
\[
\xi'\in \left(\xi, \min\left(\frac{d(2-\gamma)}{\tau+1},\frac{d}{2}\left(\tau+2-\sqrt{(\tau+2)^2-4(2-\gamma)}\right)\right)\right).
\]
The bound $\xi'<d\,\frac{2-\gamma}{\tau+1}$ implies
\begin{equation}
\label{eqDefZeta}
\zeta:=\frac{d-\xi'}{(\alpha+\xi')(\tau-1)-(d-\xi')}>1.
\end{equation}
For all $n \ge 2$, let $D_n :=\left \lceil D_{n-1}^{\zeta} \right \rceil$,
so that, by induction, we have the telescoping product
\begin{equation}\label{formsDn}
D_n \ge D_{n-1}^{\zeta} \ge D_1\prod_{k=1}^{n-1}D_k^{\zeta-1} \ge D_1^{\zeta^{n-1}},
\end{equation}
and let
\[
u_n :=\prod_{k=1}^n D_k^{\frac{d-\xi'}{\tau-1}} \qquad \text{ and } \qquad C_n := a_n D_n^d.
\]
We construct a nested hierarchy of boxes in $\mathbb{Z}^d$ by the same procedure as in Proposition~\ref{lemmaTransientLargeLambda}.
We call the vertices the $0$-stage boxes.
We partition the lattice $\mathbb{Z}^d$ into boxes of side-length $D_1$, so that each box contains $D_1^d$ vertices, and call these the $1$-stage boxes.
Iteratively, we group $D_{n}^d$ $(n-1)$-stage boxes into one $n$-stage box, so that the $n$-stage boxes form a covering of $\mathbb{Z}^d$ by translates of $[0,\prod_{k=1}^n D_k -1]^d$.
We call a $0$-stage box ``good'' if the associated vertex is green, and we call this vertex ``$0$-dominant''.
For every stage $n \geq 1$, we define rules for a box to be ``good'' or ``bad'' and for a vertex to be ``$n$-dominant'' depending only on the weights $W_x$ and colours of the vertices and on the edges of $\mathcal{C}_{\lambda,\mu}$ inside the box. This implies that disjoint boxes are good or bad independently of each other.
For $n\geq 1$ we inductively define that an $n$-stage box is good if the following three conditions hold:
\begin{enumerate}
\item[(E)] At least $C_n$ of the $(n-1)$-stage boxes it contains are good.
\item[(F)] The maximum weight $(n-1)$-dominant vertex in one of its good $(n-1)$-stage boxes has weight at least $u_{n}$. Call this vertex $n$-dominant. (A vertex can be dominant for multiple stages.)
\item[(G)] All $(n-1)$-dominant vertices in the good $(n-1)$-stage boxes are connected to the $n$-dominant vertex by an edge in $\mathcal{C}_{\lambda,\mu}$.
\end{enumerate}
\begin{figure}
\centering
\includegraphics[keepaspectratio,width = .6\textwidth]{finiteboxes.pdf}
\caption{Sketch of the renormalization in Proposition \ref{lemLargestCluster} for $d=2$.}\label{renormalizedFiniteBoxes}
\end{figure}
Define $E_n(v), F_n(v)$ and $G_n(v)$ to be the events that respectively (E), (F) and (G) hold for the $n$-stage box containing the vertex $v$. Denote by $L_n(v)$ the event that the $n$-stage box containing $v$ is good, i.e., $L_n(v) = E_n(v) \cap F_n(v) \cap G_n(v)$. To simplify notation we write $E_n:=E_n(0), F_n:=F_n(0), G_n:=G_n(0),$ and $L_n := L_n(0)$.
On the event that $L_n(v)$ occurs, we can construct a graph by the following procedure:
\begin{enumerate}
\item Start with the set of all $0$-dominant vertices inside the $n$-stage box containing $v$.
\item For every $i \in \{0, \dots, n-1\}$ connect each $i$-dominant vertex $w$ to the $(i+1)$-dominant vertex inside the $(i+1)$-stage box that contains $w$ (unless this creates a self-loop).
\end{enumerate}
We define the sequence of side-lengths $\{m_n\}$ and record a lower bound on it:
\begin{equation}\label{e:mprimedef}
m_n:=\prod_{k=1}^n D_k \ge D_1^{\frac{\zeta^n-1}{\zeta-1}}.
\end{equation}
We claim that the constructed connected component of the $n$-dominant vertex in the $n$-stage box of $v$ is a tree that satisfies the conditions given in Definition \ref{def:hct}; it is indeed a tree, because every vertex in it other than the $n$-dominant vertex is joined by exactly one edge to a dominant vertex of a strictly higher stage. Since \eqref{constraintsAk1} holds, the event $L_n$ implies that the intersection of the site-percolated SFP configuration and the cube $[0,m_n-1]^d$ contains a tree with at least
\[ \prod_{k=1}^n C_k =
\prod_{k=1}^n a_k D_k^d
\ge\rho {m}_n^d \]
vertices, which verifies Condition~(1) of Definition \ref{def:hct}. We obtain Condition~(2) for some $K>0$ because the diameter of the constructed tree grows only linearly in $n$, whereas the box-size $m_n$ and the number of vertices in $[0,m_n-1]^d$ grow double exponentially fast in $n$. Conditions~(3) and (4) follow straightforwardly from the construction.
We therefore conclude that Proposition \ref{lemLargestCluster}(1) follows if we show that
\begin{equation}\label{eq Ln condition}
\mathbb{P}(L_n^c)\leq 3\exp\left(-\rho\prod_{k=1}^nD_k^\xi\right),
\end{equation}
and that Proposition \ref{lemLargestCluster}(2) follows if we show that
\[
\mathbb{P}\left(\bigcap_{n=1}^\infty L_n\right)>0.
\]
The events $L_n$ are positively correlated, hence it is sufficient for assertion (2) of the proposition that
$\prod_{n=1}^\infty \mathbb{P}(L_n)>0$,
which follows from \eqref{eq Ln condition}. It thus remains to prove \eqref{eq Ln condition}.
\medskip
We bound
\begin{equation}\label{clusterLc}
\mathbb{P}(L_n^c)\leq \mathbb{P}(E_n^c)+\mathbb{P}(F_n^c \mid E_n)+\mathbb{P}(G_n^c \mid E_n\cap F_n),
\end{equation}
and analyze the terms separately.
We start by proving two bounds on $\mathbb{P}(F_n^c \mid E_n)$. The conditioning on $E_n$ gives that the sum of cluster sizes in the good $(n-1)$-stage boxes is at least $\prod_{k=1}^n C_k = \prod_{k=1}^n a_k D_k^d$. Using Lemma \ref{maximumWeight} with $K_1=1, K_2=u_n$, the definition of $u_n$ yields that
\begin{equation}\label{clusterFc}
\mathbb{P}(F_n^c \mid E_n)\leq \exp\left(-u_n^{-(\tau-1)}\prod_{k=1}^na_kD_k^d\right)=\exp\left(-\prod_{k=1}^n a_k D_k^{\xi'}\right).
\end{equation}
Similarly, the conditioning on $E_n$ gives that there are at least $C_n$ vertices with weight at least $u_{n-1}$. Using Lemma \ref{maximumWeight} with $K_1=u_{n-1}, K_2=u_n$, the definition of $u_n$ yields that
\begin{equation}\label{clusterFc2}
\mathbb{P}(F_n^c \mid E_n)\leq \exp\left(-a_nD_n^d \left(D_n^{-\frac{d-\xi'}{\tau-1}}\right)^{\tau-1}\right) = \exp\left(-a_nD_n^{\xi'}\right).
\end{equation}
Note that it is not a priori clear which of these two bounds is better, since this may depend on the choice of $D_1$, $n$, $\tau$, $\xi'$ and the sequence $(a_k)$. Therefore we use both bounds.
We move on to $\mathbb{P}(G_n^c \mid E_n \cap F_n)$.
For $n\ge2$ define
\begin{equation}\label{eqDefBetaN}
\beta_n := u_n u_{n-1}\prod_{k=1}^n D_k^{-\alpha}.
\end{equation}
Since $D_n=\left\lceil D_{n-1}^\zeta\right\rceil$ by definition, it follows that
\[
D_{n-1} \ge \left(\frac{1}{2}D_n\right)^{1/\zeta}.
\]
Substituting the values of $u_n,u_{n-1},\zeta$, and \eqref{formsDn} gives
\begin{align*}
\frac{\beta_n}{\beta_{n-1}}
&=\frac{u_n}{u_{n-2}}D_n^{-\alpha}
=D_n^{-\alpha+\frac{d-\xi'}{\tau-1}}D_{n-1}^{\frac{d-\xi'}{\tau-1}}
\ge \left(\frac{1}{2}\right)^{\frac{d-\xi'}{\zeta(\tau-1)}}D_n^{-\alpha+\frac{d-\xi'}{\tau-1}}\left(D_n^{\frac{(\alpha+\xi')(\tau-1)-(d-\xi')}{d-\xi'}}\right)^{\frac{d-\xi'}{\tau-1}}\\
&{=} \left(\frac{1}{2}\right)^{\frac{d-\xi'}{\zeta(\tau-1)}}D_n^{-\alpha+\frac{d-\xi'}{\tau-1}} D_n^{\alpha+\xi'-\frac{d-\xi'}{\tau-1}}
= \left(\frac{1}{2}\right)^{\frac{d-\xi'}{\zeta(\tau-1)}}D_n^{\xi'}.
\end{align*}
It follows that for some $c>0$,
\[
\beta_n \ge c\left(\frac{1}{2}\right)^{(n-1)\frac{d-\xi'}{\zeta(\tau-1)}}\prod_{k=1}^nD_k^{\xi'}.
\]
The distance between two $(n-1)$-dominant vertices in the same $n$-stage box is at most $ d\prod_{k=1}^nD_k$.
There are at most $D_n^d$ $(n-1)$-dominant vertices in good $(n-1)$-stage boxes (each having weight at least $u_{n-1}$). Recalling \eqref{eqDefBetaN}, by the union bound and the conditioning on $E_n$ and $F_n$, we thus obtain that
\begin{align*}
\mathbb{P}(G_n^c \mid E_n \cap F_n)&\leq D_n^d\exp\left(-\lambda d^{-\alpha}u_nu_{n-1}\prod_{k=1}^n D_k^{-\alpha}\right)\\
&= D_n^d\exp\left(-\lambda d^{-\alpha}\beta_n\right)\\
&\le \exp\left(d\log(D_n)-c\lambda d^{-\alpha}\left(\frac{1}{2}\right)^{(n-1)\frac{d-\xi'}{\zeta(\tau-1)}}\prod_{k=1}^nD_k^{\xi'}\right).
\end{align*}
Since $D_n$ grows double exponentially fast and $\xi'>\xi>0$, we obtain for $n$ sufficiently large (larger than $n_0$, say) that
\begin{equation}\label{clusterGc}
\mathbb{P}(G_n^c \mid E_n \cap F_n)\leq \exp\left(-\prod_{k=1}^nD_k^{\xi}\right).
\end{equation}
\medskip
We move on to $\mathbb{P}(E_n^c)$. All $(n-1)$-stage boxes are good independently of each other with probability $\mathbb{P}(L_{n-1})$.
Let $X\sim\text{Bin}(D_n^d,\mathbb{P}(L_{n-1}))$. Then,
\[
\mathbb{P}(E_n^c)=\mathbb{P}(X<C_n)=\mathbb{P}(X<a_nD_n^d).
\]
As in \eqref{upperEc}, we apply Chernoff's bound and obtain
\[
\mathbb{P}(E_n^c)\leq\exp\left(-\frac{\left(\mathbb{P}(L_{n-1})-a_n\right)^2}{2\mathbb{P}(L_{n-1})}D_n^d\right),
\]
whenever $0 < a_n < \mathbb{P}(L_{n-1})$.
We now choose
\begin{equation}\label{eqDefAn}
a_n:=\mathbb{P}(L_{n-1})\left(1-\sqrt{2}D_n^{-d/2}\prod_{k=1}^n D_k^{\xi'/2}\right).
\end{equation}
We will show below that $a_n >0$ for all $n \ge 1$ and that $\prod_{n=1}^\infty a_n >0$, as required by \eqref{constraintsAk1}. Assuming these inequalities we have
\begin{equation}\label{clusterEc}
\mathbb{P}(E_n^c)\leq\exp\left(-\mathbb{P}(L_{n-1})\prod_{k=1}^nD_k^{\xi'}\right),
\end{equation}
so that using $\mathbb{P}(L_{n-1})>a_n>\rho$, $\xi'>\xi$, \eqref{clusterEc}, \eqref{clusterFc}, and \eqref{clusterGc} yields
\begin{align}\label{boundLn2}
\mathbb{P}(L_n^c)&\leq \exp\left(-\mathbb{P}(L_{n-1})\prod_{k=1}^nD_k^{\xi'}\right)+\exp\left(-\prod_{k=1}^n a_k\prod_{k=1}^n D_k^{\xi'}\right)+\exp\left(-\prod_{k=1}^nD_k^{\xi}\right)\nonumber\\
&\leq 3\exp\left(-\prod_{k=1}^n a_k\prod_{k=1}^n D_k^\xi\right),
\end{align}
which gives the desired bound \eqref{eq Ln condition}.
For later reference we note that if instead we apply
\eqref{clusterLc}, \eqref{clusterFc2}, and \eqref{clusterGc},
then we obtain for $n$ sufficiently large
\begin{equation}\label{boundLn1}
\mathbb{P}(L_n^c)\leq \exp\left(-\mathbb{P}(L_{n-1})\prod_{k=1}^nD_k^{\xi'}\right)+\exp\left(- a_n D_n^{\xi'}\right)+\exp\left(-\prod_{k=1}^nD_k^\xi\right).\end{equation}
\medskip
All that remains is to show that $a_n >0$ for all $n \ge 1$ and that $\prod_{n=1}^\infty a_n >0$. For positivity of $a_n$, it is by \eqref{eqDefAn} sufficient to show that for some $b>0$ and $D_1$ sufficiently large
\begin{equation}\label{dnDoubleExp}
D_n^{-d/2}\prod_{k=1}^n D_k^{\xi'/2} < \frac{1}{2\sqrt{2}}D_1^{-b\zeta^{n}}.
\end{equation}
Since $\xi'>0, \zeta>1$, $\prod_{k=1}^nD_k\leq \big(D_{n+1}/D_1)^{1/(\zeta-1)}$ and $D_{n+1}=\left\lceil D_n^\zeta\right\rceil\leq 2D_n^\zeta$ by \eqref{formsDn}, we obtain
\begin{align*}
D_n^{-d}\prod_{k=1}^n D_k^{\xi'} & \le D_n^{-d}\left(\frac{D_{n+1}}{D_1}\right)^{\xi'/(\zeta-1)}
= D_n^{-d}\left(\frac{\left\lceil D_{n}^\zeta\right\rceil}{D_1}\right)^{\xi'/(\zeta-1)}
\le \left(\frac{2}{D_1}\right)^{\xi'/(\zeta-1)}D_n^{-d+\frac{\xi'\zeta}{(\zeta-1)}}.
\end{align*}
We show that
\[
-d+\xi'\frac{\zeta}{\zeta-1} < 0.
\]
By definition $\xi'<\frac{d}{2}\left(\tau+2-\sqrt{(\tau+2)^2-4(2-\gamma)}\right)$. So we derive, after rearranging terms and dividing by $\sqrt{d}$, that
\[
-\frac{1}{\sqrt{d}}\xi' + \frac{\sqrt{d}}{2}(\tau+2) > \frac{\sqrt{d}}{2}\sqrt{(\tau+2)^2 - 4 (2-\gamma)}.
\]
Squaring and substituting $\gamma=\alpha(\tau-1)/d$ yield
\[
\frac1d\xi'^2-\xi'(\tau+2)+2d-\alpha(\tau-1)>0,
\]
so that after rearranging
\[
\frac1d(d-\xi')^2 > (\alpha+\xi')(\tau-1)-(d-\xi').
\]
Hence, we obtain the result after dividing by $(d-\xi')$ and, using \eqref{eqDefZeta}, substituting $\zeta$:
\[
1 - \frac{\xi'}{d} > \frac1\zeta, \] which can be inverted to give \[-d+\xi'\frac{\zeta}{\zeta-1} < 0.
\]
Hence, there exists a constant $b>0$ such that, when we choose $D_1$ sufficiently large,
\[
D_n^{-d/2}\prod_{k=1}^n D_k^{\xi'/2} < \frac{1}{2\sqrt{2}}D_n^{-b\zeta}.
\]
By \eqref{formsDn} we have $D_n\geq D_1^{\zeta^{n-1}}$, so \eqref{dnDoubleExp} follows, and we may conclude that $a_n>0$.
\medskip
Observe that $\prod_{k=1}^\infty a_k>0$ if and only if $\prod_{k=1}^\infty \mathbb{P}(L_k) > 0$, since $a_n$ approaches $\mathbb{P}(L_{n-1})$ double exponentially fast. Moreover, combining \eqref{eqDefAn} and \eqref{dnDoubleExp} gives
\[
a_n\geq \frac{1}{2}\mathbb{P}(L_{n-1}),
\]
so that, using \eqref{boundLn1}, we can bound
\begin{equation}\label{exp:clustLn}
\mathbb{P}(L_n^c)\leq \exp\left(-\mathbb{P}(L_{n-1})\prod_{k=1}^nD_k^{\xi'}\right)+\exp\left(- \frac{1}{2}\mathbb{P}(L_{n-1}) D_n^{\xi'}\right)+\exp\left(-\prod_{k=1}^nD_k^\xi\right).
\end{equation}
Define the sequence \[\ell_n:=1-(n+1)^{-3/2},\] and observe that
\begin{equation}\label{prodPositive2}
\prod_{n=1}^\infty \ell_n>0.
\end{equation}
For any fixed $n_1>n_0$, we can find $\lambda_1>0$ and $\mu_1<1$ such that
$\mathbb{P}(L_{n_1})\geq \ell_{n_1}$, because $L_{n_1}$ depends only on the weights and edges inside a \emph{finite} box.
Since $D_k$ grows double exponentially fast, and assuming inductively that $\mathbb{P}(L_{n-1})\geq\ell_{n-1}$ (so that in particular $\mathbb{P}(L_{n-1})\geq1-\tfrac{1}{\sqrt{2}}$), we further bound \eqref{exp:clustLn} for all $n > n_1$ by
\begin{align}\label{recursiveBound2}
\mathbb{P}(L_n^c)&\leq \exp\left(-\left(1-\frac{1}{\sqrt{2}}\right)\prod_{k=1}^nD_k^{\xi'}\right) + \exp\left(-\frac12\left(1-\frac{1}{\sqrt{2}}\right)D_n^{\xi'}\right)+\exp\left(-\prod_{k=1}^nD_k^\xi\right)\nonumber\\
&\leq (n+1)^{-3/2}
=1-\ell_n.
\end{align}
We thus choose $n_1$ so large that the last bound in \eqref{recursiveBound2} holds; this closes the induction and gives $\mathbb{P}(L_n)\geq\ell_n$ for all $n\geq n_1$.
Then \eqref{prodPositive2}, \eqref{recursiveBound2} and the fact that $\mathbb{P}(L_n)>0$ for all $n$ yield that
\begin{equation*}
\prod_{n=1}^\infty\mathbb{P}(L_n)=\prod_{n=1}^{n_1}\mathbb{P}(L_n)\prod_{n=n_1+1}^\infty \mathbb{P}(L_n)\geq \prod_{n=1}^{n_1}\mathbb{P}(L_n)\prod_{n=n_1+1}^\infty \ell_n>0.
\end{equation*}
Recalling that, by \eqref{eqDefAn} and \eqref{dnDoubleExp}, this is equivalent to \eqref{constraintsAk1}, the bound \eqref{boundLn2} concludes the proof. \qed
\medskip
To prove Theorem~\ref{thm:trees} we need to extend the above claims from the specific sequence $\{m_n\}_{n=1}^\infty$ to \emph{all} (sufficiently large) $m\in \mathbb{N}$. This extension is the content of the next lemma. After this lemma we extend the claim to hold for all $\lambda>0$ and weights following a power law given by \eqref{e:Wprime}.
\begin{lemma}\label{Lemma:Density}
Consider SFP on $\mathbb{Z}^d$ with $d\ge 1, \gamma<2$, and weight distribution that satisfies \eqref{standardPowerLaw}. Independently of this, perform i.i.d.\ Bernoulli site percolation on the vertices of $\mathbb{Z}^d$, colouring a vertex ``green'' with probability $\mu\in(0,1]$. Denote by $\mathcal{S}_{m, \lambda,\mu}$ the SFP-realization induced by the green vertices in the box $[0, m-1]^d$. We call this the site-percolated SFP.
Then there exist a density $\rho>0$ and constants $\mu_1<1, \lambda_2>0$, and $K', m_0\in\mathbb N$, such that for $m\geq m_0$ and parameters $\lambda\ge \lambda_2, \mu\ge \mu_1$, the probability that $\mathcal{S}_{m, \lambda,\mu}$ contains a $(0, m,\rho,K')$-hierarchically clustered tree inside the box $[0,m-1]^d$ is bounded from below by
\[
1-3\exp\left(-\rho m^\xi \right),
\]
whenever ${\xi<\min\left(\frac{d(2-\gamma)}{\tau+1},\frac{d}{2}\left(\tau+2-\sqrt{(\tau+2)^2-4(2-\gamma)}\right)\right)}$.
\end{lemma}
\proof
Let the constants $\mu_1, \lambda_1, \xi', K$, the sequences $\{D_k\}, \{u_k\}, \{C_k\}$ and $\{m_k\}$ be as in Proposition~\ref{lemLargestCluster}, and $\zeta$ as in \eqref{eqDefZeta}.
Assume $\mu\geq \mu_1, \lambda\geq \lambda_1$, and let $m$ be large enough (how large precisely will be determined in several steps).
We define
\begin{equation}\label{def:nm}
n =\sup\{i:m_i\leq m\}, \qquad\text{ and }\qquad k=\left\lfloor \frac{m}{m_n}\right\rfloor,
\end{equation}
both depending on $m$, and note that $n\to\infty$ as $m\to\infty$.
Partition the box $[0,km_n-1]^d$ into $k^d$ boxes of side-length $m_n$. We call these the $n$-boxes. Let $v^\ast$ be the vertex in $[0,km_n-1]^d$ with maximum weight.
We use the same definition of good boxes and dominant vertices as in the proof of Proposition \ref{lemLargestCluster}. So in particular, a good $n$-box contains an $n$-dominant vertex with weight at least $u_n=m_n^{(d-\xi')/(\tau-1)}$.
We define $E$ to be the event that at least $\frac{1}{2}k^d$ $n$-boxes are good, $F$ the event that $W_{v^\ast}\geq (km_n)^{(d-\xi')/(\tau-1)}$, and $G$ the event that every good $n$-box's $n$-dominant vertex is connected by an open edge to $v^\ast$. Let $L=E\cap F\cap G$.
Observe that the event $L$ indeed implies that there exists a $(0, m, \rho, K)$-hierarchically clustered tree. Properties~(1), (3) and (4) of Definition \ref{def:hct} readily follow from Proposition~\ref{lemLargestCluster}. For Property~(2), we observe that, for any $n\in\mathbb{N}$, the diameter of the constructed tree, after connecting the separate trees via $v^\ast$, only increases by
2 in comparison to Proposition \ref{lemLargestCluster}. Hence there exists $K'>0$ such that Property~(2) is satisfied.
We bound
\begin{equation}\label{eq:clusterGeneralBound}
\mathbb{P}(L^c)\leq \mathbb{P}(E^c)+\mathbb{P}(F^c\mid E)+\mathbb{P}(G^c \mid E\cap F),
\end{equation}
and analyze the three summands term by term.
By our assumption \eqref{standardPowerLaw}, all $(km_n)^d$ vertices have weight at least 1. By Lemma \ref{maximumWeight},
\begin{equation}\label{eq:clusterBoundWeight}
\mathbb{P}\left(F^c \mid E\right)\leq \exp(-(km_n)^\xi).
\end{equation}
The $\ell^1$-distance between two vertices in $[0,km_n-1]^d$ is bounded above by $dkm_n$, and the number of $n$-dominant vertices is at most $k^d$. Therefore,
\begin{equation}\label{eqGcEFbound}
\mathbb{P}(G^c \mid E\cap F)\leq k^d \exp\left(-\lambda d^{-\alpha}m_n^{2\frac{d-\xi'}{\tau-1}-\alpha}k^{\frac{d-\xi'}{\tau-1}-\alpha} \right).
\end{equation}
We claim that
\begin{equation}\label{eq-complicatedbound}
m_n^{2\frac{d-\xi'}{\tau-1}-\alpha}k^{\frac{d-\xi'}{\tau-1}-\alpha} \geq c\left(km_n\right)^{\xi'}, \qquad \text{for some constant } c>0.
\end{equation}
Indeed, our choice of $n$ and $k$ in \eqref{def:nm} gives $k\leq D_{n+1}$ (since $m< m_{n+1}=D_{n+1}m_n$), while $D_{n+1}=\left\lceil D_n^\zeta\right\rceil < D_n^\zeta+1$ and $m_n=\prod_{j=1}^nD_j$ by their definitions in \eqref{formsDn} and \eqref{e:mprimedef}. Hence
\[
\frac{k}{m_n^{\zeta-1}} \leq \frac{D_{n+1}}{m_n^{\zeta-1}} \le \frac{D_n^\zeta + 1}{D_n^{\zeta-1}\prod_{j=1}^{n-1} D_j^{\zeta-1}}= \frac{D_n}{m_{n-1}^{\zeta-1}} + \prod_{j=1}^n D_j^{1-\zeta}.
\]
Iterating this gives
\begin{align*}
\frac{k}{m_n^{\zeta-1}} \leq \frac{D_{n+1}}{m_n^{\zeta-1}} &\leq \frac{D_n}{m_{n-1}^{\zeta-1}} +\prod_{j=1}^n D_j^{1-\zeta} \\
&\leq \frac{D_{n-1}}{m_{n-2}^{\zeta-1}} + \prod_{j=1}^{n-1} D_j^{1-\zeta}+\prod_{j=1}^n D_j^{1-\zeta}\leq \hdots\leq D_1 + \sum_{i=1}^n\left(\prod_{l=1}^{i}D_l\right)^{1-\zeta}.
\end{align*}
Recall that $\zeta>1$ by \eqref{eqDefZeta}, and that $\prod_{l=1}^{i}D_l \geq D_1^{(\zeta^i-1)/(\zeta-1)}$ by \eqref{e:mprimedef}, so that $\big(\prod_{l=1}^{i}D_l\big)^{1-\zeta}\leq D_1^{-(\zeta^i-1)}=D_1 D_1^{-\zeta^i}$ and
\begin{align*}
\frac{k}{m_n^{\zeta-1}} & \le D_1 + \sum_{i=1}^n\left(\prod_{l=1}^{i}D_l\right)^{1-\zeta}
\le D_1 + D_1\sum_{i=1}^\infty D_1^{-\zeta^i} <\infty.
\end{align*}
Hence, we can further estimate for some $c'>0$
\[
k \leq c'm_n^{-1+\zeta} = c'm_n^{-1+\frac{d-\xi'}{(\alpha+\xi')(\tau-1)-(d-\xi')}} = c'm_n^{-1+\frac{d-\xi'}{\tau-1}\left(\alpha+\xi'-\frac{d-\xi'}{\tau-1}\right)^{-1}},
\]
so that
\[
k^{\alpha+\xi'-\frac{d-\xi'}{\tau-1}} \leq {c'}^{\alpha+\xi'-\frac{d-\xi'}{\tau-1}}m_n^{-\left(\alpha+\xi'-\frac{d-\xi'}{\tau-1}\right)+\frac{d-\xi'}{\tau-1}},
\]
and finally
\[
{c'}^{\frac{d-\xi'}{\tau-1}-\xi'-\alpha}(km_n)^{\xi'} \leq m_n^{2\frac{d-\xi'}{\tau-1}-\alpha}k^{\frac{d-\xi'}{\tau-1}-\alpha},
\]
from which \eqref{eq-complicatedbound} follows.
Consequently, by \eqref{eqGcEFbound},
\begin{equation}\label{eq:clusterBoundConnections}
\mathbb{P}(G^c \mid E\cap F)
\leq k^d \exp\left(-\lambda d^{-\alpha}{c'}^{\frac{d-\xi'}{\tau-1}-\xi'-\alpha}(km_n)^{\xi'}\right)
\leq \exp\left(-(km_n)^\xi\right),
\end{equation}
where we choose $\lambda$ sufficiently large for the second bound (this determines the value of $\lambda_2$).
It remains to bound $\mathbb{P}(E^c)$. By Proposition \ref{lemLargestCluster}, there exists $\rho>0$, such that for $n$ large, $n$-boxes are good independently of each other with probability at least $1-\exp(-\rho m_n^\xi)$. Let $X\sim \text{Bin}\big(k^d,1-\exp\big(-\rho m_n^\xi\big)\big)$.
Writing out the binomial distribution, using $\binom{m}{j}\leq m^j$ and $1-\exp(-x)\leq 1$, and bounding the sum by the number of terms times its maximal summand, we obtain
\begin{align*}
\mathbb{P}(E^c)
=\mathbb{P}\left(X<\frac{1}{2} k^d\right)
&=\sum_{l=0}^{\left\lfloor \frac{1}{2} k^d\right\rfloor}\binom{k^d}{l}\exp\left(-\rho m_n^\xi\right)^{k^d-l}\left(1-\exp\left(-\rho m_n^\xi\right)\right)^l\\
&\leq \left(\frac{1}{2} k^d+1\right)k^{\frac{1}{2}d k^d}\exp\left(-\frac{1}{2}\rho k^dm_n^\xi\right)\\
&=\exp\left(\log\left(\frac{1}{2} k^d+1\right)+\frac{1}{2}d k^d\log(k)-\frac{\rho}{2}k^dm_n^\xi\right).
\end{align*}
For any $\varepsilon$ with $0<\varepsilon<d-\xi$, we can take $m$ large enough so that (using $k^{d-\xi}\ge1$)
\begin{equation}\label{eq:numberGood}
\mathbb{P}(E^c) \leq \exp\left(-\frac{\rho}{2^{1+\varepsilon}}(km_n)^{\xi}\right).
\end{equation}
Combining \eqref{eq:clusterGeneralBound}, \eqref{eq:clusterBoundWeight}, \eqref{eq:clusterBoundConnections}, and \eqref{eq:numberGood} gives that for $m$ sufficiently large,
\[\mathbb{P}(\mathcal{S}_{km_n}\text{ contains a }(0, km_n, \frac{\rho}{2},K')\text{-hierarchically clustered tree})\geq 1-3\exp\left(-\frac{\rho}{2^{1+\varepsilon}}(km_n)^\xi\right).\]
From the construction it follows that
\[
\frac12\leq\frac{{km_n}}{m}\leq1,
\]
so that for $m$ large
\begin{align*}
\mathbb{P}(\mathcal{S}_{m}&\text{ contains a }(0, m, \frac{\rho}{2^{d+1}},K')\text{-hierarchically clustered tree})\\
&\geq \mathbb{P}(\mathcal{S}_{km_n}\text{ contains a }(0, km_n, \frac{\rho}{2},K')\text{-hierarchically clustered tree})\\
& \geq 1-3\exp\left(-\frac{\rho}{2^{1+\varepsilon}}\frac{m^\xi}{2^\xi}\right)
\geq 1-\exp\left(-\frac{\rho}{2^{d+1}}m^\xi\right),
\end{align*}
which finishes the proof. \qed
The last step is to extend the claim to hold for all $\lambda>0$ and weights following a power-law given by \eqref{e:Wprime}.
\begin{proof}[Proof of Theorem \ref{thm:trees}]
Recall that we consider SFP models with $1 < \gamma < 2$ and any $\lambda >0$, and that in this setting, $\lambda_c =0$ \cite{DeijfenScaleFree}.
Let $\mu_1$ and $\lambda_2$ be the values that we obtain from Lemma \ref{Lemma:Density}. To apply Lemma \ref{Lemma:Density}, we partition $\mathbb{Z}^d$ into $N$-boxes. In every $N$-box we only consider the vertex with maximum weight and call it the dominant vertex. Choose $\beta$ large enough, using Lemma \ref{transientLemmaConnectivity}, such that two $N$-boxes that are $k$ boxes apart, with dominant vertices $u_1$ and $u_2$ having weight at least $\beta N^{\alpha/2}$, are connected by an open edge between $u_1$ and $u_2$ with probability at least
\[
\P_{(\lambda_2,W'')}(\{v_1, v_2\} \text{ is open}),
\]
for $v_1, v_2 \in \mathbb{Z}^d$ such that $|v_1 -v_2| =k$ and weights $W''$ with law given by \eqref{e:Wprime}.
Define the $N$-boxes that contain a vertex with weight at least $\beta N^{\alpha/2}$ to be the good boxes. Choose $N$ large enough so that the probability that an $N$-box is good is larger than $\mu_1$ using Lemma~\ref{transientLemmaBigDegrees}.
Thus, the status of the edges between dominant vertices in good $N$-boxes in the SFP model with parameters $\alpha, \lambda$ and weight-law $W$ stochastically dominates an SFP model on $\mathbb{Z}^d$ with parameters $\alpha$, $\lambda_2$ and weight-law $W''$ combined with a site percolation of intensity $\mu_1$, exactly as described in Lemma~\ref{Lemma:Density}.
Let $K'$ and $\rho'$ be the constants we obtain from Lemma~\ref{Lemma:Density}. Observe that
\[\frac{\left\lfloor\frac{m}{N}\right\rfloor N}{m}\geq \frac{1}{2}.\]
Then the assertion of Theorem \ref{thm:trees} follows if we set $K=K'$ and $\rho=\frac{\rho'}{2^dN^d}$.
\end{proof}
\subsection*{Acknowledgement}
JJ thanks the Erasmus+ programme for funding and the LMU Munich for hospitality during a three-month stay in autumn 2015.
The work of TH is supported by the Netherlands Organisation for Scientific Research (NWO) through the Gravitation NETWORKS grant 024.002.003.
We thank Remco van der Hofstad for suggesting the model to us and Thomas Beekenkamp for comments and corrections on an earlier version of the manuscript.
\section{Introduction}\label{sec:intro}
Consider a sequence of probability measures $\{\eta_{l}\}_{l\geq 0}$ on a common measurable
space $(E,\mathcal{E})$; we assume that the probabilities have a common dominating
finite measure $du$ and write the densities w.r.t.~$du$ as $\eta_l=\eta_l(u)$. In particular, for some known $\gamma_{l}:E\rightarrow\mathbb{R}^+$, we let
\begin{equation}
\label{eq:target}
\eta_{l}(u) = \frac{\gamma_l(u)}{Z_l}
\end{equation}
where the normalizing constant $Z_l = \int_E\gamma_l(u)du$ may be unknown. The context of interest is when the sequence of densities is
associated to an `accuracy' parameter $h_l$, with $h_l\rightarrow 0$
as $l\rightarrow \infty$, where $\infty>h_0>h_1>\cdots>h_{\infty}=0$.
This set-up is relevant to the context of
discretised numerical approximations of continuum fields, as we will explain below.
The objective is to compute:
$$
\mathbb{E}_{\eta_\infty}[g(U)] := \int_E g(u)\eta_\infty(u)du
$$
for potentially many measurable $\eta_\infty-$integrable functions $g:E\rightarrow\mathbb{R}$. In practice one cannot treat $h_\infty=0$ and
must consider these distributions with $h_l>0$.
Problems involving numerical approximations of continuum fields are discretized before being solved numerically. Finer-resolution solutions are more expensive to compute than coarser ones.
Such discretizations naturally give rise to hierarchies of resolutions via the use of nested meshes.
Successive solution at refined meshes can be utilized to reduce the number of necessary solves at the
finest resolutions. For the solution of linear systems, the coarsened systems are solved as pre-conditioners
within the framework of iterative linear solvers in order to reduce the condition number, and hence the
number of necessary iterations at the finer resolution. This is the principle of multi-grid methods.
For Monte Carlo methods, as in the context above, a telescoping sum of associated differences at successive refinement levels
can be utilized. This is so that the bias of the resulting multilevel estimator is determined by the finest
level but the variance of the estimators of the differences decays.
The reduction in the variance at finer levels implies that the number of samples
required to reach a given error tolerance is also reduced with increasing resolution. This procedure is then optimized to balance
the extra per-sample cost at the finer levels.
Overall one can obtain a method with smaller computational effort to reach a pre-determined error than applying a standard Monte Carlo method immediately at the finest resolution \cite{gile:08}.
Multilevel Monte Carlo (MLMC) \cite{gile:08} (see also \cite{hein:98}) methods are such that
one typically sets an error threshold for a target expectation,
and then sets out to attain an estimator with the prescribed error
utilizing an optimal allocation of Monte Carlo resources.
Within the context of \cite{gile:08, hoan:12}, the continuum problem is a
stochastic differential equation (SDE) or PDE with random coefficients, and the target quantity is an expectation of a functional, say $g: E \rightarrow \mathbb R$,
of the parameter of interest $U\in E$, over an ideal measure $U\sim \eta_{\infty}$
that avoids discretisation. The levels are a hierarchy of refined approximations
of the function-space, specified in terms of a small resolution parameter say $h_l$,
for $0\le l \le L$, thus giving rise to a corresponding sequence of approximate laws
$\eta_l$.
The method uses the telescopic sum
$$
\mathbb{E}_{\eta_L}[g(U)] = \mathbb{E}_{\eta_0}[g(U)] + \sum_{l=1}^L
\{\mathbb{E}_{\eta_l}[g(U)]-\mathbb{E}_{\eta_{l-1}}[g(U)]\}
$$
and proceeds by coupling the consecutive probability
distributions $\eta_{l-1}$, $\eta_{l}$.
Thus, the expectations are estimated via the standard unbiased Monte Carlo
averages $$Y^{N_l}_l = \sum_{i=1}^{N_l} \{ g(U_l^{(i)})-g(U_{l-1}^{(i)})\}N_l^{-1}$$
where $\{U_{l-1}^{(i)},U_l^{(i)}\}$ are i.i.d.\@ samples,
with marginal laws $\eta_{l-1}$, $\eta_l$, respectively, carefully constructed on a joint
probability space.
This is repeated
independently for $0\le l\le L$.
The overall multilevel estimator will be
\begin{equation}
\label{eq:multi}
\hat{Y}_{L,{\rm Multi}} = \sum_{l=0}^{L} Y^{N_l}_l\ ,
\end{equation}
under the convention that $g(U_{-1}^{(i)})=0$.
A simple error analysis gives that the mean squared error (MSE) is
\begin{equation}
\mathbb{E} \{ \hat{Y}_{L,{\rm Multi}} - \mathbb{E}_{\eta_{\infty}} [g(U)] \}^2 = \underbrace{\mathbb{E}
\{ \hat{Y}_{L,{\rm Multi}}- \mathbb{E}_{\eta_{L}} [{g}(U)]\}^2}_{\rm variance}
+ \underbrace{\{\mathbb{E}_{\eta_L} [{g}(U)] - \mathbb{E}_{\eta_{\infty}} [g(U)]\}^2}_{\rm squared~bias}\ .
\label{eq:mse}
\end{equation}
One can now optimally allocate $N_0, N_1,\ldots, N_L$ to minimize the variance term
$\sum_{l=0}^L V_l/N_l$ for fixed
computational cost $\sum_{l=0}^L C_l N_l$,
where $V_l$ is the variance of $[g(U_l^{(i)})-g(U_{l-1}^{(i)})]$ and $C_l$
the computational cost for its realisation. Using Lagrange multipliers for the above constrained optimisation, we get the optimal allocation of resources
$N_l \propto \sqrt{V_l/C_l}$.
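In detail: treating the $N_l$ as continuous and minimising $\sum_{l=0}^L V_l/N_l$ subject to a fixed total cost $\sum_{l=0}^L C_lN_l=C$, the Lagrangian and its stationarity condition read
\[
\mathcal{L}(N_0,\dots,N_L,\vartheta)=\sum_{l=0}^L\frac{V_l}{N_l}+\vartheta\Big(\sum_{l=0}^LC_lN_l-C\Big)\ ,\qquad
\frac{\partial\mathcal{L}}{\partial N_l}=-\frac{V_l}{N_l^2}+\vartheta C_l=0\ ,
\]
so that $N_l=\vartheta^{-1/2}\sqrt{V_l/C_l}$, with the multiplier $\vartheta$ fixed by the cost (equivalently, the error) constraint.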
In more detail,
the typical chronology is that one targets an MSE, say $\mathcal{O}(\epsilon^2)$, then
(i) given a characterisation of the bias as an order of $h_l$,
one
determines $h_l=M^{-l}$, $l=0,1,\ldots,L$, for some integer $M>1$,
and
chooses a horizon $L$ such that the squared bias is
$\mathcal{O}(\epsilon^2)$ and
(ii) given a characterisation of $V_l$, $C_l$ as some orders of $h_l$, one
optimizes the required samples $N_0,\ldots N_L$ needed to give variance $\mathcal{O}(\epsilon^2)$.
Thus, a specification of the bias, variance and computational
costs as functions of $h_l$ is needed.
As a prototypical example of the above setting \cite{gile:08}, consider the case $U=X(T)$, with $X(T)$ being the terminal position of the solution $X$ of an
SDE,
and $\eta_l$ the distribution of $X(T)$ under the consideration of a numerical approximation with time-step $\Delta t_l = h_l$.
The laws $\eta_{l-1}$, $\eta_l$ can be coupled via use of the same driving Brownian
path.
Invoking the relevant error analysis
for SDE models, one can obtain (for $U\sim \eta_{\infty}$, $U_{l}\sim \eta_l$,
and defined on the common probability space):
\begin{itemize}
\item[(i)] weak error $|\mathbb{E} [g(U_l)- g(U)]| =\mathcal{O}(h_l^\alpha)$, providing the bias
$\mathcal{O}(h_l^\alpha)$,
\item[(ii)] strong error, $\mathbb{E} |g(U_l) - g(U)|^2 = \mathcal{O}(h_l^\beta)$, giving the variance
$V_l = \mathcal{O}(h_l^\beta)$,
\item[(iii)] computational cost for a realisation of $g(U_l)-g(U_{l-1})$, $C_l =\mathcal{O}(h_l^{-\zeta})$,
\end{itemize}
for some constants $\alpha, \beta, \zeta$ related to the details of the
discretisation method.
The standard Euler--Maruyama method for the solution of SDEs
gives the orders $\alpha=\beta=\zeta=1$.
Assuming a general context,
given such rates for bias, $V_l$ and $C_l$,
one proceeds as follows.
Recall that $h_l=M^{-l}$, for some integer $M>1$.
Then, targeting an error tolerance of $\epsilon$ and letting $h_L^{\alpha} = M^{-L\alpha}=\mathcal{O}(\epsilon)$, one has $L=\log(\epsilon^{-1})/(\alpha \log(M)) + \mathcal{O}(1)$, as in \cite{gile:08}.
Using the optimal allocation $N_l \propto \sqrt{V_l/C_l}$, one finds that
$N_l \propto h_l^{(\beta+\zeta)/2}$.
Taking under consideration a target error of size $\mathcal{O}(\epsilon)$,
one sets $N_l \propto \epsilon^{-2} h_l^{(\beta+\zeta)/2} K_L$,
with $K_L$ chosen to control the total error for increasing~$L$.
Thus, for the resulted estimator in (\ref{eq:multi})-(\ref{eq:mse}), we have:
\begin{align*}
\textrm{Variance} &= \sum_{l=0}^{L} V_l N_l^{-1}=\epsilon^2 K_L^{-1} \sum_{l=0}^L h_l^{(\beta-\zeta)/2}\ ; \\
\textrm{Comp. Cost} & = \sum_{l=0}^L N_l C_l = K_L^2 \epsilon^{-2}\ .
\end{align*}
To have a variance of $\mathcal{O}(\epsilon^2)$, one sets
$K_L = \sum_{l=0}^L h_l^{(\beta-\zeta)/2}$, so
$K_L$ may or may not depend on $\epsilon$ depending on whether
this sum converges or not (recalling that $L=\mathcal{O}(|\log(\epsilon)|)$).
In the case of Euler--Maruyama, for example, $\beta=\zeta$, $K_L=L$,
and the cost is $\mathcal{O}(\log(\epsilon)^2 \epsilon^{-2})$, versus $\mathcal{O}(\epsilon^{-3})$
using a single level with mesh-size $h_L=\mathcal{O}(\epsilon)$. If $\beta>\zeta$,
corresponding for instance to the Milstein method, then the cost is $\mathcal{O}(\epsilon^{-2})$.
The latter is the cost of obtaining the given level of error for a scalar random variable, and
is therefore optimal. The worst scenario is when $\beta<\zeta$. In this case
it is sufficient to set $K_{L}=h_L^{(\beta-\zeta)/2}$ to make the variance
$\mathcal{O}(\epsilon^2)$, and then
the number of samples on the finest level is given by $N_L = h_L^{\beta-2\alpha}$
whereas the total algorithmic cost is
$\mathcal{O}(\epsilon^{-(\zeta/\alpha + \delta)})$, where $\delta = 2-\beta/\alpha \geq 0$.
In this case, one can choose the largest value for the bias, $\alpha = \beta/2$, so that $N_L=1$ and the total cost, $\mathcal{O}(\epsilon^{-\zeta/\alpha})$, is dominated by this single
sample. See \cite{gile:08} for more details.
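The preceding recipe can be condensed into a short planning routine. The following Python sketch computes $L$ and (up to rounding, and with all multiplicative constants hypothetically set to one) the sample sizes $N_l$ from a target tolerance $\epsilon$ and assumed rates $(\alpha,\beta,\zeta)$; it is only indicative, since in practice the constants in the bias, variance and cost estimates must themselves be estimated.
\begin{verbatim}
import math

def mlmc_allocation(eps, M=2, alpha=1.0, beta=1.0, zeta=1.0):
    # Levels and sample sizes for MLMC with h_l = M^{-l},
    # bias O(h^alpha), V_l = O(h^beta), C_l = O(h^{-zeta});
    # all multiplicative constants are (hypothetically) set to one.
    L = max(1, math.ceil(math.log(1.0 / eps) / (alpha * math.log(M))))
    h = [float(M) ** (-l) for l in range(L + 1)]
    # K_L normalises the total variance sum_l V_l/N_l to O(eps^2).
    K_L = sum(hl ** ((beta - zeta) / 2.0) for hl in h)
    N = [max(1, math.ceil(eps ** (-2) * hl ** ((beta + zeta) / 2.0) * K_L))
         for hl in h]
    cost = sum(n * hl ** (-zeta) for n, hl in zip(N, h))
    return L, N, cost

if __name__ == "__main__":
    for eps in (0.1, 0.01):
        print(eps, mlmc_allocation(eps))
\end{verbatim}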
It is important to note that the realizations $U_l^{(i)}$, $U_{l-1}^{(i)}$ for a given
increment
must be coupled to obtain decaying variances $V_l$.
In the case of an SDE driven by Brownian motion one can simply simulate the
driving noise on level $l$ and then upscale it to level $l-1$ by summing elements of the finer path
\cite{gile:08}.
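For concreteness, a minimal Python sketch of this coupling for a scalar SDE $\mathrm{d}X_t=a(X_t)\,\mathrm{d}t+b(X_t)\,\mathrm{d}W_t$ under Euler--Maruyama discretisation follows; the drift and diffusion (geometric Brownian motion) and all numerical values are hypothetical placeholders, and $g(U)=U=X(T)$.
\begin{verbatim}
import numpy as np

def coupled_increment(l, N_l, T=1.0, M=2, x0=1.0, mu=0.05, sig=0.2, rng=None):
    # One MLMC term: the average of g(X_l(T)) - g(X_{l-1}(T)) over N_l
    # coupled Euler-Maruyama pairs with time steps T*M^{-l} and T*M^{-(l-1)}.
    rng = rng or np.random.default_rng()
    nf = M ** l                       # number of fine time steps
    hf = T / nf                       # fine step size
    dW = rng.normal(0.0, np.sqrt(hf), size=(N_l, nf))  # fine Brownian increments
    a = lambda x: mu * x              # drift
    b = lambda x: sig * x             # diffusion
    xf = np.full(N_l, x0)
    for k in range(nf):               # fine path
        xf = xf + a(xf) * hf + b(xf) * dW[:, k]
    if l == 0:
        return xf.mean()              # convention g(U_{-1}) = 0
    hc = T / (nf // M)                # coarse step size
    dWc = dW.reshape(N_l, nf // M, M).sum(axis=2)  # summed fine increments
    xc = np.full(N_l, x0)
    for k in range(nf // M):          # coarse path, same driving noise
        xc = xc + a(xc) * hc + b(xc) * dWc[:, k]
    return (xf - xc).mean()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    N = [40000, 20000, 10000, 5000]   # e.g. N_l proportional to sqrt(V_l/C_l)
    print(sum(coupled_increment(l, N[l], rng=rng) for l in range(len(N))))
\end{verbatim}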
For the case of a PDE forward model relying on uncertain input the scenario is quite similar \cite{cliffe2011multilevel}.
For example, in the case
that
the input is of fixed dimension and
the levels arise due to discretization
of the forward map alone within a finite element context,
one would use the same realization of the input on two separate meshes for a pairwise-coupled realization.
Note that in the more general context of PDE, it is natural to decompose
$\zeta=d\cdot \gamma$, where $d$ is the spatio-temporal dimension of the underlying continuum. In particular,
the number of degrees of freedom of a $d-$dimensional field approximated on a mesh of diameter $h_l$
is given by $h_l^{-d}$. Then, the forward solve associated to the evaluation of $g(U_l)$ may range from linear
($\gamma=1$) to cubic ($\gamma=3$) in the number of degrees of freedom. For example, the solution of an SDE
or a sparse matrix-vector multiplication give $\gamma=1$, a dense matrix-vector multiplication would give
$\gamma=2$, and direct linear solve by Gaussian elimination would give $\gamma=3$.
The present work will focus on the case of an inverse problem with fixed-dimensional input. Indeed, the difficulty arises here because we only know how to {\it evaluate} (up to a constant) the target density at any given level, and cannot directly obtain independent samples from it. There exist many approaches to solving such problems; for example, one can review the recent works \cite{hoan:12, ketelsen2013hierarchical}, which use Markov chain Monte Carlo (MCMC) methods in the multilevel framework. In this article a more natural
and powerful
formulation is considered, related to the use of Sequential Monte Carlo approaches.
Sequential Monte Carlo (SMC) methods are amongst the most widely used computational techniques in statistics, engineering, physics, finance and many other disciplines. In particular SMC samplers \cite{delm:06b} are designed to approximate a sequence $\{ \eta_l \}_{l \geq 0}$ of probability distributions on a common space, whose densities are only known up to a normalising constant.
The method uses $N\geq 1$
samples (or particles) that are generated in parallel, and are propagated with importance sampling (often) via MCMC and resampling methods. Several convergence results, as $N$ grows, have been proved (see e.g.~\cite{cl-2013,delm:04,delmoral1,douc}).
SMC samplers have also recently been proven to be stable in certain high-dimensional contexts \cite{beskos}. The current state of the art for the analysis of SMC algorithms includes the works
of \cite{cl-2013,chopin1,delm:04,delmoral1,douc}. In this work, the method of SMC samplers is naturally suited to approximating the sequence of distributions, but as we will see, implementing the standard telescoping
identity of MLMC requires some ingenuity. In addition, in order to consider the benefit of using SMC, one must analyze the variance of the estimate; in such scenarios this is not a trivial extension of the convergence analysis previously mentioned.
In particular, one must very precisely consider the auto-covariance of the SMC approximations and consider the rate of decrease of this quantity as the time-lag between SMC approximations increases. Such a precise analysis does not appear to
exist in the literature. We note that our work, whilst presented in the context of PDEs, is not restricted to such scenarios and, indeed, can be applied in almost any other similar context (that is, a sequence of distributions on a common space, with increasing computational costs associated to the evaluation of the densities, which in some sense converge to a given density); however, the potential benefit of doing so may not be obvious in general.
This article is structured as follows. In Section \ref{sec:set_up} the ML identity and SMC algorithm are given. In Section \ref{sec:complex} our main complexity result is given under assumptions and their implications are discussed. In Section \ref{sec:IP}
we give a context where the assumptions of our theoretical results can be verified. In Section \ref{sec:numerics} our approach is numerically demonstrated on a Bayesian inverse problem. Section \ref{sec:complex} and the Appendix provide the proofs of our main theorem.
\section{Sequential Monte Carlo Methods}\label{sec:set_up}
\subsection{Notations}
Let $(E,\mathcal{E})$ be a measurable space.
The notation $\ensuremath{\mathcal{B}_b}(E)$ denotes the class of bounded and measurable real-valued functions on $E$. The
supremum norm is written as $\|f\|_{\infty} = \sup_{u\in E}|f(u)|$
and $\mathcal{P}(E)$ is the set of probability measures on $(E,\mathcal{E})$. We will consider non-negative operators
$K : E \times \mathcal{E} \rightarrow \mathbb R_+$ such that for each $u \in E$ the mapping $A \mapsto K(u, A)$ is a finite non-negative measure on $\mathcal{E}$ and for each $A \in \mathcal{E}$ the function $u \mapsto K(u, A)$ is measurable; the kernel $K$ is Markovian if $K(u, dv)$ is a probability measure for every $u \in E$.
For a finite measure $\mu$ on $(E,\mathcal{E})$, and a real-valued, measurable $f:E\rightarrow\mathbb{R}$, we define the operations:
\begin{equation*}
\mu K : A \mapsto \int K(u, A) \, \mu(du)\ ;\quad
K f : u \mapsto \int f(v) \, K(u, dv).
\end{equation*}
We also write $\mu(f) = \int f(u) \mu(du)$. In addition $\|\cdot\|_{r}$, $r\geq 1$, denotes the $L_r-$norm, where the expectation is w.r.t.~the law of the appropriate simulated algorithm.
\subsection{Algorithm}
As described in Section \ref{sec:intro}, the context of interest is when a sequence of densities
$\{\eta_{l}\}_{l\ge 0}$, as in (\ref{eq:target}), are
associated to an `accuracy' parameter $h_l$, with $h_l\rightarrow 0$
as $l\rightarrow \infty$, such that $\infty>h_0>h_1>\cdots>h_{\infty}=0$. In practice one cannot treat $h_\infty=0$ and so must consider these distributions with $h_l>0$.
The laws with large $h_l$ are easy to sample from with low computational cost, but are very different from $\eta_{\infty}$, whereas, those distributions with small $h_l$ are
hard to sample with relatively high computational cost, but are closer to $\eta_{\infty}$.
Thus, we choose a maximum level $L\ge 1$ and we will estimate
$$
\mathbb{E}_{\eta_L}[g(U)] := \int_E g(u)\eta_L(u)du\ .
$$
By the standard telescoping identity used in MLMC, one has
\begin{align}
\mathbb{E}_{\eta_L}[g(U)] & = \mathbb{E}_{\eta_0}[g(U)] + \sum_{l=1}^{L}\Big\{
\mathbb{E}_{\eta_l}[g(U)] - \mathbb{E}_{\eta_{l-1}}[g(U)]\Big\} \nonumber \\
& =\mathbb{E}_{\eta_0}[g(U)] + \sum_{l=1}^{L}\mathbb{E}_{\eta_{l-1}}\Big[
\Big(\frac{\gamma_l(U)Z_{l-1}}{\gamma_{l-1}(U)Z_l} - 1\Big)g(U)\Big]\ .
\label{eq:ml_approx}
\end{align}
Suppose now that one applies an SMC sampler \cite{delm:06b} to obtain
a collection of samples (particles) that sequentially approximate $\eta_0, \eta_1,\ldots, \eta_L$.
We consider the case when one initializes the population of particles by sampling i.i.d.~from $\eta_0$, then at every step resamples and applies an MCMC kernel to mutate the particles.
We denote by $(U_{0}^{1:N_0},\dots,U_{L-1}^{1:N_{L-1}})$, with $+\infty > N_0\geq N_1\geq \cdots \geq N_{L-1}\geq 1$, the samples after mutation; one resamples $U_l^{1:N_l}$ according to the weights $G_{l}(U_l^i) =
(\gamma_{l+1}/\gamma_l)(U_l^{i})$, for indices $l\in\{0,\dots,L-1\}$.
We will denote by $\{M_l\}_{1\leq l\leq L-1}$ the sequence of MCMC kernels used at stages $1,\dots,L-1$, such that $\eta_{l}M_l = \eta_l$.
For $\varphi:E\rightarrow\mathbb{R}$, $l\in\{1,\dots,L\}$, we have the following estimator
of $\mathbb{E}_{\eta_{l-1}}[\varphi(U)]$:
$$
\eta_{l-1}^{N_{l-1}}(\varphi) = \frac{1}{N_{l-1}}\sum_{i=1}^{N_{l-1}}\varphi(U_{l-1}^i)\ .
$$
We define
$$
\eta_{l-1}^{N_{l-1}}(G_{l-1}M_l(du_l)) = \frac{1}{N_{l-1}}\sum_{i=1}^{N_{l-1}}G_{l-1}(U_{l-1}^i) M_l(U_{l-1}^i,du_l)\ .
$$
The joint probability distribution for the SMC algorithm is
$$
\prod_{i=1}^{N_0} \eta_0(du_0^i) \prod_{l=1}^{L-1} \prod_{i=1}^{N_l} \frac{\eta_{l-1}^{N_{l-1}}(G_{l-1}M_l(du_l^i))}{\eta_{l-1}^{N_{l-1}}(G_{l-1})}\ .
$$
If one considered one more step in the above procedure, delivering samples
$\{U_L^i\}_{i=1}^{N_L}$, a standard SMC sampler estimate of the quantity of interest in (\ref{eq:ml_approx})
would be $\eta_L^{N_L}(g)$; the earlier samples are discarded.
Within a multilevel context, a consistent SMC estimate of \eqref{eq:ml_approx}
is
\begin{equation}
\widehat{Y} =
\eta_{0}^{N_0}(g) + \sum_{l=1}^{L}\Big\{\frac{\eta_{l-1}^{N_{l-1}}(gG_{l-1})}{\eta_{l-1}^{N_{l-1}}(G_{l-1})} - \eta_{l-1}^{N_{l-1}}(g)\Big\}\label{eq:smc_est}\ ,
\end{equation}
and this will be proven to be superior to the standard estimate, under assumptions.
There are two important structural differences within the MLSMC context,
compared to the standard ML implementation of \cite{gile:08}, sketched in
Section \ref{sec:intro}:
\begin{itemize}
\item[i)] the $L+1$ terms in (\ref{eq:smc_est}) are \emph{not} unbiased estimates
of the differences $\mathbb{E}_{\eta_l}[g(U)] - \mathbb{E}_{\eta_{l-1}}[g(U)]$,
so the relevant MSE error decomposition here is:
\begin{equation}
\label{eq:dec}
\mathbb{E}\big[ \{\widehat{Y}-\mathbb{E}_{\eta_\infty}[g(U)] \}^2\big]
\le 2\,\mathbb{E}\big[\{\widehat{Y}-\mathbb{E}_{\eta_L}[g(U)]\}^2\big] +
2\,\{ \mathbb{E}_{\eta_L}[g(U)] - \mathbb{E}_{\eta_\infty}[g(U)] \}^2\ .
\end{equation}
\item[ii)] the same $L+1$ estimates are \emph{not} independent. Hence a
substantially more complex error analysis will be required to characterise
$\mathbb{E}[\{\widehat{Y}-\mathbb{E}_{\eta_L}[g(U)]\}^2]$.
In Section \ref{sec:complex}, we will obtain an expression for this discrepancy,
which will be more involved than the standard $\sum_{l=0}^{L}V_l/N_l$,
but will still allow for a relevant constrained optimisation
to determine the optimal allocation of particle sizes $N_l$ along the levels.
\end{itemize}
Given appropriate control of both terms on the R.H.S.\@
of (\ref{eq:dec}) as powers of the tolerance $\epsilon$
for a Bayesian Inverse Problem (to be described in Section \ref{sec:IP}), one can specify a level $L$ and optimal
Monte-Carlo sample sizes $N_l$ so that the MSE of $\widehat{Y}$
is $\mathcal{O}(\epsilon^2)$ at a reduced computational cost.
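In terms of implementation, given the mutated particle populations and the weight functions $G_l$, the estimator (\ref{eq:smc_est}) is straightforward to compute. The following Python sketch (ours; the names are illustrative) makes this explicit:
\begin{verbatim}
import numpy as np

def mlsmc_estimate(particles, G, g):
    """particles[l]: samples approximating eta_l, for l = 0,...,L-1;
    G[l]: the weight function G_l = gamma_{l+1}/gamma_l;
    g: the quantity of interest."""
    est = np.mean([g(u) for u in particles[0]])
    L = len(particles)  # increments for levels l = 1,...,L
    for l in range(1, L + 1):
        U = particles[l - 1]
        Gw = np.array([G[l - 1](u) for u in U])
        gv = np.array([g(u) for u in U])
        est += np.sum(gv * Gw) / np.sum(Gw) - np.mean(gv)
    return est
\end{verbatim}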
\section{Development of multilevel SMC}\label{sec:complex}
\subsection{Main Result}
We will now obtain an analytical result that controls the error term
$\mathbb{E}[\{\widehat{Y}-\mathbb{E}_{\eta_L}[g(U)]\}^2]$ in expression (\ref{eq:dec}).
This is of general significance for the development of MLSMC in various contexts.
Then, we will look in detail at an inverse problem context (developed in Section \ref{sec:IP})
and fully investigate the MLSMC method.
For any $l\in\{0,\dots,L\}$ and $\varphi\in \mathcal{B}_b(E)$ we write:
$
\eta_l(\varphi) := \int_E \varphi(u)\eta_l(u)du.
$
We introduce the following assumptions, which will be verifiable in some contexts. They are rather strong, but could be relaxed at a considerable increase in the complexity of the arguments,
which will ultimately provide the same information. In addition, the assumptions are standard in the literature of SMC methods; see \cite{delm:04,delmoral1}.
\begin{hypA}
\label{hyp:A}
There exist $0<\underline{C}<\overline{C}<+\infty$ such that
\begin{eqnarray*}
\sup_{l \geq 1}
\sup_{u\in E} G_l (u) & \leq & \overline{C}\ ;\\
\inf_{l \geq 1}
\inf_{u\in E} G_l (u) & \geq & \underline{C}\ .
\end{eqnarray*}
\end{hypA}
\begin{hypA}
\label{hyp:B}
There exists a $\rho\in(0,1)$ such that for any $l\ge 1$, $(u,v)\in E^2$, $A\in\mathcal{E}$:
$$
\int_A M_l(u,du') \geq \rho \int_A M_l(v,dv')\ .
$$
\end{hypA}
\begin{theorem}\label{theo:main_error}
Assume (A\ref{hyp:A}-\ref{hyp:B}). There exist $C<+\infty$ and $\kappa\in (0,1)$ such that for any $g\in\mathcal{B}_b(E)$, with $\|g\|_{\infty}=1$,
\begin{align*}
\mathbb{E}\big[\{\widehat{Y}-\mathbb{E}_{\eta_L}[g(U)]\}^2\big]
\leq
C\,\bigg(\frac{1}{N_0} + &\sum_{l=1}^{L}\frac{\|\tfrac{Z_{l-1}}{Z_{l}}G_{l-1}-1\|_{\infty}^2}{N_{l-1}} \\ &+
\sum_{1\le l<q\le L}\|\tfrac{Z_{l-1}}{Z_{l}}G_{l-1}-1\|_{\infty} \|\tfrac{Z_{q-1}}{Z_{q}}G_{q-1}-1\|_{\infty}
\big\{\tfrac{\kappa^{q-l}}{N_{l-1}}
+\tfrac{1}{N_{l-1}^{1/2}N_{q-1}}
\big\}\bigg)\ .
\end{align*}
\end{theorem}
\subsection{Proof of Theorem \ref{theo:main_error}}\label{sec:main_proof}
The following notations are adopted; this will substantially simplify subsequent expressions:
\begin{align}
Y_{l-1}^{N_{l-1}} &= \frac{\eta_{l-1}^{N_{l-1}}(gG_{l-1})}{\eta_{l-1}^{N_{l-1}}(G_{l-1})} - \eta_{l-1}^{N_{l-1}}(g)\ , \quad \nonumber \\[0.2cm]
Y_{l-1} &= \frac{\eta_{l-1}(gG_{l-1})}{\eta_{l-1}(G_{l-1})} - \eta_{l-1}(g)
\,\,\,\,\big(\, \equiv \eta_{l}(g) - \eta_{l-1}(g)\, \big)\ , \label{eq:analytical} \\[0.3cm]
\nonumber
\overline{\varphi_l}(u) & = \big(\tfrac{Z_{l-1}}{Z_l}G_{l-1}(u)-1\big) \ , \\[0.3cm]
\nonumber
\widetilde{\varphi_l}(u)
&= g(u) \overline{\varphi_l}(u)
\ , \\[0.3cm]
\label{eq:ay}
A_n(\varphi,N) & = \eta_n^N(\varphi G_n)/\eta_n^N(G_n) \ , \quad \varphi\in\mathcal{B}_b(E)\ , \quad
0\leq n\leq L-1 \ , \\[0.2cm]
\label{eq:aybar}
\overline{A}_n(\varphi,N) & = A_n(\varphi,N) - \frac{\eta_n(\varphi G_n)}{\eta_n(G_n)}\ .
\end{align}
Throughout this Section, $C$ is a constant whose value may change, but does not depend on any time parameters of the Feynman-Kac formula, nor $N_l$. The proof of Theorem \ref{theo:main_error} follows from several technical lemmas which are now given and supported by further results in the Appendix; the proof of the theorem is at the end of this subsection.
It is useful to observe that $Z_l/Z_{l-1} = \eta_{l-1}(G_{l-1})$, $\eta_{l-1}(\overline\varphi_l) =0$
and
$|A_n(\varphi,N)|\le \|\varphi\|_{\infty}$ with probability 1,
since $A_n(\varphi,N)$ is an average of $\varphi$
under a (random) discrete probability distribution.
We will make repeated use of the following identity which follows from these observations upon adding and subtracting
$ \eta_{l-1}^{N_{l-1}} (\frac{Z_{l-1}}{Z_l}g(\cdot)G_{l-1}(\cdot) )$:
\begin{equation}
\label{eq:basic}
Y_{l-1}^{N_{l-1}}-Y_{l-1} = A_{l-1}(g,N_{l-1})\,\{\eta_{l-1} - \eta_{l-1}^{N_{l-1}}\} (\overline{\varphi_l}) + \{\eta_{l-1}^{N_{l-1}}-\eta_{l-1}\} (\widetilde{\varphi_l})\ .
\end{equation}
\begin{lem}\label{lem:tech_lem}
Assume (A\ref{hyp:A}-\ref{hyp:B}). There exists a $C<+\infty$ such that
for any $l\ge 1$:
$$
\|Y_{l-1}^{N_{l-1}}- Y_{l-1} \|_2^2 \leq \frac{C\,\|\frac{Z_{l-1}}{Z_{l}}G_{l-1}-1\|_{\infty}^2}{N_{l-1}}\ .
$$
\end{lem}
\begin{proof}
From (\ref{eq:basic}) and the $C_2$-inequality we obtain:
\begin{align*}
\|Y_{l-1}^{N_{l-1}}- Y_{l-1} \|_2^2 \le
2\,\|A_{l-1}(g,N_{l-1})\{\eta_{l-1}^{N_{l-1}}-\eta_{l-1}\} (\overline{\varphi_l})\|^2_2 +
2\,\|\{\eta_{l-1}^{N_{l-1}}-\eta_{l-1}\} (\widetilde{\varphi_l})\|^2_2
\\
\leq 2\,\|\{\eta_{l-1}^{N_{l-1}}-\eta_{l-1}\} (\overline{\varphi_l})\|_2^2
+ 2\,\|\{\eta_{l-1}^{N_{l-1}}-\eta_{l-1}\} (\widetilde{\varphi_l})\|^2_2
\end{align*}
By \cite[Theorem 7.4.4]{delm:04}, both squared $L_2$-norms are upper bounded by
$\frac{C\|\frac{Z_{l-1}}{Z_{l}}G_{l-1}-1\|_{\infty}^2}{2N_{l-1}}$
(note that $\|\widetilde{\varphi_l}\|_\infty\le\|\overline{\varphi_l}\|_\infty$, as $\|g\|_\infty=1$).
This completes the proof.
\end{proof}
By the $C_2$-inequality and standard properties of i.i.d.~random variables one has:
\begin{align*}
\mathbb{E}\big[\{\widehat{Y}-\mathbb{E}_{\eta_L}[g(U)]\}^2\big]
= \mathbb{E}\Big[\big\{\{\eta_{0}^{N_0}-\eta_{0}\}(g)+\sum_{l=1}^{L}(Y_{l-1}^{N_{l-1}} - Y_{l-1})\big\}^2\Big]
\le \frac{C}{N_0} + 2\,\mathbb{E}\Big[\big\{\sum_{l=2}^{L}(Y_{l-1}^{N_{l-1}} - Y_{l-1})\big\}^2\Big]\ ,
\end{align*}
where the $C/N_0$ term collects the contributions that depend only on the i.i.d.~level-zero particles. We have that:
\begin{equation*}
\mathbb{E}\Big[\big\{\sum_{l=2}^{L}(Y_{l-1}^{N_{l-1}} - Y_{l-1})\big\}^2\Big]
= \mathbb{E}\Big[ \sum_{l=2}^{L}(Y_{l-1}^{N_{l-1}} - Y_{l-1})^2\Big] + 2\sum_{2\le l < q\le L} \mathbb{E}\big[(Y_{l-1}^{N_{l-1}} - Y_{l-1}) (Y_{q-1}^{N_{q-1}} - Y_{q-1})\big]\ .
\end{equation*}
Lemma \ref{lem:tech_lem} gives that:
\begin{equation*}
\mathbb{E}\Big[ \sum_{l=2}^{L}(Y_{l-1}^{N_{l-1}} - Y_{l-1})^2\Big]\le
C\sum_{l=2}^{L}\frac{\|\frac{Z_{l-1}}{Z_{l}}G_{l-1}-1\|_{\infty}^2}{N_{l-1}}
\end{equation*}
thus it remains to treat the cross-interaction terms.
Using the decomposition in (\ref{eq:basic}), we obtain
\begin{align*}
\sum_{2\le l < q\le L} \mathbb{E}&\big[(Y_{l-1}^{N_{l-1}} - Y_{l-1}) (Y_{q-1}^{N_{q-1}} - Y_{q-1})\big] = \\
&=\sum_{2\le l < q\le L} \mathbb{E}\,\big[A_{l-1}(g,N_{l-1})A_{q-1}(g,N_{q-1})\{\eta_{l-1}^{N_{l-1}}-\eta_{l-1}\}(\overline{\varphi_l})\{\eta_{q-1}^{N_{q-1}}-\eta_{q-1}\}(\overline{\varphi_q})\,\big]\\
&\hspace{1.5cm}+\sum_{2\le l < q\le L}
\mathbb{E}\,\big[\,A_{l-1}(g,N_{l-1})\{\eta_{l-1}^{N_{l-1}}-\eta_{l-1}\}(\overline{\varphi_l})\{\eta_{q-1}^{N_{q-1}}-\eta_{q-1}\}(\widetilde{\varphi_q})\,\big]\\
&\hspace{1.5cm}+\sum_{2\le l < q\le L}
\mathbb{E}\,\big[\,A_{q-1}(g,N_{q-1})\{\eta_{l-1}^{N_{l-1}}-\eta_{l-1}\}(\widetilde{\varphi_l})\{\eta_{q-1}^{N_{q-1}}-\eta_{q-1}\}(\overline{\varphi_q})\,\big]\\
&\hspace{1.5cm}+\sum_{2\le l < q\le L}
\mathbb{E}\,\big[\,\{\eta_{l-1}^{N_{l-1}}-\eta_{l-1}\}(\widetilde{\varphi_l})\{\eta_{q-1}^{N_{q-1}}-\eta_{q-1}\}(\widetilde{\varphi_q})\,\big]\ .
\end{align*}
We will now apply Proposition \ref{prop:prop_corr_bd3} to the relevant terms in the sum, to yield the upper-bound:
\begin{align*}
C \sum_{1\le l < q\le L}\|\widetilde{\varphi_l}\|_{\infty}\|\widetilde{\varphi_q}\|_{\infty}\Big\{\frac{\kappa^{q-l}}{N_{l-1}}
&+\frac{1}{N_{l-1}^{1/2}N_{q-1}}
\Big\}\ .
\end{align*}
From here one can conclude the proof of Theorem \ref{theo:main_error}.
\subsection{MLSMC Variance Analysis}
\label{ssec:multilevel_component}
This section considers the specification of parameters for the MLSMC algorithm after consideration of Theorem \ref{theo:main_error}. Recall that in the simpler SDE setting of \cite{gile:08}
one must work with the strong error estimate $\mathbb{E} |g(U_l) - g(U)|^2 = \mathcal{O}(h_l^\beta)$
and the deduced variance $V_l=\mathrm{Var}[g(U_l)-g(U_{l-1})] = \mathcal{O}(h_l^{\beta})$.
From Theorem \ref{theo:main_error}, a similar role within MLSMC is taken
by:
\begin{equation}
\label{eq:VL}
V_l:= \|\tfrac{Z_{l-1}}{Z_{l}}G_{l-1}-1\|_{\infty}^2 \ .
\end{equation}
We assume that in the given context one can obtain
that $V_l = \mathcal{O}(h_l^{\beta})$ for some appropriate
rate constant $\beta\ge 1$.
Recall that we have $h_l=M^{-l}$, for some integer $M>1$
and we assume a bias of $\mathcal{O}(h_L^{\alpha})$.
Thus, targeting an error tolerance of $\epsilon$, we have
$h_L^{\alpha} = M^{-\alpha L}=\mathcal{O}(\epsilon)$,
so that $L=\log(\epsilon^{-1})/(\alpha \log(M)) + \mathcal{O}(1)$.
Now, to optimally allocate $N_0, N_1, \ldots, N_L$,
one proceeds along the lines outlined in the Introduction
under consideration of Theorem \ref{theo:main_error}.
Notice that $\sum_{q=l+1}^L \kappa^{q-l} \leq \frac{1}{1-\kappa}$ and that
$V_q\le V_l$ for $q\ge l$ (in terms of the obtained upper bounds), so the upper bound in Theorem \ref{theo:main_error} can be further bounded, up to a constant, by:
\begin{equation}
\label{eq:up}
\frac{1}{N_0} + \sum_{l=1}^L \bigg(\frac{h_l^\beta}{N_l} +
\Big(\frac{h_l^\beta}{N_l} \Big)^{1/2} \sum_{q=l+1}^L \frac{h_q^{\beta/2}}{N_q} \bigg)\ .
\end{equation}
We also assume a computational cost proportional to $\sum_{l=0}^L N_l h_l^{-\zeta}$, for some rate $\zeta\ge 1$,
with the resampling cost considered
to be negligible for practical purposes compared to the cost of calculating the importance weights (as is the case for the inverse problems we focus upon later).
As with standard MLMC in \cite{gile:08}, we need to find $N_{0},\ldots, N_L$
that optimize (\ref{eq:up}) given a fixed computational cost $\sum_{l=0}^L N_l h_l^{-\zeta}$.
Such a constrained optimization with the complicated error bound in (\ref{eq:up}) results
in the need to solve
a quartic equation
as a function of $V_l$ and the per-sample costs $C_l:=h_l^{-\zeta}$. Instead, one can {\it assume} that the second term
on the R.H.S.\@ of (\ref{eq:up})
is negligible, solve the constrained optimization ignoring that term,
and then check that the effect of that term for the given choice of $\{N_l\}_{l=0}^{L-1}$ is smaller than $\mathcal{O}(\epsilon^2)$.
Following this approach gives a constrained optimisation problem
identical to the simple case of \cite{gile:08}, with solution
$N_l \propto \sqrt{V_l/C_l} = \mathcal{O}(h_l^{(\beta+\zeta)/2})$.
One works as in Section \ref{sec:intro}, and
selects:
\begin{equation*}
N_l \propto \epsilon^{-2}h_l^{(\beta+\zeta)/2}K_L \ ; \quad
K_L \eqsim \sum_{l=0}^{L} h_l^{(\beta-\zeta)/2}\ .
\end{equation*}
Then returning to (\ref{eq:up}) one can check that indeed the extra summand
is smaller than $\mathcal{O}(\epsilon^2)$ for the above choice
of $N_l$. Notice that: (i)\, $h_q^{\beta/2}/N_q =
\mathcal{O}(\epsilon^{2}h_q^{-\zeta/2}/K_L)$,
and the sum $\sum_{q=l+1}^{L}h_q^{-\zeta/2}$ is dominated
by $h_L^{-\zeta/2} = \mathcal{O}(\epsilon^{-\zeta/(2\alpha)})$;
\,(ii)\,we have $(h_l^{\beta}/N_l)^{1/2} \propto \epsilon/K_L^{1/2}h_l^{(\beta-\zeta)/4} $.
Therefore, %
\begin{align*}
\sum_{l=1}^L \bigg(
\Big(\frac{h_l^\beta}{N_l} \Big)^{1/2} \sum_{q=l+1}^L \frac{h_q^{\beta/2}}{N_q} \bigg) = \mathcal{O}\Big(\epsilon^{2}
\epsilon^{1-\zeta/(2\alpha)}\sum_{l=0}^{L}h_l^{(\beta-\zeta)/4}
/K_L^{3/2}\Big) = \mathcal{O}(\epsilon^{2}\epsilon^{1-\zeta/(2\alpha)})\ .
\end{align*}
Thus, when $\zeta\le 2\alpha$, the overall mean squared error
is still $\mathcal{O}(\epsilon^2)$.
In the inverse problem context of Section \ref{sec:IP},
we will establish that $\beta=2$, $\alpha=\beta/2$.
Also, in many cases (depending on the chosen PDE solver)
we have $\zeta=d$.
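To make the above allocation concrete, the following Python sketch (ours; the unspecified proportionality constant in $N_l$ is set to one) computes $L$ and the particle numbers for given $(\epsilon,\alpha,\beta,\zeta,M)$:
\begin{verbatim}
import numpy as np

def allocate(eps, alpha, beta, zeta, M=2):
    # Bias h_L^alpha = O(eps) fixes L; N_l follows the constrained
    # optimisation, up to an unspecified proportionality constant.
    L = int(np.ceil(np.log(1.0 / eps) / (alpha * np.log(M))))
    h = M ** (-np.arange(L + 1.0))
    K_L = np.sum(h ** ((beta - zeta) / 2.0))
    N = np.ceil(eps**(-2) * h**((beta + zeta) / 2.0) * K_L).astype(int)
    return L, N

# e.g. allocate(0.01, alpha=1.0, beta=2.0, zeta=1.0)
\end{verbatim}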
\section{Bayesian Inverse Problem}
\label{sec:IP}
A context will now be introduced in which the results are of interest and
the assumptions can be satisfied. We begin with another round of notations.
Introduce the Gelfand triple $V := H^{1}(D) \subset L^2(D) \subset H^{-1} (D)=: V^*$,
where the domain $D$ will be understood.
Furthermore, denote by $\langle \cdot, \cdot \rangle, \|\cdot\|$ the inner product and norm
on $L^2$, with superscripts
to denote the corresponding inner product and norm on the Hilbert
spaces $V$ and $V^*$. Denote the finite dimensional Euclidean inner product and norm as
$\langle \cdot, \cdot \rangle, |\cdot|$, with the latter also denoting the size of a set and the absolute value,
and denote weighted norms by adding a subscript as
$\langle \cdot, \cdot \rangle_A := \langle A^{-\frac12}\cdot, A^{-\frac12}\cdot \rangle$, with corresponding norms
$|\cdot |_A$ or $\|\cdot \|_A$ for Euclidean and $L^2$ spaces, respectively
(for symmetric, positive definite $A$ with $A^\frac12$ being the unique symmetric square root).
In the following, the generic constant $C$ will be used on the right-hand side of inequalities as necessary,
its precise value possibly changing between usages.
Let $D \subset \mathbb R^d$ with $\partial D \in C^1$ convex.
For $f \in V^*$, consider the following PDE on $D$:
\begin{align}
\label{eq:uniellip}
- \nabla \cdot ( \hat{u} \nabla p ) & = f\ , \quad {\rm on} ~ D\ , \\
p & = 0\ , \quad {\rm on} ~ \partial D\ ,
\label{eq:bv}
\end{align}
where:
\begin{equation}
\label{eq:expand}
\hat{u} (x) = \bar{u}(x) + \sum_{k=1}^K u_k \sigma_k \phi_k(x) \ .
\end{equation}
Define $u=\{u_k\}_{k=1}^K$, with $u_k \sim
U[-1,1]$ i.i.d. This determines the prior distribution for $u$.
Assume that $\bar{u}, \phi_k \in C^\infty$ for all $k$ and that
$\|\phi_k\|_\infty =1$.
In particular,
assume $\{\sigma_k\}_{k=1}^K$
decay\footnote{If $K\rightarrow \infty$ it is important that they decay with a suitable
rate in order to ensure $u$ lives almost surely in an appropriate sequence-space,
or equivalently $\hat{u}$ lives in the appropriate function-space.
However, here we down-weight higher frequencies as necessary only to
induce certain smoothness properties, while actually for a given value of $u \in E$
the resulting permeability
$\hat{u} \in \widehat{E} \subset C^\infty(D) \subset C(D) \subset L^\infty(D) \subset L^p(D)$ for all $p\geq 1$.} with $k$.
The state space is $E = \prod_{k=1}^K [-1,1]$.
It is important that the following property holds:
$$\inf_x \hat{u}(x) \geq \inf_x \bar{u}(x) - \sum_{k=1}^K \sigma_k \geq u_* > 0$$
so that the operator on the
left-hand side of \eqref{eq:uniellip} is uniformly elliptic.
Let $p(\cdot;u)$ denote the weak solution of \eqref{eq:uniellip} for parameter value $u$.
Define the following vector-valued function
$$
\mathcal{G}(p) = [ g_1( p), \cdots , g_M ( p ) ]^\top\ ,
$$
where $g_m$ are elements of the dual space
$V^*$ for $m=1,\ldots, M$.
It is assumed that the data take the form
\begin{equation}
y = \mathcal{G} (p) + \xi\ , \quad \xi \sim N(0,\Gamma)\ , \quad \xi \perp u\ ,
\label{eq:data}
\end{equation}
where $N(0,\Gamma)$ denotes the Gaussian random variable with mean $0$ and covariance $\Gamma$,
and $\perp$ denotes independence.
The unnormalized density then is given by:
\begin{equation*}
\gamma(u) = e^{-\Phi[\mathcal{G}(p(\cdot;u))]} \ ; \quad \Phi(\mathcal{G}) = \tfrac{1}{2}\, | \mathcal{G} - y|^2_\Gamma
\ .
\end{equation*}
Consider the triangulated domains $\{D^l\}_{l=1}^\infty$
approximating $D$,
where $l$ indexes the number of nodes $N(l)$, so that we have
$D^1 \subset\cdots \subset D^{l} \subset D^\infty :=D$,
with sufficiently regular triangles.
Consider a finite element discretization on $D^l$
consisting of $H^{1}$ functions $\{\psi_\ell\}_{\ell=1}^{N(l)}$.
In particular, continuous piecewise linear hat functions will be
considered here, the explicit form of which will be given in section \ref{ssec:numset}.
Denote the corresponding space
of functions of the form $\varphi = \sum_{\ell=1}^{N(l)} v_\ell \psi^l_\ell$ by $V^l$, and notice that
$V^1\subset V^{2}\subset \cdots \subset V^l \subset V$.
By making the further Assumption 7 of
\cite{hoan:12} that the weak solution $p(\cdot;u)$ of \eqref{eq:uniellip}-(\ref{eq:bv}) for parameter value $u$
is in the space $W=H^2 \cap H^1_0 \subset V$, one obtains a well-defined
finite element approximation $p^l(\cdot;u)$ of $p(\cdot;u)$.
Thus, the sequence of distributions of interest in this context is:
\begin{equation*}
\gamma_l(u) = e^{-\Phi[\mathcal{G}(p^l(\cdot;u))]}\ , \quad l=0,1,\ldots, L\ .
\end{equation*}
\subsection{Error Estimates}\label{sec:verify}
Notice one can
take the inner product of \eqref{eq:uniellip} with the solution $p \in V$,
and perform integration by parts on the right-hand side,
in order to obtain
$\langle \hat{u} \nabla p, \nabla p \rangle = \langle f , p \rangle$.
Therefore
\begin{equation}
u_* \| p \|^2_V = u_* \langle \nabla p, \nabla p \rangle \leq
\langle \hat{u} \nabla p, \nabla p \rangle =
\langle f , p \rangle \leq \|f\|_{V^*} \|p\|_V.
\end{equation}
So the following bound holds in $V$, uniformly over $u$:
\begin{equation}
\| p(\cdot;u) \|_V \leq \frac{\|f\|_{V^*}}{u_*}\ .
\label{eq:pvbound}
\end{equation}
Notice that:
\begin{equation}
|\mathcal{G}(p)-\mathcal{G}(p')| = \Big(\sum_{m=1}^M \langle g_m, p-p' \rangle^2 \Big)^{1/2} \leq \| p -p'\|_V
\sum_{m=1}^M \|g_m\|_{V^*} = C \| p-p' \|_V\ .
\label{eq:gunifu}
\end{equation}
So the following uniform bound also holds:
\begin{equation*}
|\mathcal{G}(p(\cdot;u))| \leq C\,\frac{\|f\|_{V^*}}{u_*}\ .
\end{equation*}
The uniform bound on $\mathcal{G}$ provides the Lipschitz bound
\begin{equation}
|\Phi(\mathcal{G}) - \Phi(\mathcal{G}')| \leq C |\mathcal{G} - \mathcal{G}'|,
\label{eq:ctslip}
\end{equation}
obtained as follows:
\begin{align}
\nonumber
|\Phi(\mathcal{G}) - \Phi(\mathcal{G}')| = & \frac12\left | |\mathcal{G} - y|_\Gamma^2 - |\mathcal{G}' - y|_\Gamma^2 \right |\\
\nonumber
= &\left | |\mathcal{G}|_\Gamma^2 - |\mathcal{G}'|_\Gamma^2 + 2 \langle \mathcal{G}' - \mathcal{G} , y \rangle _{\Gamma} \right |\\
\nonumber
\leq & \left ( |\mathcal{G}| + |\mathcal{G}'| + 2 |y| \right ) |\Gamma^{-1}| |\mathcal{G} - \mathcal{G}'|\ .
\end{align}
Setting $\mathcal{G}'=0$ gives the boundedness of
$\Phi$.
Considering some sequence $h_l$ indicating the maximum diameter of an individual element
at level $l$, with $h_l \rightarrow 0$ (e.g. $h_l = 2^{-l}$),
the following asymptotic bound holds for continuous piecewise linear hat functions
\cite{ciarlet1978finite}\footnote{Higher order finite elements can yield stronger convergence rates,
but will not be considered here in the interest of a more streamlined presentation.}
\begin{equation}
\|p(\cdot;u) - p^l(\cdot;u)\|_V \leq C h_l \| p(\cdot;u)\|_W\ .
\label{eq:femconv}
\end{equation}
Furthermore, Proposition 29 of \cite{hoan:12} provides a uniform bound based on the
following decomposition of \eqref{eq:uniellip}:
\[
-\Delta p = \frac{1}{\hat{u}} \left ( f + \nabla \hat{u} \cdot \nabla p
\right)\ .
\]
Thus, we have
\begin{align}
\nonumber
{\rm sup}_{u} \|p(\cdot;u)\|_W & \leq C' {\rm sup}_{u} \|\Delta p(\cdot;u)\| \\
\nonumber
& \leq \frac{C'}{u_*} {\rm sup}_{u}\left( \|f\| + \|\hat{u}\|_V \|p\|_V \right ) \\
& \leq C \|f\|\ ,
\label{eq:breakdownw}
\end{align}
where the first line holds by equivalence of norms, the second holds
since $\hat{u} \in C^\infty$, by the triangle inequality and Cauchy-Schwarz inequality,
and the last line holds by \eqref{eq:pvbound} and the fact $\|f\|_{V^*} \leq c \|f\|$ for some $c$.
The constant $C$ depends on $u_*$, $\|\nabla \hat{u}\|_\infty$, $C'$, and $c$.
Note that $ \|\hat{u}\|_V \leq \| \nabla \hat{u}\|_\infty \leq C'' < \infty$ by \eqref{eq:expand}.
Note that the bound \eqref{eq:breakdownw} in \eqref{eq:femconv}
together with \eqref{eq:pvbound} provides a uniform bound over $l$ for
$\mathcal{G}^l$, defined by $\mathcal{G}^l: u \mapsto \mathcal{G}(p^l(\cdot;u))$,
following the same argument as \eqref{eq:gunifu},
which means that the Lipschitz bound in \eqref{eq:ctslip}
holds here over different $l$ as well.
Now, the following holds by \eqref{eq:femconv},
\eqref{eq:breakdownw}, \eqref{eq:pvbound}, and the triangle inequality
\begin{equation}
\|p^{l}(\cdot;u) - p^{l-1}(\cdot;u)\|_V \leq C h_l\ .
\label{eq:lincrement}
\end{equation}
Hence, from (\ref{eq:gunifu})
\begin{equation}
|\mathcal{G}^l(u) - \mathcal{G}^{l-1}(u) | = |\mathcal{G}(p^{l}(\cdot;u)) - \mathcal{G}(p^{l-1}(\cdot;u)) | \leq C h_l\ ,
\label{eq:glincrement}
\end{equation}
where $C$ is independent of the realization of $u$.
\begin{prop}
\label{pr:V}
For $G_{l-1}(u) := \exp\{\Phi(\mathcal{G}^{l-1}(u)) - \Phi(\mathcal{G}^{l}(u))\}$ one has the following
estimates, uniformly in $u$:
\begin{equation}
1 - \mathcal{O}(h_l) = \underline{C}_l := e^{-C h_l} \leq G_{l-1} = \exp\{\Phi(\mathcal{G}^{l-1}) - \Phi(\mathcal{G}^{l})\} \leq e^{C h_l}
=: \overline{C}_l = 1 + \mathcal{O}(h_l).
\label{eq:wlestimate}
\end{equation}
\end{prop}
\begin{proof}
In combination with \eqref{eq:ctslip}, equation (\ref{eq:glincrement}) gives
the stated result.
\end{proof}
\begin{prop}[Bias]
\label{pr:bias}
Let $g\in \ensuremath{\mathcal{B}_b}(E)$.
Then
$$|\mathbb{E}_{\eta_L}[g(U)] - \mathbb{E}_{\eta_\infty}[g(U)]| \le C h_L\ .$$
\end{prop}
\begin{proof}
It follows from the same reasoning as in Proposition \ref{pr:V},
upon observing that
$$\mathbb{E}_{\eta_L}[g(U)] - \mathbb{E}_{\eta_\infty}[g(U)]
= \mathbb{E}_{\eta_\infty}\left[g(U)\left( \frac{d \eta_L}{d\eta_\infty} - 1\right)\right]\ .$$
\end{proof}
\subsection{Verification of Assumptions}
\label{sec:v}
Assumption (A\ref{hyp:A}) is satisfied by letting
$$\underline{C} := \inf_{l \geq 1}
\underline{C}_l\ ;
\quad
\overline{C} := \sup_{l \geq 1}
\overline{C}_l\ .$$
Notice that the asymptotic bounds of Proposition \ref{pr:V} imply that
$\underline{C}_l$ is increasing with $l$ while $\overline{C}_l$
is decreasing with $l$. Therefore, the infimum and supremum are actually attained
as a minimum and a maximum over a finite set of low indices.
Assumption (A\ref{hyp:B})
can be shown to hold in this context,
if a Gibbs sampler is constructed.
Let $\theta$ be the uniform measure on $[-1,1]$ and consider a probability measure $\pi$ on $E:=\prod_{i=1}^{K}[-1,1]$ with density w.r.t.~the measure $\bigotimes_{i=1}^{K} \theta(du_i)$:
$$
\pi(u) = \frac{\exp\{-\Phi(u)\}}{\int_{E}\exp\{-\Phi(u)\}\bigotimes_{i=1}^{K} \theta(du_i)}
$$
where it is assumed that $\forall u\in E$, $\Phi(u)\in[0,\Phi^*]$.
This is the setting above, for all $l$,
following from equations \eqref{eq:ctslip} and \eqref{eq:glincrement}.
Let $k\in\mathbb{N}, k<K$ be given
and consider a partition of $[1,\dots, K]$ into $k$ disjoint subsets $(a_i)_{i=1}^k$.
For example $k=2$ and
$a_1$ and $a_2$ are the sets of (positive) odd and even numbers up to $K$, respectively.
One can consider the Gibbs sampler to generate from $\pi$, with kernel:
$$
M(u,du') = \Big(\prod_{j=1}^{k} \pi(u_{a_j}'|u_{a_1:a_{j-1}}',u_{a_{j+1}:a_{k}}) \Big)\bigotimes_{i=1}^{K} \theta(du_i')
$$
with
$$
\pi(u_{a_j}'|u_{a_1:a_{j-1}}',u_{a_{j+1}:a_{k}}) = \frac{\pi(u_{a_1:a_{j}}',u_{a_{j+1}:a_{k}})}{\int_{[-1,1]^{|\{a_j\}|}} \pi(u_{a_1:a_{j}}',u_{a_{j+1}:a_{k}}) \bigotimes_{i\in(a_j)} \theta(du_i')}.
$$
One can, for example, perform rejection sampling on $\pi$ using the prior as a proposal (and accepting with probability $\exp\{-\Phi(u)\}$)
and we would still have a theoretical acceptance probability of
$$
\int_{E}\exp\{-\Phi(u)\}\bigotimes_{i=1}^{K} \theta(du_i) \geq \exp\{-\Phi^*\}.
$$
Sampling from the full conditionals will have a higher acceptance probability, and thus the Gibbs sampler is not an unreasonable algorithm.
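As an illustration of the remark, a minimal Python sketch of this rejection step (ours; $\Phi$ is assumed to be available as a function returning values in $[0,\Phi^*]$) is:
\begin{verbatim}
import numpy as np

def rejection_sample_prior_proposal(Phi, K, rng=None):
    # Propose from the uniform prior on [-1,1]^K and accept with
    # probability exp(-Phi(u)), which lies in (0,1] since Phi(u) >= 0.
    if rng is None:
        rng = np.random.default_rng()
    while True:
        u = rng.uniform(-1.0, 1.0, size=K)
        if rng.uniform() < np.exp(-Phi(u)):
            return u
\end{verbatim}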
\begin{prop}
For any $u,\tilde{u}\in E$
$$
M(\tilde{u},du') \geq \exp\{-2\Phi^* (k-1)\} M(u,du').
$$
\end{prop}
\begin{proof}
Consider
\begin{eqnarray*}
\frac{\pi(u_{a_j}'|u_{a_1:a_{j-1}}',u_{a_{j+1}:a_{k}})}{\pi(u_{a_j}'|u_{a_1:a_{j-1}}',\tilde{u}_{a_{j+1}:a_{k}})} & = & \frac{\pi(u_{a_1:a_j}',u_{a_{j+1}:a_k})}{\pi(u_{a_1:a_j}',\tilde{u}_{a_{j+1}:a_k})}\frac{\int_{[-1,1]^{|a_j|}} \pi(u_{a_1:a_j}',\tilde{u}_{a_{j+1}:a_k}) \bigotimes_{i\in (a_j)}
\theta(du_i')}
{\int_{[-1,1]^{|a_j|}} \pi(u_{a_1:a_j}',u_{a_{j+1}:a_k})\bigotimes_{i\in (a_j)}
\theta(du_i')}\\
& \leq & \exp\{2\Phi^*\}.
\end{eqnarray*}
Thus, since
$$
M(u,du') = \Big(\prod_{j=1}^{k} \pi(u_{a_j}'|u_{a_1:a_{j-1}}',u_{a_{j+1}:a_{k}}) \Big)\bigotimes_{i=1}^{K} \theta(du_i'),
$$
and
$$
M(\tilde{u},du') = \Big(\prod_{j=1}^{k} \pi(u_{a_j}'|u_{a_1:a_{j-1}}',\tilde{u}_{a_{j+1}:a_{k}}) \Big)\bigotimes_{i=1}^{K} \theta(du_i'),
$$
and the final element in each product is identical, it follows that
$$
M(\tilde{u},du') \geq \exp\{-2\Phi^* (k-1)\} M(u,du').
$$
as was to be proved.
\end{proof}
\section{Numerical Results}\label{sec:numerics}
\subsection{Set-Up}
\label{ssec:numset}
In this section a
1D version of the elliptic PDE problem in \eqref{eq:uniellip} is considered.
Let $D=[0,1]$ and
consider $f(x)=100x$. For the prior specification of $u$,
set $K=2$, $\bar{u}(x)\equiv 0.15$,
$\sigma_1=0.1$, $\sigma_2=0.025$, $\phi_1(x)=\sin(\pi x)$ and $\phi_2(x) = \cos(2\pi x)$.
The forward problem at resolution level $l$ is solved using a finite element method with
piecewise linear shape functions on a uniform mesh of width
$h_l=2^{-(l+k)}$, for some starting $k\geq1$ (so that there are at least two grid-blocks in the coarsest, $l=0$, case).
Thus, on the $l^{th}$ level the finite-element basis functions are $\{\psi^l_i\}_{i=1}^{2^{l+k}-1}$ defined as (for $x_i=i\cdot 2^{-(l+k)}$) \cite{ciarlet1978finite}:
\[
\psi_i^{l}(x) =
\begin{cases}
(1/h_l)\,\big[x - (x_i-h_l)\big] & \text{if } x\in[x_i-h_l,x_i]\ ,\\[0.1cm]
(1/h_l)\,\big[x_i+h_l - x\big] & \text{if } x\in[x_i,x_i+h_l]\ ,\\[0.1cm]
0 & \text{otherwise}\ .
\end{cases}
\]
The functional of interest $g$ is taken as the solution of the forward problem at the midpoint of the domain, that is
$g(u)=p(0.5;u)$. The observation operator is $\mathcal{G}(u) = [p(0.25;u),p(0.75;u)]^\top$, and the observational noise
covariance is taken to be $\Gamma=0.25^2 I$.
To solve the PDE, the ansatz $p^l(x) = \sum_{i=1}^{2^{l+k}-1} p^l_i \psi^l_i(x)$ is plugged into
\eqref{eq:uniellip}, and projected onto each basis element:
\[
- \Big \langle \nabla \cdot\Big ( \hat{u} \nabla \sum_{i=1}^{2^{l+k}-1} p^l_i \psi^l_i(x) \Big), \psi^l_j(x) \Big \rangle
= \langle f , \psi^l_j \rangle \ ,
\]
resulting in the following linear system:
\[
{\bf A}^l(u) {\bf p}^l = {\bf f}^l,
\]
where we introduce the matrix ${\bf A}^l(u)$ with entries
$A^l_{ij}(u) = \langle \widehat{u} \nabla \psi^l_i , \nabla \psi^l_j \rangle$, and vectors
${\bf p}^l, {\bf f}^l$ with entries $p^l_i$ and $f^l_i = \langle f , \psi^l_i \rangle$, respectively.
Omitting the index $l$, the matrix is sparse and tridiagonal with
\begin{equation*}
A_{(i-1)i}(u) = A_{i(i-1)}(u)=-(1/h^2) \int_{x_{i-1}}^{x_i} \widehat{u}(x)dx\ ,\quad
A_{ii}=(1/h^2) \left ( \int_{x_{i-1}}^{x_i} \widehat{u}(x)dx + \int_{x_{i}}^{x_{i+1}} \widehat{u}(x)dx \right)\ ,
\end{equation*}
and
zero otherwise. The elements $f_i$ are computed analogously. The system can therefore be solved with
cost
$\mathcal{O}(2^{l+k})$, corresponding to a computational cost rate of $\zeta=1$.
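For concreteness, the following Python sketch (ours, not the implementation used for the experiments below) assembles and solves this tridiagonal system for a given $u=(u_1,u_2)$ and level $l$ (with $k=3$ as in the experiments):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.linalg import solve_banded

def solve_pde(u, l, k=3):
    """Piecewise-linear FEM for -(u_hat p')' = 100x on [0,1],
    p(0) = p(1) = 0, with mesh width h = 2^{-(l+k)}."""
    h = 2.0 ** (-(l + k)); n = 2 ** (l + k) - 1
    x = np.arange(1, n + 1) * h
    uhat = lambda t: (0.15 + 0.1 * u[0] * np.sin(np.pi * t)
                           + 0.025 * u[1] * np.cos(2 * np.pi * t))
    # I[j] = integral of u_hat over the j-th mesh cell
    I = np.array([quad(uhat, j * h, (j + 1) * h)[0] for j in range(n + 1)])
    diag = (I[:-1] + I[1:]) / h**2      # A_ii
    off = -I[1:-1] / h**2               # A_{i(i+1)} = A_{(i+1)i}
    f = np.array([quad(lambda t: 100 * t * max(0.0, 1 - abs(t - xi) / h),
                       xi - h, xi + h)[0] for xi in x])
    ab = np.zeros((3, n))
    ab[0, 1:] = off; ab[1] = diag; ab[2, :-1] = off
    return x, solve_banded((1, 1), ab, f)
\end{verbatim}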
To get some understanding about the numerics and validate the theory, a number of
results and figures will be generated.
First, the PDE solution is obtained for a reference value of $u$
on a very fine mesh.
This reference value of $p$ is used to numerically obtain
the rate $\beta$ in upper bounds of the form $h_l^{\beta}$ for the quantities in
\eqref{eq:femconv}, hence also in \eqref{eq:lincrement}, over increasing $l$.
Then,
$N_l$ are optimally allocated using this
$\beta$ and the $\zeta$ above
using the formulae from
Section \ref{ssec:multilevel_component}.
Following the error analysis in Section \ref{sec:verify}, once $\beta$ has been decided,
we have $\alpha=\beta/2$.
Then, observing the cost/error trend for a range
of errors $\epsilon$, we expect to observe the appropriate scaling between computational
cost and mean squared error (e.g.\@ MSE $\propto$ cost$^{-1}$ for MLSMC).
\subsection{Results}
The following setting is simulated.
The sequence of step-sizes is given by $h_l = 2^{-(l + k)}$, $k = 3$.
The data $y$ are simulated from \eqref{eq:data} with
a given draw $u_i \sim U[-1,1]$ ($i=1,2$) and $h = 2^{-20}$. The
observation variance and other algorithmic elements are as stated above.
We will contrast the accuracy of two algorithms.
The first is (i) MLSMC as detailed above; the second is
(ii) plain SMC: the same sequence of distributions
as MLSMC, but using
equal number of particles for a given $L$,
and averaging only the samples at the last level.
For both MLSMC and SMC algorithms, random walk MCMC kernels
were used (iterated 10 times) with scale parameters falling deterministically
(the ratio of standard deviation used for target $\eta_l$
versus the one for target $\eta_{l+1}$
is set to $(l+1)/l$).
\subsubsection{Numerical Estimation of Algorithmic Rates}
To numerically estimate the rate $\beta$,
the quantity
$\|p^l(\cdot;u) - p^{l-1}(\cdot;u)\|^2_V$ is computed over increasing
levels $l$. Figure~\ref{fig:rate1} shows these computed values plotted against $h_l$ on
base-2 logarithmic scales. A fit of a linear model
gives rate $\beta = 1.935$, and a similar experiment
gives $\alpha=0.993$.
This is consistent with the rate $\beta=2$ and $\alpha=\beta/2$ expected
from the theoretical error analysis in Section \ref{sec:verify}
(and agrees also with other literature \cite{ciarlet1978finite}).
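In code, the rate estimate amounts to a least-squares fit on logarithmic scales; a short sketch (ours, with synthetic placeholder increments standing in for the output of the FEM solver sketched earlier) is:
\begin{verbatim}
import numpy as np

h = 2.0 ** (-np.arange(4, 11))
# Placeholder squared V-norm increments ~ C h^2, standing in for
# ||p^l - p^{l-1}||_V^2 computed from the solver.
sq_inc = 0.3 * h**2 * np.exp(0.05 * np.random.randn(h.size))
beta, _ = np.polyfit(np.log2(h), np.log2(sq_inc), 1)
print(beta)  # close to 2
\end{verbatim}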
An expensive preliminary MLSMC run is executed to get some first results on the algorithmic
variability. In this execution the number of particles is set with the recursion
$N_l = \lceil{2N_{l + 1}}\rceil$ and $N_L =
1000$. The simulations are repeated 100 times.
The estimated variance of $\eta_l^{N_l}(gG_l) / \eta_l^{N_l}(G_l) - \eta_l^{N_l}(g)$, as a
proxy of $V_l$, is
plotted in Figure~\ref{fig:var}
against $h_l$ on the same scales as before. The estimate
of the rate now is $\beta = 5.06$.
In this case
the numerical estimate is much stronger
than the theoretical rate used here.
In fact, under suitable regularity conditions one may theoretically
obtain the rate $\beta=4$ with a stronger $L^2(D)$ bound on
$\|p(\cdot;u) - p_{l}(\cdot;u)\|$, which follows from an Aubin-Nitsche duality argument
\cite{ern2004theory}. However, even this stronger estimate is still beaten by the empirical estimate.
Nonetheless, the objective of the present work is to illustrate the theory and not to fully
optimize the implementation. In fact, similar results as presented below are obtained using either
rate, presumably owing to the fact that $\beta=2>\zeta$,
which is already the optimal relationship of $\beta$ and $\zeta$ and hence already
provides the optimal asymptotic behavior of MSE$\propto$cost$^{-1}$.
In case an optimal $\beta$ induces a change in the relationship between
$\beta$ and $\zeta$, one may expect a
change in asymptotic behavior of MSE vs. cost,
which justifies such empirical rate estimation.
\begin{figure}\centering
\includegraphics{rate1_new}
\caption{A numerical calculation
of $\|p^l(\cdot;u)-p^{l-1}(\cdot;u)\|^{2}_V$, with $u$ equal to the true value used to generate the data,
for various choices
of $h_l$.}
\label{fig:rate1}
\end{figure}
\begin{figure}\centering
\includegraphics{ml_var}
\caption{Variance estimates.}
\label{fig:var}
\end{figure}
\subsubsection{Algorithmic Performance with Diminishing MSE}
Given the choices of $\alpha$ and $\beta$ as above,
the performance of the MLSMC algorithm is benchmarked by simulating samplers with different
maximum levels $L$.
The value of $\eta_\infty(g)$ was first
estimated with the SMC algorithm targeting $\eta_{13}(g)$ (i.e.\ $h_L=2^{-16}$),
with $N_L = 1000$. This sampler was realized 100 times and
the average of the estimators is taken as the ground truth.
The standard deviation is much smaller than the smallest bias of subsequent simulations.
When updating $L \rightarrow
L+1$,
the new bias is approximately a factor
$2^{-\alpha}$ smaller than
the previous one.
Therefore the two sources
of error in \eqref{eq:dec} can be roughly balanced by setting $N_l'=2^{2\alpha}N_l$,
for $l=0,1,\ldots,L$, and $N_{L+1}' = 2^{-(\beta+\zeta)/2}N'_{L}$.
To check the effectiveness of the MCMC steps employed for dispersing the particles within
the SMC methods, we show in Figure~\ref{fig:acc} the average (over the number of particles)
acceptance probability for
each of the $L$ iterations
when
the MCMC was executed
(here $L=15$).
The plot indicates reasonable performance of this
particular aspect of the sequential algorithm.
\begin{figure}[!h]\centering
\includegraphics{smc_acc}
\caption{Acceptance rates of MCMC kernel.}
\label{fig:acc}
\end{figure}
The error-vs-cost plots for SMC and MLSMC are shown in
Figure~\ref{fig:mlsmc}.
Note that the bullets in the graph correspond to different choices of $L$
(ranging from $L=0$ to $L=5$).
Then, as mentioned earlier, for a given $L$,
the single level SMC
uses a fixed number of particles over
the sequence of targets over $l=0,1,\ldots,L$, and
this number is tuned
to have approximately the same computational cost as MLSMC with the same $L$.
The MSE data points are each estimated with $100$ realizations of the given sampler.
The fitted linear model of
$\log\textrm{MSE}$ against
$\log\text{Cost}$ has a gradient of $-0.6493$ and $-1.029$ for SMC and MLSMC
respectively.
This verifies numerically
the expected asymptotic behavior
MSE$\propto$cost$^{-1}$ for MLSMC, determined from the theory.
Furthermore, the first rate indicates that the single level
SMC performs similarly to the single level vanilla MC with asymptotic behavior MSE$\propto$cost$^{-2/3}$.
The results clearly establish
the potential improvements of MLSMC versus a standard SMC sampler.
It is remarked that the MLSMC algorithm can be improved in many directions, and this is the subject of future work.
\begin{figure}\centering
\includegraphics[scale=1.05]{mlsmc_mse1.pdf}
\caption{Mean square error against computational cost.}
\label{fig:mlsmc}
\end{figure}
\subsubsection*{Acknowledgements}
AJ, KL \& YZ were supported by an AcRF tier 2 grant: R-155-000-143-112. AJ is affiliated with the Risk Management Institute and the Center for Quantitative Finance at NUS. RT, KL \& AJ were additionally supported by
King Abdullah University of Science and Technology (KAUST). AB was supported by the Leverhulme Trust Prize.
\label{sec:intro}
We study two natural budgeted edge covering problems in undirected
simple graphs with integral weights on vertices.
The budget is given either as a bound on the number of edges to be
covered or as a bound on the total
weight of the vertices. We say that an edge $e$ is \textit{touched} by a set of
vertices $U$ or that $e$ \textit{touches} the set of vertices $U$, if at least
one of $e$'s endpoints is in $U$.
Specifically, the problems that we study are as follows.
The {\small \sf Maximum weight $m'$-edge cover } ({\small \sf MWEC }) problem that we study was first introduced by Goldschmidt
and Hochbaum \cite{GH97}. In this problem, we are given an undirected
simple graph $G=(V,E)$ with integral vertex weights.
The goal is to select a subset $U\subseteq V$ of maximum weight
so that the number of edges touching
$U$ is at most $m'$.
This problem is
motivated by application in loading of semi-conductor components to be
assembled into products \cite{GH97}.
We also study the closely related {\small \sf Fixed cost minimum edge cover } ({\small \sf FCEC }) problem in which
given a graph $G=(V,E)$ with vertex weights and
a number $W$, our goal is to
find $U\subseteq V$ of weight at least $W$ such that the number
of edges touching $U$ is minimized.
Finally, we study
the {\small \sf Degrees density augmentation } problem which is the density version of the {\small \sf FCEC } problem.
In the {\small \sf Degrees density augmentation } problem, we are given an
undirected graph graph $G=(V,E)$ and
a set $U\subseteq V$ and our goal is to
find a set $W$ with maximum augmenting density
i.e., a set $W$ that maximizes $(e(W)+e(U,W))/deg(W)$.
\subsection{Related Work}
Goldschmidt and Hochbaum \cite{GH97} introduced the {\small \sf MWEC } problem.
They show that the problem is NP-complete and give algorithms that yield
$2$-approximate and $3$-approximate algorithm for
the unweighted and the weighted versions of the problem,
respectively.
A class of related problems are the density problems -- problems in
which we are to find a subgraph and the objective function considers
the ratio of the total number or weight of edges in the subgraph
to the number of vertices in the subgraph. A well known problem in
this class is the {\small \sf Dense $k$-subgraph } problem ($DkS$) in which we want to find a subset of
vertices $U$ of size $k$ such that the total number of edges
in the subgraph induced by $U$ is maximized.
The best ratio known for the problem is
$n^{1/4+\epsilon}$ \cite{FKP,BCCFV}, which is an improvement over the
bound of $O(n^{1/3-\epsilon})$, for $\epsilon$ close to $1/60$
\cite{FKP}. The {\small \sf Dense $k$-subgraph } problem is
APX-hard under the assumption that
NP problems can not be solved in subexponential time
\cite{K06}. Interestingly, if there is no bound on the size of
$U$ then the problem can be solved in polynomial time \cite{Lawler,G84}.
Consider an objective function in which we minimize $deg(U)$.
One can associate a cost $c_u=deg(u)$ with each vertex $u$ and
a size $s_u=w(u)$ for each vertex $u$, and then the objective is just
to minimize $deg(U)$ subject to $\sum s_ux_u \geq k$. Carnes
and Shmoys \cite{CS08} give a $(1+\epsilon)$-approximation for the
problem. Using this result and the observation that the
objective function is at most a factor of $2$ away from the objective
function for the {\small \sf FCEC } problem, a $2(1+\epsilon)$-approximation
follows for the {\small \sf FCEC } problem.
Variations of the {\small \sf Dense $k$-subgraph } problem in which the size of $U$ is at least
$k$ ($Dalk$) and the size of $U$ is at most $k$ ($Damk$) have been studied
\cite{AC09,KS09}. In \cite{AC09, KS09}, they give evidence that $Damk$
is just as hard as $DkS$. They also give $2$-approximate solutions to
the $Dalk$ problem. In \cite{KS09}, they also consider the density
versions of the problems in directed graphs. Gajewar and Sarma \cite{GS12}
consider a generalization in which we are given a partition of vertices
$U_1, U_2, \ldots, U_t$, and non-negative integers $r_1, r_2, \ldots,
r_t$. The goal is to find a densest subgraph such that partition $U_i$
contributes at least $r_i$ vertices to the densest subgraph. They give
a $3$-approximation for the problem, which was improved to $2$ by
Chakravarthy et al. \cite{CMNRS}, who also consider other
generalizations. They also show using linear programming that the {\small \sf Degrees density augmentation }
problem can be solved optimally.
A problem parameterized by $k$
is fixed-parameter tractable (FPT) \cite{dany} if it
admits an exact algorithm with running time of $f(k)\cdot n^{O(1)}$.
The function $f$ can be exponential in $k$ or larger.
Proving that a problem is W[1]-hard (with respect to parameter $k$)
is a strong indication that it has no FPT algorithm with parameter $k$
(similar to NP-hardness implying the likelihood of no polynomial time algorithm).
The {{\small \sf FCEC }} problem parameterized by $k$ is W[1]-hard
but admits a $f(k,\epsilon)\cdot n^{O(1)}$ time,
$(1+\epsilon)$-approximation, for any constant $\epsilon>0$ \cite{dany}.
This is in contrast to our result that shows that it is highly
unlikely that {\small \sf FCEC } admits
a polynomial time approximation scheme (PTAS), if the running time is bounded by a polynomial in $k$.
\subsection{Preliminaries}
The input is an undirected simple graph $G=(V,E)$ and vertex weights
are given by $w(\cdot)$. Let $n=|V|$ and $m=|E|$.
For any subset $S\subseteq V$, let $\overline{S} = V\setminus S$. Let
$e(P,Q)$ be the number of edges with one endpoint in $P$ and the other in
$Q$. Let $deg(S)$ denote the sum of degrees of all vertices in $S$,
i.e., $deg(S)=\sum_{v\in S}deg(v)$. Let $deg_H(v)$ denote the number
of neighbors of $v$ among the vertices in $H$. Let $deg_H(S)$ denote
the quantity $\sum_{v\in S}deg_H(v)$. We use $OPT$ to denote an
optimal solution as well as the cost of an optimal solution. The
meaning will be clear from the context in which it is used.
For set $U\subseteq V$, let
$T(U)$ be the collection of all edges with at least one endpoint in $U$.
Namely, $T(U)$ is the set of edges touching $U$.
We denote $t(U)=|T(U)|$.
The set of edges with both endpoints in $U$, also called {\em internal} edges
of $U$, is denoted by $E(U)$.
We denote $e(U)=|E(U)|$.
We denote by $e(X,Y)$ the number of edges with one endpoint in
$X$ and one in $Y$. Let $e_{U}(X,Y)$ be the number of edges
between $X\cap U$ and $Y\cap U$ in the graph $G(U)$ induced
by $U$.
\begin{lemma}
\label{lemma:klowest}
The {\small \sf FCEC } problem admits a simple $2$-approximate solution in case of
uniform vertex weights.
\end{lemma}
\begin{proof}
Let $Z$ be the set of $k$ lowest degree vertices in $G$. We claim that $Z$
is a $2$-approximate solution. To see this, let $b$ be the average degree of
vertices in $Z$; thus $t(Z) \leq bk$. Since $Z$ consists of the $k$ lowest degree
vertices, $deg(OPT)\geq deg(Z)=bk$, and the claim follows since $t(OPT) \geq
deg(OPT)/2 \geq bk/2$.
\end{proof}
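In code, the algorithm of the lemma is a one-liner (a sketch; the function name is ours):
\begin{verbatim}
def k_lowest_degree(degrees, k):
    # Return the indices of the k lowest-degree vertices: the set Z above.
    return sorted(range(len(degrees)), key=degrees.__getitem__)[:k]
\end{verbatim}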
From Lemma \ref{lemma:klowest}, if $deg(OPT)\geq bk(1+\epsilon)$, we
obtain a $2/(1+\epsilon)<2$ approximation guarantee using the set $Z$
as the solution.
Henceforth we assume that $deg(OPT)<bk(1+\epsilon)$.
\begin{claim}
For every set $U$, $t(U)=deg(U)-e(U)$.
\end{claim}
\begin{proof}
Consider separately the edges
$E(U,V\setminus U)$ and $E(U)$.
Note that the edges $E(U,V\setminus U)$ are counted once
in the sum of degrees, but edges in $E(U)$ are counted twice.
Thus in order to get the number of edges touching $U$,
we need to subtract $e(U)$ from $deg(U)$.
\end{proof}
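The identity is also easy to confirm empirically; the following Python sketch (ours) checks it on random subsets of a random graph:
\begin{verbatim}
import itertools, random

def t_deg_e(edges, U):
    U = set(U)
    t = sum(1 for a, b in edges if a in U or b in U)
    deg = sum((a in U) + (b in U) for a, b in edges)
    e = sum(1 for a, b in edges if a in U and b in U)
    return t, deg, e

V = range(8)
edges = [p for p in itertools.combinations(V, 2) if random.random() < 0.4]
for _ in range(100):
    U = random.sample(list(V), 3)
    t, deg, e = t_deg_e(edges, U)
    assert t == deg - e  # t(U) = deg(U) - e(U)
\end{verbatim}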
\subsection{Our results}
Our contributions in this paper are as follows.
\begin{itemize}
\item
For the {\small \sf MWEC } problem we
give an algorithm that yields an approximation guarantee of $2$, thereby improving
the guarantee of $3$ given by Goldschmidt and Hochbaum \cite{GH97}.
\item
We give a $2$-approximate solution to the
{\small \sf FCEC } problem. This improves the $2(1+\epsilon)$-ratio that follows
from the work of Carnes and Shmoys \cite{CS08}.
\item
Can linear programming
be used to improve the ratio of $2$ for {\small \sf FCEC } and {\small \sf MWEC } problems?
We take a first step and show that a natural LP for
{{\small \sf FCEC }} has an integrality gap of $2(1-o(1))$, even for the unweighted case.
\item
We show that unless a well-known instance of the {\small \sf Dense $k$-subgraph } admits a
constant ratio, {\small \sf FCEC } and {\small \sf MWEC } do not admit a PTAS. Note that the best
approximation guarantee known for this instance of {\small \sf Dense $k$-subgraph } is
$O(n^{2/9})$ \cite{BCCFV}. This gives a stronger hardness result than
the NP-completeness result known for {\small \sf MWEC } \cite{GH97}.
\item
For any constant $\rho>1$, we show that if {{\small \sf FCEC }} admits a
$\rho$-approximation algorithm then
{{\small \sf MWEC }} admits a $\rho(1+o(1))$-approximation algorithm.
\item
We give a combinatorial algorithm that solves the {{\small \sf Degrees density augmentation }}
problem optimally.
\end{itemize}
\section{A 2-approximation for Maximum Weight
$m'$-Edge Cover}
In this section we give a dynamic programming based solution for
the {\small \sf MWEC } problem. The idea of using dynamic programming in this
context was first proposed by Goldschmidt and Hochbaum \cite{GH97}.
Recall that in the {\small \sf MWEC } problem, we are
given an undirected simple graph $G=(V,E)$ with integral vertex weights.
The goal is to select a subset $U\subseteq V$ of maximum weight
so that the number of edges touching
$U$ is at most $m'$.
We will guess the following entities (by trying all possibilities) and
for each guess, we use dynamic programming to solve the
problem.
\begin{enumerate}
\item
$H^* = \{v_h\}$, where $v_h$ is the heaviest vertex in an optimal solution.
\item
$P_{H^*} = e(H^*, OPT\setminus H^*)$ -- the number of neighbors of $v_h$ in the
optimal solution. There are at most $n$ possibilities.
\item
$D_{H^*} = deg_{\overline{H}^*}(OPT\setminus H^*)$: total degree of
vertices in $OPT\setminus H^*$ in the graph induced by vertices in
$V\setminus H^*$. There are at most $n^2$ possibilities.
\end{enumerate}
We will try all combinations of the above entities. Since there are at
most a polynomial number of possibilities for each entity, we have at
most a polynomial number of possibilities in total.
We define the following subproblems as part of our dynamic
programming solution. Let $H$ be a guess for the singleton set $H^*$
that contains the heaviest vertex in an optimal solution. Let
$\{v_1,v_2, \ldots, v_{n-1}\}$ be the vertices in
$\overline{H}$ (recall $\overline{H}=V\setminus H$).
Then, for any $H$, we solve the following subproblems.
\begin{quote}
$A[H, i, P_{H}, D_{H}]$ denotes the maximum weight of a subset
$Q\subseteq \{v_1, v_2, \ldots, v_i\}$ such that $e(H,Q)\geq P_{H}$
and $deg_{\overline{H}}(Q)\leq D_{H}/2$.
\end{quote}
Note that while the natural bound on $deg_{\overline{H}}(Q)$ is $D_H$,
using such a bound will lead to an infeasible solution.
For fixed parameters $H$, $P_{H}$, and $D_{H}$, we are interested in
$A[H,n-1,P_H,D_H]$. We use the following recurrence as the basis for
our dynamic programming solution: the value of $A[H,i,P_{H},D_{H}]
= -\infty$ in any of the following three cases -- (i) $i=0$ and $P_{H}>0$, (ii) $i=0$
and $D_{H} < 0$, and (iii) $D_{H}/2 > m'-e(H,\overline{H})$.
When $i=0$, $P_{H}\leq 0$ and $D_{H}\geq 0$, the value of
$A[H,i,P_{H},D_{H}] = 0$. Otherwise, we have
\begin{eqnarray*}
A[H,i,P_{H},D_{H}] & = \max\{A[H,i-1,P_{H},D_{H}], w(v_i)+A[H,i-1,P'_{H},D'_{H}]\}
\end{eqnarray*}
where, $P'_{H} = P_{H}-deg_{H}(v_i)$ and $D'_{H} = D_{H} - 2(deg_{\overline{H}}(v_i))$.
Our solution is given by
$\max_{H,P_{H},D_{H}}\{w(H)+A[H,n-1, P_{H}, D_{H}]\}$.
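For a fixed guess $H=\{v_h\}$ and parameters $P_H, D_H$, the table can be filled as in the following Python sketch (ours; it uses memoized recursion rather than an explicit table, and the feasibility check on $D_H$ from case (iii) of the base case is left to the caller):
\begin{verbatim}
from functools import lru_cache

def mwec_inner_dp(wts, degH, degHbar, P_H, D_H):
    """wts[i-1] = w(v_i); degH[i-1] = deg_H(v_i); degHbar[i-1] is the
    degree of v_i within the graph induced by the complement of H.
    Returns the maximum weight of Q with e(H,Q) >= P_H and
    degree at most D_H/2 in that induced graph."""
    n = len(wts)

    @lru_cache(maxsize=None)
    def A(i, P, D):  # D tracks the (doubled) remaining degree budget
        if i == 0:
            return 0 if P <= 0 and D >= 0 else float("-inf")
        skip = A(i - 1, P, D)
        take = wts[i - 1] + A(i - 1, max(P - degH[i - 1], 0),
                              D - 2 * degHbar[i - 1])
        return max(skip, take)

    return A(n, P_H, D_H)
\end{verbatim}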
\subsection*{Analysis}
\begin{lemma}
Our algorithm yields a feasible solution.
\end{lemma}
\begin{proof}
Let $H'\cup Q'$, where $Q'\subseteq V\setminus H'$, be the set of
vertices returned by our solution. Every edge touching $H'\cup Q'$ either
touches the singleton $H'$, or has both endpoints in $\overline{H'}$ and
touches $Q'$; there are at most $deg_{\overline{H'}}(Q')$ edges of the
latter kind. Hence the number of edges with at least one endpoint in $H'\cup Q'$ is
\begin{eqnarray*}
& \leq e(H', \overline{H}') + deg_{\overline{H'}}(Q')\\
& \leq e(H', \overline{H}') + \frac{D_{H'}}{2}\\
& \leq e(H', \overline{H'}) +
(m'-e(H',\overline{H'})) \hspace*{1cm} (\mbox{using the
base case})\\
& = m'
\end{eqnarray*}
\end{proof}
\begin{lemma}
The above algorithm results in a $2$-approximate solution.
\end{lemma}
\begin{proof}
Recall that $H^*$ consists of the highest degree vertex in the
optimal solution. Let $Q^*$ be the remaining vertices in the
optimal solution. Consider the scenario when our algorithm makes the
correct guess for $H^*$.
Let $Q\subseteq \overline{H^*}$ be the solution returned by the dynamic
program in this setting. We know that
\[
deg_{\overline{H}^*}(Q) \leq \frac{deg_{\overline{H}^*}(Q^*)}{2}
\]
We now use ideas from \cite{GH97} to show that $2w(H^*\cup Q)\geq
w(H^*\cup Q^*)$. Recall that $H'\cup Q'$ is the output of our
algorithm. Since $w(H'\cup Q')\geq w(H^*\cup Q)$,
it follows that our solution is a factor of at most $2$
away from $OPT$.
Consider any arbitrary ordering of vertices $v_1, v_2, \ldots$
in $Q^*$. Note that the weight of each vertex in $Q^*$ is at most
$w(H^*)$.
Let $Q^*_r$ denote the first $r$ vertices in the above
ordering of vertices of $Q^*$. Let $p$ be the first index such that
$deg_{\overline{H}^*}(Q^*_p) > deg_{\overline{H}^*}(Q^*)/2$. This
implies the following -- (i) $deg_{\overline{H}^*}(Q^*_{p-1})\leq
deg_{\overline{H}^*}(Q^*)/2$, and (ii)
$deg_{\overline{H}^*}(Q^*\setminus Q^*_p)<
deg_{\overline{H}^*}(Q^*)/2$. Note that both the sets $Q^*_{p-1}$ and
$Q^*\setminus Q^*_p$ (neither set contains $v_p$) are feasible candidates for the set $Q$, the solution
returned by our algorithm when the heaviest vertex set was chosen to
be $H^*$. Since $w(Q)\geq w(Q^*_{p-1})$, $w(Q)\geq w(Q^*\setminus Q^*_{p})$, and
$w(v_p)\leq w(H^*)$, we have
\begin{eqnarray*}
w(OPT) & \leq w(H^*\cup Q^*) \\
& \leq w(H^*) + w(Q^*) \\
& \leq w(H^*) + w(Q^*_{p-1}) + w(v_p) + w(Q^*\setminus
Q^*_{p})\\
& \leq w(H^*) + w(Q) + w(H^*) + w(Q) \\
& = 2 w(H^* \cup Q) \\
& \leq 2w(H'\cup Q')
\end{eqnarray*}
\end{proof}
\section{A 2-approximation for Fixed Cost Minimum Edge Cover}
Recall the {\small \sf FCEC } problem: Given a graph $G=(V,E)$
with arbitrary vertex weights and a positive integer $W$, our objective
is to choose a set $S\subseteq V$ of vertices of total weight at least
$W$ such that that the number of edges with at least one end point in
$S$ is minimized.
We will solve the following related problem optimally and then show
that an optimal solution to the problem is a 2-approximation to
{\small \sf FCEC }: we want to find a subset $S$
of vertices such that $deg(S)$ is smallest and $w(S)$ is at least
$W$.
We use the dynamic programming algorithm of the well-known Knapsack
problem to find a solution to the above problem. For completeness, we
restate the dynamic programming formulation below.
\begin{quote}
$P[i,D]$: maximum weight of set $Q\subseteq \{v_1,v_2,\ldots,v_i\}$
such that $deg(Q)$ is at most $D$.
\end{quote}
Note that $P[0,D]=0$, for all values of $D$, is the base case. For all
other cases, we invoke the following recurrence.
\[
P[i,D] = \max\{P[i-1,D], w(v_i)+P[i-1,D-deg(v_i)]\}
\]
After filling the table $P$ using dynamic programming, we scan all
entries of the form $P[|V|,D]$ to find the smallest value of $D$ for
which $P[|V|, D]\geq W$. Let $S$ be the corresponding set.
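A Python sketch of this procedure (ours; it returns the smallest feasible degree budget $D$) is:
\begin{verbatim}
def fcec_min_degree_budget(weights, degrees, W):
    """P[i][D] = maximum weight of a set within the first i vertices
    with total degree at most D; return min D with P[n][D] >= W."""
    n, Dmax = len(weights), sum(degrees)
    P = [[0] * (Dmax + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        w, d = weights[i - 1], degrees[i - 1]
        for D in range(Dmax + 1):
            P[i][D] = P[i - 1][D]
            if D >= d:
                P[i][D] = max(P[i][D], w + P[i - 1][D - d])
    return next((D for D in range(Dmax + 1) if P[n][D] >= W), None)
\end{verbatim}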
\begin{lemma}
The set $S$ is a $2$-approximate solution to the {\small \sf FCEC } problem.
\end{lemma}
\begin{proof}
Since $S$ minimizes $deg(\cdot)$ over all sets of weight at least $W$,
\[
t(S) \leq deg(S) \leq deg(OPT) = 2(deg(OPT)/2)\leq 2\,t(OPT)\ .
\]
\end{proof}
\section{Integrality gap for Fixed Cost Minimum Edge Cover}
Consider the following natural integer linear program for the unweighted version of the problem, in which a set of at least $k$ vertices is sought:
\begin{eqnarray*}
\min \; \sum_{e} y_e\\
\mbox{subject\ to }\quad \quad \sum_{v\in V} x_v &\ge k,\ & \\
y_e &\ge x_u,\ &\forall e = (u,v)\\
y_e &\ge x_v,\ & \forall e = (u,v)\\
x_v &\in \{0,1\},\ &\forall v \in V \\
y_e & \in \{0,1\},\ &\forall e\in E
\end{eqnarray*}
The LP relaxation can be obtained by relaxing the integrality
constraints on $x_v$ and $y_e$ to $x_v \geq 0, \forall v \in V$ and
$y_e \geq 0, \forall e\in E$.
\begin{theorem}
The above LP has an integrality gap of $2(1-o(1))$.
\end{theorem}
\begin{proof}
Let $k=\lfloor \sqrt{n}\rfloor$.
Construct a graph $G$ on $n$ vertices as follows. For each pair of
vertices, include an edge between the pair with a probability
$1/\lfloor \sqrt{n}\rfloor$. For any vertex $v$,
$E[deg(v)] = (n-1)/\lfloor \sqrt{n}\rfloor = \sqrt{n}\,(1+o(1))$.
Using Chernoff bounds and a union bound over all vertices, with high probability every vertex $v$ satisfies
\[
\sqrt{n}(1-o(1)) \leq deg(v) \leq \sqrt{n}(1+o(1))
\]
Consider any subset $Q$ of vertices in $G$ such that
$|Q|=\lfloor \sqrt{n}\rfloor $. Then we have
\[
E[e(Q)] = \frac{1}{\lfloor \sqrt{n}\rfloor }{|Q|\choose 2}
= \frac{\lfloor \sqrt{n}\rfloor (\lfloor \sqrt{n}\rfloor
-1)}{2\lfloor \sqrt{n}\rfloor }
= \frac{\lfloor \sqrt{n}\rfloor -1}{2}
\]
Thus, for $n\geq 4$, we have $\sqrt{n}/4\leq E[e(Q)] < \sqrt{n}/2$.
We use the following Chernoff bound to obtain the probability that $e(Q)\geq n^{1-\epsilon}$,
for a constant $\epsilon$.
\[
\Pr[e(Q)\geq (1+\delta)E[e(Q)]] \leq \left (\frac{\exp(\delta)}{(1+\delta)^{(1+\delta)}} \right )^{E[e(Q)]}
\]
In our case, $2n^{1/2-\epsilon}\leq 1+\delta \leq 4n^{1/2-\epsilon}$, thus we get
\[
\Pr[e(Q)\geq n^{1-\epsilon}] \leq \left (\frac{\exp(4n^{1/2-\epsilon})}{(2n^{1/2-\epsilon})^{2n^{1/2-\epsilon}}} \right )^{\sqrt{n}/4}
\]
Let $f(n,\epsilon)= \left (
\frac{\exp(n^{1/2-\epsilon})}{(2n^{1/2-\epsilon})^{(n^{1/2-\epsilon}/2)}}
\right )^{\sqrt{n}}$.
The number of sets of size $\lfloor \sqrt{n}\rfloor$ is given by ${n\choose \sqrt{n}}\leq
(ne/\lfloor \sqrt{n}\rfloor )^{\sqrt{n}} = (\lceil \sqrt{n}\rceil
e)^{\sqrt{n}}$. By the union bound, the probability that there exists a
subset of size $\lfloor \sqrt{n}\rfloor$ with at least $n^{1-\epsilon}$ edges is
at most
\[
f(n,\epsilon){n\choose \sqrt{n}} \ll 1\ .
\]
Hence, with high probability, for every such $Q$, the number of edges with at least one end point in $Q$ is given by
\begin{eqnarray*}
t(Q) & = deg(Q) - e(Q) \\
& \geq \lfloor \sqrt{n}\rfloor \cdot \sqrt{n}(1-o(1)) -
n^{1-\epsilon}\\
& = n(1-o(1))
\end{eqnarray*}
On the other hand, consider the fractional solution in which
$x_v=1/\sqrt{n}$, for each $v$ and $y_e=1/\sqrt{n}$, for each $e\in E$. This
LP solution is feasible and has a cost of $|E|/\sqrt{n}$. The number
of edges $|E| = n\sqrt{n}/2(1+o(1))$. Thus the cost of the LP solution
is at most $n(1+o(1))/2$, which results in a gap of $2(1-o(1))$.
\end{proof}
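The gap can also be observed empirically on moderate instances; the following Python sketch (ours, using \texttt{scipy}) solves the relaxation on a random graph from the family above and compares it with the integral cost of the $k$ lowest-degree vertices (for moderate $n$ the gap is visible, though not yet the asymptotic factor of $2$):
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n = 100; k = int(np.sqrt(n))
edges = [(i, j) for i in range(n) for j in range(i + 1, n)
         if rng.random() < 1 / k]
m = len(edges)
# Variables z = (x_1..x_n, y_1..y_m); minimize sum_e y_e.
c = np.concatenate([np.zeros(n), np.ones(m)])
A, b = [], []
row = np.zeros(n + m); row[:n] = -1
A.append(row); b.append(-k)                 # sum_v x_v >= k
for e, (u, v) in enumerate(edges):
    for w in (u, v):                        # y_e >= x_u, y_e >= x_v
        row = np.zeros(n + m); row[w] = 1; row[n + e] = -1
        A.append(row); b.append(0)
lp = linprog(c, A_ub=np.array(A), b_ub=np.array(b), bounds=(0, 1))
deg = np.zeros(n, int)
for u, v in edges: deg[u] += 1; deg[v] += 1
Z = set(np.argsort(deg)[:k].tolist())
t_Z = sum(1 for u, v in edges if u in Z or v in Z)
print(lp.fun, t_Z)  # fractional vs. integral cost
\end{verbatim}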
\section{APX-hardness for unweighted Fixed Cost Minimum Edge Cover and
Maximum Weight $m'$-Edge Cover}
Let $G$ be the input for the {\small \sf Dense $k$-subgraph } problem and let $OPT$ be an optimal
subset of $k$ vertices. To prove the hardness result we consider the
following \textit{important} instance $\langle G,k\rangle$ of the {\small \sf Dense $k$-subgraph } problem.
\begin{enumerate}
\item [$P_1$.] The $k/2$ largest degree vertices, $H$,
in $G$, have average degree $d_H=\Theta(n^{1/3})$
\item [$P_2$.]
$k=\Theta(n^{2/3})$, and
\item [$P_3$.]
$OPT$ has average degree $d^*=\Theta(n^{1/3})$.
\end{enumerate}
Feige et al. \cite{FKP} gave a relatively simple
$n^{1/3}$ ratio for the {{\small \sf Dense $k$-subgraph }} problem.
The ratio was improved to $n^{1/3-1/60}$
by improving the ratio of $n^{1/3}$ for two very specific instances.
One of the instances was the important instance defined above.
We now show that if {\small \sf FCEC } admits a PTAS then this important instance for
the {\small \sf Dense $k$-subgraph } problem admits a constant factor approximation algorithm.
Consider the important instance and assume that $e(H)=O(k)$. Note that
a constant-factor approximation for the important instance implies a
constant approximation for the case when $e(H)=O(k)$. Clearly,
removing $H$ does not change the value of the optimum up to lower order
terms. This \textit{modified} instance has a maximum degree of $O(n^{1/3})$ and it
also satisfies properties $P_2$ and $P_3$ of the important instance.
The best ratio, given the state-of-the-art, for the modified instance is
$\Theta(n^{2/9})$ (\textit{M. Charikar, Private Communication}) and hence
the following conjecture seems
highly likely: The modified instance does not admit a constant
approximation.
\begin{claim}
We can modify $G$ into a graph $G'$
for which the optimal solutions for the {\small \sf Dense $k$-subgraph } problem and the {\small \sf FCEC }
are the same, and in addition, the value of the optimum solution
for the {\small \sf Dense $k$-subgraph } problem does not change.
\end{claim}
\begin{proof}
Let the largest degree of $G$
be $\Delta=c_1\cdot n^{1/3}$.
We show how to make the graph $\Delta$ regular without
changing the optimum value for the {\small \sf Dense $k$-subgraph } instance.
For every vertex $v\in V$ add
$\Delta-deg(v)$ vertices $F_v$ and connect
$v$ to all the vertices of $F_v$.
The sets $F_v$ for different vertices are disjoint.
Let $F=\bigcup_{v} F_v$.
We now add a set of $n^2$ disjoint edges (no two edges share a vertex)
$F'$ (and thus $2n^2$ new vertices).
We then make $F\cup F'$ regular by adding
a random $\Delta-1$ regular graph on $F'\cup F$.
Let $G'$ be the new graph.
Clearly, every vertex has degree $\Delta$ now.
Indeed, all vertices in $F_v$ and the $2n^2$ vertices that were added had degree
exactly one before the random $\Delta-1$ regular subgraph was added.
Since $G'$ is regular, the sum of degrees
in $G'$ for any $k$ vertices is the same.
As $t(U)=deg(U)-e(U)$, the optimal solutions for {\small \sf FCEC } and {\small \sf Dense $k$-subgraph }
are the same on regular graphs.
Since the graph on $F\cup F'$ is basically a random
graph with degrees $O(n^{1/3})$, and at least $n^2$ vertices,
basic calculations show that for
every $F''\subseteq F\cup F'$, $e(F'')=O(|F''|)$.
In addition, every vertex in $F'\cup F$ has degree at most $1$
in $G$.
Therefore any $F''\subseteq F\cup F'$, can contribute at most
$deg(F'')=O(|F''|)$ to the number of edges
in a {\small \sf Dense $k$-subgraph } solution.
As $|F''|\leq k$, it follows that $F''$ can contribute $O(k)$ edges to
the {\small \sf Dense $k$-subgraph } solution.
Observe that this number is negligible compared to the {\small \sf Dense $k$-subgraph } optimum in $G$.
The number of edges in the {\small \sf Dense $k$-subgraph } optimum in $G$ is
$c'kn^{1/3}$, for some constant $c'$.
Hence the value of the optimum solution does not change (up to lower
order terms)
by the addition of $F\cup F'$.
\end{proof}
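The padding construction in the proof is easy to realize programmatically. The following is a minimal sketch, assuming a NetworkX-style graph interface and the parity condition needed for a $(\Delta-1)$-regular graph on $|F\cup F'|$ vertices; it ignores the unlikely collisions between the random overlay and the matching $F'$, which a careful implementation would resample away.
\begin{verbatim}
import networkx as nx

def regularize(G):
    """Pad G to a Delta-regular graph G' as in the claim (a sketch)."""
    Gp = G.copy()
    Delta = max(d for _, d in G.degree())
    n = G.number_of_nodes()
    pad = []                           # the auxiliary vertices F u F'
    for v in list(G.nodes()):          # F_v: Delta - deg(v) pendant vertices
        for i in range(Delta - G.degree(v)):
            w = ("F", v, i)
            Gp.add_edge(v, w)
            pad.append(w)
    for j in range(n * n):             # F': n^2 disjoint edges
        a, b = ("Fp", j, 0), ("Fp", j, 1)
        Gp.add_edge(a, b)
        pad.extend([a, b])
    # every pad vertex has degree 1; overlay a random
    # (Delta-1)-regular graph on the pad to reach degree Delta
    R = nx.random_regular_graph(Delta - 1, len(pad))
    relabel = dict(enumerate(pad))
    for a, b in R.edges():
        Gp.add_edge(relabel[a], relabel[b])
    return Gp
\end{verbatim}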
\begin{theorem}
\label{thm:kminhard}
A PTAS for {\small \sf FCEC } problem implies a constant factor approximation for
the modified {\small \sf Dense $k$-subgraph } instance.
\end{theorem}
\begin{proof}
Since $G'$ is a regular graph,
the optimal {\small \sf FCEC } solution
is the same as the {\small \sf Dense $k$-subgraph } optimum solution.
In fact the number of touching edges is
$\Delta k-c'kn^{1/3}$.
Recall that $\Delta=c_1\cdot n^{1/3}$.
Thus the optimum is
$c_1kn^{1/3}-c'kn^{1/3}$.
By the value of $k$, the optimum value is
$c_1n-c'n$.
If {\small \sf FCEC } has a PTAS then there exists a $(1+c'/c_1)$-approximation for
the {\small \sf FCEC } problem.
Assuming this ratio, a set $U$ is output that is touched
by at most
$c_1n-c'n+(c'/c_1)(c_1n-c'n)
=c_1n-({c'}^2/c_1)n$ edges. This implies that $e(U)\geq ({c'}^2/c_1)n$.
Thus we find
a subgraph with $k$ vertices and at least
${c'}^2n/c_1$ edges. Therefore the ratio obtained on the modified instance
is $c'/(c'^2/c_1) = c_1/c'$, contradicting the conjecture that the modified
instance does not admit a constant approximation.
\end{proof}
\subsection{APX-hardness for Maximum Weight $m'$-Edge Cover}
We show that
a PTAS for (unweighted) {\small \sf MWEC } implies a PTAS
for (unweighted) {\small \sf FCEC } on the modified instance.
As we showed that this is impossible,
{\small \sf MWEC } is APX-hard as well.
Recall that the optimum for the modified instance
had $c_1n-c'n$ edges and size $k$.
Furthermore, the modified instance is
$\Delta$-regular.
Let $OPT$ be the optimum solution for the {\small \sf FCEC } instance.
Since the instance is $\Delta$-regular, $deg(OPT)=k\Delta$ and $e(OPT)\leq k\Delta/2$,
so the number of edges touching $OPT$ is at least
$t(OPT)\geq k\cdot \Delta/2.$
We pass the bound $c_1n-c'n$ on the number of edges
to the hypothetical PTAS for the {\small \sf MWEC }
problem.
A PTAS algorithm for {\small \sf MWEC } will return a set $S$ with size {\em at least}
$k/(1+\epsilon)$, touched by
at most $c_1n-c'n$ edges.
The number of vertices that still need to be added
to turn $S$ into a legal {\small \sf FCEC } output is
$k-k/(1+\epsilon)=\epsilon\cdot k/(1+\epsilon)$.
We can complete the set $S$ to size $k$ by {\em any}
set $S'$ of
$\epsilon\cdot k/(1+\epsilon)$ vertices.
In that case $t(S')\leq
\epsilon\cdot \Delta\cdot k/(1+\epsilon)\leq 2\epsilon\cdot t(OPT)$,
where the last inequality uses the bound $t(OPT)\geq k\cdot \Delta/2$ shown before.
It follows that
$t(S\cup S')\leq t(OPT)+2\epsilon \cdot t(OPT)$.
Thus, to get a ratio of $1+\delta$ for any constant $\delta$, just set $\epsilon=\delta/2$. Therefore, the assumption that the {\small \sf MWEC }
problem admits a PTAS implies that the {\small \sf FCEC } problem
problem admits a PTAS, implies that the {\small \sf FCEC } problem
admits a PTAS on the modified instance, which is highly unlikely, by
Theorem \ref{thm:kminhard}.
\section{An approximation for Fixed Cost Minimum Edge Cover implies the same approximation for Maximum Weight $m'$-Edge Cover}
We first transform the input instance for the {\small \sf MWEC } problem to one in
which the optimum value of the objective function is
at most $n^5$ by paying a very small penalty in the approximation ratio.
\begin{lemma}
\label{lemma:newInstance}
For the {\small \sf MWEC } problem, we can convert the input
instance $\langle G,w,m'\rangle$, with an optimal
solution denoted by $OPT$, into an instance $\langle
G',w',m'\rangle$, with optimal solution $OPT''$, such that $OPT''\leq
n^5$. Furthermore, if $OPT'$ is the total weight of the vertices in
$OPT''$ under the weight function $w$, then
\[
OPT'\geq OPT(1-1/n)(1-1/n^2)
\]
\end{lemma}
\begin{proof}
Let $v_1, v_2, \ldots, v_n$ be the vertices in $G$ such that
$w(v_1)\geq w(v_2)\geq \cdots \geq w(v_n)$. Let $v_p$ be the last
vertex in the ordering such that $w(v_p)\geq w(v_1)/n^2$. In other
words, for each $j$, $p<j\leq n$, $w(v_1)> n^2w(v_j)$. Let $G'$ be the
graph induced on vertices $v_1,v_2, \ldots, v_p$. Let $OPT_1$ be the
optimal solution for the instance $\langle G',w,m'\rangle$. Note that
$OPT$ may choose some vertices from the set $\{v_{p+1}, v_{p+2}, \ldots,
v_{n}\}$. The error incurred in not considering these vertices is at
most $n(w(v_1)/n^2) \leq OPT/n$. Thus we get
\[
OPT_1 \geq OPT(1-1/n)
\]
We now scale the weights of vertices in $G'$ to create an instance
$\langle G',w',m'\rangle$, where
\[
w'(v_j) = \left \lfloor \left (\frac{w(v_j)}{w(v_p)}\right )n^2 \right \rfloor
\]
Let $OPT''$ be an optimal solution to $\langle
G',w',m'\rangle$. Clearly, $OPT''\leq n^5$.
Let $OPT'$ be the cost of the solution
$OPT''$ under the weight function $w$, i.e., $OPT' =
\sum_{v\in OPT''}w(v)$. Thus we have
\begin{equation}
OPT' \geq OPT_1\left (1 - \frac{1}{n^2}\right )
\geq OPT\left (1 - \frac{1}{n}\right )\left (1 - \frac{1}{n^2}\right )
\end{equation}
\end{proof}
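In code, the reduction of the proof is a truncation followed by a floor-rescaling. The following is a minimal sketch; the function name and the data layout are ours, not from the text.
\begin{verbatim}
import math

def rescale_weights(weights):
    """weights: w(v_1) >= ... >= w(v_n), sorted non-increasingly.
    Drop vertices lighter than w(v_1)/n^2 and rescale, so every new
    weight is an integer and the optimum is at most n^5 (a sketch)."""
    n = len(weights)
    w1 = weights[0]
    kept = [w for w in weights if w >= w1 / (n * n)]
    wp = kept[-1]                  # w(v_p): the lightest surviving weight
    scaled = [math.floor(w / wp * n * n) for w in kept]
    return kept, scaled
\end{verbatim}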
\begin{theorem}
\label{thm:}
For some constant $\alpha$, an $\alpha$ approximation guarantee for
{{\small \sf FCEC }} implies an $\alpha(1+o(1))$ approximation guarantee for {{\small \sf MWEC }}.
\end{theorem}
\begin{proof}
Suppose that we have an $\alpha>1$ approximation algorithm
for {{\small \sf FCEC }}, for some constant $\alpha$.
Using Lemma \ref{lemma:newInstance}, we transform the {{\small \sf MWEC }} instance
$(G,m')$ with an optimal weight $W^*$ into an instance in which the optimum weight
satisfies $W^*\leq n^5$. This increases the approximation ratio by a factor of
only $(1+o(1))$.
We now consider the modified instance $(G',m')$ as an
input to {{\small \sf FCEC }}. We guess the value of $W^*$ by trying all possible
integral values between $1$ and $n^5$. For each guess of $W^*$, we
apply the $\alpha$-approximation algorithm for {{\small \sf FCEC }} to the new
instance.
When our guess $W^*$ is correct and we apply the algorithm, we obtain a
set $U$ of vertices of cost at least $W^*$ and that touch at most
$\alpha\cdot m'$ edges.
Create a new set $B$ in which every vertex from $U$ is chosen
with a probability $1/\alpha$.
We say that an edge $e$ is {\em deleted} if $e\not\in E(B)$.
Let $\tau$ be a constant.
We consider the following ''bad" events: (i) $w(B)\leq
W^*/((1+\tau)\alpha)$, (ii) $t(B)>m'$.
We first bound the probability that $w(B)\leq W^*/((1+\tau)\alpha)$.
The expected cost of $B$ is $w(U)/\alpha= W^*/\alpha$.
Consider the expected cost of $U\setminus B$. The expected cost is $W^*- W^*/\alpha$.
The event that $w(B)\leq W^*/(\alpha(1+\tau))$
is equivalent to the event
$w(U)-w(B)\geq W^*-W^*/(\alpha(1+\tau))=W^*(1-1/(\alpha(1+\tau)))$.
By Markov's inequality, the last event has probability at most
$(1-1/\alpha)/(1- 1/(\alpha(1+\tau)))= 1-\tau/(\alpha+\alpha\cdot \tau-1)$.
We now bound the probability of the second bad event.
The expected number of edges in $E(B)$ is
at most $m'(1-(1-\frac{1}{\alpha})^2)$.
Note that the events that edges are deleted
are positively correlated: given
that an edge $(v,u)$ is deleted, one of the possible causes
of this event is that $v$ is deleted, and in that case all
edges of $v$ are deleted with probability $1$. Clearly, we can assume
that $m'\geq c$ for any constant $c$. Otherwise, we can solve the
{{\small \sf MWEC }} problem in polynomial time by checking all subsets of edges.
By the Chernoff bound, the probability that the number of edges is
more than $m'$ is bounded by $\exp(-c\delta^2/2)$, for some $\delta <
1$. We can choose a large enough $c$ so that the above probability is
at most $\tau/(2(\alpha+\alpha\cdot \tau -1))$.
This would mean that the sum of probabilities of bad events is strictly
smaller than $1$.
This construction can be derandomized
by the method of conditional expectations.
\end{proof}
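For intuition, the rounding step admits the following sketch; \texttt{alpha} and \texttt{tau} are as in the proof, \texttt{touching} counts $t(B)$, and repeated sampling succeeds with constant probability since the two bad events together have probability strictly below one (the text notes a derandomization via conditional expectations). The names are ours.
\begin{verbatim}
import random

def round_down(U, w, touching, alpha, tau, m_prime, tries=1000):
    """Keep each vertex of U with probability 1/alpha until neither
    bad event occurs (a sketch; w maps vertices to weights)."""
    W_star = sum(w[v] for v in U)
    for _ in range(tries):
        B = {v for v in U if random.random() < 1.0 / alpha}
        if (sum(w[v] for v in B) > W_star / ((1 + tau) * alpha)
                and touching(B) <= m_prime):
            return B
    return None
\end{verbatim}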
\section{Exact algorithm for the Degrees Density Augmentation Problem}
The {\small \sf Degrees density augmentation } problem is as follows: Given
a graph $G=(V,E)$ and a subset $U\subseteq V$, the objective is to
find a subset $W\subseteq V\setminus U$ such that
\[
\rho = \frac{e(W)+e(U, W)}{deg(W)} \mbox{ is maximized}
\]
The {\small \sf Degrees density augmentation } problem is related to the {\small \sf FCEC } problem in the same way as the
Densest subgraph problem is related to the Dense $k$-subgraph
problem. A natural heuristic for the {\small \sf FCEC } problem would be to
iteratively find a set $W$ with good augmentation degrees density.
A polynomial time exact solution for the problem using linear
programming is given in \cite{CMNRS}. Here we present a combinatorial
algorithm.
We solve the {\small \sf Degrees density augmentation } problem exactly by finding minimum $s$-$t$ cut in
the flow network constructed as follows. Let $\overline{U}$ denote the
set $V\setminus U$. In addition to the source $s$
and the sink $t$, the vertex set contains $V_{E'}\cup \overline{U}$,
where $V_{E'}=\{v_e\, |\, e\in E \mbox{ and both end points of $e$ are
in } \overline{U}\}$.
There is an edge from $s$ to every vertex in $V_{E'}\cup \overline{U}$. If
$a$ is a vertex in $V_{E'}$ then the capacity of the edge
$(s,a)$ is $1$, otherwise, the capacity of the edge is
$deg_U(a)$. For each vertex $v_e\in V_{E'}$, where $e=(p,q)$, there
are edges $(v_e,p)$ and $(v_e,q)$. Each such
edge has a large capacity of $M= \infty$ (any capacity of at least
$n^5$ would work).
Finally, each
vertex $p\in \overline{U}$ is connected to $t$ and has a capacity of
$\rho\cdot deg(p)$.
\subsection{Algorithm}
For a particular value of $\rho$, let $W_s\subseteq \overline U$
($W_t\subseteq \overline U$) be
the vertices of $\overline U$ that are on the $s$ ($t$) side of a minimum $s$-$t$
cut. Let $V_{E'}^s\subseteq V_{E'}$ ($V_{E'}^t\subseteq V_{E'}$) be the
vertices in $V_{E'}$ that are on the $s$ ($t$) side of the minimum $s$-$t$
cut. We now state the algorithm.
\begin{enumerate}
\item[1.]
Construct the flow network as shown above.
\item[2.]
For each value of $\rho$, compute a minimum $s$-$t$ cut and find the
resulting value of
$e(W_s) + e(U, W_s) - \rho deg(W_s)$. Find the largest value of
$\rho$ for which the expression is at least $0$.
\item[3.]
Return $W_s$ corresponding to the largest value of $\rho$.
\end{enumerate}
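Steps 1--3 can be realized directly. The following sketch builds the network with NetworkX and scans the candidate values of $\rho$ discussed in the analysis below; the node labels, helper names, and the brute-force enumeration of the $O(n^4)$ candidate fractions are ours, the latter serving only to illustrate polynomiality.
\begin{verbatim}
import networkx as nx

def dda_cut(G, U, rho):
    """Source side of a min s-t cut for one guess of rho (a sketch)."""
    Ubar = set(G.nodes()) - set(U)
    H = nx.DiGraph()
    for (p, q) in G.edges():
        if p in Ubar and q in Ubar:        # the vertex v_e
            ve = ("e", p, q)
            H.add_edge("s", ve, capacity=1)
            H.add_edge(ve, p, capacity=float("inf"))
            H.add_edge(ve, q, capacity=float("inf"))
    for v in Ubar:
        deg_U = sum(1 for u in G.neighbors(v) if u in U)
        H.add_edge("s", v, capacity=deg_U)
        H.add_edge(v, "t", capacity=rho * G.degree(v))
    _, (src, _) = nx.minimum_cut(H, "s", "t")
    return {v for v in src if v in Ubar}

def dda_optimum(G, U):
    """Return W_s for the largest feasible rho (a sketch)."""
    n = G.number_of_nodes()
    cands = sorted({a / b for a in range(1, 2 * n * n + 1)
                          for b in range(1, n * n + 1)})
    best = set()
    for rho in cands:
        W = dda_cut(G, U, rho)
        value = (G.subgraph(W).number_of_edges()
                 + sum(1 for u in U for x in W if G.has_edge(u, x))
                 - rho * sum(G.degree(v) for v in W))
        if W and value >= 0:
            best = W
    return best
\end{verbatim}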
\subsection{Analysis}
\begin{lemma}
\label{lemma:mincut}
Any minimum $s$-$t$ cut in the above flow network has
capacity at most $2n^2$.
\end{lemma}
\begin{proof}
This follows because the $s$-$t$ cut $(s,V\setminus \{s\})$ has
capacity at most $2n^2$.
\end{proof}
\begin{lemma}
\label{lemma:allins}
For any minimum $s$-$t$ cut $C$, $|V_{E'}^s| = e(W_s)$.
\end{lemma}
\begin{proof}
Note that it cannot be the case that $|V_{E'}^s| > e(W_s)$, as this
will result in the
capacity of the cut $C$ being at least $M$, which is not possible by
Lemma \ref{lemma:mincut}. Note that
any $s$-$t$ cut for which $|V_{E'}^s| < e(W_s)$ can be
transformed into another $s$-$t$ cut of a lower capacity in which $|V_{E'}^s| =
e(W_s)$ by moving vertices in $V_{E'}^t$ that correspond to edges in
$W_s$ to $V_{E'}^s$. Since edges from $s$ to vertices in $V_{E'}$
(vertices in $V_{E'}^t$, in particular) have
capacity of $1$,
the capacity of the cut reduces. The claim follows.
\end{proof}
\begin{lemma}
The \textsf{Degrees Density Augmentation} problem admits a polynomial
time exact solution.
\end{lemma}
\begin{proof}
We are interested in finding a non-empty set $W_s\subseteq
\overline{U}$ such that
\(
\frac{e(W_s) + e(U,W_s)}{deg(W_s)} \mbox{ is maximized }.
\)
Note that there are at most $2n^4$ possible values of $\rho$ that our
algorithm needs to try.
Indeed, the numerator is an integer between $1$ and $2n^2$
and the denominator is an integer between $1$ and $n^2$.
Since minimum $s$-$t$ cut can be computed in polynomial time, our
algorithm runs in polynomial time.
For any fixed guess for $\rho$, the capacity of the min $s$-$t$ cut is
given by
\begin{eqnarray*}
& & \min_{W_s\subseteq \overline{U}} \left( |V_{E'}^t| + deg_U(W_t) + \rho\,
deg(W_s) \right) \\
& = & \min_{W_s\subseteq \overline{U}} \left( |V_{E'}|-|V_{E'}^s| +
deg_U(\overline{U}) - deg_U(W_s) + \rho\, deg(W_s) \right) \\
& = & |V_{E'}| + deg_U(\overline{U}) - \max_{W_s\subseteq \overline{U}}
\left( |V_{E'}^s| + deg_U(W_s) -\rho\, deg(W_s) \right)\\
& = & |V_{E'}| + deg_U(\overline{U}) - \max_{W_s\subseteq \overline{U}}
\left( e(W_s) + e(U,W_s) -\rho\, deg(W_s) \right) \quad (\mbox{using Lemma \ref{lemma:allins}})
\end{eqnarray*}
Our algorithm accepts a value of $\rho$ only if $e(W_s) + e(U,W_s) \geq \rho\, deg(W_s)$
holds for a nonempty set, which eliminates the possibility of $W_s=\emptyset$. Thus, finding the
minimum $s$-$t$ cut for a fixed $\rho$ in the above flow network is
equivalent to finding a set $W_s$ with the largest degrees
density. Thus we have
\[
\frac{e(W_s) + e(U,W_s)}{deg(W_s)} \geq \rho
\]
Since our algorithm finds such $W_s$ for each possible fraction that
$\rho$ can assume and returns the $W_s$ with the highest degree
density, our solution is optimal.
\end{proof}
\subparagraph*{Acknowledgements}
We thank V.\ Chakravarthy for introducing the {\small \sf FCEC } problem to us. We
also thank V.\ Chakravarthy and S.\ Roy for useful discussions.
\bibliographystyle{abbrv}
|
1,116,691,499,813 | arxiv | \section{General}
This dataset release accompanies \newcite{Pinter2016} which describes the motivation and grammatical theory\footnote{This dataset is presented there in section 3.3.}. Please cite that paper when referencing the dataset.
The dataset may be accessed via the Yahoo Webscope homepage\footnote{\url{http://webscope.sandbox.yahoo.com}} under \textbf{Linguistic Data} as dataset \textbf{L-28}. The description in Section~\ref{sec:desc} is included within the dataset as a Readme.
The dataset is sure to have annotation errors which are not covered by the special cases specified in this document. Please approach the first author for any corrections and they will appear in the next release. See Section~\ref{sec:errors} for known errors.
\section{Dataset Description}
\label{sec:desc}
User queries annotated for syntactic dependency parsing, version 1.0.
These queries were issued on all search engines between 2012 and 2014 and led the searcher to click on a result link to a question page on the Yahoo Answers site\footnote{\url{http://answers.yahoo.com}}.
\subsection{Full description}
This dataset contains two files:\\
\begin{tabular}{|c|}
\hline
ydata-search-parsed-queries-dev-v1\_0.txt\\ 1,000 queries, 5,344 tokens\\
\hline
ydata-search-parsed-queries-test-v1\_0.txt\\ 4,000 queries, 26,015 tokens\\
\hline
\end{tabular}
\\\\
These files differ in their level of annotation, but share the schema. They contain tab-delimited lines, each representing a single token in a Web query. The tokens in each query are given sequentially, and queries are given in order of an arbitrarily-selected numeric ID (with no empty lines between queries). The field schema is detailed in Table~\ref{table:schema}.
\begin{table*}
\centering
\begin{tabular}{|l|l|}
\hline
Field \# & Field content\\
\hline
0 & query-id\\
1 & token-in-query (starting with 1)\\
2 & segmentation marker (`SEG' iff token starts a new segment, `-' otherwise)\\
3 & token form\\
4 & token part-of-speech $\dagger{}$\\
5 & index of syntactic head token, with 0 denoting root $\dagger{}$\\
6 & dependency relation of edge from head token to this token $\dagger{}\ddagger{}$\\
\hline
\end{tabular}
\caption{Fields marked by $\dagger{}$ are not populated for the dev set in V 1.0; fields with $\ddagger{}$ are not fully populated in the test set.}
\label{table:schema}
\end{table*}
\begin{table}
\centering
\begin{tabular}{|l|l|l|l|l|l|l|}
\hline
183 & 1 & SEG & charter & NN & 2 & nn\\
183 & 2 & - & school & NN & 0 & root\\
183 & 3 & SEG & graduate & VB & 0 & root\\
183 & 4 & - & early & RB & 3 & advmod\\
\hline
\end{tabular}
\caption{A sample query (\#{}183 in test set).}
\label{table:sample}
\end{table}
An example query is shown in Table~\ref{table:sample}.
These lines represent a query, whose ID in the `test' set is 183, and whose raw form is \textit{charter school graduate early}. It is interpretable thus: the query is composed of two syntactic segments: \textit{[charter school] [graduate early]}. In the first segment, the syntactic root is the noun \textit{school}, and the noun \textit{charter} modifies it in a nominal compound modifier relation. The second segment is rooted by the verb \textit{graduate}, modified by the adverb \textit{early} in an adverbial modifier relation.
A dependency tree corresponding to this query is produced in Figure~\ref{fig:example}.
\begin{figure}
\centering
\begin{dependency}[edge style=thick]
\begin{deptext}
\small NN \& \small NN \& \small VB \& \small RB \\
charter \& school \& graduate \& early \\
\end{deptext}
\depedge[edge unit distance=2ex]{2}{1}{nn}
\deproot[edge unit distance=1.6ex]{2}{root}
\deproot[edge unit distance=1.6ex]{3}{root}
\depedge[edge unit distance=2ex]{3}{4}{advmod}
\wordgroup{2}{1}{2}{pred}
\wordgroup{2}{3}{4}{arg}
\end{dependency}
\caption{Query \#183 from the test set, tagged and parsed.}
\label{fig:example}
\end{figure}
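For convenience, the token lines can be grouped back into per-query records with a few lines of Python. This is only a sketch; the field names are ours, and the layout is the one given in Table~\ref{table:schema}.
\begin{verbatim}
from itertools import groupby

FIELDS = ["qid", "idx", "seg", "form", "pos", "head", "deprel"]

def read_queries(path):
    """Yield one list of token dicts per query (a sketch)."""
    with open(path, encoding="utf-8") as f:
        rows = [dict(zip(FIELDS, line.rstrip("\n").split("\t")))
                for line in f if line.strip()]
    for qid, tokens in groupby(rows, key=lambda r: r["qid"]):
        yield list(tokens)
\end{verbatim}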
\subsection{Linguistic pre-processing notes}
All queries were tokenized using the ClearNLP tokenizer for English \cite{Clear}\footnote{Version 2.0.1, from \url{http://www.clearnlp.com}} and were not spell-corrected or filtered for adult content. For Excel-friendliness, initial quotes were replaced with backticks. When using off-the-shelf processing tools, we recommend re-replacing them in pre-processing.
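A minimal pre-processing step along these lines might look as follows, assuming the replaced initial quotes were double quotes (single opening quotes would need separate handling):
\begin{verbatim}
# undo the Excel-friendly substitution before running other tools
form = form.replace("`", '"')
\end{verbatim}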
\section{Annotation Guidelines}
The parse tree annotations followed the segmentation annotation. Both phases included supervision over automatic part-of-speech tagging.
In general, parse tree annotators were instructed to adhere to the Stanford Dependency guidelines \cite{sd2008}\footnote{Version 3.5.2, from \url{http://nlp.stanford.edu/software/dependencies_manual.pdf}} with the necessary caveats that come with non-standard text prone to errors.
Following are some selected issues which can be seen in the test dataset. Section~\ref{ssec:seg} describes issues relating to segmentation tagging, and the subsequent subsections address issues of dependency edge attachment.
\subsection{Segmentation Ambiguity}
\label{ssec:seg}
\paragraph{Noun Strings}
The most difficult segmentation decisions have been in cases of long strings of nouns, both common and proper. The guideline is based on judgment, asking whether \textbf{the phrase can stand as a constituent in a coherent sentence}. For example, in query 116 words 1-3 are \textit{big mac cost}, a phrase considered clunky as opposed to the much-preferred \textit{cost of (a) big mac}, resulting in the decision to segment it as \textit{[big mac] [cost]}. A more clear-cut case is the segmentation decision in q.284: \textit{[first day of missed period] [symptoms]}, where theoretically \textit{day} could be tagged as a modifier phrase head of \textit{symptoms}, but no imaginable well-formed sentence would use this formation as a constituent.
Sometimes reasonable semantics forced us to conclude towards a segmentation decision over an unlikely (but syntactically well-formed) single-segment constituent. For example, in q.271 \textit{[fanfiction] [guest reviews]} we assume (and support this decision by running our own search) that the searcher was looking for guest reviews on fanfiction website platforms rather than guest reviews written about pieces of fanfiction. Likewise, in q.3013 \textit{[questionnaire] [assisted suicide]} we treated \textit{assisted} as an adjective relating to \textit{suicide} rather than a main verb in a sentence describing an unlikely scenario.
\paragraph{Missing Auxiliary Verbs}
Another issue was certain constructions which may become a sentence by the addition of a copula or auxiliary verb. The guidelines stated that if the post-copular sequence describes an attribute of the semantic subject (pre-copula), it is considered the head of an adjunct phrase and shares its segment (e.g. q.2916 \textit{[paragraph \textbf{describing} a family]}, q.91.w.1-4: \textit{[battery cables not tight]}). If it is construed as a sentence missing an auxiliary, it is segmented away from the subject (e.g. q.475 \textit{[lab] [\textbf{throwing} up blood ?]}). In a way, this is an explicit application of the constituency test described earlier.
\subsection{Proper Names' Internal Structure}
The dataset contains many instances of product names followed by numbers denoting the model (e.g. query 2347: \textit{how to replace crank sensor on \textbf{95 saab cs9000}}). The guidelines in these cases were to place the phrase head as the last alphabetic word (here, \textit{saab}) and depend the post-modifier (\textit{cs9000}) on it as an \textit{npadvmod}.
Long proper names with complex internal structures were left as their underlying structures. E.g. query 3252: \textit{the$_1$ call of the$_2$ wild} is parsed with \textit{call} as the phrase head with a \textit{det} preceding and a \textit{prep} following, further decomposed into \textit{pobj(of,wild)} and \textit{det(wild,the$_2$)}. However, more trivial internal structures were flattened into the common proper name representation: q.3260, \textit{jeeves \& wooster}: \textit{nn(wooster,jeeves)}, \textit{nn(wooster,\&)}. Nominal compounds were tagged as successive \textit{NNP}s, e.g. q.3413, \textit{what cameras do they use in \textbf{planet\_NNP earth\_NNP}}: \textit{nn(earth,planet)}.
\subsection{Truncated Sentences}
Many of the queries in the dataset are in fact sentences aborted mid-way. Some because the query is a sentence-completion question (e.g. query 970 \textit{a major result of the european age of exploration was}) and some for more opaque reasons (e.g. q.3828 \textit{why are russians so}). In both these types, where the root of the sentence would normally lie in the missing complement, the root was assigned to the token nearest to it in the assumed full graph (\textit{was} and \textit{so}, respectively), with the other dependents collapsed unto it. The same goes for any phrase which is truncated before its grammatical head (e.g. q.3840) or mandatory complement (e.g. q.1861).
\subsection{Foreign Languages}
The dataset contains several non-English queries that are treated as nonce (part-of-speech tag = `FW', dependency relation = `dep'). Their parse tree is, as a rule, a flat tree headed by the final word (proper name convention). E.g. q.3299, q.3319. Other cases where foreign words are tagged as `FW' is when they function meta-linguistically. E.g. q.3472, \textit{what does \textbf{baka\_FW} mean in japanese}.
\subsection{Grammatical Errors}
By the nature of Web Queries being written in real time by users possessing diverse proficiency and competence, the dataset contains many grammatical errors (as well as typos -- see Section~\ref{ssec:typos}). These were not corrected during pre-processing in order to maintain the authenticity of the data. The guidelines in such cases, unless re-segmentation was in order, called to retain as much of the intended structure as possible. For example, in query 3274, word 4 \textit{part} is meant to be \textit{parts} and as such is tagged as a plural noun. In some cases, different parts of the sentence were fused together due to incorrect grammar. The solution was to represent the fused token by the head of the intended phrase if it is fully contained within (e.g. q.3233, \textit{dogs} for \textit{dog 's}), or ignore a dependent if there is structural crossover (e.g. in q.3590, \textit{my sister in \textbf{laws} husband}, the possessive \textit{'s} was in effect excluded from the tree).
Another common case was auxiliary deletion common in ESL writing, e.g. query 3949 \textit{why you study in university}. Here we opted again for the intended meaning as a sentence missing the auxiliary \textit{do} rather than treating the full query as a constituent (adverbial clause, clausal complement, etc.) of a deleted governing predicate.
\subsection{Typographical Errors}
\label{ssec:typos}
In general, obvious typos were treated as the intended word whether they resulted in a legal English word or not. E.g. q.3 w.13, \textit{trianlgle\_NN}, or q.3786 w.5 \textit{if\_VBZ} (for the intended \textit{is}).\\
Sometimes, typos result in word merge, in which case they were either POS-tagged as \textit{XX} (e.g. q.252 w.3) or by the head if they can be construed as a coherent phrase (e.g. q.3115 w.1 \textit{searchwhere}, which is probably a mis-concatenation of a meta-linguistic \textit{search} with the first intended term \textit{where}, and so analyzed as if it were the latter alone).
Sometimes, extremely creative tokenization is employed by users. Behold the glory that is query 1814: \textit{green chemistry.in day today life}. We parsed \textit{day today} as if it were a noun phrase, and gave up representing the preposition \textit{in} altogether.
\subsection{BE-sentences}
Sentences with forms of the verb BE are often ambiguous between attributive sentences (where the BE-verb acts as copula to the head of the following phrase) and proper essential statements where BE is the main verb. We tended to go for the former in case of ambiguity, excepting for very clear cases of essence (e.g. query 3264 \textit{the free market is a myth}: \textit{root(ROOT,is)}) and, of course, where the following can only be a clausal or prepositional complement (e.g. q.799 \textit{trouble \textbf{is} you think you have time}, q.3593 \textit{what \textbf{is} on sheldons shirt in season 1 episode 4}).
\section{Known Annotation Errors}
\label{sec:errors}
\subsection{Segmentation Errors}
Test set:
\begin{tabular}{|l|l|l|}
\hline
Query ID & V 1.0 & Correct\\
\hline
194 & 1,3 & 1\\
304 & 1,3 & 1\\
325 & 1,3 & 1\\
362 & 1,3 & 1\\
425 & 1,4 & 1\\
847 & 1,3 & 1\\
911 & 1,4 & 1\\
3779 & 1,5 & 1\\
\hline
1147 & 1 & 1,3\\
1812 & 1 & 1,4\\
2784 & 1 & 1,2,3\\
2883 & 1 & 1,7\\
2912 & 1 & 1,5,7\\
3348 & 1 & 1,2\\
3366 & 1 & 1,2,3,4\\
\hline
\end{tabular}
\subsection{Attachment Errors}
Test set:
\begin{tabular}{|l|l|l|}
\hline
Query.Token & V 1.0 & Correct\\
\hline
3153.6 & 2 & 7\\
3153.7 & 6 & 2\\
\hline
\end{tabular}
\section*{Credits}
Segmentation tagged by \textbf{Bettina Bolla}, \textbf{Avihai Mejer}, \textbf{Yuval Pinter}, \textbf{Roi Reichart}, and \textbf{Idan Szpektor}. Parsing tagged by \textbf{Shir Givoni} and Yuval Pinter.
\bibliographystyle{acl}
|
1,116,691,499,814 | arxiv | \section{Introduction}
We consider a power law model of non-Newtonian fluid in $\Bbb{R}^3$
\begin{equation}\label{NNFE}
\left\{ \aligned &
- \vect{A}_p(u) + (u \cdot \nabla) u = - \nabla \uppi \quad \mbox{in} \quad \Bbb{R}^3,\\
&\quad\mathrm{div}\, u = 0,
\endaligned \right.
\end{equation}
where $u = (u_1(x), u_2(x), u_3(x))$ is the velocity field, $\uppi = \uppi(x)$ is the pressure field. The diffusion term is represented by
$$
\vect{A}_p(u) = \mathrm{div} ( | \vect{D}(u) |^{p-2} \vect{D}(u) ), \qquad 1 < p < + \infty
$$
and the deviatoric stress tensor is interpreted as $|\vect{D}|^{p-2} \vect{D} = \vect{\upsigma}(\vect{D})$, where $\vect{D}(u) = \frac 12 ( \nabla u + ( \nabla u )^{\top})$ is the symmetric gradient. If $2 < p < + \infty$, the equations describe shear thickening fluids, whose viscosity increases with the shear rate $|\vect{D}(u)|$. If $1 < p < 2$, they describe shear thinning fluids. In the case of $p = 2$, \eqref{NNFE} corresponds to the usual stationary Navier-Stokes equations, which represent Newtonian fluids. We refer to Wilkinson \cite{W60} for the continuum mechanical background of the above system.
The Liouville problem for the stationary Navier-Stokes equations (Galdi \cite{G11}, Remark X. 9.4, pp. 729) has attracted considerable attention in mathematical fluid mechanics. Though it is still open, there are positive answers under additional conditions (see \cite{CY13,C14,CW16,CJL19,GW78,KNSS09,KPR15,KTW17,S16,S18,SW19}). As a generalization, Liouville type theorems for non-Newtonian fluids have also been investigated (see \cite{JK14, CW20}).
Let $u \in L^1_{loc}(\Bbb{R}^3)$ be a vector field, and let $\vect{V} = (V_{ij}) \in L^1_{loc}(\Bbb{R}^3;\Bbb{R}^3 \times \Bbb{R}^3)$ be a matrix valued function satisfying $\mathrm{div}\, \vect{V} = u$ in the distributional sense. In \cite{CW19} Chae and Wolf proved a Liouville type theorem for the stationary Navier-Stokes equations when the following is assumed
$$
\left( \fint_{B(r)} |\vect{V} - \vect{V}_{B(r)} |^s \mathrm{d}x \right)^{\frac 1s} \leq C r^{\min \left\{ \frac 13 - \frac 1s, \frac 16 \right\}} \qquad \forall 1 < r < + \infty
$$
for some $3 < s < + \infty$. They also considered it in \eqref{NNFE} when $\frac 95 < p < 3$ but only for $s = \frac {3p}{2p - 3}$ in \cite{CW20}. We generalize these results.
As is well known, weak solutions are actually smooth for $p = 2$. Otherwise there is only partial regularity of weak solutions \cite{P96,FMS03}. In this paper, we consider weak solutions, which are defined as follows:
\begin{defn}\label{WS}
Let $\frac 95 < p < 3$. A function $u \in W^{1,p}_{loc}(\Bbb{R}^3)$ is called a weak solution to \eqref{NNFE} if
\begin{equation}\label{WF1}
\int_{\Bbb{R}^3} \left( | \vect{D}(u) |^{p-2} \vect{D}(u) - u \otimes u \right) : \vect{D}(\varphi) \mathrm{d}x = 0
\end{equation}
is fulfilled for all vector fields $\varphi \in C^{\infty}_c(\Bbb{R}^3)$ with $\mathrm{div}\, \varphi = 0$.
\end{defn}
\begin{remark}
Let $u$ be a weak solution. Then, there exists $\uppi \in L^{\frac p{p - 1}}_{loc}(\Bbb{R}^3)$ satisfying
\begin{equation}\label{PE}
\int_{B(r)} |\uppi - \uppi_{B(r)}|^s \mathrm{d}x \leq C \int_{B(r)} | | \vect{D}(u) |^{p-2} \vect{D}(u) - u \otimes u|^s \mathrm{d}x, \quad \forall 0 < r < + \infty
\end{equation}
for all $\frac 32 \leq s \leq \frac p{p - 1}$. Moreover, $(u, \uppi)$ satisfies
\begin{equation}\label{WF2}
\int_{\Bbb{R}^3} \left( | \vect{D}(u) |^{p-2} \vect{D}(u) - u \otimes u \right) : \vect{D}(\varphi) \mathrm{d}x = \int_{\Bbb{R}^3} \uppi \mathrm{div}\, \varphi \mathrm{d}x
\end{equation}
for any vector field $\varphi \in W^{1,p}(\Bbb{R}^3)$ with compact support. Hence, \eqref{WF2} replaces \eqref{WF1}. We refer to \cite{CW20} for a brief explanation.
\end{remark}
\begin{remark}
Let $(u, \uppi)$ be a weak solution and $\phi \in C^{\infty}_c(\Bbb{R}^3)$. If we take $\varphi = u \phi$, then \eqref{WF2} with $\varphi \in W^{1,p}(\Bbb{R}^3)$ yields the local energy equality
\begin{equation}\label{LEI}
\int_{\Bbb{R}^3} | \vect{D}(u) |^p \phi \mathrm{d}x= - \int_{\Bbb{R}^3} | \vect{D}(u) |^{p - 2} \vect{D}(u) : u \otimes \nabla \phi \mathrm{d}x + \int_{\Bbb{R}^3} \left( \frac 12 | u |^2 + \uppi \right) u \cdot \nabla \phi \mathrm{d}x.
\end{equation}
\end{remark}
\begin{theorem}\label{thm1}
Let $\frac 95 < p < 3$, and $\frac 32 < s < +\infty$ satisfy
$$
s \geq \frac {3p}{2(2p - 3)}, \qquad s > \frac {9 - 3p}{2p - 3}.
$$
Let $(u,\uppi) \in W^{1,p}_{loc}(\Bbb{R}^3) \times L^{\frac p{p - 1}}_{loc}(\Bbb{R}^3)$ be a weak solution to \eqref{NNFE}. We set
$$
\alpha(p,s) := \min\left\{ \frac 13 - \frac {5p - 9}{s(2p - 3)}, \frac 3p - \frac 43 \right\}.
$$
If we assume there exists a potential $\vect{V} \in W^{2,p}_{loc}(\Bbb{R}^3; \Bbb{R}^{3\times3})$ such that
\begin{equation}\label{AS}
\left( \fint_{B(r)} |\vect{V} - \vect{V}_{B(r)} |^s \mathrm{d}x \right)^{\frac 1s} \leq C r^{\alpha(p,s)} \qquad \forall 1 < r < + \infty,
\end{equation}
then $u \equiv 0$.
\end{theorem}
\begin{remark}
If $s = \frac {3p}{2p-3}$, then we obtain $\alpha = \frac {9 - 4p}{3p}$, which is consistent with Theorem 1.3 (ii) in \cite{CW20}.
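Indeed, both terms in the minimum defining $\alpha(p,s)$ coincide at this value of $s$:
\[
\frac 13 - \frac {5p - 9}{s(2p - 3)} \bigg|_{s = \frac {3p}{2p - 3}} = \frac 13 - \frac {5p - 9}{3p} = \frac {9 - 4p}{3p} = \frac 3p - \frac 43.
\]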
\end{remark}
\begin{remark}
In the case of $p=2$, we have $3 < s < + \infty$ and $\alpha = \min \{\frac s3 - 1, \frac s6\}$, corresponding to Theorem 1.1 in \cite{CW19}.
\end{remark}
We denote by $C(p, s) = C$ a generic constant that may vary from line to line.
\section{Caccioppoli type inequalities}
To prove Theorem~\ref{thm1}, we need the following two lemmas.
\begin{lemma}\label{LEMPP}
Let $\frac 95 < p < 3$. Let $u \in W^{1,p}_{loc}(\Bbb{R}^3)$ and $1 \leq R < + \infty$. For $0 < \rho < R$, we let $\psi \in C^\infty_c(B(R))$ satisfy $0 \leq \psi \leq 1$ and $| \nabla \psi | \leq C (R - \rho)^{-1}$. If we assume there exists $\vect{V} \in W^{2,p}_{loc}(\Bbb{R}^3)$ such that $\mathrm{div}\, \vect{V} = u$ and \eqref{AS} for some $\frac {3p}{2(2p - 3)} \leq s \leq \frac {3p}{2p - 3}$, then for $s \geq p$
\begin{equation}\label{PPPS}
\int_{B(R)} |\psi^p u|^p \mathrm{d}x \leq C R^{\frac 32 + \frac p6 - \frac {p(5p - 9)}{2s(2p - 3)}} \| \psi \nabla u \|_{L^p}^{\frac p2} + C (R - \rho)^{-p} R^{3 + \frac p3 - \frac {p(5p - 9)}{s(2p - 3)}},
\end{equation}
and for $s < p$
$$
\int_{B(R)} |\psi^p u|^p \mathrm{d}x \leq C R^{\frac {p^2}{3(2p - 3)}} \| \psi \nabla u \|_{L^p}^{\frac {p(sp - 3s + 3p)}{2sp - 3s + 3p}} + C (R - \rho)^{- \frac {3p}{2s} + \frac 32} R^{\frac p6 + \frac {p^2}{2s(2p - 3)}} \| \psi \nabla u \|_{L^p}^{\frac p2}
$$
\begin{equation}\label{PPSP}
+ C (R - \rho)^{- \frac {sp^2}{sp + 3p - 3s}} R^{\frac {p^2(2sp + 3p - 3s)}{3(2p - 3)(sp + 3p - 3s)}} \| \psi \nabla u \|_{L^p}^{\frac {p(3p - 3s)}{sp - 3s + 3p}} + C (R - \rho)^{- p - \frac {3p}s + 3} R^{\frac p3 + \frac {p^2}{s(2p - 3)}}.
\end{equation}
\end{lemma}
\begin{proof}
We first let $\frac 95 < p < 2$. Because $s \geq \frac {3p}{2(2p - 3)} \geq p$, we show \eqref{PPPS} in this case. H\"{o}lder's inequality implies that
\begin{equation}\label{PP}
\int_{B(R)} |\psi^p u|^p \mathrm{d}x \leq | B(R) |^{1 - \frac p2} \left( \int_{B(R)} |\psi^p u|^2 \mathrm{d}x \right)^{\frac p2} = C R^{3 - \frac {3p}2} \left( \int_{B(R)} | u |^2 \psi^{2p} \mathrm{d}x \right)^{\frac p2}.
\end{equation}
Recalling $u = \mathrm{div}\, \vect{V}$ and using integration by parts, we have
$$
\int_{B(R)} | u |^2 \psi^{2p} \mathrm{d}x = \int_{B(R)} \partial_i \left(\vect{V}_{ij} - (\vect{V}_{ij})_{B(R)} \right) u_j \psi^{2p} \mathrm{d}x
$$
$$
= - \int_{B(R)} \left(\vect{V}_{ij} - (\vect{V}_{ij})_{B(R)} \right) \partial_i u_j \psi^{2p} \mathrm{d}x - 2p \int_{B(R)} \left(\vect{V}_{ij} - (\vect{V}_{ij})_{B(R)} \right) u_j \psi^{2p - 1} \partial_i \psi \mathrm{d}x
$$
$$
\leq \int_{B(R)} | \vect{V} - \vect{V}_{B(R)} | | \psi \nabla u | \psi^{2p - 1} \mathrm{d}x + 2p \int_{B(R)} | \vect{V} - \vect{V}_{B(R)} | | \psi^p u | | \nabla \psi | \psi^{p - 1} \mathrm{d}x
$$
$$
\leq \int_{B(R)} | \vect{V} - \vect{V}_{B(R)} | | \psi \nabla u | \mathrm{d}x + C (R - \rho)^{-1} \int_{B(R)} | \vect{V} - \vect{V}_{B(R)} | | \psi^p u | \mathrm{d}x.
$$
Note that $\frac {3p}{2(2p - 3)} \leq s \leq \frac {3p}{2p - 3}$ implies
\begin{equation*}
\frac 1s + \frac 1p < 1
\end{equation*}
and
$$
\alpha = \frac 13 - \frac {5p - 9}{s(2p - 3)}.
$$
{ Using} H\"{o}lder's inequality and \eqref{AS}, { we obtain}
$$
\int_{B(R)} | u |^2 \psi^{2p} \mathrm{d}x \leq \left( \fint_{B(R)} | \vect{V} - \vect{V}_{B(R)} |^s \mathrm{d}x \right)^{\frac 1s} \| \psi \nabla u \|_{L^p} | B(R) |^{1 - \frac 1p}
$$
$$
+ C (R - \rho)^{-1} \left( \fint_{B(R)} | \vect{V} - \vect{V}_{B(R)} |^s \mathrm{d}x \right)^{\frac 1s} \| \psi^p u \|_{L^p} | B(R) |^{1 - \frac 1p}
$$
$$
\leq C R^{\frac {10}3 - \frac {5p - 9}{s(2p - 3)} - \frac 3p} \| \psi \nabla u \|_{L^p} + C (R - \rho)^{-1} R^{\frac {10}3 - \frac {5p - 9}{s(2p - 3)} - \frac 3p} \| \psi^p u \|_{L^p}.
$$
Inserting this inequality into \eqref{PP} and applying Young's inequality, we have
$$
\int_{B(R)} |\psi^p u|^p \mathrm{d}x \leq C R^{\frac 32 + \frac p6 - \frac {p(5p - 9)}{2s(2p - 3)}} \| \psi \nabla u \|_{L^p}^{\frac p2} + C (R - \rho)^{- \frac p2} R^{\frac 32 + \frac p6 - \frac {p(5p - 9)}{2s(2p - 3)}} \| \psi^p u \|_{L^p}^{\frac p2}
$$
$$
\leq C R^{\frac 32 + \frac p6 - \frac {p(5p - 9)}{2s(2p - 3)}} \| \psi \nabla u \|_{L^p}^{\frac p2} + C (R - \rho)^{-p} R^{3 + \frac p3 - \frac {p(5p - 9)}{s(2p - 3)}} + \frac 12 \int_{B(R)} |\psi^p u|^p \mathrm{d}x,
$$
which implies \eqref{PPPS}.
Now, we let $2 \leq p < 3$. Using $u = \mathrm{div}\, \vect{V}$, integration by parts and H\"{o}lder's inequality, we find
$$
\int_{B(R)} |\psi^p u|^p \mathrm{d}x = \int_{B(R)} \partial_i \left(\vect{V}_{ij} - (\vect{V}_{ij})_{B(R)} \right) u_j | u |^{p - 2} \psi^{p^2} \mathrm{d}x
$$
$$
= - \int_{B(R)} \left(\vect{V}_{ij} - (\vect{V}_{ij})_{B(R)} \right) \left( \partial_i u_j | u |^{p - 2} + (p-2) u_j u_k \partial_i u_k | u |^{p - 4} \right) \psi^{p^2} \mathrm{d}x
$$
$$
- p^2 \int_{B(R)} \left(\vect{V}_{ij} - (\vect{V}_{ij})_{B(R)} \right) u_j | u |^{p - 2} \psi^{p^2 - 1} \partial_i \psi \mathrm{d}x
$$
$$
\leq C \int_{B(R)} | \vect{V} - \vect{V}_{B(R)} | | \psi \nabla u | | \psi^{p + 1} u |^{p - 2} \mathrm{d}x
$$
$$
+ C (R - \rho)^{-1} \int_{B(R)} | \vect{V} - \vect{V}_{B(R)} | | \psi^{p + 1} u |^{p - 1} \mathrm{d}x
$$
$$
\leq C \| \vect{V} - \vect{V}_{B(R)} \|_{L^s(B(R))} \| \psi \nabla u \|_{L^p} \| | \psi^{p + 1} u |^{p-2} \|_{L^{\frac {sp}{sp-s-p}}}
$$
\begin{equation*}
+ C (R - \rho)^{-1} \| \vect{V} - \vect{V}_{B(R)} \|_{L^s(B(R))} \| | \psi^{p + 1} u |^{p-1} \|_{L^{\frac s{s-1}}} = I + II.
\end{equation*}
We assume $s \geq p$ first. Since
$$
1 \leq \frac {sp}{sp-s-p} \leq \frac p{p-2}
$$
by applying H\"{o}lder's inequality to $I$ we have
$$
I \leq C \| \vect{V} - \vect{V}_{B(R)} \|_{L^s(B(R))} \| \psi \nabla u \|_{L^p} \| \psi^p u \|_{L^p}^{p - 2} | B(R) |^{\frac1p - \frac 1s}.
$$
Subsequently, we use \eqref{AS} and Young's inequality. This yields
$$
I \leq C R^{\frac 3p} \left( \fint_{B(R)} |\vect{V} - \vect{V}_{B(R)} |^s \mathrm{d}x \right)^{\frac 1s} \| \psi \nabla u \|_{L^p} \| \psi^p u \|_{L^p}^{p-2}
$$
$$
\leq C R^{\frac 3p + \frac 13 - \frac {5p - 9}{s(2p - 3)}} \| \psi \nabla u \|_{L^p} \| \psi^p u \|_{L^p}^{p-2}
$$
\begin{equation}\label{PPI1}
\leq C R^{\frac 32 + \frac p6 - \frac {p(5p - 9)}{2s(2p - 3)}} \| \psi \nabla u \|_{L^p}^{\frac p2} + \frac 14 \int |\psi^p u|^p \mathrm{d}x.
\end{equation}
Arguing similarly to the above, using
$$
1 < \frac s{s-1} \leq \frac p{p-1},
$$
we can calculate $II$ as follows
$$
II \leq C (R - \rho)^{-1} \| \vect{V} - \vect{V}_{B(R)} \|_{L^s(B(R))} \| \psi^{p + 1} u \|_{L^p}^{p-1} | B(R) |^{\frac 1p - \frac 1s}
$$
$$
\leq C (R - \rho)^{-1} R^{\frac 3p} \left( \fint_{B(R)} |\vect{V} - \vect{V}_{B(R)} |^s \mathrm{d}x \right)^{\frac 1s} \| \psi^p u \|_{L^p}^{p-1}
$$
$$
\leq C (R - \rho)^{-1} R^{\frac 3p + \frac 13 - \frac {5p - 9}{s(2p - 3)}} \| \psi^p u \|_{L^p}^{p-1}
$$
\begin{equation}\label{PPII1}
\leq C (R - \rho)^{-p} R^{3 + \frac p3 - \frac {p(5p - 9)}{s(2p - 3)}} + \frac 14 \int |\psi^p u|^p \mathrm{d}x.
\end{equation}
\eqref{PPI1} and \eqref{PPII1} show \eqref{PPPS}.
Now we assume $s < p$. Since
$$
\frac p{p-2} < \frac {sp}{sp-s-p} \leq \frac {3p}{3-p} < \frac {3p}{(3-p)(p-2)},
$$
the standard interpolation inequality implies that
$$
\| | \psi^{p + 1} u |^{p-2} \|_{L^{\frac {sp}{sp-s-p}}} \leq \| | \psi^{p + 1} u |^{p-2} \|_{L^{\frac {3p}{(3-p)(p-2)}}}^{\frac {3p - 3s}{sp(p - 2)}} \| | \psi^{p + 1} u |^{p-2} \|_{L^{\frac p{p-2}}}^{\frac {sp(p - 2) - 3p + 3s}{sp(p - 2)}}
$$
$$
= \| \psi^{p + 1} u \|_{L^{\frac {3p}{3-p}}}^{\frac {3p - 3s}{sp}} \| \psi^{p + 1} u \|_{L^p}^{\frac {sp(p - 2) - 3p + 3s}{sp}}.
$$
Thus, by using the Sobolev inequality here, we arrive at
$$
\| | \psi^{p + 1} u |^{p-2} \|_{L^{\frac {sp}{sp-s-p}}} \leq C \left( \| \psi^{p + 1} \nabla u \|_{L^p} + (R - \rho)^{-1} \| \psi^p u \|_{L^p} \right)^{\frac {3p - 3s}{sp}} \| \psi^{p + 1} u \|_{L^p}^{\frac {sp(p - 2) - 3p + 3s}{sp}}
$$
$$
\leq C \| \psi \nabla u \|_{L^p}^{\frac {3p - 3s}{sp}} \| \psi^p u \|_{L^p}^{\frac {sp(p - 2) - 3p + 3s}{sp}} + C (R - \rho)^{- \frac {3p - 3s}{sp}} \| \psi^p u \|_{L^p}^{p - 2}.
$$
Inserting the inequality we just obtained into $I$ and applying \eqref{AS} along with Young's inequality, we find
$$
I \leq C R^{\frac 3s} \left( \fint_{B(R)} |\vect{V} - \vect{V}_{B(R)} |^s \mathrm{d}x \right)^{\frac 1s} \| \psi \nabla u \|_{L^p}^{\frac {sp + 3p - 3s}{sp}} \| \psi^p u \|_{L^p}^{\frac {sp(p - 2) - 3p + 3s}{sp}}
$$
$$
+ C (R - \rho)^{- \frac {3p - 3s}{sp}} R^{\frac 3s} \left( \fint_{B(R)} |\vect{V} - \vect{V}_{B(R)} |^s \mathrm{d}x \right)^{\frac 1s} \| \psi \nabla u \|_{L^p} \| \psi^p u \|_{L^p}^{p - 2}
$$
$$
\leq C R^{\frac 3s + \frac 13 - \frac {5p - 9}{s(2p - 3)}} \| \psi \nabla u \|_{L^p}^{\frac {sp + 3p - 3s}{sp}} \| \psi^p u \|_{L^p}^{\frac {sp(p - 2) - 3p + 3s}{sp}}
$$
$$
+ C (R - \rho)^{- \frac {3p - 3s}{sp}} R^{\frac 3s + \frac 13 - \frac {5p - 9}{s(2p - 3)}} \| \psi \nabla u \|_{L^p} \| \psi^p u \|_{L^p}^{p - 2}
$$
$$
\leq C R^{\frac {p^2}{3(2p - 3)}} \| \psi \nabla u \|_{L^p}^{\frac {p(sp + 3p - 3s)}{2sp + 3p - 3s}} + C (R - \rho)^{-\frac {3p}{2s} + \frac 32} R^{\frac {3p}{2s} - \frac 32 + \left( \frac 32 + \frac p6 - \frac {p(5p - 9)}{2s(2p - 3)} \right)} \| \psi \nabla u \|_{L^p}^{\frac p2}
$$
\begin{equation}\label{PPI2}
+ \frac 14 \int |\psi^p u|^p \mathrm{d}x.
\end{equation}
We note that $s < p$ implies
$$
\frac p{p-1} < \frac s{s-1} < \frac {3p}{2(3-p)} < \frac {3p}{(3-p)(p-1)}.
$$
Thus, using the interpolation inequality and the Sobolev inequality, we infer
$$
\| | \psi^{p + 1} u |^{p - 1} \|_{L^{\frac s{s - 1}}} \leq \| | \psi^{p + 1} u |^{p-1} \|_{L^{\frac {3p}{(3-p)(p-1)}}}^{\frac {3p - 3s}{sp(p - 1)}} \| | \psi^{p + 1} u |^{p-1} \|_{L^{\frac p{p-1}}}^{\frac {sp(p - 1) - 3p + 3s}{sp(p - 1)}}
$$
$$
= \| \psi^{p + 1} u \|_{L^{\frac {3p}{3-p}}}^{\frac {3p - 3s}{sp}} \| \psi^{p + 1} u \|_{L^p}^{\frac {sp(p - 1) - 3p + 3s}{sp}}
$$
$$
\leq C \left( \| \psi^{p + 1} \nabla u \|_{L^p} + (R - \rho)^{-1} \| \psi^p u \|_{L^p} \right)^{\frac {3p - 3s}{sp}} \| \psi^{p + 1} u \|_{L^p}^{\frac {sp(p - 1) - 3p + 3s}{sp}}
$$
$$
\leq C \| \psi \nabla u \|_{L^p}^{\frac {3p - 3s}{sp}} \| \psi^p u \|_{L^p}^{\frac {sp(p - 1) - 3p + 3s}{sp}} + C (R - \rho)^{- \frac {3p - 3s}{sp}} \| \psi^p u \|_{L^p}^{p - 1}.
$$
Inserting this inequality into $II$ and using \eqref{AS} and Young's inequality, it follows that
$$
II \leq C (R - \rho)^{-1} R^{\frac 3s} \left( \fint_{B(R)} |\vect{V} - \vect{V}_{B(R)} |^s \mathrm{d}x \right)^{\frac 1s} \| \psi \nabla u \|_{L^p}^{\frac {3p - 3s}{sp}} \| \psi^p u \|_{L^p}^{\frac {sp(p - 1) - 3p + 3s}{sp}}
$$
$$
+ C (R - \rho)^{- 1 - \frac {3p - 3s}{sp}} R^{\frac 3s} \left( \fint_{B(R)} |\vect{V} - \vect{V}_{B(R)} |^s \mathrm{d}x \right)^{\frac 1s} \| \psi^p u \|_{L^p}^{p - 1}
$$
$$
\leq C (R - \rho)^{-1} R^{\frac 3s + \frac 13 - \frac {5p - 9}{s(2p - 3)}} \| \psi \nabla u \|_{L^p}^{\frac {3p - 3s}{sp}} \| \psi^p u \|_{L^p}^{\frac {sp(p - 1) - 3p + 3s}{sp}}
$$
$$
+ C (R - \rho)^{- 1 - \frac {3p - 3s}{sp}} R^{\frac 3s + \frac 13 - \frac {5p - 9}{s(2p - 3)}} \| \psi^p u \|_{L^p}^{p - 1}
$$
$$
\leq C (R - \rho)^{- \frac {sp^2}{sp + 3p - 3s}} R^{\frac {p^2(2sp + 3p - 3s)}{3(2p - 3)(sp + 3p - 3s)}} \| \psi \nabla u \|_{L^p}^{\frac {p(3p - 3s)}{sp + 3p - 3s}}
$$
\begin{equation}\label{PPII2}
+ C (R - \rho)^{- p - \frac {3p}s + 3} R^{\frac {3p}s + \frac p3 - \frac {p(5p - 9)}{s(2p - 3)}} + \frac 14 \int |\psi^p u|^p \mathrm{d}x.
\end{equation}
By \eqref{PPI2} and \eqref{PPII2} we complete the proof.
\end{proof}
\begin{lemma}\label{LEM3P}
Let $\frac 95 < p < 3$. Let $u \in W^{1,p}_{loc}(\Bbb{R}^3)$ and $1 \leq R < + \infty$. For $0 < \rho < R$, we let $\psi \in C^\infty_c(B(R))$ satisfy $0 \leq \psi \leq 1$ and $| \nabla \psi | \leq C (R - \rho)^{-1}$. If we assume there exists $\vect{V} \in W^{2,p}_{loc}(\Bbb{R}^3)$ such that $\mathrm{div}\, \vect{V} = u$ and \eqref{AS} for some $\frac {3p}{2(2p - 3)} \leq s \leq \frac {3p}{2p - 3}$, then for $s \geq 3$
$$
\int_{B(R)} |\psi^3 u|^3 \mathrm{d}x \leq C R \| \psi \nabla u \|_{L^p}^{\frac {9p}{2sp + 3p - 3s}} + C R \left( (R - \rho)^{-1} \| \psi^p u \|_{L^p} \right)^{\frac {9p}{2sp + 3p - 3s}}
$$
\begin{equation}\label{3P3S}
+ C (R - \rho)^{-3} R^{4 - \frac {3(5p - 9)}{s(2p - 3)}},
\end{equation}
and for $s < 3$
$$
\int_{B(R)} |\psi^3 u|^3 \mathrm{d}x \leq C R \| \psi \nabla u \|_{L^p}^{\frac {9p}{2sp + 3p - 3s}} + C R \left( (R - \rho)^{-1} \| \psi^p u \|_{L^p} \right)^{\frac {9p}{2sp + 3p - 3s}}
$$
$$
+ C (R - \rho)^{-\frac {3s(2p - 3)}{sp + 3p - 3s}} R^{\frac {2sp + 3p - 3s}{sp + 3p - 3s}} \| \psi \nabla u \|_{L^p}^{\frac {3p(3 - s)}{sp + 3p - 3s}}
$$
\begin{equation}\label{3PS3}
+ C (R - \rho)^{-\frac {3s(2p - 3)}{sp + 3p - 3s}} R^{\frac {2sp + 3p - 3s}{sp + 3p - 3s}} \left( (R - \rho)^{-1} \| \psi^p u \|_{L^p} \right)^{\frac {3p(3 - s)}{sp + 3p - 3s}}.
\end{equation}
\end{lemma}
\begin{proof}
Arguing similarly to the proof of Lemma \ref{LEMPP}, we get
$$
\int_{B(R)} |\psi^3 u|^3 \mathrm{d}x
\leq C \| \vect{V} - \vect{V}_{B(R)} \|_{L^s(B(R))} \| \psi \nabla u \|_{L^p} \| \psi^4 u \|_{L^{\frac {sp}{sp-s-p}}}
$$
\begin{equation*}
+ C (R - \rho)^{-1} \| \vect{V} - \vect{V}_{B(R)} \|_{L^s(B(R))} \| | \psi^4 u |^2 \|_{L^{\frac s{s-1}}} = I + II.
\end{equation*}
Firstly, we estimate $I$. Since
$$
3 \leq \frac {sp}{sp - s - p} \leq \frac {3p}{3-p},
$$
Gagliardo–Nirenberg interpolation inequality and H\"{o}lder's inequality imply that
$$
\| \psi^4 u \|_{L^{\frac {sp}{sp - s - p}}}
\leq C \left( \| \psi^4 \nabla u \|_{L^p} + (R - \rho)^{-1} \| \psi^3 u \|_{L^p} \right)^{\frac {3p + 3s - 2sp}{s(2p - 3)}} \| \psi^4 u \|_{L^3}^{\frac {4sp - 3p - 6s}{s(2p - 3)}}
$$
$$
\leq C \| \psi \nabla u \|_{L^p}^{\frac {3p + 3s - 2sp}{s(2p - 3)}} \| \psi^3 u \|_{L^3}^{\frac {4sp - 3p - 6s}{s(2p - 3)}} + C \left( (R - \rho)^{-1} \| \psi^p u \|_{L^p} \right)^{\frac {3p + 3s - 2sp}{s(2p - 3)}} \| \psi^3 u \|_{L^3}^{\frac {4sp - 3p - 6s}{s(2p - 3)}}.
$$
We insert it into $I$ and use \eqref{AS}, and then we apply Young's inequality twice. This yields
$$
I \leq C R^{\frac 3s + \frac 13 - \frac {5p - 9}{s(2p - 3)}} \| \psi \nabla u \|_{L^p}^{\frac {3p}{s(2p - 3)}} \| \psi^3 u \|_{L^3}^{\frac {4sp - 3p - 6s}{s(2p - 3)}}
$$
$$
+ C R^{\frac 3s + \frac 13 - \frac {5p - 9}{s(2p - 3)}} \| \psi \nabla u \|_{L^p} \left( (R - \rho)^{-1} \| \psi^p u \|_{L^p} \right)^{\frac {3p + 3s - 2sp}{s(2p - 3)}} \| \psi^3 u \|_{L^3}^{\frac {4sp - 3p - 6s}{s(2p - 3)}}
$$
$$
\leq C R \| \psi \nabla u \|_{L^p}^{\frac {9p}{2sp + 3p - 3s}} + C R \| \psi \nabla u \|_{L^p}^{\frac {3s(2p - 3)}{2sp + 3p - 3s}} \left( (R - \rho)^{-1} \| \psi^p u \|_{L^p} \right)^{\frac {9p + 9s - 6sp}{2sp + 3p - 3s}}
$$
$$
+ \frac 14 \int |\psi^3 u|^3 \mathrm{d}x
$$
\begin{equation}\label{3PI1}
\leq C R \| \psi \nabla u \|_{L^p}^{\frac {9p}{2sp + 3p - 3s}} + C R \left( (R - \rho)^{-1} \| \psi^p u \|_{L^p} \right)^{\frac {9p}{2sp + 3p - 3s}} + \frac 14 \int |\psi^3 u|^3 \mathrm{d}x.
\end{equation}
Secondly, we estimate $II$. First, let $s \geq 3$. If we apply H\"{o}lder's inequality to $II$ and use \eqref{AS} along with Young's inequality, it follows that
$$
II \leq C (R - \rho)^{-1} R \left( \fint_{B(R)} |\vect{V} - \vect{V}_{B(R)} |^s \mathrm{d}x \right)^{\frac 1s} \| \psi^3 u \|_{L^{3}}^2
$$
\begin{equation}\label{3PII1}
\leq C (R - \rho)^{-3} R^{4 - \frac {3(5p - 9)}{s(2p - 3)}} + \frac 14 \int |\psi^3 u|^3 \mathrm{d}x.
\end{equation}
\eqref{3PI1} and \eqref{3PII1} show \eqref{3P3S}.
Now we let $s < 3$. Since
$$
\frac 32 < \frac s{s-1} \leq \frac {3p}{6 - p} < \frac {3p}{2(3 - p)},
$$
the interpolation inequality and the Sobolev inequality imply that
$$
\| | \psi^4 u |^2 \|_{L^{\frac s{s-1}}} \leq \| | \psi^4 u |^2 \|_{L^{\frac {3p}{2(3-p)}}}^{\frac {p(3-s)}{2s(2p-3)}} \| | \psi^4 u |^2 \|_{L^{\frac 32}}^{\frac {5sp - 3p - 6s}{2s(2p - 3)}}
$$
$$
\leq \| \psi^4 u \|_{L^{\frac {3p}{3-p}}}^{\frac {p(3-s)}{s(2p - 3)}} \| \psi^4 u \|_{L^3}^{\frac {5sp - 3p - 6s}{s(2p-3)}}
$$
$$
\leq C \left( \| \psi^4 \nabla u \|_{L^p} + (R - \rho)^{-1} \| \psi^3 u \|_{L^p} \right)^{\frac {p(3-s)}{s(2p-3)}} \| \psi^4 u \|_{L^3}^{\frac {5sp - 3p - 6s}{s(2p - 3)}}
$$
$$
\leq C \| \psi \nabla u \|_{L^p}^{\frac {p(3-s)}{s(2p-3)}} \| \psi^3 u \|_{L^3}^{\frac {5sp - 3p - 6s}{s(2p-3)}} + C \left( (R - \rho)^{-1} \| \psi^p u \|_{L^p} \right)^{\frac {p(3-s)}{s(2p - 3)}} \| \psi^3 u \|_{L^3}^{\frac {5sp - 3p - 6s}{s(2p-3)}}.
$$
Similarly to the above, we obtain
$$
II \leq C (R - \rho)^{-1} R^{\frac 3s + \frac 13 - \frac {5p - 9}{s(2p - 3)}} \| \psi \nabla u \|_{L^p}^{\frac {p(3-s)}{s(2p-3)}} \| \psi^3 u \|_{L^3}^{\frac {5sp - 3p - 6s}{s(2p - 3)}}
$$
$$
+ C (R - \rho)^{-1} R^{\frac 3s + \frac 13 - \frac {5p - 9}{s(2p - 3)}} \left( (R - \rho)^{-1} \| \psi^p u \|_{L^p} \right)^{\frac {p(3-s)}{s(2p-3)}} \| \psi^3 u \|_{L^3}^{\frac {5sp - 3p - 6s}{s(2p - 3)}}
$$
$$
\leq C (R - \rho)^{-\frac {3s(2p - 3)}{sp + 3p - 3s}} R^{\frac {2sp + 3p - 3s}{sp + 3p - 3s}} \| \psi \nabla u \|_{L^p}^{\frac {3p(3 - s)}{sp + 3p - 3s}}
$$
\begin{equation}\label{3PII2}
+ C (R - \rho)^{-\frac {3s(2p - 3)}{sp + 3p - 3s}} R^{\frac {2sp + 3p - 3s}{sp + 3p - 3s}} \left( (R - \rho)^{-1} \| \psi^p u \|_{L^p} \right)^{\frac {3p(3 - s)}{sp + 3p - 3s}} + \frac 14 \int |\psi^3 u|^3 \mathrm{d}x.
\end{equation}
By \eqref{3PI1} and \eqref{3PII2} we complete the proof.
\end{proof}
\section{Proof of Theorem~\ref{thm1}}
We assume all conditions for Theorem~\ref{thm1} are fulfilled. Note that in the case of $\frac {3p}{2p - 3} < s < + \infty$, we have by Jensen's inequality and \eqref{AS}
$$
\left( \frac 1{| B(r) |} \int_{B(r)} |\vect{V} - \vect{V}_{B(r)} |^{\frac {3p}{2p - 3}} \mathrm{d}x \right)^{\frac {2p - 3}{3p}} \leq \left( \frac 1{| B(r) |} \int_{B(r)} |\vect{V} - \vect{V}_{B(r)} |^s \mathrm{d}x \right)^{\frac 1s} \leq C r^{\frac 3p - \frac 43}.
$$
This shows that we can reduce \eqref{AS} to the case $s = \frac {3p}{2p - 3}$.
Hence, in general we may restrict the range of $s$ to
$$
\frac {3p}{2(2p - 3)} \leq s \leq \frac {3p}{2p - 3}, \qquad s > \frac {9 - 3p}{2p - 3}.
$$
Let $1 < r < + \infty$ be arbitrarily chosen. We set $r \leq \rho < R \leq 4r$ and $\overline{R} = \frac {R + \rho}2$. The first claim is that
\begin{equation}\label{EI}
\int_{\Bbb{R}^3} \left| \nabla u \right|^p \mathrm{d}x < + \infty.
\end{equation}
Let $\zeta \in C^{\infty}_{c}(B(\overline{R}))$ be a radially non-increasing function such that $0 \leq \zeta \leq 1$, $\zeta = 1$ on $B(\rho)$ and $|\nabla \zeta| \leq C (R - \rho)^{-1}$ for some $C > 0$. If we insert $\phi = \zeta^p$ into \eqref{LEI}, then we have
$$
\int_{B(\overline{R})} |\vect{D}(u)|^p \zeta^p \mathrm{d}x = - \int_{B(\overline{R})} | \vect{D}(u) |^{p - 2} \vect{D}(u) : u \otimes \nabla \zeta^p \mathrm{d}x
$$
$$
+ \frac 12 \int_{B(\overline{R})} | u |^2 u \cdot \nabla \zeta^p \mathrm{d}x + \int_{B(\overline{R})} (\uppi - \uppi_{B(\overline{R})}) u \cdot \nabla \zeta^p \mathrm{d}x.
$$
H\"{o}lder's inequality and Young's inequality imply that
$$
\int_{B(\overline{R})} |\vect{D}(u)|^p \zeta^p \mathrm{d}x \leq C \int_{B(\overline{R})} |u|^p |\nabla \zeta|^p
\mathrm{d}x
$$
$$
+ C \int_{B(\overline{R})} | u |^3 |\nabla \zeta| |\zeta|^{p-1} \mathrm{d}x + C \int_{B(\overline{R})} |\uppi - \uppi_{B(\overline{R})}|^{\frac 32} |\nabla \zeta| |\zeta|^{p-1} \mathrm{d}x.
$$
Employing the Calderón-Zygmund inequality, we obtain
\begin{equation}\label{CZ}
\int \left| \nabla ( u \zeta ) \right|^p \mathrm{d}x \leq C \int \left| { \vect{D}(u)} \right|^p \zeta^p \mathrm{d}x + C \int \left| u \right|^p \left| \nabla \zeta \right|^p \mathrm{d}x.
\end{equation}
Using \eqref{CZ} along with \eqref{PE}, it follows that
$$
\int_{B(\rho)} |\nabla u|^p \mathrm{d}x \leq C (R - \rho)^{-p} \int_{B(\overline{R})} |u|^p \mathrm{d}x + C (R - \rho)^{-1} \int_{B(\overline{R})} | u |^3 \mathrm{d}x
$$
$$
+ C (R - \rho)^{-1} R^{\frac 32 \left( \frac3p - 1 \right)} \left( \int_{B(R)} \left| \nabla u \right |^p \mathrm{d}x \right)^{\frac 32 \left(1 - \frac 1p \right)}.
$$
We consider a radially non-increasing function $\psi \in C^{\infty}_c(B(R))$ satisfying $0 \leq \psi \leq 1$, $\psi = 1$ on $B(\overline{R})$ and $|\nabla \psi| \leq C (R - \rho)^{-1}$ for some $C > 0$. By the properties of $\psi$ we have that
$$
\int_{B(\rho)} |\nabla u|^p \mathrm{d}x \leq C (R - \rho)^{-p} \int_{B(R)} |\psi^p u|^p \mathrm{d}x + C (R - \rho)^{-1} \int_{B(R)} |\psi^3 u|^3 \mathrm{d}x
$$
\begin{equation}\label{123}
+ C (R - \rho)^{-1} R^{\frac 32 \left( \frac3p - 1 \right)} \left( \int_{B(R)} \left| \nabla u \right |^p \mathrm{d}x \right)^{\frac 32 \left(1 - \frac 1p \right)} = I + II + III.
\end{equation}
Let $\epsilon > 0$ be an arbitrary real number. Before calculating $II$, we first note that $\psi$ satisfies the assumptions of Lemma~\ref{LEMPP} and Lemma~\ref{LEM3P}. Observing that for $s > \frac {9 - 3p}{2p - 3}$
$$
\frac {9p}{2sp + 3p - 3s} < p,
$$
we may apply Young's inequality to \eqref{3P3S} for $s \geq 3$. This yields
$$
II \leq C(\epsilon) (R - \rho)^{-\frac {2sp + 3p - 3s}{2sp - 3s + 3p - 9}} R^{\frac {2sp + 3p - 3s}{2sp - 3s + 3p - 9}} + I
$$
$$
+ C (R - \rho)^{-4} R^{4 - \frac {3(5p - 9)}{s(2p - 3)}} + \epsilon \int_{B(R)} |\psi \nabla u|^p \mathrm{d}x.
$$
We continue estimating $II$. Since
$$
4 - \frac {3(5p - 9)}{s(2p - 3)} < 4,
$$
we get for $R > 1$
$$
II \leq C(\epsilon) (R - \rho)^{-\frac {2sp + 3p - 3s}{2sp - 3s + 3p - 9}} R^{\frac {2sp + 3p - 3s}{2sp - 3s + 3p - 9}} + C (R - \rho)^{-4} R^4 + \epsilon \int_{B(R)} |\psi \nabla u|^p \mathrm{d}x.
$$
In the case of $s < 3$, we see that for $s > \frac {9 - 3p}{2p - 3}$ it holds that
$$
\frac {3p(3 - s)}{sp - 3s + 3p} < \frac {3p(3 - s)}{s(2p - 3) - 3s + 3p} < p.
$$
Thus \eqref{3PS3} and Young's inequality give
$$
II \leq C(\epsilon) (R - \rho)^{-\frac {2sp + 3p - 3s}{2sp - 3s + 3p - 9}} R^{\frac {2sp + 3p - 3s}{2sp - 3s + 3p - 9}} + I
$$
$$
+ C(\epsilon) (R - \rho)^{-\frac {7sp + 3p - 12s}{sp + 3p - 9}} R^{\frac {2sp + 3p - 3s}{sp + 3p - 9}} + \epsilon \int_{B(R)} |\psi \nabla u|^p \mathrm{d}x.
$$
Since
$$
\frac {2sp + 3p - 3s}{sp + 3p - 9} < \frac {7sp + 3p - 12s}{sp + 3p - 9},
$$
$R > 1$ shows that
$$
II \leq C(\epsilon) (R - \rho)^{-\frac {2sp + 3p - 3s}{2sp - 3s + 3p - 9}} R^{\frac {2sp + 3p - 3s}{2sp - 3s + 3p - 9}} + C(\epsilon) (R - \rho)^{-\frac {7sp + 3p - 12s}{sp + 3p - 9}} R^{\frac {7sp + 3p - 12s}{sp + 3p - 9}}
$$
$$
+ \epsilon \int_{B(R)} |\psi \nabla u|^p \mathrm{d}x.
$$
Hence, in both cases we obtain the following estimate
$$
II \leq C(\epsilon) (R - \rho)^{-\frac {7sp + 3p - 12s}{sp + 3p - 9}} R^{\frac {7sp + 3p - 12s}{sp + 3p - 9}} + C(\epsilon) (R - \rho)^{-\frac {2sp + 3p - 3s}{2sp - 3s + 3p - 9}} R^{\frac {2sp + 3p - 3s}{2sp - 3s + 3p - 9}}
$$
\begin{equation}\label{EII1}
+ \epsilon \int_{B(R)} |\psi \nabla u|^p \mathrm{d}x.
\end{equation}
Now we estimate $I$. If $s \geq p$, using \eqref{PPPS} and Young's inequality, we see that
$$
I \leq C(\epsilon) (R - \rho)^{-2p} R^{3 + \frac p3 - \frac {p(5p - 9)}{s(2p - 3)}} + \epsilon \int_{B(R)} |\psi \nabla u|^p \mathrm{d}x.
$$
Since
\begin{equation*}
3 + \frac p3 - \frac {p(5p - 9)}{s(2p - 3)} \leq 6 - \frac {4p}3
\end{equation*}
for $s \leq \frac {3p}{2p - 3}$, $R > 1$ implies
\begin{equation}\label{SISP}
I \leq C(\epsilon) (R - \rho)^{-2p} R^{6 - \frac {4p}3} + \epsilon \int_{B(R)} |\psi \nabla u|^p \mathrm{d}x
\end{equation}
$$
\leq C(\epsilon) (R - \rho)^{-2p} R^{2p} + \epsilon \int_{B(R)} |\psi \nabla u|^p \mathrm{d}x.
$$
Notice that for $s < p$,
$$
0 < \frac {p(sp - 3s + 3p)}{2sp - 3s + 3p} < p
$$
and
$$
0 < \frac {p(3p - 3s)}{sp - 3s + 3p} < p.
$$
By applying Young's inequality to \eqref{PPSP} we get
\begin{equation}\label{SIPS}
I \leq C(\epsilon) (R - \rho)^{- 2p + 3 - \frac {3p}s} R^{\frac p3 + \frac {p^2}{s(2p - 3)}} + \epsilon \int_{B(R)} |\psi \nabla u|^p \mathrm{d}x.
\end{equation}
Since
\begin{equation}\label{EST1}
\frac p3 + \frac {p^2}{s(2p - 3)} < 2p - 3 + \frac {3p}s \leq 6p - 9
\end{equation}
for $s \geq \frac {3p}{2(2p - 3)}$, $R > 1$ and $R(R - \rho)^{-1} > 1$ imply that
$$
I \leq C(\epsilon) (R - \rho)^{-(6p - 9)} R^{6p - 9} + \epsilon \int_{B(R)} |\psi \nabla u|^p \mathrm{d}x.
$$
Therefore, in each case we obtain
\begin{equation}\label{EI1}
I \leq C(\epsilon) (R - \rho)^{-2p} R^{2p} + C(\epsilon) (R - \rho)^{-(6p - 9)} R^{6p - 9} + \epsilon \int_{B(R)} |\psi \nabla u|^p \mathrm{d}x.
\end{equation}
Applying Young's inequality to $III$, we find
$$
III \leq C(\epsilon) (R - \rho)^{-\frac {2p}{3-p}} R^3 + \epsilon \int_{B(R)} |\psi \nabla u|^p \mathrm{d}x.
$$
By $3 < \frac {2p}{3-p}$ and $R > 1$, it follows that
\begin{equation}\label{EIII1}
III \leq C(\epsilon) (R - \rho)^{-\frac {2p}{3-p}} R^{\frac {2p}{3-p}} + \epsilon \int_{B(R)} |\psi \nabla u|^p \mathrm{d}x.
\end{equation}
We define
$$
\gamma := \max \left\{ \frac {7sp + 3p - 12s}{sp + 3p - 9}, \frac {2sp + 3p - 3s}{2sp - 3s + 3p - 9}, 2p, 6p - 9, \frac {2p}{3-p} \right\}.
$$
From \eqref{EII1}, \eqref{EI1}, \eqref{EIII1} and $R(R - \rho)^{-1} > 1$ we deduce that
$$
I + II + III \leq C(\epsilon) (R - \rho)^{-\gamma} R^{\gamma} + \epsilon \int_{B(R)} |\nabla u|^p \mathrm{d}x.
$$
Inserting this estimate into \eqref{123} and applying the iteration lemma \cite[Lemma~3.1]{G83} with sufficiently small $\epsilon > 0$, we are led to
$$
\int_{B(\rho)} \left| \nabla u \right|^p \mathrm{d}x \leq C (R - \rho)^{-\gamma} R^{\gamma}.
$$
Taking $R = 2r$ and $\rho = r$, and passing to the limit $r \rightarrow + \infty$, we obtain \eqref{EI}.
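Note that with this choice of radii the right-hand side is independent of $r$,
$$
\int_{B(\rho)} \left| \nabla u \right|^p \mathrm{d}x \leq C (2r - r)^{-\gamma} (2r)^{\gamma} = C \, 2^{\gamma},
$$
so the gradient bound is uniform in $r$ and \eqref{EI} follows in the limit.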
Secondly, we claim that
\begin{equation}\label{LPo1}
r^{-p} \int_{B(2r) \setminus B(r)} \left| u \right|^p \mathrm{d}x = o(1) \qquad \mbox{as} \qquad r \rightarrow + \infty.
\end{equation}
We consider a cut-off function $\psi \in C^{\infty}_c (B(4r) \setminus B(\frac r2))$ satisfying $0 \leq \psi \leq 1$, $\psi = 1$ on $B(2r) \setminus B(r)$ and $|\nabla \psi| \leq Cr^{-1}$. Then $\psi$ satisfies the assumptions for Lemma~\ref{LEMPP} when $R = 4r$ and $\rho = r$. Hence, in the case of $s \geq p$ we use \eqref{SISP} to obtain
$$
r^{-p} \int_{B(4r)} |\psi^p u|^p \mathrm{d}x \leq C r^{6 - \frac {10p}3} + C \int_{B(4r)} |\psi \nabla u|^p \mathrm{d}x
$$
$$
\leq C r^{6 - \frac {10p}3} + C \int_{B(4r) \setminus B(\frac r2)} |\nabla u|^p \mathrm{d}x.
$$
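The power $6 - \frac {10p}3$ arises from \eqref{SISP} with $R = 4r$ and $R - \rho = 3r$:
$$
(R - \rho)^{-2p} R^{6 - \frac {4p}3} = (3r)^{-2p} (4r)^{6 - \frac {4p}3} = C \, r^{6 - \frac {10p}3}.
$$
The exponent in the case $s < p$ below arises in the same way from \eqref{SIPS}.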
If $s < p$, using \eqref{SIPS}, we have that
$$
r^{-p} \int_{B(4r)} |\psi^p u|^p \mathrm{d}x \leq C r^{\frac p3 + \frac {p^2}{s(2p - 3)} - \left( 2p - 3 + \frac {3p}s \right)} + C \int_{B(4r) \setminus B(\frac r2)} |\nabla u|^p \mathrm{d}x.
$$
Thus, observing that the powers of $r$ are negative (by $p > \frac 95$ in the first case and by \eqref{EST1} in the second) and that \eqref{EI} forces $\int_{B(4r) \setminus B(\frac r2)} |\nabla u|^p \mathrm{d}x \rightarrow 0$, we obtain
\begin{equation}\label{SILPo1}
r^{-p} \int_{B(4r)} |\psi^p u|^p \mathrm{d}x = o(1) \qquad \mbox{as} \qquad r \rightarrow + \infty
\end{equation}
which implies \eqref{LPo1}.
Next, we claim
\begin{equation}\label{L3o1}
r^{-1} \int_{B(2r) \setminus B(r)} \left| u \right|^3 \mathrm{d}x = o(1) \qquad \mbox{as} \qquad r \rightarrow + \infty
\end{equation}
and
\begin{equation}\label{L3O1}
r^{-1} \int_{B(r)} \left| u \right|^3 \mathrm{d}x = O(1) \qquad \mbox{as} \qquad r \rightarrow + \infty.
\end{equation}
We take the same cut-off function $\psi \in C^{\infty}_c (B(4r) \setminus B(\frac r2))$ as above, with $R = 4r$ and $\rho = r$. For $s \geq 3$ we can use \eqref{3P3S} to infer
$$
r^{-1} \int_{B(4r)} |\psi^3 u|^3 \mathrm{d}x
$$
$$
\leq C \bigg( \int_{B(4r) \setminus B(\frac r2)} |\nabla u|^p \mathrm{d}x \bigg)^{\frac {9}{2sp + 3p - 3s}} + C \bigg( r^{-p} \int_{B(4r)} |\psi^p u|^p \mathrm{d}x \bigg)^{\frac {9}{2sp + 3p - 3s}} + C r^{- \frac {3(5p - 9)}{s(2p - 3)}}.
$$
In the case $s < 3$, \eqref{3PS3} gives that
$$
r^{-1} \int_{B(4r)} |\psi^3 u|^3 \mathrm{d}x
$$
$$
\leq C \bigg( \int_{B(4r) \setminus B(\frac r2)} |\nabla u|^p \mathrm{d}x \bigg)^{\frac {9}{2sp + 3p - 3s}} + C \bigg( r^{-p} \int_{B(4r)} |\psi^p u|^p \mathrm{d}x \bigg)^{\frac {9}{2sp + 3p - 3s}}
$$
$$
+ C r^{\frac {s(9 - 5p)}{sp + 3p - 3s}} \bigg( \int_{B(4r) \setminus B(\frac r2)} |\nabla u|^p \mathrm{d}x \bigg)^{\frac {3(3 - s)}{sp + 3p - 3s}} + C r^{\frac {s(9 - 5p)}{sp + 3p - 3s}} \bigg( r^{-p} \int_{B(4r)} |\psi^p u|^p \mathrm{d}x \bigg)^{\frac {3(3 - s)}{sp + 3p - 3s}}.
$$
In each case the powers of $r$ are negative, since $5p - 9 > 0$, while the bracketed integrals are $o(1)$ by \eqref{EI} and \eqref{SILPo1}; hence \eqref{L3o1} follows.
To verify \eqref{L3O1}, we fix an arbitrary $r_0 > 1$ and let $r > r_0$. Then, choosing $j \in \mathbb{N}$ such that $r < 2^j r_0$, we have
$$
r^{-1} \int_{B(r)} \left| u \right|^3 \mathrm{d}x \leq r^{-1} \int_{B(2r) \setminus B(r)} \left| u \right|^3 \mathrm{d}x + \frac 12 \bigg( \left( \frac r2 \right)^{-1} \int_{B(r) \setminus B(\frac r2)} \left| u \right|^3 \mathrm{d}x \bigg) + \ldots
$$
$$
+ \frac 1{2^j} \bigg( \left( \frac r{2^j} \right)^{-1} \int_{B(\frac r{2^{j - 1}}) \setminus B(\frac r{2^j})} |u|^3 \mathrm{d}x \bigg) + r^{-1} \int_{B(r_0)} |u|^3 \mathrm{d}x
$$
$$
\leq 2 \sup_{r_0 \leq \widetilde{r} < + \infty} \left\{ \widetilde{r}^{-1} \int_{B(2\widetilde{r}) \setminus B(\widetilde{r})} \left| u \right|^3 \mathrm{d}x \right\} + r^{-1} \int_{B(r_0)} |u|^3 \mathrm{d}x.
$$
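Here the $k$-th annular term carries the coefficient $2^{-k}$, and each bracket is bounded by the supremum appearing in the last line; the stated factor $2$ then comes from
$$
\sum_{k = 0}^{j} \frac 1{2^k} < 2.
$$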
Hence, for sufficiently large $r_0 > 1$, \eqref{L3O1} follows from \eqref{L3o1} and the Sobolev embedding $W^{1,p}(B(r_{0})) \hookrightarrow L^3(B(r_{0}))$.\\
\begin{pfthm1}
Let $\psi \in C^{\infty}_{c}(B(2r))$ satisfy $0 \leq \psi \leq 1$, $\psi = 1$ on $B(r)$ and $|\nabla \psi| \leq C r^{-1}$.
We observe \eqref{LEI} with $\phi = \psi^2$, and apply H\"{o}lder's inequality, Young's inequality and the Calder\'{o}n--Zygmund inequality to get
$$
\int_{B(r)} |\nabla u|^p \mathrm{d}x \leq C \int_{B(2r)} |u|^p |\nabla \phi|^p \mathrm{d}x + C \int_{B(2r)} |u|^3 |\nabla \phi| \mathrm{d}x
$$
$$
+ C \int_{B(2r)} |\uppi - \uppi_{B(2r)}||u| |\nabla \phi| \mathrm{d}x
$$
$$
\leq C r^{-p} \int_{B(2r) \setminus B(r)} |u|^p \mathrm{d}x + C r^{-1} \int_{B(2r) \setminus B(r)} | u |^3 \mathrm{d}x
$$
$$
+ C r^{-1} \int_{B(2r) \setminus B(r)} |\uppi - \uppi_{B(2r)}||u| \mathrm{d}x = IV + V + VI.
$$
The properties \eqref{LPo1} and \eqref{L3o1} directly show that
$$
IV + V \rightarrow 0 \qquad \mbox{as} \qquad r \rightarrow + \infty.
$$
To estimate $VI$, we use H\"{o}lder's inequality and \eqref{PE} with $s = \frac 32$. This yields
$$
VI \leq C \left( r^{-1} \int_{B(2r)} |\uppi - \uppi_{B(2r)}|^{\frac 32} \mathrm{d}x \right)^{\frac 23} \left( r^{-1} \int_{B(2r) \setminus B(r)} | u |^3 \mathrm{d}x \right)^{\frac 13}
$$
$$
\leq C \left( r^{-1} \int_{B(2r)} |\nabla u|^{\frac {3(p-1)}2} \mathrm{d}x + r^{-1} \int_{B(2r)} |u|^3 \mathrm{d}x \right)^{\frac 23} \left( r^{-1} \int_{B(2r) \setminus B(r)} | u |^3 \mathrm{d}x \right)^{\frac 13}.
$$
According to \eqref{L3O1}, it is sufficient to show that
$$
r^{-1} \int_{B(2r)} |\nabla u|^{\frac {3(p-1)}2} \mathrm{d}x = O(1) \qquad \mbox{as} \qquad r \rightarrow + \infty.
$$
On the other hand, H\"{o}lder's inequality with $\frac {3(p-1)}2 < p$ implies that
$$
r^{-1} \int_{B(2r)} |\nabla u|^{\frac {3(p-1)}2} \mathrm{d}x \leq r^{-1} |B(2r)|^{\frac 3{2p} - \frac 12} \left( \int_{B(2r)} |\nabla u|^p \mathrm{d}x \right)^{\frac {3(p - 1)}{2p}}
$$
$$
= C r^{\frac 9{2p} - \frac 52} \left( \int_{B(2r)} |\nabla u|^p \mathrm{d}x \right)^{\frac {3(p - 1)}{2p}}.
$$
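Since $|B(2r)| = \frac {32 \pi}3 r^3$, the prefactor equals $C r^{\frac 9{2p} - \frac 52}$ with
$$
\frac 9{2p} - \frac 52 < 0 \iff p > \frac 95,
$$
so that, by the uniform bound on $\int_{B(2r)} |\nabla u|^p \mathrm{d}x$ from \eqref{EI}, the right-hand side is in fact $o(1)$ as $r \rightarrow + \infty$.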
Together with \eqref{L3O1} and \eqref{L3o1}, this implies
$$
VI \rightarrow 0 \qquad \mbox{as} \qquad r \rightarrow + \infty
$$
and, in view of $IV + V \rightarrow 0$,
$$
\int_{B(r)} \left| \nabla u \right|^p \mathrm{d}x = o(1) \qquad \mbox{as} \qquad r \rightarrow + \infty.
$$
Accordingly, $u$ is constant, and by means of \eqref{L3o1} we conclude that $u \equiv 0$.
\end{pfthm1}
\hspace{0.5cm}
$$\mbox{\bf Acknowledgements}$$
Chae's research was partially supported by NRF grant 2021R1A2C1003234 and by the Chung-Ang University research grant in 2019.
Wolf's research was supported by NRF grant 2017R1E1A1A01074536.
The authors declare that they have no conflict of interest.